What are automated reviews, and how do they work?
Automated reviews, unlike manual security reviews, are almost entirely hands-off: you simply pass in a codebase and the tool takes care of the rest. They usually consist of three key components: the parser, the framework, and the detectors.
The parser breaks down the provided codebase into its objects, such as separating contracts, functions, modifiers, etc. These objects’ attributes are also extracted, such as their declarations, parameters, bodies, etc. These processes are primarily based on a grammar: a predetermined set of rules that source code has to follow in order to compile correctly.
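To make this concrete, here is a minimal Python sketch of the kind of objects a parser might produce. This is deliberately not LightChaser’s actual parser; the regex-based “grammar” and the ContractObj/FunctionObj names are purely illustrative.

```python
# Minimal, hypothetical sketch of parser output: source code is broken down
# into objects (contracts, functions) with their attributes extracted.
# The regex "grammar" below is a toy stand-in for a real Solidity grammar.
import re
from dataclasses import dataclass, field


@dataclass
class FunctionObj:
    name: str
    params: str
    visibility: str


@dataclass
class ContractObj:
    name: str
    functions: list[FunctionObj] = field(default_factory=list)


CONTRACT_RE = re.compile(r"contract\s+(\w+)")
FUNC_RE = re.compile(r"function\s+(\w+)\s*\(([^)]*)\)\s*(public|external|internal|private)")


def parse(source: str) -> list[ContractObj]:
    """Very rough extraction pass; attaches each function to the last contract seen."""
    contracts = [ContractObj(m.group(1)) for m in CONTRACT_RE.finditer(source)]
    for m in FUNC_RE.finditer(source):
        if contracts:
            contracts[-1].functions.append(FunctionObj(*m.groups()))
    return contracts


if __name__ == "__main__":
    src = """
    contract Vault {
        function deposit(uint256 amount) public {}
        function sweep(address to) external {}
    }
    """
    for c in parse(src):
        print(c.name, [f.name for f in c.functions])  # Vault ['deposit', 'sweep']
```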
The framework provides an interface to query the information extracted by the parser. It dictates the rules and style that the detectors must follow to achieve their intended goal. The framework can be integrated into the parser simply through how the codebase objects are stored and queried, although in LightChaser the parser and framework are separate. In my opinion, the framework is one of the most important parts of a static analysis tool, as it largely dictates the complexity ceiling that the program as a whole can achieve. The complexity ceiling is the theoretical limit on how complex a vulnerability an automated review tool can map within a reasonable timeframe.
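Building on that, a framework might expose something like the query layer sketched below. This is a minimal Python sketch under my own assumptions; the CodebaseQuery class and its filter parameters are invented for illustration and are not LightChaser’s interface.

```python
# Hypothetical "framework" layer: detectors never touch raw source, they ask
# questions through a query interface over the parser's objects.
# CodebaseQuery and its parameters are illustrative names, not a real tool's API.
from dataclasses import dataclass
from typing import Iterator, Optional


@dataclass
class Function:
    contract: str
    name: str
    visibility: str
    modifiers: tuple[str, ...] = ()


class CodebaseQuery:
    def __init__(self, functions: list[Function]):
        self._functions = functions

    def functions(
        self,
        visibility: Optional[str] = None,
        missing_modifier: Optional[str] = None,
    ) -> Iterator[Function]:
        """Yield functions matching the requested filters."""
        for fn in self._functions:
            if visibility and fn.visibility != visibility:
                continue
            if missing_modifier and missing_modifier in fn.modifiers:
                continue
            yield fn


if __name__ == "__main__":
    db = CodebaseQuery([
        Function("Vault", "withdraw", "external", ("nonReentrant",)),
        Function("Vault", "sweep", "external"),
    ])
    # "Which external functions lack a nonReentrant modifier?"
    print([f.name for f in db.functions(visibility="external",
                                        missing_modifier="nonReentrant")])  # ['sweep']
```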
The detectors are the ‘queries’ built on top of the framework; they are what essentially tell the framework how to find a particular vulnerability. This is the bread and butter of analysis tools: you can have the best framework in the world, but if you have no detectors, you’re not going to detect anything xD. There is also the concept of critical mass here, which is the number of detectors required to almost guarantee that your tool finds high-impact/value findings every time it’s run. I’m working to achieve this critical mass in LightChaser-V4, which is taking a lot of time haha; V4 detectors are powerful but relatively time-consuming to program.
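As a hedged example of what a single detector can look like, here is a toy one in Python that flags tx.origin-based authorisation, a classic Solidity anti-pattern. The Finding shape and the naive string matching are purely illustrative; a real detector would query the framework’s parsed objects rather than raw text.

```python
# Toy detector: one query that encodes one vulnerability pattern, here the
# well-known anti-pattern of authorising callers via tx.origin.
from dataclasses import dataclass


@dataclass
class Finding:
    severity: str
    title: str
    location: str


def detect_tx_origin_auth(source_lines: list[str]) -> list[Finding]:
    """Flag lines that appear to use tx.origin in an authorisation check."""
    findings = []
    for i, line in enumerate(source_lines, start=1):
        if "tx.origin" in line and ("require(" in line or "==" in line):
            findings.append(Finding("Medium", "tx.origin used for authorisation", f"line {i}"))
    return findings


if __name__ == "__main__":
    src = [
        "function withdraw() external {",
        '    require(tx.origin == owner, "not owner");',
        "}",
    ]
    for f in detect_tx_origin_auth(src):
        print(f.severity, f.title, f.location)  # Medium tx.origin used for authorisation line 2
```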
Benefits of Using Automated Review Tools for Smart Contracts
One of the key benefits of automated analysis tools is their large coverage, especially for larger codebases. For example, if you have a codebase with 10K+ LoC, it is exceptionally difficult to manually ensure the codebase doesn’t contain a particular set of vulnerabilities. For automated tools, however, it doesn’t matter whether your codebase is 500 LoC or 50,000 LoC: either way, every detector is run against the code. So if you have a framework with hundreds of detectors, it can very reliably find 100+ findings in most medium- and large-sized codebases, which is not typically something that can be replicated manually. That said, manual reviews excel at finding high-impact, protocol-specific findings, as the complexity ceiling of an experienced manual security researcher is unmatched. So ideally, protocols utilize both: automated reviews for their breadth (coverage) and manual reviews for their depth (impact).
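To illustrate why coverage scales this way, here is a small Python sketch of a detector registry in which every registered detector runs over the entire codebase regardless of its size. Both detectors are deliberately trivial and invented for the example.

```python
# Toy registry illustrating the coverage point: every registered detector runs
# over the whole codebase, whether it is 500 or 50,000 LoC.
from typing import Callable

Finding = tuple[str, str]                     # (detector name, message)
Detector = Callable[[str], list[Finding]]

DETECTORS: list[Detector] = []


def register(fn: Detector) -> Detector:
    DETECTORS.append(fn)
    return fn


@register
def unchecked_low_level_call(source: str) -> list[Finding]:
    return [("unchecked-call", "low-level .call used")] if ".call(" in source else []


@register
def floating_pragma(source: str) -> list[Finding]:
    return [("floating-pragma", "pragma uses ^, pin the compiler version")] if "pragma solidity ^" in source else []


def run_all(source: str) -> list[Finding]:
    findings: list[Finding] = []
    for detector in DETECTORS:                # codebase size never changes this loop
        findings.extend(detector(source))
    return findings


if __name__ == "__main__":
    code = 'pragma solidity ^0.8.20;\ncontract C { function f(address t) external { t.call(""); } }'
    print(run_all(code))
```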
Speed is the second greatest advantage of automated reviews. Especially for larger codebases, it is standard for a security review to take weeks to months, which is good. YOU DO NOT WANT MANUAL SECURITY RESEARCHERS RUSHING THE AUDIT! But there are also times when you need a fast feedback loop, especially early on in the protocol’s development or audit journey. Automated reviews fill this need quite well, as even the most spaghetti-code automated tools can provide you with a report the same day.
This allows you to:
1. Get an automated review.
2. Fix the issues found.
3. Go back to step one until satisfied.
4. Continue your development and audit journey.
I also feel that automated reviews increase the quality of future manual reviews. Automated reviews are exceptionally good at finding common vulnerabilities, so their reports can be used as a known-issues list for future manual reviews. This heavily incentivizes manual security researchers to look for the more interesting, high-impact findings, as most of the low-hanging fruit has likely already been found by automated tools. I feel this is quite an understated benefit.
How Can Automated Reviews Enhance the Security of Your Smart Contracts?
Solodit alone has over 10,000 vulnerabilities listed, and sadly there really isn’t a limit on how many vulnerabilities exist. The number of threat vectors that can exist within a codebase is vast, and as I previously mentioned, it can be quite daunting for a manual security researcher to catch everything. This also applies to automated reviews: they of course won’t catch everything, but they typically have exceptionally wide coverage of bugs and vulnerabilities (LightChaser currently has nearly 1,000 detectors). So protocols that utilize automated reports as part of their overall security process can reduce the risk of bugs being missed. Competitive audits also have amazing coverage, due to often having hundreds of eyes on your code. I personally recommend that clients go through at least the following security process:
- Automated Security Review
- Manual Team Audit
- Competitive Audit
- Bug Bounty
I feel this process leverages the unique advantages of each of the audit types, thus helping to minimize the risk of a breach. The coverage of automated reviews such as LightChaser will keep improving with time as more detectors are coded and further framework upgrades are developed. I am exceptionally grateful to my clients, as thanks to them I can fully focus on building LightChaser :).
What Vulnerabilities Can Automated Reviews Detect in Smart Contracts?
I don’t personally believe there are any vulnerabilities that can’t be mapped in an automated security tool. The main barriers are time and experience: the time it takes to build the detector, the time it takes to run it, and the experience needed to build it. What the underlying framework does is make it more likely that you can build a detector for a particular bug with the time and experience you have. This was a huge driving force behind developing LightChaser-V4: it made it possible to code detectors for vulnerabilities that I previously would not have believed to be reasonably automatable. I’d imagine coding a high-complexity detector without a competent framework feels kind of like coding an AAA game without a game engine.
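To show what a framework primitive can buy you, here is a hedged Python sketch: given a reachability helper over a call graph (toy, hardcoded data here, nothing like LightChaser’s internals), a cross-function detector such as “entry points that can reach an external call without a reentrancy guard” stays only a few lines long.

```python
# Hedged illustration of the complexity ceiling point: with a framework
# primitive such as call-graph reachability, a cross-function detector stays
# short. The call graph, guard set, and names are toy data, not real output.

CALL_GRAPH = {                       # function -> functions it calls
    "deposit": ["_update"],
    "withdraw": ["_update", "_sendETH"],
    "_update": [],
    "_sendETH": ["external_call"],   # marker node for "makes an external call"
}
GUARDED = {"deposit"}                # entry points carrying a reentrancy guard


def reaches(start: str, target: str, graph: dict[str, list[str]]) -> bool:
    """Framework-style primitive: depth-first reachability over the call graph."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False


def unguarded_external_call(entry_points: list[str]) -> list[str]:
    """The detector itself: entry points that can reach an external call but are unguarded."""
    return [f for f in entry_points
            if reaches(f, "external_call", CALL_GRAPH) and f not in GUARDED]


if __name__ == "__main__":
    print(unguarded_external_call(["deposit", "withdraw"]))  # ['withdraw']
```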
How Does AI and ML Technology Improve the Accuracy of Smart Contract Audits?
I can’t fully comment on this, as I haven’t yet implemented AI-powered detection into LightChaser. That said, be wary of falling into the AI hype. AI will definitely have a place in security research in the future; however, forcefully ramming AI into your tooling or development/audit process just to capitalize on the hype won’t yield long-term positive results.
Audit Your Smart Contract Today
If you need a smart contract audit or have any other cybersecurity need, you can visit our website or our self-service platform and request the solution that best fits your needs: https://sub7.xyz/sechub
Contributors:
We want to thank https://x.com/ChaseTheLight99 for collaborating with us in writing this article. Education and awareness are the first step to being safeguarded against any vulnerabilities.