False Positives in Static Code Analysis
A false positive arises when a static analysis tool incorrectly reports that a static analysis rule was violated. This article takes a detailed look at false positives in static code analysis.
“Too many false positives” is probably the most common excuse for avoiding static analysis. But static analysis doesn’t have to be so noisy.
Years ago, the biggest challenge in software static analysis was trying to find more and more interesting things to check. In Parasoft’s original CodeWizard product back in the early 90s, we had 30-some rules based on items from Scott Meyers’ book, Effective C++. It was what I like to think of as “Scared Straight for programmers.” I mentioned this once to Scott, and while he hadn’t thought of it that way, it did give him a pretty good laugh.
Since then, static analysis researchers have constantly worked to push the envelope of what can be detected, expanding what static analysis can do and identifying actual defects rather than just weak code. But it still suffers from false positives. Static analysis has shifted the user’s focus from hardening the code to searching for bugs, which is great, but now one of the most common hurdles people run into with static code analysis is making sense of the results they get.
Although people do say, “I wish static analysis would catch ____” (name your favorite unfindable bug), it’s far more common to hear, “Wow, I have way too many results!” or “Static analysis is noisy!” or “Static analysis false positives are overwhelming!” So as a software testing organization, it’s our job to keep solving that problem for our customers: to keep providing tools and features that help you sort through the results you get and understand which issues represent the most risk.
Static Code Analysis: Benefits & Limitations
Static analysis tools benefit developers by finding bugs, security vulnerabilities, and coding standard deviations that are otherwise tedious to find or enforce.
Benefits
Static analysis and static application security testing (SAST) tools provide dev teams with the following benefits.
- Better code quality. Static analysis helps remove defects early in development, providing better code quality right at the point of creation while also improving downstream quality in later stages of development.
- More secure code. In addition to defect detection, static analysis and SAST improve security by detecting vulnerabilities that lead to security risks. Solutions like Parasoft’s also enforce secure coding practices and standards.
- Compliance with industry and organization coding standards. Enforcing coding standards manually is tedious, if not impossible. Static analysis automates coding standard checking, enforcement, and compliance. This includes industry safety coding standards like MISRA and security standards like OWASP Top 10, CWE Top 25, and SEI CERT C/C++. (A short example of such a violation follows this list.)
- Improved productivity. Running static analysis early during implementation, within the developer’s IDE and/or the team’s CI/CD workflow, promotes code quality and expedites development by shifting testing further left. Additionally, Parasoft incorporates AI/ML to automatically organize identified static analysis issues and further enhance productivity.
- Reduced risk and costs. Applying static analysis substantially reduces the number of safety and security issues reported from the field. Organizations also report an increase in product reliability, which reduces maintenance costs and supports a reputation for quality.
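As a concrete illustration of the kind of coding-standard deviation these tools flag automatically, consider an assignment used inside a condition, which MISRA C:2012 (Rule 13.4) and many other standards restrict. This is a minimal sketch, not the output of any particular tool, and read_sensor() is a hypothetical helper:

```cpp
#include <cstdio>

// Hypothetical sensor read, defined only so the sketch compiles.
static int read_sensor() { return 1; }

void poll() {
    int status = 0;
    // A pattern-based rule such as MISRA C:2012 Rule 13.4 ("the result of
    // an assignment operator should not be used") flags this condition:
    // `=` was almost certainly meant to be `==`.
    if (status = read_sensor()) {  // flagged: assignment used as a condition
        std::printf("sensor busy\n");
    }

    // Compliant form: assign first, then compare explicitly.
    status = read_sensor();
    if (status != 0) {
        std::printf("sensor busy\n");
    }
}
```

Checking for slips like this by hand across a large code base is exactly the kind of tedium the automation removes.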
Limitations
Static analysis and SAST are not silver bullets. In fact, Parasoft encourages customers to use static analysis in conjunction with other testing methods like unit testing, regression testing, and more. Static analysis solutions like Parasoft’s offer excellent returns on investment; however, it still makes sense to understand their limitations.
- False positives. Every static analysis tool can misinterpret an obscure coding construct and report an error where none exists. These erroneous reports are false positives and, as discussed below, are inevitable with static analysis.
- False negatives. Just as tools report errors that aren’t there, they can also miss real errors in the source code. These missed defects are false negatives and are covered in more detail below.
- Limited scope and context. The scope of static analysis is limited to the source code that is made available at the time of analysis. The more code, the better the scope and results. Static analysis also lacks context in terms of what the code is designed to do. It analyzes the source on the basis of a set of rules or checkers, which don’t necessarily understand the programmer’s intention.
- Tool performance. Depending on the depth of analysis, static analysis can take time to run on large code bases. Although modern server hardware has made this less of a concern, developers usually limit large-scale analysis to software builds. Modern static analysis tools like C/C++test have mitigated this problem to some extent with incremental analysis performed within IDEs, for example: only the code changed in each compile and build is analyzed, and the resulting violations are reported for remediation directly to the engineer working on that code.
- Cost. Commercial static analysis tools cost money to license. However, static analysis tools have excellent returns on investment for small or large projects and organizations.
What Is a False Positive in Static Analysis?
In the context of static analysis, a false positive occurs when a static analysis tool incorrectly reports that a static analysis rule was violated. Of course, this can be subjective. Sometimes developers fall into the trap of labeling any error message they don’t like as a false positive, but this isn’t really correct.
In many cases, they simply don’t agree with the rule, don’t understand how it applies in the situation, or don’t think it’s important in general. I would call this noise, rather than a false positive. The funny thing I’ve found here is that the cleverer the tool is, the more likely it is to produce a finding that a developer might not understand at first glance.
False Positives in Pattern-Based Analysis
Pattern-based static analysis doesn’t actually have false positives. If the tool reports that a static analysis rule was violated when it actually was not, this indicates a bug in the rule because the rule should not be ambiguous. If the rule doesn’t have a clear pattern to look for, it’s a bad rule.
I’m not saying that every reported rule violation indicates the presence of a defect. A violation means that the pattern was found, indicating a weakness in the code, a susceptibility to having a defect.
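As an example of what such a pattern looks like, here is a classic C++ weakness that pattern-based rules (including ones inspired by Effective C++) detect: a polymorphic base class with a non-virtual destructor. This is a minimal sketch, not tied to any specific tool or rule ID:

```cpp
#include <memory>

// A pattern-based rule flags this class: it declares a virtual function
// but its destructor is not virtual.
class Base {
public:
    virtual void run() {}
    ~Base() {}              // flagged: should be `virtual ~Base() {}`
};

class Derived : public Base {
public:
    void run() override {}
    ~Derived() {}           // skipped when deleting through a Base*
};

void use() {
    std::unique_ptr<Base> p = std::make_unique<Derived>();
    p->run();
}   // p destroys a Derived through a Base*, which is undefined behavior
    // here: the weakness the pattern detects, whether or not it bites today
```

The pattern is unambiguous, so the tool either finds it or it doesn’t; whether it represents a live defect depends on how the class is actually used.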
When I look at a violation, I ask myself whether this rule applies to my code. If it applies, I fix the code. If it doesn’t, I suppress the violation. It’s best to suppress static analysis violations directly in the code so that the decision is visible to team members and you won’t end up reviewing the same violation over and over again; it’s like running a spell checker but never adding your “special” words to its dictionary.
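For example, an in-code suppression might look like the following sketch. The rule ID and exact comment syntax here are illustrative assumptions; check your tool’s documentation for the format it supports:

```cpp
#include <cstring>

// Hypothetical fixed-width message source, assumed for illustration.
static const char* fixed_width_message() { return "0123456789"; }

static char legacy_buffer[64];

void fill() {
    // The review decision lives in the code, next to the line it covers,
    // so the tool (and every teammate) sees it on every subsequent run.
    std::strcpy(legacy_buffer, fixed_width_message());  // parasoft-suppress BD-SECURITY-01 "message is fixed-width; reviewed and accepted"
}
```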
The beauty of in-code suppression is that it’s independent of the static analysis engine. Anyone can look at the code and see that the code has been reviewed and that this pattern is deemed acceptable in this code. This is particularly useful if you must prove compliance with a coding standard. And if you do indeed need compliance, it’s easy to use an existing configuration for those standards such as CWE, MISRA, IEC 62304, DO-178C, and more.
False Positives in Flow-Based Analysis
With flow-based analysis, false positives are inherent to the method, and they are relevant and need to be addressed. Flow analysis cannot avoid false positives for the same reason that unit testing cannot generate perfect unit test cases: the analysis has to make determinations about the expected behavior of the code. Sometimes there are too many options to know what is realistic; sometimes you simply don’t have enough information about what is happening in other parts of the system.
The important thing here is that a true false positive is something that is just completely wrong. For example, assume the static analysis tool you’re using says you’re dereferencing a null pointer. If you look at the code and see that this is actually impossible, then you have a false positive.
On the other hand, if you simply aren’t worried about nulls in this piece of code because they’re handled elsewhere, then the message, while not important to you, is not a false positive. It’s true and happens to be unimportant. The messages from a flow analysis tool range from “true and important” through “true and unimportant” and “true but improbable” to “untrue.” There’s a lot of variation here, and each case should be handled differently.
There is a common trap here as well. As in the null example above, you may believe that a null value cannot make it to this point, but the tool found a way to make it happen. If it’s important to your application, be certain to check and possibly protect against this.
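To make this concrete, here is a minimal sketch (with a hypothetical lookup() helper) of the kind of path a flow analysis tool reports:

```cpp
#include <cstdio>

struct Record { int value; };

// Hypothetical lookup, assumed for illustration: returns nullptr for
// unknown ids.
static Record* lookup(int id) {
    static Record r{42};
    return (id >= 0) ? &r : nullptr;
}

void print_value(int id) {
    Record* rec = lookup(id);
    // A flow analysis tool reports a possible null dereference here.
    //  - If every caller validates id elsewhere, the finding is true but
    //    unimportant to you.
    //  - If any caller can pass a negative id, the tool has found a real
    //    path to a crash.
    //  - Only if lookup() provably never returned nullptr would this be a
    //    genuine false positive.
    std::printf("%d\n", rec->value);
}
```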
It’s critical to understand that there is both power and weakness in flow analysis. The power of flow analysis is that it goes through the code and tries to find hot spots and then find problems around the hot spots. The weakness is that it has to make assumptions to try and traverse the code, and the further it traverses, the more likely it is to produce an improbable path.
The real problem is that if you start thinking you’ve cleaned all the code because your flow analysis is clean, you are fooling yourself. Really, you’ve found some errors, and you should be grateful for that. The absence of flow analysis errors just means that you haven’t found anything, not that the code is clean. If you’re building safety-critical software, it’s best to use a tool like C/C++test, dotTEST, or Jtest that provides both types of static analysis.
False Positives vs. False Negatives in Static Code Analysis
Despite any claims to the contrary, every static analysis tool produces false positives and misses real defects (false negatives). There are trade-offs to be made between missing defects and getting too many reports.
Static analysis tools instead need to be evaluated on what they do find, how well those findings are presented, and how many defects they miss. Tools tuned to reduce false positives inevitably have higher false-negative rates. Is it worth missing real bugs in order to have fewer reports?
Here are some things to consider when weighing the balance between false positives and negatives.
- Use the tool’s capabilities to prioritize findings. False positives are inherent in static analysis, but most tools provide sophisticated capabilities for dealing with the results: they group reports by severity and let you quickly and easily ignore and/or filter unwanted results.
So, the first step is to configure your code analysis appropriately and reduce unnecessary noise. Make sure that the right checkers are running against the right code. For example, if you’ll never fix issues from a particular checker, turn it off. Or if you have legacy code that has been out in the field for many years and only a reported bug can force you to edit it, why bother running analysis on it?
However, if you happen to run static analysis on a large code base and get back an enormous list of violations, don’t get overwhelmed. It doesn’t take long to learn how to prioritize results based on risk and impact and concentrate on the highest priority first. Many complaints about static analysis tools stem from impressions of older generation tools and capabilities.
- False positives don’t last forever. Any results that aren’t of interest can be filtered out, never to be reported again. If the type of error isn’t a high priority to fix, the entire class of error can be removed from future reports. If individual reports are found to be true false positives, they can be marked as such. The key thing is that, with some effort, the noise disappears as developers get more used to using the tools.
- False negatives impact product quality and security. False negatives, on the other hand, are unknowns: they go undetected, and their existence may not be revealed until a customer has the product in hand. It’s through this lens that developers need to weigh the impact of false positives on their workload: Is it worth missing important defects?
Organizations need to make their own risk and associated cost determination. The ideal trade-off is to have as few false positives as possible while minimizing false negatives.
- Building trust in tool capabilities. False positives may seem problematic at first, but finding and fixing real defects builds confidence in the tools over time. Conversely, missing real errors that are only caught later in testing reflects poorly on the tools used during development. Developers want tools they can trust, and emphasizing the reduction of false positives at the cost of missing real defects isn’t worth it.
Runtime Error Detection
One great, but commonly overlooked, way to complement flow analysis is runtime error detection. Runtime error detection helps you find much more complicated problems than flow analysis can detect, and you have the confidence that the condition actually occurred. Runtime error detection doesn’t have false positives in the way that static analysis does. When it finds a defect, it’s because it actually observed it happening during execution — there are no assumptions involved.
Your runtime rule set should closely match your static analysis rule set. The rules can find the same kinds of problems, but the runtime analysis has a massive number of execution paths available to it because, at runtime, stubs, setup, initialization, etc. are not a problem the way they are for flow analysis. The only limit is that it’s only as good as your test suite, because it checks the paths your test suite happens to execute. If you’re programming in C or C++, especially for embedded devices like IoT, take a look at Insure++; it can find more bugs at runtime than any other tool. Instead of getting bogged down by tricky issues like thread problems, memory leaks, and race conditions, you can find them accurately at runtime.
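As an illustration of the kind of defect an instrumented run observes directly, consider a heap overflow that only occurs for certain inputs. This sketch is not tied to any particular tool’s output; the point is that the error is reported only on the path the tests actually execute:

```cpp
#include <cstring>

// The buffer is sized for the common case; whether the overflow below
// ever happens depends entirely on runtime input.
void copy_name(const char* name) {
    char* buf = new char[8];
    std::strcpy(buf, name);   // overflows whenever strlen(name) >= 8
    delete[] buf;
}

int main() {
    copy_name("short");               // no error observed on this path
    copy_name("much-too-long-name");  // an instrumented run reports the
                                      // out-of-bounds write here
    return 0;
}
```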
Is It Worth the Time?
My approach to false positives is this: if it takes 3 days to fix a bug, it’s better to spend 20 minutes looking at a false positive, as long as I can tag it and never have to look at it again. It’s a matter of viewing it in the right context. For example, say you have a problem with threads. Thread problems are notoriously difficult to discover; tracking one down might take you weeks. I’d prefer to write the code in such a way that problems cannot occur in the first place. In other words, I try to shift my process from detection to prevention.
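As a small sketch of what shifting from detection to prevention can look like with threads, compare manual locking, where a forgotten unlock becomes a bug to hunt down, with a scope-based guard that makes the mistake impossible. This is generic C++, not a prescription from any particular tool:

```cpp
#include <mutex>

std::mutex m;
int shared_count = 0;

// Detection-oriented style: an early return (or an exception) between
// lock() and unlock() leaves the mutex held, a bug you then have to hunt.
void increment_risky(bool ready) {
    m.lock();
    if (!ready) { m.unlock(); return; }  // easy to forget this unlock
    ++shared_count;
    m.unlock();
}

// Prevention-oriented style: the guard releases the mutex on every path
// out of the scope, so this class of bug cannot occur in the first place.
void increment_safe(bool ready) {
    std::lock_guard<std::mutex> lock(m);
    if (!ready) return;
    ++shared_count;
}
```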
Static analysis, when deployed properly, doesn’t have to be a noisy, unpleasant experience. Take a look at how we do things differently at Parasoft, especially using the full power of Parasoft DTP to manage results with intelligent analytics that keep you focused on the risk in your software rather than chasing unimportant issues.
“MISRA”, “MISRA C” and the triangle logo are registered trademarks of The MISRA Consortium Limited. ©The MISRA Consortium Limited, 2021. All rights reserved.