I’ve noticed my blog has been getting a lot of traffic lately regarding false positives and custom rules for Fortify. Full disclosure: I am a Fortify consultant. The number one complaint I hear about Fortify’s static analysis product (Fortify SCA) is that it produces too many false positives. To understand why this happens, some context is needed on Fortify’s design and purpose.
When the cavemen wrote code, they used manual code review to look for bugs and security holes. There is still a place for manual code review, but it's costly, and its effectiveness depends on the security education of the reviewers. Manual code review usually leads to a high false negative rate, especially in complex applications. To make sure everyone is on the same page: a false positive is an issue that gets reported but is not really an issue. A false negative is an issue that gets missed, leaving a vulnerability in the code.
Fortify SCA was created to help automate the manual code review process. SCA looks through your source code and finds possible vulnerabilities…emphasis on possible. SCA errs on the side of reducing false negatives; the corollary is that it produces more false positives. We do that because we would rather report some false positives than miss real issues. We do our best to understand the application, but let's face it: static analysis is just one algorithm scanning another. No static analysis software can avoid producing false positives entirely.
I also encounter many developers calling real vulnerabilities false positives because they don't fully understand the finding. I recently received an email from a developer saying SCA had found hundreds of persistent cross-site scripting (XSS) issues and that they were all false positives. He was convinced that since they do input validation, trusting the database was not a problem. We talked through why the database is not a trusted source, and he got the big picture.
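To illustrate the point with a sketch of my own (this is not the developer's actual application): data read back from the database should be output-encoded even if it was validated on the way in, because other code paths, batch imports, or legacy data can put attacker-controlled markup into the same tables. Here is a minimal hand-rolled encoder for illustration only; a real application should use a vetted encoding library rather than this:

```java
public class XssDemo {
    // Minimal HTML-entity encoder (illustrative only -- production code
    // should use a well-reviewed encoding library instead).
    static String htmlEncode(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Pretend this value came back from the database. It may have been
        // "validated" on the way in, but another code path could have stored
        // attacker-controlled markup in the same column.
        String fromDb = "<script>alert('xss')</script>";
        // Encoding at output time neutralizes the markup regardless of how
        // the data got into the database.
        System.out.println(htmlEncode(fromDb));
    }
}
```

The design point is that encoding happens at output time, where you know the context (here, HTML body), instead of trusting that every write path validated its input.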
For the most part, SCA does a decent job of reducing false positives. Many times the reviewers don't understand the vulnerability, or the criticality SCA assigns is too high for the given scenario. When it comes down to it, SCA is just a way of identifying potential issues. It's up to the reviewers to decide how they want to handle the output.
If you’re having problems with Fortify, send an email to firstname.lastname@example.org. Our support group is fantastic, so give that a go first. If you need help integrating Fortify into your SDLC, contact your Fortify rep to get a consultant on site.
On the Daily Dave mailing list there's an interesting discussion about the value of static analysis. For those unaware, static analysis means analyzing source code to find potential vulnerabilities. Like every technology, static analysis has its pros and cons. I don't actually subscribe to the mailing list (I only use RSS), so I'm going to write a little about my views on static analysis.
In the security world, the big fight is static versus dynamic analysis. By dynamic, most people mean penetration testing. The results from automated penetration tools usually contain few false positives, but your coverage is dependent on the tests run by the tool. Pen-testing tools can and do miss vulnerabilities. Static analysis, on the other hand, scans ALL of your code. If there are vulnerabilities in corners of your application that are rarely exercised, static analysis has a higher probability of finding them. In addition to finding obscure vulnerabilities, static analysis can also find more categories of vulnerabilities. Automated pen-test tools are limited because they can only see HTTP responses. Static analysis tools can apply rules that are focused on your development platforms.
The biggest argument against static analysis is that it produces too many false positives. The misconception is about what a finding means: the tool is not saying "this is vulnerable", it is saying "this is potentially vulnerable and needs to be audited to be sure". Yes, this creates a lot of work, but that argument really only applies to first-time scans. Most of the major static analysis applications are rule based and give better results over time. After the initial triage, you suppress false positives and create custom rules to make the scan more context specific. For example, someone on the mailing list mentioned static analysis tools producing false positives on custom memory management libraries. This is true: out of the box, most scanners will flag those calls because they don't know what the library does and want human eyes to verify. If you're using Fortify SCA, you can write a custom rule to eliminate those false positives in the future. Because I'm a Fortify consultant, I know that the more you tailor our static analysis software to your application, the better your result set. Static analysis shouldn't be a one-shot scan; it should be used continually throughout development and testing.
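The mailing-list example was C memory management, but the same pattern shows up anywhere a risky operation is wrapped in an in-house library. As a sketch (the class and method names below are my own invention, not from any real codebase or rule pack), consider an in-house cleanse function: a scanner that has no rule describing it will keep reporting data flowing through it as tainted, which is exactly the kind of false positive a custom cleanse rule removes.

```java
public class CustomSanitizer {
    // Hypothetical in-house cleanse function. Out of the box, a static
    // analyzer has no rule saying this method neutralizes taint, so data
    // that flows through it is still reported as tainted downstream -- a
    // false positive. A custom rule telling the tool "stripMeta cleanses
    // its argument" makes those findings disappear on future scans.
    static String stripMeta(String input) {
        // Keep only alphanumerics and underscores; drop everything else.
        return input.replaceAll("[^A-Za-z0-9_]", "");
    }

    public static void main(String[] args) {
        String userInput = "bob'; DROP TABLE users;--";
        // After cleansing, only characters safe for this narrow use remain.
        System.out.println(stripMeta(userInput));
    }
}
```

The human reviewer can see that `stripMeta` makes the data safe for its intended use; the tool cannot, until you encode that knowledge in a rule. That is the sense in which tailoring the tool to your application improves the result set over time.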
In the end, it's not static analysis versus dynamic analysis. In reality, you should be using BOTH. Static analysis gives you a sense of how secure your code is. Penetration testing finds easily exploitable vulnerabilities. If you are concerned about false positives with static analysis, check out Fortify Program Trace Analyzer (PTA). PTA does static analysis automatically while you are doing functional testing, and the results are extremely conservative: if PTA finds a vulnerability, you can usually take it to the bank.
The company I work for, Fortify Software, is in the news.
Software security really has gone from being a “nice to have” to a necessity.