There are two types of security testing that can be performed on Web applications: static analysis and dynamic analysis. In addition, there are two ways of performing security tests: automated and manual.
Dynamic analysis involves performing tests on a running instance of an application and is also known as black box testing. The security test involves sending requests to the application and observing the responses for any indication that a security vulnerability may be present. Dynamic analysis can be an effective way to test applications, but it is important to understand its limitations. First, because the testing is based on analyzing request and response patterns, the results obtained are really only a guess about the internal state of the application -- the tester typically has no knowledge of the actual application source code or of the application's actual internal state. In addition, because the tester is only looking at the observable behavior of the application and cannot know the entire attack surface, there is a chance that areas of the application and components of its functionality will be excluded from the test. Also, some responses might not obviously indicate that a security vulnerability is present. These factors lead to the potential for false negatives -- situations where a security vulnerability goes unnoticed and unreported.
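As a rough illustration of this request-and-response guessing game, the Python sketch below checks a response body against a few well-known database error signatures. The payloads, signatures and canned responses here are hypothetical examples, not an authoritative set; a real tester would be sending live HTTP requests rather than inspecting canned strings.

```python
# Sketch of dynamic (black box) probing: the tester cannot see the
# application's internal state, so vulnerabilities are inferred from
# response patterns. Signatures below are illustrative examples only.

SQL_ERROR_SIGNATURES = [
    "you have an error in your sql syntax",
    "unclosed quotation mark",
    "ora-01756",
]

def looks_vulnerable(response_body: str) -> bool:
    """Return True if the response hints at a database error -- only a
    guess, which is why dynamic analysis can produce false negatives."""
    body = response_body.lower()
    return any(sig in body for sig in SQL_ERROR_SIGNATURES)

# A tester would send a request such as /search?q=' and inspect the
# reply. Here we simulate two canned responses instead of a live server.
error_page = "<html>You have an error in your SQL syntax near ''</html>"
clean_page = "<html>No results found.</html>"

print(looks_vulnerable(error_page))   # True
print(looks_vulnerable(clean_page))   # False
```

Note that a vulnerable application which suppresses its error pages would yield False here as well, which is exactly the false-negative risk described above.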
Dynamic analysis can be performed either in an automated manner or manually. Web application scanning tools like those from Watchfire and SPI Dynamics are good examples of automated dynamic analysis tools. Automated tools are good at finding many common vulnerabilities such as SQL injection and cross-site scripting (XSS). They will often also look for well-known security or configuration problems with the Web servers, application servers and operating systems of the applications they are testing, and their reports often flag things such as critical patches that have not been applied. This, however, leads to the identification of only technical flaws in the application. Automated tools are limited in that they have no understanding of the business logic of the applications they are testing, so logical flaws -- which can be just as common and potentially even more damaging -- will be overlooked. This is an important point for organizations implementing application security initiatives to take to heart: even if the scanner says you are clean, you still need to look deeper in order to do a credible job of assessing the security of an application.
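A simplified sketch of how such a scanner might test for reflected XSS: inject a unique marker into a parameter and check whether it comes back in the page unescaped. The send_request function below is a hypothetical stand-in for both the scanner's HTTP client and a deliberately vulnerable endpoint, not any real tool's API.

```python
# Illustrative sketch of an automated reflected-XSS check. A real scanner
# would issue HTTP requests; here send_request simulates a vulnerable
# endpoint that echoes the 'q' parameter back without encoding it.

MARKER = "<script>xss-probe-1234</script>"

def send_request(params: dict) -> str:
    """Hypothetical vulnerable endpoint: echoes 'q' back unencoded."""
    return f"<html>Results for {params.get('q', '')}</html>"

def reflected_xss(params: dict, field: str) -> bool:
    """Inject the marker into one field and test for raw reflection."""
    probe = dict(params, **{field: MARKER})
    body = send_request(probe)
    # Properly escaped output (&lt;script&gt;...) would not match.
    return MARKER in body

print(reflected_xss({"q": "shoes"}, "q"))  # True for this echoing endpoint
```

A rule like this fires purely on string reflection; it knows nothing about what the page is supposed to do, which is why business-logic flaws slip past it.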
Manual testing of Web applications is typically performed using a Web browser and a Web proxy tool like Paros or OWASP's WebScarab. The commercial scanning tools also typically come with proxies so that analysts can augment the scanner results with manual tests. Proxies allow the security analyst to create and send arbitrary requests to the application and inspect the results for evidence of security issues. As mentioned above, these manual tests -- looking for data leakage, failures to authorize activities and so on -- are required for a credible application security assessment.
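As a sketch of the kind of manual test an analyst might run through a proxy, the following Python fragment simulates replaying a valid request with another user's record identifier substituted in. The handler, record store and user names are all hypothetical; the point is the missing ownership check, a logical flaw of exactly the sort automated scanners tend to miss.

```python
# Simulated authorization test: an analyst intercepts a valid request in
# a proxy, swaps in another user's record id, and replays it. The data
# and handler below are hypothetical stand-ins for a real application.

RECORDS = {101: "alice's statement", 202: "bob's statement"}
OWNERS = {101: "alice", 202: "bob"}

def get_statement(session_user: str, record_id: int) -> str:
    # Flawed handler: it never checks that session_user actually owns
    # record_id -- a logical flaw invisible to signature-based scanners.
    return RECORDS.get(record_id, "not found")

# Logged in as alice, the analyst first fetches her own record, then
# edits the record id in the proxied request to bob's:
print(get_statement("alice", 101))  # alice's statement (expected)
print(get_statement("alice", 202))  # bob's statement -- authorization failure
```

Only a human who knows that record 202 belongs to bob can recognize the second response as a failure to authorize the activity.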
Where dynamic analysis is performed against an actual running installation of an application, static analysis involves reviewing application assets such as source code, configuration files and so on while they are static -- or at rest. This is also known as source code analysis or white box testing. Static analysis opens up opportunities for a more thorough analysis because the analyst has access to the "ground truth" of the source code. Analysts do not have to observe the behavior of an application and make guesses about the internal state of the system; instead they have access to the actual instructions the software will follow when put into production. This can help to reduce both false positives and false negatives. One drawback to static analysis is that it can fail to identify security issues bound up in the specific configuration of the deployed system -- for example, static analysis will not be able to identify issues that arise because administrators failed to install Web server or operating system patches.
Just as with dynamic or black box testing, static analysis can be performed either by automated tools or by manual review. Because non-trivial applications can have tens or hundreds of thousands -- or even millions -- of lines of source code, manual reviews are typically conducted only against the subset of the application source code considered to be security critical. Automated static analysis tools such as those from Fortify Software and Ounce Labs have the advantage that they can be run against large code bases, performing the analysis consistently and tirelessly across the entire source base. However, automated static analysis tools can only execute a set of rules that look for general quality and security flaws -- they have no understanding of the context of the application or the business rules the application should be enforcing. For this reason, automated static analysis tools have the same blindness to logical flaws that their dynamic analysis counterparts do. They are great at finding flaws like SQL injection, cross-site scripting and buffer overflows, but fall short in other critical areas.
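To make the idea of rule-based static analysis concrete, here is a minimal sketch in Python: it walks the abstract syntax tree of a code fragment and flags execute() calls whose SQL argument is built by string concatenation. This is an illustrative toy rule, not how the commercial tools named above actually work -- they use far richer rule sets and data-flow analysis.

```python
# Toy rule-based static analysis: parse source code into an AST and flag
# cursor.execute(...) calls whose first argument is assembled by string
# concatenation or an f-string -- a common SQL injection pattern.

import ast

SOURCE = '''
def find_user(cursor, name):
    cursor.execute("SELECT * FROM users WHERE name = '" + name + "'")
'''

def flag_sql_concat(source: str) -> list:
    """Return the line numbers of risky execute() calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], (ast.BinOp, ast.JoinedStr))):
            findings.append(node.lineno)
    return findings

print(flag_sql_concat(SOURCE))  # [3] -- line of the risky execute() call
```

The rule fires on any matching syntax, consistently, across however much code it is given -- but it has no idea whether the query it flags matters to the business, which is the blindness described above.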
Actual assessments of the security of Web applications often combine one or more of the techniques enumerated above, and the choice of assessment should be based on several factors, such as the resources available to perform the assessment and access to either the source code or a running system that can be used for testing. Running automated scans of either source code or running applications can be a relatively low-cost way to get some insight into the security state of a system, but it suffers from the critical inability, outlined above, to find logical application flaws. In many organizations it may be difficult to get access to actual source code because it is considered highly proprietary; in cases such as these, only dynamic analysis can be performed. Conversely, in other organizations it may be unacceptable for various reasons to run tests against live systems, and no suitable pre-deployment instances of the application may be available; in cases such as this, static analysis is the only option. Manual review -- of both live applications and source code -- can become expensive for large applications and so must be properly targeted. It is critical for organizations to understand the goals of their security assessment and the level of security assurance they need, and to select an application testing strategy appropriate to their goals and available resources.