Security testing has recently moved beyond the realm of network port scanning to include probing software behavior as a critical aspect of system behavior. Unfortunately, testing software security is a commonly misunderstood task. Done properly, security testing goes deeper than simple black-box probing of the presentation layer (the sort performed by so-called application security tools), and even beyond the functional testing of security apparatus.
Testers must use a risk-based approach, grounded in both the system’s architectural reality and the attacker’s mindset, to gauge software security adequately. By identifying risks in the system and creating tests driven by those risks, a software security tester can properly focus on areas of code in which an attack is likely to succeed. This approach provides a higher level of software security assurance than is possible with classical black-box testing.
What’s so different about security?
Software security is about making software behave correctly in the presence of a malicious attack, even though in the real world software failures usually happen spontaneously, that is, without intentional mischief. Not surprisingly, standard software testing literature is concerned only with what happens when software fails, regardless of intent. The difference between software safety and software security is therefore the presence of an intelligent adversary bent on breaking the system.
Security is always relative to the information and services being protected, the skills and resources of adversaries, and the costs of potential assurance remedies; security is an exercise in risk management. Risk analysis, especially at the design level, can help us identify potential security problems and their impact. Once identified and ranked, software risks can then help guide software security testing.
A vulnerability is an error that an attacker can exploit. Many types of vulnerabilities exist, and computer security researchers have created taxonomies of them.2 Security vulnerabilities in software systems range from local implementation errors (such as use of the gets() function call in C/C++), through interprocedural interface errors (such as a race condition between an access control check and a file operation), to much higher-level design mistakes (such as error handling and recovery systems that fail in an insecure fashion, or object-sharing systems that mistakenly include transitive trust issues). Vulnerabilities typically fall into two categories: bugs at the implementation level and flaws at the design level.
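To make the implementation-bug category concrete, here is a minimal C sketch of the gets() problem mentioned above (the function and buffer names are our own illustration, not from any real code base). gets() copies input with no bounds check, so input longer than 64 bytes overwrites adjacent stack memory; fgets() bounds the copy and avoids the bug. (gets() was removed from the C standard in C11, but it survives in plenty of legacy code.)

    #include <stdio.h>
    #include <string.h>

    /* Implementation-level bug: gets() performs no bounds checking,
       so any input longer than the buffer overwrites adjacent stack
       memory. (gets() was removed from the language in C11.) */
    void read_name_unsafe(void) {
        char name[64];
        gets(name);                 /* vulnerable: unbounded copy */
        printf("Hello, %s\n", name);
    }

    /* Safer variant: fgets() bounds the read to the buffer size. */
    void read_name_safer(void) {
        char name[64];
        if (fgets(name, sizeof name, stdin) != NULL) {
            name[strcspn(name, "\n")] = '\0';   /* strip newline */
            printf("Hello, %s\n", name);
        }
    }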
Attackers generally don’t care whether a vulnerability is due to a flaw or a bug, although bugs tend to be easier to exploit. Because attacks are now becoming more sophisticated, the notion of which vulnerabilities actually matter is changing. Although timing attacks, including the well-known race condition, were considered exotic just a few years ago, they’re common now. Similarly, two-stage buffer overflow attacks using trampolines were once the domain of software scientists, but now appear in 0day exploits.
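The race condition mentioned above is worth sketching, because it shows why timing attacks stopped being exotic: the window between two ordinary-looking system calls is all an attacker needs. The following minimal C sketch (the file path is hypothetical) checks permission with access() and then opens the file with open(); an attacker who swaps the file for a symbolic link between the two calls gets the privileged process to write somewhere the check was meant to forbid.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/tmp/report.txt";   /* hypothetical path */

        /* Time of check: may the real user write this file? */
        if (access(path, W_OK) == 0) {
            /* Race window: an attacker can replace the file with a
               symlink to, say, /etc/passwd between access() and open(). */
            int fd = open(path, O_WRONLY);      /* time of use */
            if (fd >= 0) {
                write(fd, "data\n", 5);
                close(fd);
            }
        }
        return 0;
    }

The usual remediation is to avoid path-based double checks altogether: drop privileges and let open() itself fail, or validate the already-open file descriptor with fstat() rather than re-examining the path name.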
Design-level vulnerabilities are the hardest defect category to handle, but they’re also the most prevalent and critical. Unfortunately, ascertaining whether a program has design-level vulnerabilities requires great expertise, which makes finding such flaws not only difficult, but particularly hard to automate.
Examples of design-level problems include error handling in object-oriented systems, object sharing and trust issues, unprotected data channels (both internal and external), incorrect or missing access control mechanisms, lack of auditing/logging or incorrect logging, and ordering and timing errors (especially in multithreaded systems). These sorts of flaws almost always lead to security risk.
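As one concrete instance, consider insecure error handling, the first flaw listed above. In the minimal C sketch that follows (the lookup function and its return convention are hypothetical), the authorization routine treats a lookup error the same as an explicit grant: the system fails open at the design level, and a line-by-line bug hunt will not flag it because every individual statement is correct.

    #include <stdbool.h>

    /* Hypothetical lookup: 1 = allowed, 0 = denied, -1 = lookup error
       (for example, the permission database is unreachable). */
    static int lookup_permission(const char *user, const char *resource) {
        (void)user; (void)resource;
        return -1;   /* simulate a lookup failure for illustration */
    }

    /* Design flaw: the error path fails open. A -1 result falls
       through to the final return and the request is granted. */
    static bool is_authorized_flawed(const char *user, const char *resource) {
        int rc = lookup_permission(user, resource);
        if (rc == 0)
            return false;   /* explicit denial */
        return true;        /* grants on 1 (allowed) and on -1 (error) */
    }

    /* Fail-closed repair: anything short of an explicit grant is denied. */
    static bool is_authorized(const char *user, const char *resource) {
        return lookup_permission(user, resource) == 1;
    }

Designing the default to be denial, so that errors and omissions fail closed, is exactly the kind of property a risk-based test plan should probe for.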