Re: [SC-L] Security in QA is more than exploits
On Wed, Feb 4, 2009 at 7:26 PM, Paco Hope wrote:

> Andy also said "I think we lose something when we start saying 'everything
> is relative.'" I think we lose something more important if we try to
> impose absolutes: we lose the connection to the business. No business
> operates on absolutes and blind imperatives. Few, if any, profit-focused
> businesses dogmatically fix all remotely exploitable SQL injections.
> Every business looks pragmatically at these things. Fixing the bug might
> cause the release of the product to slip by 6 weeks, or a major customer
> to buy a competitor's product this quarter instead of waiting for the
> release. It's always a judgment call by the business. Even if their goal
> and their track record is fixing 100% of sev 1 issues before release, you
> know that each sev 1 issue was considered in terms of its cost, impact,
> schedule delay, and so on.

The point here, though, is that repeatable processes do matter. Having what constitutes a given severity of bug standardized in a policy statement is a good thing. Sure, that is hard because every application is different, but you need a starting place. So while my standards don't say "XSS always equals P1," they do say "XSS that can be discovered in an external-facing application," or something only slightly more generic than that. My bug priority matrix does talk about business impact, because that is what matters, but I still have to give real-world examples to folks who aren't expert security testers of how to handle a bug when they come across it. And we need to provide clear guidance in standards, because every single bug shouldn't require an ad-hoc triage process.

> It is an outstanding idea for infosec guys to provide security test
> cases, or the framework for them, to QA. That beats the heck out of what
> they usually do. However, a bunch of test cases for XSS, CSRF, SQL
> injection and so on will not map easily to requirements or to the QA
> workflow. At what priority do they execute? When the business
> (inevitably) squeezes testing to try to claw back a day or two on the
> slipped schedule, can any of these security tests be left out? Why or why
> not? Without hanging them into the QA workflow with clear traceability,
> QA will struggle to prioritize them correctly and maintain them. Security
> requirements would make that priority and maintenance straightforward. At
> this point I'm not disagreeing with you, but taking your good approach
> and extending it a step farther.

I understand this, but handing security requirements to QA folks without a set of repeatable test cases for exercising them isn't going to help much in most organizations. James Whittaker doesn't work for me. :) If you're developing web applications, you're probably going to have some set of standardized testing you do. You need a repository of test cases for certain things, and I think testing for certain types of attacks is probably a decent starting point. Sure, you want QA to own those, but if you're worried about buffer overflows, you're going to have a bunch of standard test cases, test scenarios, and test data (long input strings, inputs with null bytes in them, etc.) that you're going to reuse many times, so that each tester isn't starting from scratch when they see the security requirement "Application must handle input properly and not crash." I don't think we're far off here in what we're saying, but repeatability is key.
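To make "repeatable" concrete, here is a minimal sketch of what such a shared test-data corpus might look like (Python with pytest; the payload list and names are illustrative, not a complete or authoritative standard):

    # hostile_inputs.py -- a sketch of a shared input-handling test corpus.
    import pytest

    HOSTILE_INPUTS = [
        "A" * 10000,                  # very long input string
        ("A" * 100) + "\x00" + "B",   # input with an embedded null byte
        "\x00",                       # bare null byte
        "%s%s%s%n",                   # format-string metacharacters
        "'; DROP TABLE users; --",    # classic SQL injection probe
        "<script>alert(1)</script>",  # classic XSS probe
    ]

    def handle_input(value):
        # Hypothetical stand-in: wire in the real entry point under test.
        return value.replace("\x00", "")

    @pytest.mark.parametrize("payload", HOSTILE_INPUTS)
    def test_input_does_not_crash(payload):
        # Exercises "application must handle input properly and not
        # crash": any unhandled exception here fails the test.
        handle_input(payload)

A corpus like this is exactly the kind of artifact every tester can reuse instead of starting from scratch.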
Leaving the interface with QA at the level of security requirements in a functional spec isn't going to cut it. And you're probably going to have a standardized set of security requirements covering a whole swath of your applications that you won't want to repeat ad nauseam in every single product/feature spec. This is the place for standards, policies, and testing guidelines, so that all of this becomes just part of the regular QA cycle.

--
Andy Steingruebl
stein...@gmail.com
Re: [SC-L] Security in QA is more than exploits
> For starters I believe you misinterpreted my comments on QA. I was in
> no way slamming their abilities. With this in mind, comments below.

Sorry about that. I am sensitive to the bias. I went to a very small company once (10 people total), and as I looked around I saw offices with big LCDs (I assumed management) and cubicles with multi-core, multi-monitor setups (I assumed developers). Then I saw an old wooden table with three 5-year-old HP desktops and a 15" tube monitor. I said to my host, the dev manager, "that's the QA workstation." He looked surprised and said it was. He asked how I knew. I said "because it's a piece of junk!" This plays out a lot in industry, both big and little. So I apologize if I misread and found bias where none was intended. I'm ever vigilant against it.

>> Before anyone talks about vulnerabilities to test for, we have
>> to figure out what the business cares about and why. What could
>> go wrong? Who cares? What would the impact be? Answers to those
>> questions drive our testing strategy, and ultimately our test plans
>> and test cases.
>
> We absolutely agree here. At the same time an externally exploitable
> SQL injection needs to get fixed.

Let me shock and appall people by saying "not necessarily." It is commonly believed that some bugs are so horrific that we can say, without considering the business context, "they must be fixed." Ten years ago we said this about buffer overflows: "If you find a buffer overflow, you *must* fix it immediately." Then we went into industry and found out that there were times when missing a market window was far more costly than releasing a known buffer overflow. Ditto for the most horrendous web vulnerability you can think of. I resist absolute statements like this.

As Andy Steingruebl pointed out, "you also prioritize around effort to test and avoid, right?" Of course. We all agree that the cost of the fix is weighed against the benefits of not fixing and an estimate of the impact of successful exploitation. And that's why it's always possible you'll find a bug that sounds horrible but is released anyway (a back-of-the-envelope sketch of that tradeoff follows below).

Andy also said "I think we lose something when we start saying 'everything is relative.'" I think we lose something more important if we try to impose absolutes: we lose the connection to the business. No business operates on absolutes and blind imperatives. Few, if any, profit-focused businesses dogmatically fix all remotely exploitable SQL injections. Every business looks pragmatically at these things. Fixing the bug might cause the release of the product to slip by 6 weeks, or a major customer to buy a competitor's product this quarter instead of waiting for the release. It's always a judgment call by the business. Even if their goal and their track record is fixing 100% of sev 1 issues before release, you know that each sev 1 issue was considered in terms of its cost, impact, schedule delay, and so on.

> In your experience do you find average QA people doing risk
> management?

Not all of them. Actually, our experiences parallel nicely. My point is that the weak QA practitioners we're seeing in the marketplace are not QA folks who are short on security training; they are QA folks who are short on QA training. When I find QA folks who are up to date on the state of the practice in modern QA, teaching them a little security is a lot easier. When we go to teach security to folks who are already behind in their basics, we're building a castle on shaky ground.
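To see why "not necessarily" can be the rational answer, here is the promised back-of-the-envelope sketch of the fix-or-ship judgment call (Python; every figure below is invented purely for illustration):

    # fix_or_ship.py -- illustrative only; all numbers are made up.
    def expected_annual_loss(p_exploit_per_year, impact_dollars):
        # Classic annualized loss expectancy: likelihood x impact.
        return p_exploit_per_year * impact_dollars

    cost_of_slip = 500_000        # hypothetical: 6-week delay, lost deal
    loss_if_shipped = expected_annual_loss(
        p_exploit_per_year=0.10,  # hypothetical likelihood estimate
        impact_dollars=2_000_000, # hypothetical breach impact
    )

    # 0.10 * 2,000,000 = 200,000 < 500,000, so under these invented
    # numbers the business might rationally ship now and patch later.
    print("Ship now" if loss_if_shipped < cost_of_slip else "Fix first")

Change the estimates and the answer flips, which is the whole point: it is a business judgment, not an absolute rule.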
> Actually the main goal of the article is that information security
> people need to set appropriate expectations as to what QA cares about
> as their primary business function. They need to factor in that the
> majority of QA people don't care about security as a primary job
> function, and that if infosec wants them to care they had better
> be prepared to speak their language and understand their needs

So I'll continue to violently agree with you. :) QA is a process of taking inputs in the form of requirements (use cases, stories, etc.) and producing evidence of correct behavior (in both expected and unexpected situations). If infosec wants to give QA something they can consume and use directly, security requirements would be a great artifact. They fit the QA workflow, render the security expectations explicit, and foster traceability and test case development.

It is an outstanding idea for infosec guys to provide security test cases, or the framework for them, to QA. That beats the heck out of what they usually do. However, a bunch of test cases for XSS, CSRF, SQL injection and so on will not map easily to requirements or to the QA workflow. At what priority do they execute? When the business (inevitably) squeezes testing to try to claw back a day or two on the slipped schedule, can any of these security tests be left out? Why or why not? Without hanging them into the QA workflow with clear traceability, QA will struggle to prioritize them correctly and maintain them. Security requirements would make that priority and maintenance straightforward. At this point I'm not disagreeing with you, but taking your good approach and extending it a step farther.
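As a sketch of what that traceability might look like in the QA workflow (Python with pytest; the requirement ID "SEC-012", the custom marker, and the client fixture are all invented for illustration):

    # Requirement-to-test traceability: each test carries the ID of
    # the security requirement it verifies. Custom markers like
    # "requirement" are declared in pytest.ini to avoid warnings.
    import pytest

    # Hypothetical requirement SEC-012:
    # "Session cookies must be marked HttpOnly and Secure."
    @pytest.mark.requirement("SEC-012")
    def test_session_cookie_flags(client):
        response = client.post("/login", data={"user": "a", "pw": "b"})
        cookie = response.headers.get("Set-Cookie", "")
        assert "HttpOnly" in cookie and "Secure" in cookie

With tags like that, QA can report exactly which security requirements still lack evidence, and which tests cannot safely be cut when the schedule gets squeezed.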
Re: [SC-L] Security in QA is more than exploits
On Wed, Feb 4, 2009 at 11:17 AM, Paco Hope wrote:

> Before anyone talks about vulnerabilities to test for, we have to figure
> out what the business cares about and why. What could go wrong? Who
> cares? What would the impact be? Answers to those questions drive our
> testing strategy, and ultimately our test plans and test cases.

Paco, I don't really read what Robert wrote this way. I think what this general "risk management" approach misses is that certain things are always going to be defects, bugs, etc. Sure, there are differences per business and per application. All bugs aren't created equal. But I think we lose something when we start saying "everything is relative." Each application, each business, each org needs a testing plan, a strategy, and a definition of what they care about. At the same time, there are going to be common types of tests that everyone performs. All Robert is pointing out is that if certain classes of vulnerabilities are important to you, then you want to have a common testing process for them.

> Bias #3 is the idea that a bunch of web vulnerabilities are equivalent in
> impact to the business. That is, you just toss as many as you can into
> your test plan and test for as much as you can. This isn't how testing is
> prioritized.

Again, I don't think he's saying this at all. Where I work, every XSS is absolutely critical, and we get them fixed immediately. This might not be the case elsewhere; some folks don't really worry about XSS that much. But just because I can find differences doesn't mean that everything is relative. Authentication bypass, SQL injection: these types of things tend to rate High/P1/Major for almost everyone, and I think rightly so.

> You don't organize testing based on which top X vulnerabilities are
> likely to affect your organization (as the blog suggests). Likelihood is
> one part of the puzzle. Business impact is the part that is missing. You
> prioritize security tests by risk severity—that marriage of likelihood
> and impact to the business. If I have a whole pile of very likely attacks
> that are all low or negligible impact, and I have a few moderately likely
> attacks that have high impact, I should prioritize my testing effort
> around the ones with greater impact to my business.

Again, fair enough. But at the same time you also prioritize around effort to test and avoid, right?

> Bias #4 is the treatment of testers like second-class citizens. In the
> blog article, developers are "detail oriented" and have a "deep
> understanding of flows." Contrast this with QA, who merely understand
> "what is provided to them." They sound impotent, as if all they can do is
> what they're told. Software testing, despite whatever firsthand
> experience the author may have, is a mature discipline. It is older and
> more formalized than "security" as a discipline. Software testing is
> older than the Internet or the web. If software testing as a discipline
> has adopted security too slowly, given security's rise to the forefront
> in the marketplace, that might be a legitimate criticism. But I don't
> approve of slandering QA by implying that they just take what's given
> them and execute it. QA is hard, and there are some really bright minds
> working in that field.

I don't think Robert's comments were about the general field/discipline of QA. His commentary was more about the types of QA organizations he has come across. My own experience (albeit limited as well) has found a relative lack of highly skilled QA folks too.
There are people responsible for quality who are at the level you're talking about, but I'd still bet they are more the exception than the rule. Most QA organizations are staffed with people writing relatively simple tests, running through positive functional testing, etc. I think the point here is that you have to tailor expectations to the organization you have, much in the same way that if you have mostly junior programmers who are lucky to get their code to compile, you're probably not going to have a lot of luck training them on formal proofs, rigorous design, etc.

--
Andy Steingruebl
stein...@gmail.com
Re: [SC-L] Security in QA is more than exploits
For starters, I believe you misinterpreted my comments on QA. I was in no way slamming their abilities. With this in mind, comments below.

> Before anyone talks about vulnerabilities to test for, we have to figure
> out what the business cares about and why. What could go wrong? Who
> cares? What would the impact be? Answers to those questions drive our
> testing strategy, and ultimately our test plans and test cases.

We absolutely agree here. At the same time, an externally exploitable SQL injection needs to get fixed. The way QA/development is informed of its impact is through education, likely via training. Not a single company with average hiring/skill requirements will have everybody (who needs to) know what SQL injection is and why it is bad.

> Bias #3 is the idea that a bunch of web vulnerabilities are equivalent in
> impact to the business. That is, you just toss as many as you can into
> your test plan and test for as much as you can. This isn't how testing is
> prioritized.

I said "A better approach in my opinion is to identify the top 10/25/x attacks/weaknesses/vulnerabilities that are likely to affect your own organization." These would be associated with customer or business impacts likely to affect you. Perhaps this could have been articulated better.

> As someone who has been training in risk-based security testing for
> several years now, I totally agree with some points, but very much
> disagree with others. I agree that the "bug parade" (as we call it) of
> top X vulnerabilities to find is the wrong way to teach security testing.
> Risk management, though, has been a fundamental part of mainstream QA for
> a very long time. Likewise, risk management is the same technique that
> good "security people" use to prioritize their results. Risk management
> is certainly how the business is going to make decisions about which
> issues to remediate and when. Risk management is what ties this all
> together.

We agree.

> If there's something that QA needs to learn that they're not already
> learning, it's the weaving of "security" into the risk management
> techniques they already know how to do. If testers fall short in their
> ability to apply risk management techniques, then they are falling short
> against the QA yardstick; there's nothing particularly security-related
> in this observation.

In your experience, do you find average QA people doing risk management? In my experience, a senior QA person/team lead identifies what is going to be tested for a given release, and they are usually the ones writing and tracking the test plans.

> So, in some ways we agree: speak the lingo of QA. But in other ways we
> disagree. I think the original article fails to give credit to the
> decades of substantial research and practice in QA. In other words, it's
> a lot more than speaking the language. It is standing on the shoulders of
> giants, not their toes.

Actually, the main goal of the article is that information security people need to set appropriate expectations as to what QA cares about as their primary business function. They need to factor in that the majority of QA people don't care about security as a primary job function, and that if infosec wants them to care, they had better be prepared:

- to speak their language and understand their needs
- to customize and prioritize the security testing they may be doing, instead of solely using generic top X lists

> Paco

Have a fantastic day Paco!
Regards,
- Robert
Re: [SC-L] Security in QA is more than exploits
All,

I just read Robert's blog entry about "re-aligning training expectations for QA." (http://bit.ly/157Pc3) It has some useful points that both developers and so-called "security people" need to hear. I disagree with some implicit biases, however, and I think we need to get past some stereotypes that sneak out in the article.

Bias #1, obviously, is the focus on the web. Despite its omnipresence, there is more non-web software than web software in the world, and non-web software does more important stuff than all the web software combined. The role of security in _software_ testing is vital, and the presence or absence of web technologies does not change that. Despite writing a recent book on web security testing, I know my place in the universe. Quality assurance and software testing are disciplines far older than the web, and their mission is so much bigger than finding vulnerabilities.

Bias #2 is vulnerabilities über alles. By talking about weaving vulnerabilities into security test plans, we've overlooked the first place where security goes into the QA process: test strategy. Look at any of the prominent folks in QA (Jon Bach, Michael Bolton, Rex Black, Cem Kaner), the people I'm privileged to share podiums with at QA conferences like STAR West, STAR East, and Better Software, and you'll see that security is part of the overall risk-based testing strategy. Risk-based testing has been around for a really long time. Longer than the web. Before anyone talks about vulnerabilities to test for, we have to figure out what the business cares about and why. What could go wrong? Who cares? What would the impact be? Answers to those questions drive our testing strategy, and ultimately our test plans and test cases.

Bias #3 is the idea that a bunch of web vulnerabilities are equivalent in impact to the business. That is, you just toss as many as you can into your test plan and test for as much as you can. This isn't how testing is prioritized. You don't organize testing based on which top X vulnerabilities are likely to affect your organization (as the blog suggests). Likelihood is one part of the puzzle. Business impact is the part that is missing. You prioritize security tests by risk severity—that marriage of likelihood and impact to the business. If I have a whole pile of very likely attacks that are all low or negligible impact, and I have a few moderately likely attacks that have high impact, I should prioritize my testing effort around the ones with greater impact to my business.

Bias #4 is the treatment of testers like second-class citizens. In the blog article, developers are "detail oriented" and have a "deep understanding of flows." Contrast this with QA, who merely understand "what is provided to them." They sound impotent, as if all they can do is what they're told. Software testing, despite whatever firsthand experience the author may have, is a mature discipline. It is older and more formalized than "security" as a discipline. Software testing is older than the Internet or the web. If software testing as a discipline has adopted security too slowly, given security's rise to the forefront in the marketplace, that might be a legitimate criticism. But I don't approve of slandering QA by implying that they just take what's given them and execute it. QA is hard, and there are some really bright minds working in that field.

As someone who has been training in risk-based security testing for several years now, I totally agree with some points, but very much disagree with others.
I agree that the "bug parade" (as we call it) of top X vulnerabilities to find is the wrong way to teach security testing. Risk management, though, has been a fundamental part of mainstream QA for a very long time. Likewise, risk management is the same technique that good "security people" use to prioritize their results. Risk management is certainly how the business is going to make decisions about which issues to remediate and when. Risk management is what ties this all together.

If there's something that QA needs to learn that they're not already learning, it's the weaving of "security" into the risk management techniques they already know how to do. If testers fall short in their ability to apply risk management techniques, then they are falling short against the QA yardstick; there's nothing particularly security-related in this observation. If they are applying mature QA practices with modern risk management, but are not adequately addressing the software-induced business risks facing their stakeholders, then some security training might be in order. That security training should be built on the foundation of modern QA practice, including risk-based testing.

So, in some ways we agree: speak the lingo of QA. But in other ways we disagree. I think the original article fails to give credit to the decades of substantial research and practice in QA. In other words, it's a lot more than speaking the language. It is standing on the shoulders of giants, not their toes.
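To make that marriage of likelihood and impact concrete, here is a minimal sketch of ranking security tests by risk severity (Python; every risk name, likelihood, and dollar figure below is invented for illustration):

    # risk_ranking.py -- orders candidate security tests by
    # risk severity = likelihood x business impact. Illustrative only.
    candidate_tests = [
        # (name, likelihood 0-1, business impact in dollars)
        ("XSS on marketing pages",       0.9,     5_000),
        ("SQL injection in billing",     0.3, 1_000_000),
        ("CSRF on profile settings",     0.5,    50_000),
        ("Auth bypass on admin console", 0.2, 2_000_000),
    ]

    def severity(test):
        _, likelihood, impact = test
        return likelihood * impact

    for name, likelihood, impact in sorted(
            candidate_tests, key=severity, reverse=True):
        print(f"{name}: severity ${likelihood * impact:,.0f}")

    # Resulting order: auth bypass, SQL injection, CSRF, then XSS.
    # The very likely but low-impact XSS ranks last, even though it
    # would top a list sorted by likelihood alone.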
Re: [SC-L] Security in QA is more than exploits
"Before anyone talks about vulnerabilities to test for, we have to figure out what the business cares about and why. What could go wrong? Who cares? What would the impact be? Answers to those questions drive our testing strategy, and ultimately our test plans and test cases." We have to figure out what the __customer__ cares about and why. Often times, the business areas don't have a clue about their customers. The business areas throw web applications into the webiverse and hope someone will bite. What is going to keep customers? What is going to drive customers away? My 2 cents Dave David Wieneke, MSIT-IS, GSEC IT Security Engineer Security Operations and Administration CUNA Mutual Group dave.wien...@cunamutual.com -Original Message- From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org] On Behalf Of Paco Hope Sent: Wednesday, February 04, 2009 1:18 PM To: SC-L@securecoding.org Subject: Re: [SC-L] Security in QA is more than exploits All, I just read Robert's blog entry about "re-aligning training expectations for QA." (http://bit.ly/157Pc3) It has some useful points that both developers and so-called "security people" need to hear. I disagree with some implicit biases, however, and I think we need to get past some stereotypes that sneak out in the article. Bias #1, obviously, is the focus on the web. Despite its omnipresence, there is more non-web software than web software in the world, and non-web software does more important stuff than all the web software combined. The role of security in _software_ testing is vital, and the presence or absence of web technologies does not change that. Despite writing a recent book on Web Security Testing, I know my place in the universe. Quality assurance and software testing are disciplines far older than the web, and their mission is so much bigger than finding vulnerabilities. Bias #2 is vulnerabilities über alles. By talking about weaving vulnerabilities into security test plans, we've overlooked the first place where security goes into the QA process: test strategy. Look at any of the prominent folks in QA (Jon Bach, Michael Bolton, Rex Black, Cem Kaner), the people I'm privileged to share podiums with at QA conferences like STAR West, STAR East, and Better Software, and you'll see that security is part of the overall risk-based testing strategy. Risk-based testing has been around for a really long time. Longer than the web. Before anyone talks about vulnerabilities to test for, we have to figure out what the business cares about and why. What could go wrong? Who cares? What would the impact be? Answers to those questions drive our testing strategy, and ultimately our test plans and test cases. Bias #3 is that idea that a bunch of web vulnerabilities are equivalent in impact to the business. That is, you just toss as many as you can into your test plan and test for as much as you can. This isn't how testing is prioritized. You don't organize testing based on which top X vulnerabilities are likely to affect your organization (as the blog suggests). Likelihood is one part of the puzzle. Business impact is the part that is missing. You prioritize security tests by risk severity-that marriage of likelihood and impact to the business. If I have a whole pile of very likely attacks that are all low or negligible impact, and I have a few moderately likely attacks that have high impact, I should prioritize my testing effort around the ones with greater impact to my business. 
Bias #4 is the treatment of testers like second class citizens. In the blog article, developers are "detail oriented" have a "deep understanding of flows." Constrast this with QA who merely understand "what is provided to them." They sound impotent, as if all they can do is what they're told. Software testing, despite whatever firsthand experience the author may have, is a mature discipline. It is older and more formalized than "security" as a discipline. Software testing is older than the Internet or the web. If software testing as a discipline has adopted security too slowly, given security's rise to the forefront in the marketplace, that might be a legitimate criticism. But I don't approve of the slandering QA by implying that they just take what's given them and execute it. QA is hard and there are some really bright minds working in that field. As someone who has been training in risk-based security testing for several years now, I totally agree with some points, but very much disagree with others. I agree that the "bug parade" (as we call it) of top X vulnerabilities to find is the wrong way to teach security testing. Risk management, though, has been a fundamental part of mainstream QA for a very long time. Likewise, risk management is the same