[SC-L] Java DOS
There's a very interesting vulnerability in Java kicking around. I wrote about it here: http://blog.fortify.com/blog/2011/02/08/Double-Trouble

In brief, you can send Java (and some versions of PHP) into an infinite loop if you can provide some malicious input that will be parsed as a double-precision floating-point number. This code used to look like the beginnings of some decent input validation:

  Double.parseDouble(request.getParameter("d"));

Now it's the gateway to an easy DoS attack. (At least until you get a patch from your Java vendor, many of whom haven't released patches yet. Oracle has released a patch. Do you have it?)

Until a few days ago, all major releases of Tomcat made matters worse by treating part of the Accept-Language header as a double. In other words, you don't need to have any double-precision values in *your* code for your app to be vulnerable.

The SC-L corner of the world puts a lot of emphasis on training and on looking for known categories of vulnerabilities. That's all goodness. But this example highlights the fact that we have to build systems and procedures that can quickly adapt to address new risks.

Brian
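If you can't deploy the vendor patch right away, one stopgap is to screen raw parameters before they ever reach the parser. The sketch below is only an illustration (the class and method names are invented, and the screen catches just the canonical trigger string, not every variant); the patch is the real fix.

    // Stopgap sketch, not a substitute for the JDK patch (CVE-2010-4476).
    // It rejects input containing the digit sequence of the canonical
    // hang-inducing value, 2.2250738585072012e-308, before handing the
    // string to the JDK parser.
    public final class SafeDouble {
        private static final String BAD_DIGITS = "2250738585072012";

        public static double parse(String raw) {
            if (raw == null) {
                throw new NumberFormatException("null input");
            }
            // Crude screen: drop everything but digits, then look for the
            // dangerous significand. Variants padded with extra digits can
            // slip past this, which is why it is only a stopgap.
            String digits = raw.replaceAll("[^0-9]", "");
            if (digits.contains(BAD_DIGITS)) {
                throw new NumberFormatException("rejected suspicious double: " + raw);
            }
            return Double.parseDouble(raw);
        }
    }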
Re: [SC-L] What do you like better Web penetration testing or static code analysis?
I like your point Matt. Everybody who's responded thus far has wanted to turn this into a discussion about what's most effective or what has the most benefit, sort of like we were comparing which icky medicine to take or which overcooked vegetable to eat. Maybe they don't get any pleasure from the work itself.

It sounds as though you need to change up your static analysis style. A few years back we ran competitions at BlackHat where we found we could identify and exploit vulnerabilities starting from static analysis just as quickly as from fuzzing. Here's an overview:
http://reddevnews.com/Blogs/Desmond-File/2008/08/Iron-Chef-Competition-at-Black-Hat-Cooks-Up-Security-Goodness.aspx

Interviews with Charlie Miller and Sean Fay:
http://blog.fortify.com/blog/2009/05/02/Iron-Chef-Interviews-Part-1-Charlie-Miller-1-2
http://blog.fortify.com/blog/2009/05/02/Iron-Chef-Interviews-Part-2-Sean-Fay

Brian

On 4/23/10 7:05 AM, Matt Parsons mparsons1...@gmail.com wrote:

Gary, I was not stating which was better for security. I was stating what I thought was more fun. I feel that penetration testing is sexier. I find penetration testing like driving a Ferrari and static code analysis like driving a Ford Taurus. I believe with everyone else on this list that software security needs to be integrated early in the development life cycle. I have also read most of your books and agree with your findings. As you would say, I don't think that penetration testing is magic security pixie dust, but it is fun when you are doing it legally and ethically. My two cents.

Matt

Matt Parsons, MSM, CISSP
315-559-3588 Blackberry
817-294-3789 Home office
Do Good and Fear No Man
Fort Worth, Texas
A.K.A The Keyboard Cowboy
mailto:mparsons1...@gmail.com
http://www.parsonsisconsulting.com
http://www.o2-ounceopen.com/o2-power-users/
http://www.linkedin.com/in/parsonsconsulting
http://parsonsisconsulting.blogspot.com/
http://www.vimeo.com/8939668
http://twitter.com/parsonsmatt

-----Original Message-----
From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org] On Behalf Of Gary McGraw
Sent: Thursday, April 22, 2010 2:15 PM
To: Peter Neumann; Secure Code Mailing List
Subject: Re: [SC-L] What do you like better Web penetration testing or static code analysis?

I hereby resonate with my esteemed colleague and mentor pgn. But no puns from me.

gem

On 4/22/10 1:57 PM, Peter Neumann neum...@csl.sri.com wrote:

Matt Parsons wrote: What do you like doing better as application security professionals, web penetration testing or static code analysis?

McGovern, James F. (P+C Technology) wrote: Should a security professional have a preference when both have different value propositions? While there is overlap, a static analysis tool can find things that pen testing tools cannot. Likewise, a pen test can report on secure applications deployed insecurely, which is not visible to static analysis. So, the best answer is I prefer both...

Both is better than either one by itself, but I think Gary McGraw would resonate with my seemingly contrary answer: BOTH penetration testing AND static code analysis are still looking at the WRONG END of the horse AFTER it has left the DEVELOPMENT BARN. Gary and I and many others have for a very long time been advocating security architectures and development practices that greatly enhance INHERENT TRUSTWORTHINESS, long before anyone has to even think about penetration testing and static code analysis. This discussion is somewhat akin to arguments about who has the best malware detection.
If system developers (past-Multics) had paid any attention to system architectures and sound system development practices, viruses and worms would be mostly a nonproblem! Please pardon my soapbox.

The past survives. The archives have lives, not knives. High fives! (I strive to thrive with jive.)

PGN
Re: [SC-L] BSIMM update (informIT)
"At no time did it include corporations who use Ounce Labs or Coverity"

Bzzzt. False. While there are plenty of Fortify customers represented in BSIMM, there are also plenty of participants who aren't Fortify customers. I don't think there are any hard numbers on market share in this realm, but my hunch is that BSIMM is not far off from a uniform sample in this regard.

Brian

-----Original Message-----
From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org] On Behalf Of Kenneth Van Wyk
Sent: Wednesday, February 03, 2010 4:08 PM
To: Secure Coding
Subject: Re: [SC-L] BSIMM update (informIT)

On Jan 28, 2010, at 10:34 AM, Gary McGraw wrote: Among other things, David and I discussed the difference between descriptive models like BSIMM and prescriptive models which purport to tell you what you should do.

Thought I'd chime in on this a bit, FWIW... From my perspective, I welcome BSIMM and I welcome SAMM. I don't see it in the least as a one-or-the-other debate. A decade(ish) since the first texts on various aspects of software security started appearing, it's great to have a BSIMM that surveys some of the largest software groups on the planet to see what they're doing. What actually works. That's fabulously useful. On the other hand, it is possible that ten thousand lemmings can be wrong. Following the herd isn't always what's best. SAMM, by contrast, was written by some bright, motivated folks, and provides us all with a set of targets to aspire to. Some will work, and some won't, without a doubt.

To me, both models are useful as guide posts to help a software group--an SSG if you will--decide what practices will work best in their enterprise. But as useful as both SAMM and BSIMM are, I think we're all fooling ourselves if we consider these to be standards or even maturity models. Any other engineering discipline on the planet would laugh us all out of the room by the mere suggestion. There's value to them, don't get me wrong. But we're still in the larval mode of building an engineering discipline here, folks. After all, as a species, we didn't start (successfully) building bridges in a decade. For now, my suggestion is to read up, try things that seem reasonable, and build a set of practices that work for _you_.

Cheers,

Ken

-
Kenneth R. van Wyk
KRvW Associates, LLC
http://www.KRvW.com
Re: [SC-L] Insecure Java Code Snippets
We keep a big catalog here: http://www.fortify.com/vulncat

On 5/6/09 10:41 AM, Brad Andrews andr...@rbacomm.com wrote:

Does anyone know of a source of insecure Java snippets? I would like to get some for a monthly meeting of leading technical people. My idea was to have a "find the bug" exercise like the old C-Lint ads. Does anyone know of a source of something like this?

Brad
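For anyone who wants a quick sample to kick off that kind of meeting, here is a small self-contained snippet in the same spirit (a made-up example for illustration, not taken from the catalog above). The bug to find is SQL injection via string concatenation.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class AccountLookup {
        // Find the bug: the caller-supplied owner name is concatenated
        // straight into the query, so input such as ' OR '1'='1 changes
        // the query's meaning. The fix is a PreparedStatement with a
        // bind variable.
        public ResultSet findAccount(Connection conn, String ownerName) throws Exception {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery(
                "SELECT id, balance FROM accounts WHERE owner = '" + ownerName + "'");
        }
    }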
Re: [SC-L] Positive impact of an SSG
Ben! Thank you! When you talk about sample size, it gives me hope that we're on the right track. We can either: 1) use ideas that "experts" theorize will work, or 2) gather empirical evidence to judge one idea against another. We in the security crowd often try to hide behind the need for secrecy, and that's pushed us toward relying almost entirely on people who have nothing but rhetoric and personal reputation to stand on. BSIMM pretty well shows that, in 2009, we can do better. It's a big step forward to collect data and then argue about what it means. I know it's already made the rounds, but let's have some XKCD to celebrate: http://xkcd.com/552/

I think your question about defining success is an important one. We were loose about it in this first round, and I hope it's something we can tighten up in our follow-on work. Here's my thinking as of today: software security is not the goal; it's one of the many things an organization needs to do in order to meet its objectives. We need to look at how a software security initiative (or lack thereof) affects the organization's ability to meet its objectives. This is going to be messy, but it's either that or go back to making stuff up.

BTW, I checked the BSIMM web site after I read your message. It worked for me. Try this? http://www.downforeveryoneorjustme.com/bsi-mm.com

Brian

On 3/11/09 10:48 AM, Benjamin Tomhave list-s...@secureconsulting.net wrote:

I think it's an interesting leap of faith. Statistically speaking, 9 is a very small sample size. Thus, the proposed model will be viewed skeptically until it is validated with a much larger and more diverse sample. Putting it another way, there's no way I can take this to a small or medium sized org and have them see immediate relevance, because their first reaction is going to be "those are 9 large orgs with lots of resources - we don't have that luxury." You quoted "we can say with confidence that these activities are commonly found in highly successful programs" - how do you define a highly successful program? What's the rule or metric? Is this a rule or metric that can be genericized easily to all development teams?

My concern is exactly what you speculate about... organizations are going to look at this and either try to tackle everything (and fail) or decide there's too much to tackle (and quit). In my experience working with maturity models as a consultant, very few people actually understand the concept. Folks are far more tuned-in to a PCI-like prescriptive method. Ironically, the PCI folks say the same thing you did - that it's not meant to be prescriptive, that it's supposed to be based on risk management practices - yet look how it's used. Maybe you've addressed this, but it doesn't sound like it. I'd perhaps be better educated here if the web site wasn't down... ;)

-ben

Sammy Migues wrote:

Hi Pravir,

Thanks for clarifying what you're positing. I'm not sure how we could have been more clear in the BSIMM text accompanying the exposition of the collective activities about the need to take this information and work it into your own culture (i.e., do risk management). As a few examples:

p. 3: BSIMM is meant as a guide for building and evolving a software security initiative. As you will see when you familiarize yourself with the BSIMM activities, instilling software security into an organization takes careful planning and always involves broad organizational change.
By clearly noting objectives and goals and by tracking practices with metrics tailored to your own initiative, you can methodically build software security in to your organization's software development practices.

p. 47: Choosing which of the 110 BSIMM activities to adopt and in what order can be a challenge. We suggest creating a software security initiative strategy and plan by focusing on goals and objectives first and letting the activities select themselves. Creating a timeline for rollout is often very useful. Of course learning from experience is also a good strategy.

p. 47: Of the 110 possible activities in BSIMM, there are ten activities that all of the nine programs we studied carry out. Though we can't directly conclude that these ten activities are necessary for all software security initiatives, we can say with confidence that these activities are commonly found in highly successful programs. This suggests that if you are working on an initiative of your own, you should consider these ten activities particularly carefully (not to mention the other 100).

p. 48: The chart below shows how many of the nine organizations we studied have adopted various activities. Though you can use this as a rough "weighting" of activities by prevalence, a software security initiative plan is best approached through goals and objectives.

Your words (...BSIMM fails...) imply (to me) that you posit organizations
Re: [SC-L] Some Interesting Topics arising from the SANS/CWE Top 25
In the one sense, we are talking about validating user input, which mostly needs to concern itself with adhering to business requirements. This meaning is not very important for security, but the other one, validating data before something is done with it, is.

Yes, two forms of validation are required. If you hang around with the compilers crowd for too long, you'll call them syntax validation and semantic validation. Syntax: "the input must be an integer." Semantics: "the input must identify an account held in your name."

It's often possible and even desirable to perform syntax checking not long after a program accepts its input. You can bottleneck a program and make sure all input runs through a syntax validation layer. Not so with semantic checks. In many cases they are so closely related to the program logic that ripping them out and creating a "semantic validation layer" would essentially double the length of the program and create a maintenance nightmare.

So which form of input validation is security input validation? Both! In most cases you can't afford to skip either one. Bad or absent syntax checks lead to generic kinds of problems like SQL injection. Bad or absent semantic checks lead to problems that are often more specific to the application at hand.

There's a lot to say about input validation. Jacob West and I devoted a full chapter to it in Secure Programming with Static Analysis (http://www.amazon.com/dp/0321424778), but we found that the material refused to stay in its cage: input validation got a lot of airtime when we talked about the Web, when we talked about privileged programs, and then again when we got around to the litany of common errors in C/C++ programs.

Brian

On 1/14/09 2:02 PM, Ivan Ristic ivan.ris...@gmail.com wrote:

On Wed, Jan 14, 2009 at 12:41 AM, Greg Beeley greg.bee...@lightsys.org wrote: Steve I agree with you on this one. Both input validation and output encoding are countermeasures to the same basic problem...

I'd like to offer a different view for your consideration, which is that input validation and output encoding actually don't have anything to do with security. Those techniques are essential software building blocks. While it is true that omission to use these techniques often causes security issues, that only means such programs are insecure in addition to being defective. I think that it's inherently wrong to associate input validation and output encoding with security. Fix the defects and the security issues will go away. On the other hand, if you only fix the security issues you may be left with a number of defects on your hands. Input validation layers should focus on accepting only valid data (per business requirements), while code that transmits data across system boundaries should focus on using the exchange and communication protocols correctly.

Actually, now that I think about it more, I think we are struggling with the term input validation because the term has been overloaded. In the one sense, we are talking about validating user input, which mostly needs to concern itself with adhering to business requirements. This meaning is not very important for security, but the other one, validating data before something is done with it, is. If you take a web application for example, you would ideally verify that all user submitted data adheres to your business requirements.
--
Ivan Ristic
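To make Brian's syntax/semantic split concrete, here is a minimal sketch (the class, method, and helper names are invented for illustration): the first check can sit in a generic validation layer, while the second needs application context and lives next to the business logic.

    // Illustrative sketch only; User and AccountStore are stand-ins
    // invented for this example.
    interface User {}
    interface AccountStore {
        boolean isOwnedBy(long accountId, User caller);
    }

    public class TransferHandler {
        // Syntax check: cheap and generic, so it can run in a bottleneck
        // layer as soon as the program accepts its input.
        long parseAccountId(String raw) {
            if (raw == null || !raw.matches("\\d{1,12}")) {
                throw new IllegalArgumentException("account id must be a number");
            }
            return Long.parseLong(raw);
        }

        // Semantic check: "an account held in your name" requires knowing
        // who the caller is, so it stays close to the program logic.
        void checkOwnership(long accountId, User caller, AccountStore store) {
            if (!store.isOwnedBy(accountId, caller)) {
                throw new SecurityException("account is not held in the caller's name");
            }
        }
    }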
Re: [SC-L] top 10 software security surprises
Thanks Ken. For me this has been an incredibly eye-opening project. It can be hard for people to distinguish between ideas that merely look good on paper and ideas that are actually in widespread use. Once we've cleaned up the data and gotten approval from the organizations we canvassed, we think there'll be plenty of ways to apply what we've learned. The project Pravir mentioned is one.

Brian

[Ed. This was from br...@fortify.com, but was dropped by the Mailman server since it's set to ignore emails from non-subscribed addresses (spam...). Issue was resolved re Brian's email address.]

On 12/17/08 11:48 AM, Ken van Wyk k...@krvw.com wrote:

On Dec 16, 2008, at 1:25 PM, Gary McGraw wrote: Using the software security framework introduced in October (A Software Security Framework: Working Towards a Realistic Maturity Model, http://www.informit.com/articles/article.aspx?p=1271382), we interviewed nine executives running top software security programs in order to gather real data from real programs.

Wow, this is great stuff. Kudos to Gary, Sammy, and Brian. I have a couple comments/observations on some of your conclusions:

- You obviously wrote the top-10 list in C, since it went from 9 to 0. :-)

- "Not only are there no magic software security metrics, bad metrics actually hurt." This is an excellent point. I think it's also worth noting that it's important to carefully consider what metrics make sense for an organization _as early as possible_ in the life of their software security efforts. Trying to retro-engineer some metrics into a program after the fact is not a fun thing.

- "Secure-by-default frameworks can be very helpful, especially if they are presented as middleware classes (but watch out for an over focus on security stuff)." Yes yes yes! I've found significantly more traction to prescriptive guidance vs. a "don't do this" list of bad practices. Plus, it inherently supports a mindset of positive validation instead of negative. It's important to look for common mistakes, but if you really want your devs to follow, give them clear coding guidelines with annotated descriptions of how to follow them. Efforts like OWASP's ESAPI are indeed a great starting point here for plugging in things like strong positive input validation and such.

- "Web application firewalls are not in wide use, especially not as Web application firewalls." I can't say I'm much surprised by this one. Even with PCI-DSS driving people to WAFs (or do external independent code reviews), I just don't see them often. But you go on to say, "But even these two didn't use them to block application attacks; they used them to monitor Web applications and gather data about attacks"--but you don't come back to this point. One serious benefit to WAFs can be enhancing the ability to do monitoring, especially of legacy apps. Adding one network choke point WAF can quickly add an app-level monitoring capability that few organizations considered when rolling the apps out in the first place.

- "Though software security often seems to fit an audit role rather naturally, many successful programs evangelize (and provide software security resources) rather than audit even in regulated industries." This one too is very encouraging to see.

- "Architecture analysis is just as hard as we thought, and maybe harder." And this one is very discouraging. I've seen good results in doing architectural risk analyses, but the ones that produce useful results tend to be the more ad hoc ones -- and NOT the ones that follow rigorous processes.
- "All nine programs we talked to have in-house training curricula, and training is considered the most important software security practice in the two most mature software security initiatives we interviewed." That explains the quarter-million miles in my United account this year alone. :-) Ugh.

- "Though all of the organizations we talked to do some kind of penetration testing, the role of penetration testing in all nine practices is diminishing over time." Hallelujah!

Cheers,

Ken

-
Kenneth R. van Wyk
SC-L Moderator
KRvW Associates, LLC
http://www.KRvW.com
[SC-L] International Symposium on Engineering Secure Software and Systems (ESSoS)
CALL FOR PAPERS
===
International Symposium on Engineering Secure Software and Systems (ESSoS)
February 04-06, 2009, Leuven, Belgium
http://distrinet.cs.kuleuven.be/events/essos2009/

CONTEXT AND MOTIVATION

Trustworthy, secure software is a core ingredient of the modern world. Unfortunately, most software developed today runs on a network exposing it to a hostile environment. The Internet can allow vulnerabilities in software to be exploited from anywhere in the world. High-quality security building blocks (e.g., cryptographic components) are necessary, but insufficient to address this. Indeed, the construction of secure software is challenging because of the complexity of applications, the growing security requirements, and the multitude of software technologies and attack vectors. Clearly, a strong need exists for engineering techniques for secure software and systems that scale well and that demonstrably improve the software's security properties.

GOAL AND SETUP

The goal of this symposium, which will be the first in a series of events, is to bring together researchers and practitioners to advance the states of the art and practice in secure software engineering. Being one of the few conference-level events dedicated to this topic, it explicitly aims to bridge the software engineering and security engineering communities, and promote cross-fertilization. The symposium will feature two days of technical programme as well as one day of tutorials. The technical programme includes an experience track for which the submission of highly informative case studies describing (un)successful secure software project experiences and lessons learned is explicitly encouraged.

TOPICS

The Symposium seeks submissions on topics related to its goals. This includes a diversity of topics including (but not limited to):
- scalable techniques for threat modeling and analysis of vulnerabilities
- specification and management of security requirements and policies
- security architecture and design for software and systems
- model checking for security
- specification formalisms for security artifacts
- verification techniques for security properties
- systematic support for security best practices
- security testing
- security assurance cases
- programming paradigms, models and DSLs for security
- program rewriting techniques
- processes for the development of secure software and systems
- security-oriented software reconfiguration and evolution
- security measurement
- automated development
- trade-off between security and other non-functional requirements
- support for assurance, certification and accreditation

SUBMISSION AND FORMAT

The proceedings of the symposium will be published as a Springer-Verlag volume in the Lecture Notes in Computer Science series (http://www.springer.com/lncs). Submitted papers must present original, non-published work of high quality that has not been submitted for potential publication in parallel. Submitted papers should follow the formatting instructions of the Springer LNCS style, and should include maximally 15 pages for research papers and 10 pages for industrial papers (figures and appendices included). Proposals for tutorials are highly welcome as well. Further guidelines will appear on the website of the symposium.
IMPORTANT DATES
Abstract submission: September 8, 2008
Paper submission: September 15, 2008
Author notification: November 5, 2008
Camera-ready: November 24, 2008
Tutorial submission: October 24, 2008
Tutorial notification: November 21, 2008

STEERING COMMITTEE
Jorge Cuellar (Siemens AG)
Wouter Joosen (Katholieke Universiteit Leuven)
Fabio Massacci (Università di Trento)
Gary McGraw (Cigital)
Bashar Nuseibeh (The Open University)
Samuel Redwine (James Madison University)

ORGANIZING COMMITTEE
General chair: Bart De Win (Katholieke Universiteit Leuven)
Program co-chairs: Fabio Massacci (Università di Trento) and Samuel Redwine (James Madison University)
Publication chair: Nicola Zannone (University of Toronto)
Tutorial chair: Riccardo Scandariato (Katholieke Universiteit Leuven)

PROGRAM COMMITTEE (preliminary)
Matt Bishop, University of California (Davis) - USA
Brian Chess, Fortify Software - USA
Richard Clayton, Cambridge University - UK
Christian Collberg, University of Arizona - USA
Bart De Win, Katholieke Universiteit Leuven - BE
Juergen Doser, ETH - CH
Eduardo Fernandez-Medina, University of Castilla-La Mancha - ES
Dieter Gollmann, University of Hamburg - DE
Michael Howard, Microsoft - USA
Cynthia Irvine, Naval Postgraduate School - USA
Jan Jurjens, Open University - UK
Volkmar Lotz, SAP Labs - FR
Antonio Mana, University of Malaga - ES
Robert Martin, MITRE - USA
Fabio Massacci, Università di Trento - IT
Mira Mezini, Darmstadt University - DE
Mattia Monga, Milan University - IT
Andy Ozment, DoD - USA
Gunther Pernul, Universitat Regensburg - DE
Domenico Presenza, Engineering - IT
Samuel Redwine, James Madison
Re: [SC-L] Really dumb questions?
"So when a vendor says that they are focused on quality and not security, and vice versa, what exactly does this mean?"

We spend most of Chapter 2 of Secure Programming with Static Analysis describing the different problems that static analysis tools try to solve, and we show where we think all of the companies you mention (plus a lot of others) fit in. The relative importance of false positives vs. false negatives is one important difference, but so is extensibility, rule set (as John mentioned), ability of the tool to prioritize its findings, and the interface the tool presents for exploring the results. From my experience, the vendors do different things in all of these areas, and these differences aren't just a result of dumb luck. They stem from different philosophies about what the tools are supposed to do. "Quality vs. security" may be an oversimplification, but the differences between the tools are much more than cosmetic.

"Is it reasonable to expect that all of the vendors in this space will have the ability to support COBOL, Ruby and Smalltalk sometime next year so that customers don't have to specifically request it?"

I don't think so. The way a tool is designed can make it easier or harder to add support for a new language, but unless you're doing a really superficial analysis, adding a new language is always a big deal. Supporting a language requires more than just being able to parse it. The tools often have to do special work to make sure that the meaning of common idioms carries over correctly in the analysis, and then there's the small matter of developing a rule set.

Someone mentioned that Ruby makes life hard because it lacks static types. While that's true, it compensates in other ways. For example, because of the lack of static types, there are often more bugs to find. There's some really good academic work going on right now around security analysis of scripting languages (mostly PHP). Here's my pick of the week:

Sound and Precise Analysis of Web Applications for Injection Vulnerabilities, by Gary Wassermann and Zhendong Su
http://wwwcsif.cs.ucdavis.edu/~wassermg/research/pldi07.pdf

Regards,
Brian
[SC-L] Secure Programming with Static Analysis
Jacob West and I are proud to announce that our book, Secure Programming with Static Analysis, is now available. http://www.amazon.com/dp/0321424778

The book covers a lot of ground.

* It explains why static source code analysis is a critical part of a secure development process.
* It shows how static analysis tools work, what makes one tool better than another, and how to integrate static analysis into the SDLC.
* It details a tremendous number of vulnerability categories, using real-world examples from programs such as Sendmail, Tomcat, Adobe Acrobat, Mac OS X, and dozens of others.

We'd like to thank the many members of the SC-L list who helped us out with the book in one way or another, including:

Pravir Chandra
Gary McGraw
Katrina O'Neil
John Steven
Ken van Wyk

Regards,
Brian and Jacob
Re: [SC-L] JavaScript Hijacking
Frederik De Keukelaere [EMAIL PROTECTED] writes: Would you mind sharing the different data formats you came across for exchanging data in mashups/Web 2.0? Considering the challenges you recently discovered, it might be good to have such an overview to look at it from a security point of view.

Oops, sorry for taking so long to respond. In addition to JSON, I've seen two other uses of JavaScript as a data transport format.

1) JavaScript arrays. Example: [ "a", "b", "c" ]

Technically speaking, this is a subset of JSON, but in these systems there is no notion of an object, only an array. These systems are more vulnerable than systems using JSON because they're guaranteed to always use array syntax.

2) Function calls. Example: addRecord("a", "b", "c");

This format is even easier to hijack: just define the named function. This is the worst of the bunch from a confidentiality standpoint.

Regards,
Brian
Re: [SC-L] SC-L Digest, Vol 3, Issue 73
Hi Frederik,

You're right that IE does not have the setter methods. You're also right that hijacking the Object() or Array() constructor method would be enough to pull off the attack. The bad (good?) news is that IE doesn't call those methods unless an object is explicitly created with the new keyword. We got this wrong when we looked at it initially, which is why we said the code could be ported to IE. We're going to go back and fix that in the paper. Of course, any JavaScript data transport format that explicitly calls a function is vulnerable in all browsers.

Over the last week or two I've been learning that people are moving data around using a lot more than just JSON, though JSON is the clear front-runner.

Brian

Date: Fri, 6 Apr 2007 11:32:33 +0900
From: Frederik De Keukelaere [EMAIL PROTECTED]
Subject: Re: [SC-L] JavaScript Hijacking
To: sc-l@securecoding.org

Hi Brian, Hi Stefano,

snip

Ok I see the difference. You are taking advantage of a pure JSON CSRF with an evil script which contains a modified version of the Object prototype. And when the callback function is executed you use an XMLHttpRequest in order to send the information extracted by the instantiated object.

In the beginning of the paper there was a comment that the code that was presented was designed for use in Firefox but could be ported to IE or other browsers. However, since IE does not seem to have the setter methods (correct me if I am wrong), I did not quite find a way to achieve this in IE. We tried several things such as replacing the Array and Object constructors as well as overriding eval, neither of which worked. Do you have any suggestions about how to port this attack to IE?

Btw, thanks for the papers.

Kind Regards,
Fred
---
Frederik De Keukelaere, Ph.D.
Post-Doc Researcher
IBM Research, Tokyo Research Laboratory
Re: [SC-L] JavaScript Hijacking
Hi Stefano,

Yes, we are aware of your paper, but we intentionally chose to omit the reference because we are quite snobby. I'm joking! I hadn't seen your paper previously. It was a good read.

The difference between what you discuss and JavaScript Hijacking is that we do not assume the presence of another defect. JavaScript Hijacking does not require the existence of a cross-site scripting vulnerability or the like. It's a new attack technique (and a new vulnerable code pattern), not a new method for exploiting an existing class of vulnerabilities.

Thanks,
Brian

From: Stefano Di Paola [EMAIL PROTECTED]
Date: Mon, 02 Apr 2007 11:11:24 +0200
To: sc-l@securecoding.org sc-l@securecoding.org
Cc: Brian Chess [EMAIL PROTECTED]
Subject: Re: [SC-L] JavaScript Hijacking

Brian, I don't know if you read it, but Giorgio Fedon and I presented a paper named Subverting Ajax at the 23rd CCC Congress (4th section, XSS Prototype Hijacking):
http://events.ccc.de/congress/2006/Fahrplan/attachments/1158-Subverting_Ajax.pdf

It described a technique called Prototype Hijacking, which is about overriding methods and attributes by using constructors and prototyping. It was described how to override the XMLHttpRequest object, but it was stated that it could be applied to every prototype. If you didn't read it, please read it and add some reference to your paper. If you read it:
- I think we deserve at least a reference to our paper.
- Even if you covered JSON hijacking, the technique is the same and the name (JavaScript Hijacking) is quite similar.

Regards,
Stefano
[SC-L] JavaScript Hijacking
I've been getting questions about Ajax/Web 2.0 for a few years now. Most of the time the first question is along these lines: "Does Ajax cause any new security problems?"

Until recently, my answer has been right in line with the answers I've heard from other corners of the world: "No." Then I've gone on to explain that Ajax doesn't change the rules of the game, but it does tilt the playing field. For example:

- By splitting your code between a client and a server, you increase your opportunity for misplacing input validation logic and access control checks.
- Dynamic testing tools tend to have a harder time with Ajax apps.

Now my story has changed. We've found a new type of vulnerability that only affects Ajax-style apps. We call the attack JavaScript Hijacking. It enables an attacker to read confidential information from vulnerable sites.

The attack works because many Ajax apps have given up on the "x" in Ajax. Instead of XML, they're using JavaScript as a data transport format. The problem is that web browsers don't protect JavaScript the same way they protect HTML, so a malicious web site can peek into some of the JavaScript returned from a vulnerable Ajax app.

We've looked at a lot of Ajax frameworks over the past few weeks, including Google's GWT, Microsoft Atlas, and half a dozen open source frameworks. Almost all of them make it easy for developers to write vulnerable code. Some of them *require* developers to write vulnerable code.

Our write-up on the problem, along with our proposed solution, is here:
http://www.fortify.com/servlet/downloads/public/JavaScript_Hijacking.pdf

Enjoy,
Brian
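One server-side defense that gets discussed in this context is to make the JavaScript response unrunnable when it is pulled in via a script tag, and have the legitimate client strip the prefix before parsing. The sketch below only illustrates that idea (the class, method, and prefix choice are my own, not necessarily what the write-up above recommends):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServletResponse;

    // Illustrative sketch: prefix a JSON/JavaScript response so that a
    // <script src="..."> include on a hostile page cannot run it to
    // completion. The legitimate XMLHttpRequest caller strips the prefix
    // before handing the payload to its JSON parser.
    public final class JsonResponder {
        private static final String PREFIX = "while(1);";

        public static void write(HttpServletResponse resp, String json) throws IOException {
            resp.setContentType("application/json");
            PrintWriter out = resp.getWriter();
            out.print(PREFIX);
            out.print(json);
        }
    }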
[SC-L] Java Open Review Project
Hello all,

I'm pleased to announce that we've just launched the Java Open Review Project (http://opensource.fortifysoftware.com). We're reviewing open source Java code all the way from Tomcat down to PetStore looking for bugs and security vulnerabilities. We're using two static analysis tools to do the heavy lifting: FindBugs and Fortify SCA.

We can use plenty of human eyes to help sort through the results. We're also soliciting ideas for which projects we should be reviewing next. Please help!

Brian
[SC-L] Re: Comparing Scanning Tools
Hi Jerry,

As one of the creators of the tool you evaluated, I have to admit I have the urge to comment on your message one line at a time and explain each way in which the presentation you attended did not adequately explain what Fortify does or how we do it. But I don't think the rest of the people on this list would find that to be a very interesting posting, so instead I'm going to try to stick to general comments about a few of the subjects you brought up.

False positives: Nobody likes dealing with a pile of false positives, and we work hard to reduce false positives without giving up potentially exploitable vulnerabilities. In some sense, this is where security tools get the raw end of the deal. If you're performing static analysis in order to find general quality problems, you can get away with dropping a potential issue on the floor as soon as you get a hint that your analysis might be off. You can't do that if you are really focused on security. To make matters worse for security tools, when a quality-focused tool can detect just some small subset of some security issue, the creator labels it a quality and security tool. Ugh. This rarely flies with a security team, but sometimes it works on non-security folks.

Compounding the problem is that, when the static analysis tool does point you at an exploitable vulnerability, it's often not a very memorable occasion. It's just a little goof-up in the code, and often the problem is obvious once the tool points it out. So you fix it, and life goes on. If you aren't acutely aware of how problematic those little goof-ups can be once some researcher announces one of them, it can almost seem like a non-event. All of this can make the hour you spent going through reams of uninteresting results seem more important than the 5 minutes you spent solving what could have become a major problem, even though exactly the opposite is true.

Suppression: A suppression system that relies on line numbers wouldn't work very well. When it comes to suppression, the biggest choice you've got to make is whether or not you're going to rely on code annotation. Code annotation can work well if you're reviewing your own code, but if you're reviewing someone else's code and you can't just go adding annotation goo wherever you like, you can't use it, at least not exclusively. Instead, the suppression system needs to be able to match up the salient features of the suppressed issue against the code it is now evaluating. Salient features should include factors like the names of variables and functions, the path or paths required to activate the problem, etc.

Customization: Of course the more knowledge you provide the tool, the better a job it can do at telling you things you'd like to know. But in the great majority of cases that I've seen, little or no customization is required in order to derive benefit from any of the commercial static analysis tools I've seen. In the most successful static analysis deployments, the customization process never ends--people keep coming up with additional properties they'd like to check. The static analysis tool becomes a way to share standards and best practices.

Regards,
Brian
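As a rough illustration of the annotation route, here is what a suppressed finding can look like in source form. The annotation and its fields are invented for this sketch (no particular tool's syntax), and feature-based matching still has to cover the code you aren't allowed to annotate.

    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;

    // The annotation below is made up for illustration; real tools each
    // define their own suppression syntax.
    @interface SuppressFinding {
        String category();
        String reason();
    }

    public class TemplateReader {
        @SuppressFinding(category = "Path Manipulation",
                         reason = "fileName is checked against a whitelist upstream")
        public byte[] readTemplate(String fileName) throws IOException {
            File f = new File("/opt/app/templates/" + fileName);
            return Files.readAllBytes(f.toPath());
        }
    }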
[SC-L] RE: Comparing Scanning Tools
McGovern, James F wrote: "I have yet to find a large enterprise that has made a significant investment in such tools."

I'll give you pointers to two. They're two of the three largest software companies in the world.

http://news.com.com/2100-1002_3-5220488.html
http://news.zdnet.com/2100-3513_22-6002747.html

Brian
[SC-L] RE: The role static analysis tools play in uncovering elements of design
Jeff Williams [EMAIL PROTECTED] wrote: "I think there's a lot more that static analysis can do than what you're describing. They're not (necessarily) just fancy pattern matchers."

Jeff, you raise an important point. Getting good value out of static analysis requires a second component in addition to a fancy pattern matcher. You also need a good set of fancy patterns (aka rules) to match against.

Understand that when I say "fancy pattern", I don't mean "regular expression". More formally, I mean program property. Taint propagation, state transitions, feasible control flow paths, alias analysis, etc. are all important if you'd like to build a tool, but if you're contemplating how to best apply a tool, all of your interaction will be of the form "show me places in the program where property X holds", and the goal of the tool is to match the code against the property you've specified.

For a good initial user experience, a tool needs to come with a well-constructed default set of rules. For aiding in program understanding, it needs to allow you to easily add new rules of your own construction.

The number of false alarms you get from a good static analysis tool is directly related to how aggressive the tool is in finding constructs that might be what you're looking for. As an example, I'll use a question you posed: are there any paths around the encryption? If you're going to bet your life that there aren't any such paths, then you'd like the tool to make conservative assumptions and allow you to manually review anything that might possibly match the pattern you've specified. If you only have a passing interest in the property, then you'd prefer the tool weed out paths that, while not impossible, are not likely to be of interest.

Maybe some code will help illustrate my point:

  if (b) {
    panic();
  } else {
    buf = encrypt(buf);
  }
  return buf;

Would you like to be warned that buf may be returned unencrypted? This code fragment guarantees that either buf is encrypted or panic() is called. But usually control doesn't return from a function named panic(). The problem is, that's not universally true; sometimes panic() just sets a flag and the system reboot happens shortly thereafter. Whether or not you want to see this path depends on how important it really is to you that encryption is absolutely never bypassed. Your tolerance for noise is dictated by the level of assurance you require.

Brian
[SC-L] Re: SC-L Digest, Vol 2, Issue 17
John, I think this has to do with what you want to achieve when you explore code. A static analysis tool is a fancy sort of pattern matcher. If the kinds of patterns you're interested in aren't that fancy (does the program use function X()? what is the class hierarchy?), then a fancy pattern matcher is overkill. If your version of code exploration includes things like "is function A() always called before function B()?" or "is it possible for this data structure Z to be populated with the result of function X()?" then you're in the realm where a static analysis tool might help. Of course, a static analysis tool allows you to take shortcuts, so you may learn less about the code than you would if you had to answer these questions the hard way.

Brian

Date: Fri, 03 Feb 2006 13:39:36 -0500
From: John Steven [EMAIL PROTECTED]
Subject: [SC-L] The role static analysis tools play in uncovering elements of design
To: Jeff Williams [EMAIL PROTECTED], Secure Coding Mailing List SC-L@securecoding.org

Jeff,

An unpopular opinion I've held is that static analysis tools, while very helpful in finding problems, inhibit a reviewer's ability to collect as much information about the structure, flow, and idiom of code's design as the reviewer might find if he/she spelunks the code manually. I find it difficult to use tools other than source code navigators (Source Insight) and scripts to facilitate my code understanding (at the design level). Perhaps you can give some examples of static analysis library/tool use that overcomes my prejudice -- or are you referring to the navigator tools as well?

-
John Steven
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc. | [EMAIL PROTECTED]
4772 F7F3 1019 4668 62AD 94B0 AE7F EEF4 62D5 F908
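To make the kind of question Brian mentions concrete, here is a small made-up fragment (the class and interfaces are invented for illustration). The ordering property a tool would be asked to check is whether every path that reaches runQuery() has first gone through authenticate():

    // Made-up fragment: stand-in types for the illustration.
    interface Request { boolean hasParameter(String name); String getParameter(String name); }
    interface Session { void authenticate(Request req); }
    interface Database { void runQuery(String sql); }

    public class ReportHandler {
        void handle(Request req, Session session, Database db) {
            if (req.hasParameter("quick")) {
                // A path a static analysis tool should flag: runQuery() is
                // reached without a prior call to authenticate().
                db.runQuery(req.getParameter("report"));
                return;
            }
            session.authenticate(req);
            db.runQuery(req.getParameter("report"));
        }
    }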
RE: [SC-L] Bugs and flaws
I spent Phase One of both my academic and professional careers working on hardware fault models and design for testability. In fact, the first static analysis tool I wrote was for hardware: it analyzed Verilog looking for design mistakes that would make it difficult or impossible to perform design verification or to apply adequate manufacturing tests.

Some observations:

- The hardware guys are indeed ahead. Chip designers budget for test and verification from day one. They also do a fair amount of thinking about what's going to go wrong. Somebody's going to give you 5 volts instead of 3.3 volts. What's going to happen? The transistors are going to switch at a different rate when the chip is cold. What's going to happen? A speck of dust is going to fall on the wafer between the time the metal 2 layer goes down and the time the metal 3 layer goes down. What's going to happen?

- The difference between a manufacturing defect and a design defect is not always immediately obvious. Maybe two wires got bridged because a piece of dust fell in exactly the right spot. Maybe two wires got bridged because you made a mistake in your process physics and you need 50 nm of tolerance instead of 0.5 nm. You'd better figure it out before you go into full-swing manufacturing, or big batches of defective chips could kill your profit margins and drive your customers away at the same time. For that reason, diagnosing the cause of failure is an important topic.

Brian

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Chris Wysopal
Sent: 02 February 2006 21:35
To: Gary McGraw
Cc: William Kruse; Wall, Kevin; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws

In the manufacturing world, which is far more mature than the software development world, they use the terminology of design defect and manufacturing defect. So this distinction is useful and has stood the test of time. Flaw and defect are synonymous. We should just pick one. You could say that the term for manufacturing software is implementation. So why do we need to change the terms for the software world? Wouldn't design defect and implementation defect be clearer and more in line with the manufacturing quality discipline, which the software quality discipline should be working towards emulating? (When do we get to Six Sigma?)

I just don't see the usefulness of calling a design defect a flaw. Flaw by itself is overloaded. And in the software world, bug can mean an implementation or design problem, so bug alone is overloaded for describing an implementation defect. At @stake the Application Center of Excellence used the terminology design flaw and implementation flaw. It was well understood by our customers. As Crispin said in an earlier post on the subject, the line is sometimes blurry. I am sure this is the case in manufacturing too. Architecture flaws can be folded into the design flaw category for simplicity. My vote is for a less overloaded and clearer terminology.

-Chris

P.S. My father managed a non-destructive test lab at a jet engine manufacturer. They had about the highest quality requirements in the world. So for many hours I was regaled with tales about the benefits of performing static analysis on individual components early in the manufacturing cycle. They would dip cast parts in a fluorescent liquid and look at them under ultraviolet light to illuminate cracks caused during the casting process.
For critical parts which would receive more stress, such as the fan blades, they would x-ray each part to inspect for internal cracks. A more expensive process, but warranted due to the increased risk of total system failure for a defect in those parts. The static testing was obviously much cheaper and delivered better quality than just bolting the parts together and doing dynamic testing in a test cell. It's a wonder that it has taken the software security world so long to catch onto the benefits of static testing of implementation. I think we can learn a lot more from the manufacturing world.

On Thu, 2 Feb 2006, Gary McGraw wrote:

Hi all,

When I introduced the bugs and flaws nomenclature into the literature, I did so in an article about the software security workshop I chaired in 2003 (see http://www.cigital.com/ssw/). This was ultimately written up in an On the Horizon paper published by IEEE Security & Privacy. Nancy Mead and I queried the SWEBOK and looked around to see if the new usage caused collision. It did not.

The reason I think it is important to distinguish the two ends of the rather slippery range (crispy is right about that) is that software security as a field is not paying enough attention to architecture. By identifying flaws as a subcategory of defects (according to the SWEBOK), we can focus some attention on the problem.

From the small glossary in Software Security (my new book out tomorrow):

Bug -- A bug is an implementation-level software problem. Bugs