[SC-L] Java DOS

2011-02-12 Thread Brian Chess
There's a very interesting vulnerability in Java kicking around.  I wrote about 
it here:
  http://blog.fortify.com/blog/2011/02/08/Double-Trouble

In brief, you can send Java (and some versions of PHP) into an infinite loop if 
you can provide some malicious input that will be parsed as a double-precision 
floating point number.

This code used to look like the beginnings of some decent input validation:
   Double.parseDouble(request.getParameter("d"));
Now it's the gateway to an easy DoS attack.  (At least until you get a patch 
from your Java vendor, many of whom haven't released patches yet.  Oracle has 
released a patch.  Do you have it?)
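For the curious, the trigger string is "2.2250738585072012e-308" (CVE-2010-4476), which sits right at the boundary between normal and subnormal doubles. Until you have the vendor patch, one stopgap that made the rounds is screening input for the known-bad digit run before parsing. The class and method names below are invented for illustration; this is a sketch of the idea, not the real fix (the real fix is patching the JRE):

```java
// CVE-2010-4476 sketch: on unpatched JREs, parsing values at the
// normal/subnormal boundary (2.2250738585072012e-308) never terminates.
// Illustrative stopgap only -- screen out the known-bad digit run
// before handing the string to the runtime.
public class SafeDoubleParse {
    private static final String BAD_DIGITS = "2225073858507201";

    static double parseDoubleGuarded(String s) {
        // Strip characters that don't affect the digit sequence,
        // so exponent tricks and sign/decimal placement can't hide it.
        String digits = s.trim().toLowerCase()
                         .replace(".", "").replace("-", "").replace("+", "");
        if (digits.contains(BAD_DIGITS)) {
            throw new NumberFormatException("rejected: possible parseDouble DoS input");
        }
        return Double.parseDouble(s);
    }

    public static void main(String[] args) {
        System.out.println(parseDoubleGuarded("3.14"));   // ordinary input passes
        try {
            parseDoubleGuarded("2.2250738585072012e-308");
            System.out.println("accepted");
        } catch (NumberFormatException e) {
            System.out.println("rejected");               // known-bad input is refused
        }
    }
}
```

Crude, yes, but it buys time on systems you can't patch immediately, and it works regardless of capitalization or where the exponent lands.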

Until a few days ago, all major releases of Tomcat made matters worse by 
treating part of the Accept-Language header as a double.  In other words, you 
don't need to have any double-precision values in *your* code for your app to 
be vulnerable.

The SC-L corner of the world puts a lot of emphasis on training and on looking 
for known categories of vulnerabilities.  That's all goodness.  But this 
example highlights the fact that we have to build systems and procedures that 
can quickly adapt to address new risks.

Brian
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
Follow KRvW Associates on Twitter at: http://twitter.com/KRvW_Associates
___


Re: [SC-L] What do you like better Web penetration testing or static code analysis?

2010-04-24 Thread Brian Chess
I like your point, Matt.  Everybody who's responded thus far has wanted to
turn this into a discussion about what's most effective or what has the most
benefit, sort of like we were comparing which icky medicine to take or which
overcooked vegetable to eat.  Maybe they don't get any pleasure from the
work itself.

It sounds as though you need to change up your static analysis style.  A few
years back we ran competitions at BlackHat where we found we could identify
and exploit vulnerabilities starting from static analysis just as quickly as
from fuzzing.  Here's an overview:

http://reddevnews.com/Blogs/Desmond-File/2008/08/Iron-Chef-Competition-at-Black-Hat-Cooks-Up-Security-Goodness.aspx

Interviews with Charlie Miller and Sean Fay:
http://blog.fortify.com/blog/2009/05/02/Iron-Chef-Interviews-Part-1-Charlie-Miller-1-2
http://blog.fortify.com/blog/2009/05/02/Iron-Chef-Interviews-Part-2-Sean-Fay

Brian

On 4/23/10 7:05 AM, "Matt Parsons"  wrote:

> Gary,
> I was not stating which was better for security.  I was stating what I
> thought was more fun.   I feel that penetration testing is sexier.  I find
> penetration testing like driving a Ferrari and static code analysis like
> driving a Ford Taurus.   I believe with everyone else on this list that
> software security needs to be integrated early in the development life
> cycle.  I have also read most of your books and agree with your findings.
> As you would say I don't think that penetration testing is magic security
> pixie dust but it is fun when you are doing it legally and ethically.  My
> two cents.
> Matt
> 
> 
> Matt Parsons, MSM, CISSP
> 315-559-3588 Blackberry
> 817-294-3789 Home office
> "Do Good and Fear No Man"
> Fort Worth, Texas
> A.K.A The Keyboard Cowboy
> mailto:mparsons1...@gmail.com
> http://www.parsonsisconsulting.com
> http://www.o2-ounceopen.com/o2-power-users/
> http://www.linkedin.com/in/parsonsconsulting
> http://parsonsisconsulting.blogspot.com/
> http://www.vimeo.com/8939668
> http://twitter.com/parsonsmatt
> 
> -Original Message-
> From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org]
> On Behalf Of Gary McGraw
> Sent: Thursday, April 22, 2010 2:15 PM
> To: Peter Neumann; Secure Code Mailing List
> Subject: Re: [SC-L] What do you like better Web penetration testing or
> static code analysis?
> 
> I hereby resonate with my esteemed colleague and mentor pgn.  But no puns
> from me.
> 
> gem
> 
> 
> On 4/22/10 1:57 PM, "Peter Neumann"  wrote:
> 
> 
> 
> Matt Parsons wrote:
>> What do you like doing better as application security professionals, web
>> penetration testing or static code analysis?
> 
> McGovern, James F. (P+C Technology) wrote:
>> Should a security professional have a preference when both have
>> different value propositions? While there is overlap, a static analysis
>> tool can find things that pen testing tools cannot. Likewise, a pen test
>> can report on secure applications deployed insecurely which is not
>> visible to static analysis.
>> 
>> So, the best answer is I prefer both...
> 
> Both is better than either one by itself, but I think Gary McGraw
> would resonate with my seemingly contrary answer:
> 
>   BOTH penetration testing AND static code analysis are still looking
>   at the WRONG END of the horse AFTER it has left the DEVELOPMENT BARN.
>   Gary and I and many others have for a very long time been advocating
>   security architectures and development practices that greatly enhance
>   INHERENT TRUSTWORTHINESS, long before anyone has to even think about
>   penetration testing and static code analysis.
> 
>   This discussion is somewhat akin to arguments about who has the best
>   malware detection.  If system developers (past-Multics) had paid any
>   attention to system architectures and sound system development
>   practices, viruses and worms would be mostly a nonproblem!
> 
>   Please pardon my soapbox.
> 
> The past survives.
> The archives
> have lives,
> not knives.
> High fives!
> 
> (I strive
> to thrive
> with jive.)
> 
> PGN

Re: [SC-L] BSIMM update (informIT)

2010-02-04 Thread Brian Chess
> At no time did it include corporations who use Ounce Labs or Coverity

Bzzzt.  False.  While there are plenty of Fortify customers represented in
BSIMM, there are also plenty of participants who aren't Fortify customers.
I don't think there are any hard numbers on market share in this realm, but
my hunch is that BSIMM is not far off from a uniform sample in this regard.

Brian


> -Original Message-
> From: sc-l-boun...@securecoding.org [mailto:sc-l-boun...@securecoding.org] On
> Behalf Of Kenneth Van Wyk
> Sent: Wednesday, February 03, 2010 4:08 PM
> To: Secure Coding
> Subject: Re: [SC-L] BSIMM update (informIT)
> 
> On Jan 28, 2010, at 10:34 AM, Gary McGraw wrote:
>> Among other things, David and I discussed the difference between descriptive
>> models like BSIMM and prescriptive models which purport to tell you what you
>> should do. 
> 
> Thought I'd chime in on this a bit, FWIW...  From my perspective, I welcome
> BSIMM and I welcome SAMM.  I don't see it in the least as a "one or the other"
> debate.
> 
> A decade(ish) since the first texts on various aspects of software security
> started appearing, it's great to have a BSIMM that surveys some of the largest
> software groups on the planet to see what they're doing.  What actually works.
> That's fabulously useful.  On the other hand, it is possible that ten thousand
> lemmings can be wrong.  Following the herd isn't always what's best.
> 
> SAMM, by contrast, was written by some bright, motivated folks, and provides
> us all with a set of targets to aspire to.  Some will work, and some won't,
> without a doubt.
> 
> To me, both models are useful as guide posts to help a software group--an SSG
> if you will--decide what practices will work best in their enterprise.
> 
> But as useful as both SAMM and BSIMM are, I think we're all fooling ourselves
> if we consider these to be standards or even maturity models.  Any other
> engineering discipline on the planet would laugh us all out of the room by the
> mere suggestion.  There's value to them, don't get me wrong.  But we're still
> in the larval mode of building an engineering discipline here folks.  After
> all, as a species, we didn't start (successfully) building bridges in a
> decade.
> 
> For now, my suggestion is to read up, try things that seem reasonable, and
> build a set of practices that work for _you_.
> 
> Cheers,
> 
> Ken
> 
> -
> Kenneth R. van Wyk
> KRvW Associates, LLC
> http://www.KRvW.com
> 
> 



Re: [SC-L] "Checklist Manifesto" applicability to software security

2010-01-07 Thread Brian Chess
I think it's a great analogy.  If you'd like to read more without ordering
the book, here's an article Gawande wrote for the New Yorker in 2007:

http://www.newyorker.com/reporting/2007/12/10/071210fa_fact_gawande

Brian

On 1/7/10 7:11 AM, "Jeremy Epstein"  wrote:

> Greetings,
> 
> I was listening yesterday to an interview [1] on NPR with Dr. Atul
> Gawande, author of "Checklist Manifesto" [2].  He describes the
> problem that medical procedures (e.g., surgery) tend to have lots of
> mistakes, mostly caused because of leaving out important steps.  He
> claims that 2/3 of medical - or maybe surgical - errors can be avoided
> by use of checklists.  Checklists aren't very popular among doctors,
> because they don't like to see themselves as factory workers following
> a procedure, because the human body is extremely complex, and because
> every patient is unique.
> 
> So as I was listening, I was thinking that many of the same things
> could be said about software developers and problems with software
> security - every piece of software is unique, any non-trivial piece of
> software is amazingly complex, developers tend to consider themselves
> as artists creating unique works, etc.
> 
> Has anyone looked into the parallelisms before?  If so, I'd be
> interested in chatting (probably offlist) about your thoughts.
> 
> --Jeremy
> 
> [1] Listen to the interview at http://wamu.org/programs/dr/10/01/06.php#29280
> [2] "The Checklist Manifesto: How to Get Things Right", Atul Gawande,
> Metropolitan Books.



Re: [SC-L] Insecure Java Code Snippets

2009-05-06 Thread Brian Chess
We keep a big catalog here:
http://www.fortify.com/vulncat


On 5/6/09 10:41 AM, "Brad Andrews"  wrote:

> 
> 
> 
> Does anyone know of a source of insecure Java snippets?  I would like
> to get some for a monthly meeting of leading technical people.  My
> idea was to have a "find the bug" exercise like the old C-Lint ads.
> 
> Does anyone know of a source of something like this?
> 
> Brad



Re: [SC-L] Positive impact of an SSG

2009-03-11 Thread Brian Chess
tatistics) as well as looking at the
> concept of "risk resiliency" advocated, in particular, by BT. fwiw.
> 
> Anyway...
> 
> On whether the site is up or not, I think DNS is hosed for the domain...
> I tried it from three locations (separate regions, separate providers)
> and got the same results:
> 
> $ host bsi-mm.com
> Host bsi-mm.com not found: 3(NXDOMAIN)
> 
> $ host bsi-mm.com
> ;; connection timed out; no servers could be reached
> 
> freeproxy.us also times out...
> 
> cheers,
> 
> -ben
> 
Re: [SC-L] Positive impact of an SSG

2009-03-11 Thread Brian Chess
Ben!  Thank you!  When you talk about sample size, it gives me hope that
we're on the right track.  We can either:

1) Use ideas that "experts" theorize will work
or
2) Gather empirical evidence to judge one idea against another.

We in the security crowd often try to hide behind the need for secrecy, and
that's pushed us toward relying almost entirely on people who have nothing
but rhetoric and personal reputation to stand on.  BSIMM pretty well shows
that, in 2009, we can do better.  It's a big step forward to collect data
and then argue about what it means.  I know it's already made the rounds,
but let's have some XKCD to celebrate:
http://xkcd.com/552/

I think your question about defining success is an important one.  We were
loose about it in this first round, and I hope it's something we can tighten
up in our follow-on work.  Here's my thinking as of today: software security
is not the goal; it's one of the many things an organization needs to do in
order to meet its objectives.  We need to look at how a software security
initiative (or lack thereof) affects the organization's ability to meet its
objectives.  This is going to be messy, but it's either that or go back to
making stuff up.

BTW, I checked the BSIMM web site after I read your message.  It worked for
me.  Try this?
http://www.downforeveryoneorjustme.com/bsi-mm.com

Brian

On 3/11/09 10:48 AM, "Benjamin Tomhave" wrote:

> I think it's an interesting leap of faith. Statistically speaking, 9 is
> a very small sample size. Thus, the proposed model will be viewed
> skeptically until it is validated with a much larger and more diverse
> sample. Putting it another way, there's no way I can take this to a
> small or medium sized org and have them see immediate relevance, because
> their first reaction is going to be "those are 9 large orgs with lots of
> resources - we don't have that luxury."
> 
> You quoted "we can say with confidence that these activities are
> commonly found in highly successful programs" - how do you define a
> "highly successful program"? What's the rule or metric? Is this a rule
> or metric that can be genericized easily to all development teams?
> 
> My concern is exactly what you speculate about... organizations are
> going to look at this and either try to tackle everything (and fail) or
> decide there's too much to tackle (and quit). In my experience working
> with maturity models as a consultant, very few people actually
> understand the concept. Folks are far more tuned-in to a PCI-like
> prescriptive method. Ironically, the PCI folks say the same thing you
> did - that it's not meant to be prescriptive, that it's supposed to be
> based on risk management practices - yet look how it's used.
> 
> Maybe you've addressed this, but it doesn't sound like it. I'd perhaps
> be better educated here if the web site wasn't down... ;)
> 
> -ben
> 
> Sammy Migues wrote:
>> > Hi Pravir,
>> >
>> > Thanks for clarifying what you're positing. I'm not sure how we could
>> > have been more clear in the BSIMM text accompanying the exposition of
>> > the collective activities about the need to take this information and
>> > work it into your own culture (i.e., do "risk management"). As a few
>> > examples:
>> >
>> > p. 3: "BSIMM is meant as a guide for building and evolving a software
>> > security initiative. As you will see when you familiarize yourself
>> > with the BSIMM activities, instilling software security into an
>> > organization takes careful planning and always involves broad
>> > organizational change. By clearly noting objectives and goals and by
>> > tracking practices with metrics tailored to your own initiative, you
>> > can methodically build software security into your organization's
>> > software development practices."
>> >
>> > p. 47: "Choosing which of the 110 BSIMM activities to adopt and in
>> > what order can be a challenge. We suggest creating a software
>> > security initiative strategy and plan by focusing on goals and
>> > objectives first and letting the activities select themselves.
>> > Creating a timeline for rollout is often very useful. Of course
>> > learning from experience is also a good strategy."
>> >
>> > p. 47: "Of the 110 possible activities in BSIMM, there are ten
>> > activities that all of the nine programs we studied carry out. Though
>> > we can't directly conclude that these ten activities are necessary
>> > for all software security initiatives, we can say with confidence
>> > that these activities are commonly found in highly successful
>> > programs. This suggests that if you are working on an initiative of
>> > your own, you should consider these ten activities particularly
>> > carefully (not to mention the other 100)."
>> >
>> > p. 48: "The chart below shows how many of the nine organizations we
>> > studied have adopted various activities. Though you can use this as a
>> > rough "weighting" of activities by prevalence, a software security
>> > initiative plan is best

[SC-L] Call for papers: Programming Languages and Analysis for Security (PLAS)

2009-03-03 Thread Brian Chess
  ACM SIGPLAN Fourth Workshop on
 Programming Languages and Analysis for Security (PLAS 2009)

Dublin, Ireland, June 15, 2009

   Sponsored by ACM SIGPLAN
   Co-located with PLDI '09
   Supported by IBM Research and Microsoft Research

   http://www.cs.stevens.edu/~naumann/plas2009.html

  Submission Deadline: April 3, 2009



Call for Papers

PLAS aims to provide a forum for exploring and evaluating ideas on the
use of programming language and program analysis techniques to improve
the security of software systems. Strongly encouraged are proposals of
new, speculative ideas; evaluations of new or known techniques in
practical settings; and discussions of emerging threats and important
problems.

The scope of PLAS includes, but is not limited to:

* Language-based techniques for security
* Verification of security properties in software
* Automated introduction and/or verification of security
  enforcement mechanisms
* Program analysis techniques for discovering security
  vulnerabilities
* Compiler-based security mechanisms, such as host-based intrusion
  detection and in-line reference monitors
* Specifying and enforcing security policies for information flow
  and access control
* Model-driven approaches to security
* Applications, examples, and implementations of these security
  techniques in domains including web applications, embedded
  software, etc.



Important Dates and Submission Guidelines

  * Submission due date: Friday, April 3, 2009
  * Author notification: Friday, May 1, 2009
  * Revised papers due: Monday, May 18, 2009
  * Student travel grant applications due: Friday, May 29, 2009
  * PLAS 2009 workshop: Monday, June 15, 2009

We invite papers of two kinds: (1) Technical papers about relatively
mature work, for "long" presentations during the workshop, and (2)
papers for "short" presentations about more preliminary work, position
statements, or work that is more exploratory in nature.  Short papers
marked as "Informal Presentation" will have only their abstract
published in the proceedings.  All other papers will be included in
the formal proceedings and must describe original work in compliance
with the SIGPLAN republication policy.  Page limits are 12 pages for
long papers and 6 pages for short papers.



Student Travel Grants

Student attendees of PLAS can apply for a travel grant (in addition to
any PLDI grants), thanks to the generous support of IBM Research and
Microsoft Research. The application forms will be on the workshop web
site.



Program Committee

 * Aslan Askarov, Chalmers University of Technology, Sweden
 * Brian Chess, Fortify Software, USA
 * Stephen Chong, Harvard University, USA (co-chair)
 * Úlfar Erlingsson, Reykjavík University, Iceland
 * Kevin W. Hamlen, University of Texas at Dallas, USA
 * Benjamin Livshits, Microsoft Research, USA
 * Pasquale Malacaria, Queen Mary University of London, UK
 * David Naumann, Stevens Institute of Technology, USA (co-chair)
 * Marco Pistoia, IBM Research, USA
 * François Pottier, INRIA Paris-Rocquencourt, France
 * Tamara Rezk, INRIA Sophia Antipolis-Méditerranée, France
 * Tachio Terauchi, Tohoku University, Japan
 * David Wagner, University of California, Berkeley, USA




Re: [SC-L] Some Interesting Topics arising from the SANS/CWE Top 25

2009-01-15 Thread Brian Chess
> In the one sense, we are talking about validating user input, which
> mostly needs to concern itself with adhering to business requirements.
> This meaning is not very important for security, but the other one,
> validating data before something is done with it, is.

Yes, two forms of validation are required.  If you hang around with the
compilers crowd for too long, you'll call them syntax validation and
semantic validation.
Syntax: "the input must be an integer"
Semantics: "the input must identify an account held in your name."

It's often possible and even desirable to perform syntax checking not long
after a program accepts its input.  You can bottleneck a program and make
sure all input runs through a syntax validation layer.  Not so with semantic
checks.  In many cases they are so closely related to the program logic that
ripping them out and creating a "semantic validation layer" would
essentially double the length of the program and create a maintenance
nightmare.

So which form of input validation is security input validation?  Both!  In
most cases you can't afford to skip either one.  Bad or absent syntax checks
lead to generic problems like SQL injection.  Bad or absent semantic checks
lead to problems that are often more specific to the application at hand.
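The distinction can be sketched in a few lines of Java (the names below are invented for illustration):

```java
import java.util.Map;
import java.util.Set;

public class ValidationDemo {
    // Syntax check: "the input must be an integer".  This can sit in a
    // bottleneck layer right where input enters the program.
    static int requireInt(String raw) {
        if (raw == null || !raw.matches("-?\\d{1,9}")) {
            throw new IllegalArgumentException("not an integer: " + raw);
        }
        return Integer.parseInt(raw);
    }

    // Semantic check: "the input must identify an account held in your
    // name".  It depends on application state, so it lives with the logic.
    static void requireOwnAccount(String user, int accountId,
                                  Map<String, Set<Integer>> accountsByUser) {
        Set<Integer> owned = accountsByUser.get(user);
        if (owned == null || !owned.contains(accountId)) {
            throw new SecurityException(user + " does not hold account " + accountId);
        }
    }

    public static void main(String[] args) {
        int id = requireInt("1001");                       // syntax: passes
        Map<String, Set<Integer>> accounts = Map.of("alice", Set.of(1001, 1002));
        requireOwnAccount("alice", id, accounts);          // semantics: passes
        try {
            requireOwnAccount("mallory", id, accounts);    // semantics: fails
        } catch (SecurityException e) {
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```

Note that the syntax check knows nothing about the application, which is exactly why it can be centralized, while the semantic check needs the account map and so has to stay near the business logic.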

There's a lot to say about input validation.  Jacob West and I devoted a
full chapter to it in Secure Programming with Static Analysis
(http://www.amazon.com/dp/0321424778), but we found that the material
refused to stay in its cage: input validation got a lot of airtime when we
talked about the Web, when we talked about privileged programs, and then
again when we got around to the litany of common errors in C/C++ programs.

Brian


On 1/14/09 2:02 PM, "Ivan Ristic"  wrote:

> On Wed, Jan 14, 2009 at 12:41 AM, Greg Beeley 
> wrote:
>> > Steve I agree with you on this one.  Both input validation and output
>> encoding
>> > are countermeasures to the same basic problem...
> 
> I'd like to offer a different view for your consideration, which is
> that input validation and output encoding actually don't have anything
> to do with security. Those techniques are essential software building
> blocks. While it is true that omission to use these techniques often
> causes security issues, that only means such programs are insecure in
> addition to being defective. I think that it's inherently wrong to
> associate input validation and output encoding with security. Fix the
> defects and the security issues will go away. On the other hand, if
> you only fix the security issues you may be left with a number of
> defects on your hands.
> 
> Input validation layers should focus on accepting only valid data (per
> business requirements), while code that transmits data across system
> boundaries should focus on using the exchange and communication
> protocols correctly.
> 
> Actually, now that I think about it more, I think we are struggling
> with the term input validation because the term has been overloaded.
> In the one sense, we are talking about validating user input, which
> mostly needs to concern itself with adhering to business requirements.
> This meaning is not very important for security, but the other one,
> validating data before something is done with it, is. If you take a
> web application for example, you would ideally verify that all user
> submitted data adheres to your business requirements.
> 
> --
> Ivan Ristic



Re: [SC-L] top 10 software security surprises

2008-12-17 Thread Brian Chess
Thanks Ken.  For me this has been an incredibly eye-opening project.   
It can be hard for people to distinguish between ideas that merely  
look good on paper and ideas that are actually in widespread use.   
Once we’ve cleaned up the data and gotten approval from the  
organizations we canvassed, we think there’ll be plenty of ways to  
apply what we’ve learned.  The project Pravir mentioned is one.


Brian

[Ed. This was from br...@fortify.com, but was dropped by the Mailman  
server since it's set to ignore emails from non-subscribed addresses  
(spam...). Issue was resolved re Brian's email address.]


On 12/17/08 11:48 AM, "Ken van Wyk"  wrote:


On Dec 16, 2008, at 1:25 PM, Gary McGraw wrote:
Using the software security framework introduced in October (A  
Software Security Framework: Working Towards a Realistic Maturity  
Model ),  
we interviewed nine executives running top software security  
programs in order to gather real data from real programs.


Wow, this is great stuff.  Kudos to Gary, Sammy, and Brian.

I have a couple comments/observations on some of your conclusions:

- You obviously wrote the top-10 list in C, since it went from 9 to  
0.  :-)


- "Not only are there no magic software security metrics, bad  
metrics actually hurt."  This is an excellent point.  I think it's  
also worth noting that it's important to carefully consider what  
metrics make sense for an organization _as early as possible_ in the  
life of their software security efforts.  Trying to retro-engineer  
some metrics into a program after the fact is not a fun thing.


- "Secure-by-default frameworks can be very helpful, especially if  
they are presented as middleware classes (but watch out for an over  
focus on security "stuff"). "  Yes yes yes!  I've found  
significantly more "traction" to prescriptive guidance vs. a "don't  
do this" list of bad practices.  Plus, it inherently supports a  
mindset of positive validation instead of negative.  It's important  
to look for common mistakes, but if you really want your devs to  
follow, give them clear coding guidelines with annotated  
descriptions of how to follow them.  Efforts like OWASP's ESAPI are  
indeed a great starting point here for plugging in things like  
strong positive input validation and such.


- "Web application firewalls are not in wide use, especially not as  
Web application firewalls. "  I can't say I'm much surprised by this  
one.  Even with PCI-DSS driving people to WAFs (or doing external  
independent code reviews), I just don't see them often.  But  
you go on to say, "But even these two didn't use them to block  
application attacks; they used them to monitor Web applications and  
gather data about attacks."--but you don't come back to this point.   
One serious benefit to WAFs can be enhancing the ability to do  
monitoring, especially of legacy apps.  Adding one network choke  
point WAF can quickly add an app-level monitoring capability that  
few organizations considered when rolling the apps out in the first  
place.


- "Though software security often seems to fit an audit role rather  
naturally, many successful programs evangelize (and provide software  
security resources) rather than audit even in regulated industries"   
This one too is very encouraging to see.


- "Architecture analysis is just as hard as we thought, and maybe  
harder." And this one is very discouraging.  I've seen good results  
in doing architectural risk analyses, but the ones that produce  
useful results tend to be the more ad hoc ones -- and NOT the ones  
that follow rigorous processes.


- "All nine programs we talked to have in-house training curricula,  
and training is considered the most important software security  
practice in the two most mature software security initiatives we  
interviewed. "  That explains the quarter-million miles in my United  
account this year alone.  :-) Ugh.


- "Though all of the organizations we talked to do some kind of  
penetration testing, the role of penetration testing in all nine  
practices is diminishing over time. "  Hallelujah!




Cheers,

Ken

-
Kenneth R. van Wyk
SC-L Moderator
KRvW Associates, LLC
http://www.KRvW.com





[SC-L] International Symposium on Engineering Secure Software and Systems (ESSoS)

2008-06-26 Thread Brian Chess
CALL FOR PAPERS
===
International Symposium on Engineering Secure Software and Systems (ESSoS)
February 04-06, 2009
Leuven, Belgium
http://distrinet.cs.kuleuven.be/events/essos2009/

CONTEXT AND MOTIVATION
Trustworthy, secure software is a core ingredient of the modern world.
Unfortunately, most software developed today runs on a network exposing it
to a hostile environment. The Internet can allow vulnerabilities in software
to be exploited from anywhere in the world. High-quality security building
blocks (e.g., cryptographic components) are necessary, but insufficient to
address this. Indeed, the construction of secure software is challenging
because of the complexity of applications, the growing security
requirements, and the multitude of software technologies and attack vectors.
Clearly, a strong need exists for engineering techniques for secure software
and systems that scale well and that demonstrably improve the software's
security properties.

GOAL AND SETUP
The goal of this symposium, which will be the first in a series of events,
is to bring together researchers and practitioners to advance the states of
the art and practice in secure software engineering. Being one of the few
conference-level events dedicated to this topic, it explicitly aims to
bridge the software engineering and security engineering communities, and
promote cross-fertilization. The symposium will feature two days of
technical programme as well as one day of tutorials. The technical programme
includes an experience track for which the submission of highly informative
case studies describing (un)successful secure software project experiences
and lessons learned is explicitly encouraged.

TOPICS 
The Symposium seeks submissions on topics related to its goals. This
includes a diversity of topics including (but not limited to):
-scalable techniques for threat modeling and analysis of vulnerabilities
-specification and management of security requirements and policies
-security architecture and design for software and systems
-model checking for security
-specification formalisms for security artifacts
-verification techniques for security properties
-systematic support for security best practices
-security testing
-security assurance cases
-programming paradigms, models and DSLs for security
-program rewriting techniques
-processes for the development of secure software and systems
-security-oriented software reconfiguration and evolution
-security measurement
-automated development
-trade-off between security and other non-functional requirements
-support for assurance, certification and accreditation


SUBMISSION AND FORMAT
The proceedings of the symposium will be published as a Springer-Verlag
volume in the Lecture Notes in Computer Science Series
(http://www.springer.com/lncs). Submitted papers must present original,
non-published work of high quality that has not been submitted for potential
publication in parallel. Submitted papers should follow the formatting
instructions of the Springer LNCS Style, and should include maximally 15
pages for research papers and 10 pages for industrial papers (figures and
appendices included). Proposals for tutorials are highly welcome as well.
Further guidelines will appear on the website of the symposium.

IMPORTANT DATES
Abstract submission: September 8, 2008
Paper submission: September 15, 2008
Author notification: November 5, 2008
Camera-ready: November 24, 2008
Tutorial submission: October 24, 2008
Tutorial notification: November 21, 2008

STEERING COMMITTEE
Jorge Cuellar (Siemens AG)
Wouter Joosen (Katholieke Universiteit Leuven)
Fabio Massacci (Università di Trento)
Gary McGraw (Cigital)
Bashar Nuseibeh (The Open University)
Samuel Redwine (James Madison University)

ORGANIZING COMMITTEE
General chair: Bart De Win (Katholieke Universiteit Leuven)
Program co-chairs: Fabio Massacci (Università di Trento) and Samuel Redwine
(James Madison University)
Publication chair: Nicola Zannone (University of Toronto)
Tutorial chair: Riccardo Scandariato (Katholieke Universiteit Leuven)

PROGRAM COMMITTEE (preliminary)
Matt Bishop, University of California (Davis) - USA
Brian Chess, Fortify Software - USA
Richard Clayton, Cambridge University - UK
Christian Collberg, University of Arizona - USA
Bart De Win, Katholieke Universiteit Leuven - BE
Juergen Doser, ETH - CH
Eduardo Fernandez-Medina, University of Castilla-La Mancha - ES
Dieter Gollmann, University of Hamburg - DE
Michael Howard, Microsoft - USA
Cynthia Irvine, Naval Postgraduate School - USA
Jan Jurjens, Open University - UK
Volkmar Lotz, SAP Labs - FR
Antonio Mana, University of Malaga - ES
Robert Martin, MITRE - USA
Fabio Massacci, Università di Trento - IT
Mira Mezini, Darmstadt University - DE
Mattia Monga, Milan University - IT
Andy Ozment, DoD - USA
Gunther Pernul, Universitat Regensburg - DE
Domenico Presenza, Engineering - IT
Samuel Redwine, Jam

Re: [SC-L] Really dumb questions?

2007-08-30 Thread Brian Chess
> - So when a vendor says that they are focused on quality and not
> security, and vice versa what exactly does this mean?

We spend most of Chapter 2 of Secure Programming with Static Analysis
describing the different problems that static analysis tools try to solve,
and we show where we think all of the companies you mention (plus a lot of
others) fit in.  The relative importance of false positives vs false
negatives is one important difference, but so is extensibility, rule set (as
John mentioned), ability of the tool to prioritize its findings, and the
interface the tool presents for exploring the results.  From my experience,
the vendors do different things in all of these areas, and these differences
aren't just a result of dumb luck.  They stem from different philosophies
about what the tools are supposed to do.  "Quality vs. Security" may be an
oversimplification, but the differences between the tools are much more than
cosmetic.


> - Is it reasonable to expect that all of the vendors in this space will
> have the ability to support COBOL, Ruby and Smalltalk sometime next year
> so that customers don't have to specifically request it?

I don't think so.  The way a tool is designed can make it easier or harder
to add support for a new language, but unless you're doing a really
superficial analysis, adding a new language is always a big deal. Supporting
a language requires more than just being able to parse it.  The tools often
have to do special work to make sure that the meaning of common idioms
carries over correctly in the analysis, then there's the small matter of
developing a rule set.

Someone mentioned that Ruby makes life hard because it lacks static types.
While that's true, it compensates in other ways.  For example, because of a
lack of static types, there are often more bugs to find.  There's some
really good academic work going on right now around security analysis of
scripting languages (mostly PHP).  Here's my pick of the week:

Sound and Precise Analysis of Web Applications for Injection Vulnerabilities
by Gary Wassermann and Zhendong Su
http://wwwcsif.cs.ucdavis.edu/~wassermg/research/pldi07.pdf


Regards,
Brian







[SC-L] Secure Programming with Static Analysis

2007-07-05 Thread Brian Chess
Jacob West and I are proud to announce that our book, Secure Programming
with Static Analysis, is now available.

http://www.amazon.com/dp/0321424778

The book covers a lot of ground.
* It explains why static source code analysis is a critical part of a secure
development process.
* It shows how static analysis tools work, what makes one tool better than
another, and how to integrate static analysis into the SDLC.
* It details a tremendous number of vulnerability categories, using
real-world examples from programs such as Sendmail, Tomcat, Adobe Acrobat,
Mac OSX, and dozens of others.

We'd like to thank the many members of the sc-l list who helped us out with
the book in one way or another, including:
  Pravir Chandra
  Gary McGraw
  Katrina O'Neil
  John Steven
  Ken van Wyk

Regards,
Brian and Jacob



Re: [SC-L] JavaScript Hijacking

2007-04-19 Thread Brian Chess

Frederik De Keukelaere <[EMAIL PROTECTED]> writes:
> Would you mind sharing the different data formats you came across for
> exchanging data in mashups/Web 2.0? Considering the challenges you
> recently discovered, it might be good to have such an overview to look at
> it from a security point of view.

Oops, sorry for taking so long to respond.  In addition to JSON, I've seen
two other uses of JavaScript as a data transport format.

1) JavaScript arrays
Example: [ "a", "b", "c" ]

Technically speaking, this is a subset of JSON, but in these systems there
is no notion of an object, only an array.  These systems are more vulnerable
than systems using JSON because they're guaranteed to always use array
syntax.


2) Function calls
Example:  addRecord("a", "b", "c");

This format is even easier to hijack: just define the named function.  This
is the worst of the bunch from a confidentiality standpoint.
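The hijack needs nothing more than the attacker's page defining the expected function before pulling in the cross-domain response. A minimal sketch, runnable in Node, with eval() standing in for the <script src="..."> inclusion a real attack page would use; the function name and field names are hypothetical:

```javascript
// Sketch of hijacking a function-call data transport. On the attacker's
// page, the attacker defines the callback before including the vulnerable
// site's response; here eval() simulates that cross-domain script include.
const stolen = [];

// Attacker's definition of the function the response is expected to call.
function addRecord(name, ssn, balance) {
  stolen.push({ name, ssn, balance });
}

// Response the vulnerable site serves to an authenticated user; in a real
// attack the victim's browser fetches it with the victim's cookies attached.
const response = 'addRecord("a", "b", "c");';

eval(response); // the "included script" hands its data to the attacker
// stolen now holds the record { name: "a", ssn: "b", balance: "c" }
```

Because the browser happily runs cross-domain scripts, nothing in this flow requires a bug on the attacked site beyond the choice of transport format.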

Regards,
Brian



Re: [SC-L] SC-L Digest, Vol 3, Issue 73

2007-04-09 Thread Brian Chess
Hi Frederik, 
You're right that IE does not have the setter methods.  You're also right
that hijacking the Object() or Array() constructor method would be enough to
pull off the attack.  The bad (good?) news is that IE doesn't call those
methods unless an object is explicitly created with the "new" keyword.  We
got this wrong when we looked at it initially, which is why we said the code
could be ported to IE.  We're going to go back and fix that in the paper.

Of course, any JavaScript data transport format that explicitly calls a
function is vulnerable in all browsers.  Over the last week or two I've been
learning that people are moving data around using a lot more than just JSON,
though JSON is the clear front-runner.

Brian

> 
> Message: 1
> Date: Fri, 6 Apr 2007 11:32:33 +0900
> From: Frederik De Keukelaere <[EMAIL PROTECTED]>
> Subject: Re: [SC-L] JavaScript Hijacking
> To: sc-l@securecoding.org
> Message-ID:
> <[EMAIL PROTECTED]>
> Content-Type: text/plain; charset="us-ascii"
> 
> Hi Brian, Hi Stefano,
> 
> 
>  
>> Ok I see the difference.
>> You are taking advantage of a pure json CSRF with a evil script which
>> contains a modified version of the Object prototype.
>> And when the callback function is executed you use a XMLHttpRequest in
>> order to send the information extracted by the instantiated object.
> 
> In the beginning of the paper there was a comment that the code that was
> presented was designed for use in Firefox but could be ported to IE or
> other browsers. However, since IE does not seem to have the setter methods
> (correct me if I am wrong), I did not quite find a way to achieve this in
> IE. 
> We tried several things such as replacing Array and Object constructor as
> well as as overriding eval, neither of which worked. Do you have any
> suggestions about how to port this attack to IE?
> 
> Btw, thanks for the papers.
> 
> Kind Regards,
> 
> Fred
> 
> ---
> Frederik De Keukelaere, Ph.D.
> Post-Doc Researcher
> IBM Research, Tokyo Research Laboratory
> 
> --
> 
> 
> End of SC-L Digest, Vol 3, Issue 73
> ***



Re: [SC-L] JavaScript Hijacking

2007-04-02 Thread Brian Chess
Hi Stefano,

Yes, we are aware of your paper, but we intentionally chose to omit the
reference because we are quite snobby.  I'm joking!  I hadn't seen your
paper previously.  It was a good read.

The difference between what you discuss and JavaScript Hijacking is that we
do not assume the presence of another defect.  JavaScript Hijacking does not
require the existence of a cross-site scripting vulnerability or the like.
It's a new attack technique (and a new vulnerable code pattern), not a new
method for exploiting an existing class of vulnerabilities.

Thanks,
Brian

> From: Stefano Di Paola <[EMAIL PROTECTED]>
> Date: Mon, 02 Apr 2007 11:11:24 +0200
> To: "sc-l@securecoding.org" 
> Cc: Brian Chess <[EMAIL PROTECTED]>
> Subject: Re: [SC-L] JavaScript Hijacking
> 
> Brian,
> 
> i don't know if you read it but me and Giorgio Fedon presented a paper
> named "Subverting Ajax" at 23rd CCC Congress.
> (4th section XSS Prototype Hijacking)
> http://events.ccc.de/congress/2006/Fahrplan/attachments/1158-Subverting_Ajax.pdf
> 
> It described a technique called Prototype Hijacking, which is about
> overriding methods and attributes by using constructors and prototyping.
> It was described how to override XMLHttprequest object, but it was
> stated that it could be applied to every prototype.
> 
> If you didn't read it, please read it and add some reference to your
> paper.
> If you read it:
> - i think we deserve at least reference to our paper.
> - even if you covered JSON hijacking, the technique is the same and the
> name (Javascript Hijacking) is quite similar.
> 
> Regards,
> 
> Stefano
> 




[SC-L] JavaScript Hijacking

2007-04-01 Thread Brian Chess
I've been getting questions about Ajax/Web 2.0 for a few years now.  Most of
the time the first question is along these lines: "Does Ajax cause any new
security problems?"  Until recently, my answer has been right in line with
the answers I've heard from other corners of the world: "No."

Then I've gone on to explain that Ajax doesn't change the rules of the game,
but it does tilt the playing field.  For example:
  - By splitting your code between a client and a server, you increase
you opportunity for misplacing input validation logic and access
control checks.
  - Dynamic testing tools tend to have a harder time with Ajax apps.

Now my story has changed.  We've found a new type of vulnerability that only
affects Ajax-style apps.  We call the attack "JavaScript Hijacking".  It
enables an attacker to read confidential information from vulnerable sites.
The attack works because many Ajax apps have given up on the "x" in Ajax.
Instead of XML, they're using JavaScript as a data transport format.

The problem is that web browsers don't protect JavaScript the same way they
protect HTML, so a malicious web site can peek into some of the JavaScript
returned from a vulnerable Ajax app.  We've looked at a lot of Ajax
frameworks over the past few weeks, including Google's GWT, Microsoft Atlas,
and half a dozen open source frameworks.  Almost all of them make it easy
for developers to write vulnerable code.  Some of them *require* developers
to write vulnerable code.

Our write-up on the problem, along with our proposed solution, is here:

http://www.fortify.com/servlet/downloads/public/JavaScript_Hijacking.pdf

Enjoy,
Brian



Re: [SC-L] Silverbullet: Fortify TAB

2007-02-03 Thread Brian Chess
We just posted the transcript from the Silver Bullet podcast with the
Fortify TAB:
http://www.fortify.com/silverbullet

Audio still available on the Silver Bullet site:
http://www.cigital.com/silverbullet/show-010/

Brian

-
From: <[EMAIL PROTECTED]>
Reply-To: 
Date: Mon, 22 Jan 2007 18:57:28 -0800
To: 
Conversation: SC-L Digest, Vol 3, Issue 17
Subject: SC-L Digest, Vol 3, Issue 17

Date: Mon, 22 Jan 2007 15:43:02 -0500
From: "Gary McGraw" <[EMAIL PROTECTED]>
Subject: [SC-L] Silverbullet: Fortify TAB
To: 
Message-ID:
<[EMAIL PROTECTED]>
Content-Type: text/plain;   charset="us-ascii"

Hi all,

I am pleased to announce the tenth (!) episode of the "Silver Bullet
Security Podcast with Gary McGraw" was released today.  We tried
something different in this episode by doing a group interview with the
entire Fortify Technical Advisory Board.  This was a super opportunity
to cover software security from many different angles...which is
precisely what I did.

On the 'cast are the pontificatory stylings of:
* Bill Pugh, Professor at University of Maryland, static analysis for
finding bugs
* Li Gong, GM at Microsoft, MSN in China
* Marcus Ranum, CSO of Tenable Network Security, security products
trainer
* Avi Rubin, Professor at Johns Hopkins, electronic voting security
* Fred Schneider, Professor at Cornell, trustworthy computing
* Greg Morrisett, Professor at Harvard, dependent type theory
* Matt Bishop, Professor at UC Davis, computer security
* Dave Wagner, Professor at Berkeley, software security and electronic
voting

To listen, hit http://www.cigital.com/silverbullet/show-010/

Also note that during the same TAB meeting, reporters from ZDNet stopped
in and did an interview that resulted in this very software-security-centric
blog entry: http://blogs.zdnet.com/BTL/?p=4280

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
book www.swsec.com



[SC-L] Java Open Review Project

2006-12-12 Thread Brian Chess
Hello all, I'm pleased to announce that we've just launched the Java Open
Review Project (http://opensource.fortifysoftware.com).  We're reviewing
open source Java code all the way from Tomcat down to PetStore looking for
bugs and security vulnerabilities.  We're using two static analysis tools to
do the heavy lifting: FindBugs and Fortify SCA.  We can use plenty of human
eyes to help sort through the results.  We're also soliciting ideas for
which projects we should be reviewing next.  Please help!

Brian



[SC-L] RE: Comparing Scanning Tools

2006-06-09 Thread Brian Chess

McGovern, James F wrote:

> I have yet to find a large enterprise that has made a significant investment in such tools. 

I’ll give you pointers to two.  They’re two of the three largest software companies in the world.

http://news.com.com/2100-1002_3-5220488.html
http://news.zdnet.com/2100-3513_22-6002747.html

Brian






[SC-L] Re: Comparing Scanning Tools

2006-06-09 Thread Brian Chess
Hi Jerry, as one of the creators of the tool you evaluated, I have to admit
I have the urge to comment on your message one line at a time and explain
each way in which the presentation you attended did not adequately explain
what Fortify does or how we do it.  But I don't think the rest of the people
on this list would find that to be a very interesting posting, so instead
I'm going to try to stick to general comments about a few of the subjects
you brought up.


False positives:
Nobody likes dealing with a pile of false positives, and we work hard to
reduce false positives without giving up potentially exploitable
vulnerabilities.

In some sense, this is where security tools get the raw end of the deal.  If
you're performing static analysis in order to find general quality problems,
you can get away with dropping a potential issue on the floor as soon as you
get a hint that your analysis might be off.  You can't do that if you are
really focused on security.  To make matters worse for security tools, when
a quality-focused tool can detect just some small subset of some security
issue, its creator labels it a "quality and security" tool.  Ugh.  This
rarely flies with a security team, but sometimes it works on non-security
folks.
 
Compounding the problem is that, when the static analysis tool does point
you at an exploitable vulnerability, it's often not a very memorable
occasion.  It's just a little goof-up in the code, and often the problem is
obvious once the tool points it out.  So you fix it, and life goes on.  If
you aren't acutely aware of how problematic those little goof-ups can be
once some "researcher" announces one of them, it can almost seem like a
non-event.  All of this can make the hour you spent going through reams of
uninteresting results seem more important than the 5 minutes you spent
solving what could have become a major problem, even though exactly the
opposite is true.


Suppression:
A suppression system that relies on line numbers wouldn't work very well.
When it comes to suppression, the biggest choice you've got to make is
whether or not you're going to rely on code annotation.  Code annotation can
work well if you're reviewing your own code, but if you're reviewing someone
else's code and you can't just go adding annotation goo wherever you like,
you can't use it, at least not exclusively.

Instead, the suppression system needs to be able to match up the salient
features of the suppressed issue against the code it is now evaluating.
Salient features should include factors like the names of variables and
functions, the path or paths required to activate the problem, etc.


Customization:
Of course the more knowledge you provide the tool, the better a job it can
do at telling you things you'd like to know.  But in the great majority of
cases that I've seen, little or no customization is required in order to
derive benefit from any of the commercial static analysis tools I've seen.

In the most successful static analysis deployments, the customization
process never ends--people keep coming up with additional properties they'd
like to check.  The static analysis tool becomes a way to share standards
and best practices.

Regards,
Brian




[SC-L] RE: The role static analysis tools play in uncovering elements of design

2006-02-05 Thread Brian Chess
"Jeff Williams" <[EMAIL PROTECTED]> wrote:

> I think there's a lot more that static analysis can do than what you're
> describing. They're not (necessarily) just fancy pattern matchers.


Jeff, you raise an important point.  Getting good value out of static
analysis requires a second component in addition to a fancy pattern
matcher.  You also need a good set of fancy patterns (aka rules)
to match against.

Understand that when I say "fancy pattern", I don't mean "regular
expression".  More formally, I mean "program property".  Taint
propagation, state transitions, feasible control flow paths,
alias analysis, etc. are all important if you'd like to build
a tool, but if you're contemplating how to best apply a tool,
all of your interaction will be of the form

"Show me places in the program where property X holds"

and the goal of the tool is to match the code against the property
you've specified.  For a good initial user experience, a tool
needs to come with a well-constructed default set of rules.  For
aiding in program understanding, it needs to allow you to easily
add new rules of your own construction.

The number of false alarms you get from a good static analysis
tool is directly related to how aggressive the tool is in finding
constructs that might be what you're looking for. As an example, I'll
use a question you posed: "are there any paths around the
encryption?"  If you're going to bet your life that there aren't
any such paths, then you'd like the tool to make conservative
assumptions and allow you to manually review anything that might
possibly match the pattern you've specified. If you only have a
passing interest in the property, then you'd prefer the tool weed
out paths that, while not impossible, are not likely to be of
interest.  Maybe some code will help illustrate my point:

  if (b) {
    panic();
  } else {
    buf = encrypt(buf);
  }
  return buf;

Would you like to be warned that buf may be returned unencrypted?
This code fragment guarantees that either buf is encrypted or
panic() is called.  But usually control doesn't return from a
function named panic().  The problem is, that's not universally
true; sometimes panic() just sets a flag and the system reboot
happens shortly thereafter.

Whether or not you want to see this path depends on how important
it really is to you that encryption is absolutely never bypassed.
Your tolerance for noise is dictated by the level of assurance
you require.

Brian




RE: [SC-L] Bugs and flaws

2006-02-03 Thread Brian Chess
The best definition for "flaw" and "bug" I've heard so far is that a flaw is
a successful implementation of your intent, while a bug is unintentional.  I
think I've also heard "a bug is small, a flaw is big", but that definition
is awfully squishy.

If the difference between a bug and a flaw is indeed one of intent, then I
don't think it's a useful distinction.  Intent rarely brings with it other
dependable characteristics.

I've also heard "bugs are things that a static analysis tool can find", but
I don't think that really captures it either.  For example, it's easy for a
static analysis tool to point out that the following Java statement implies
that the program is using weak cryptography:

SecretKey key = KeyGenerator.getInstance("DES").generateKey();
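A toy version of such a rule can be sketched as a textual pattern. This is only an illustration: a real analyzer matches against a parsed representation of the program (with constant and alias information), not raw source text, and the list of "weak" algorithm names here is an assumption for the example:

```javascript
// Toy "weak cryptography" rule: flag requests for weak algorithms
// passed to KeyGenerator.getInstance(). A regex over source lines is
// only a sketch of what a real static analysis rule expresses.
const weakAlgorithm = /KeyGenerator\.getInstance\(\s*"(DES|RC2|RC4)"\s*\)/;

function findWeakCrypto(source) {
  return source
    .split("\n")
    .map((text, i) => ({ line: i + 1, text }))   // keep line numbers
    .filter(({ text }) => weakAlgorithm.test(text));
}

const src = 'SecretKey key = KeyGenerator.getInstance("DES").generateKey();';
console.log(findWeakCrypto(src)); // flags line 1
```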

Brian



[SC-L] Re: SC-L Digest, Vol 2, Issue 17

2006-02-03 Thread Brian Chess
John, I think this has to do with what you want to achieve when you explore
code.

A static analysis tool is a fancy sort of pattern matcher.  If the kinds of
patterns you're interested in aren't that fancy, ("does the program use
function X()?"; "what is the class hierarchy?") then a fancy pattern matcher
is overkill.

If your version of code exploration includes things like "is function A()
always called before function B()?" or "is it possible for this data
structure Z to be populated with the result of function X()?"  then you're
in the realm where a static analysis tool might help.
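A hypothetical fragment (lock()/unlock() invented for illustration, with a
counter standing in for real lock state) shows why the second kind of
question needs more than a textual search: grep finds both calls, but only
path-sensitive analysis sees that one path reaches unlock() without lock():

```java
public class OrderingDemo {
    static int depth = 0;
    static void lock()   { depth++; }
    static void unlock() { depth--; }

    // grep finds lock() and unlock() here, but cannot tell that
    // when b is false, unlock() runs without a matching lock().
    static void f(boolean b) {
        if (b) {
            lock();
        }
        unlock();
    }

    public static void main(String[] args) {
        f(false);
        System.out.println(depth);  // -1: the ordering property is violated
    }
}
```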

Of course, a static analysis tool allows you to take shortcuts, so you may
learn less about the code than you would if you had to answer these
questions the hard way.

Brian


Date: Fri, 03 Feb 2006 13:39:36 -0500
From: "John Steven" <[EMAIL PROTECTED]>
Subject: [SC-L] The role static analysis tools play in uncovering
elements of design
To: "Jeff Williams" <[EMAIL PROTECTED]>,"Secure Coding
Mailing List" 
Message-ID: <[EMAIL PROTECTED]>
Content-Type: text/plain; charset="iso-8859-1"

Jeff,

An unpopular opinion I've held is that static analysis tools, while very
helpful in finding problems, inhibit a reviewer's ability to collect as
much information about the structure, flow, and idiom of code's design as
the reviewer might find if he/she spelunks the code manually.

I find it difficult to use tools other than source code navigators (Source
Insight) and scripts to facilitate my code understanding (at the
design-level).

Perhaps you can give some examples of static analysis library/tool use that
overcomes my prejudice.

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] Bugs and flaws

2006-02-02 Thread Brian Chess
I spent Phase One of both my academic and professional careers
working on hardware fault models and design for testability.
In fact, the first static analysis tool I wrote was for hardware:
it analyzed Verilog looking for design mistakes that would make
it difficult or impossible to perform design verification or to
apply adequate manufacturing tests.  Some observations:

- The hardware guys are indeed ahead.  Chip designers budget for
test and verification from day one.  They also do a fair amount
of thinking about what's going to go wrong.  Somebody's going to
give you 5 volts instead of 3.3 volts.  What's going to happen?
The transistors are going to switch at a different rate when the
chip is cold.  What's going to happen?  A speck of dust is going
to fall on the wafer between the time the metal 2 layer goes down
and the time the metal 3 layer goes down.  What's going to happen?

- The difference between a manufacturing defect and a design
defect is not always immediately obvious.  Maybe two wires got
bridged because a piece of dust fell in exactly the right spot.
Maybe two wires got bridged because you made a mistake in your
process physics and you need 50 nm of tolerance instead of 0.5 nm.
You'd better figure it out before you go into full-swing
manufacturing, or big batches of defective chips could kill your
profit margins and drive your customers away at the same time.
For that reason, diagnosing the cause of failure is an important
topic.

Brian

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
On Behalf Of Chris Wysopal
Sent: 02 February 2006 21:35
To: Gary McGraw
Cc: William Kruse; Wall, Kevin; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws



In the manufacturing world, which is far more mature than the software
development world, they use the terminology of "design defect" and
"manufacturing defect".  So this distinction is useful and has stood the
test of time.

Flaw and defect are synonymous. We should just pick one. You could say
that the term for manufacturing software is "implementation".

So why do we need to change the terms for the software world?  Wouldn't
"design defect" and "implementation defect" be clearer and more in line
with the manufacturing quality discipline, which the software quality
discipline should be working towards emulating?  (When do we get to Six
Sigma?)

I just don't see the usefulness of calling a "design defect" a "flaw".
"Flaw" by itself is overloaded.  And in the software world, "bug" can mean
an implementation or design problem, so "bug" alone is overloaded for
describing an implementation defect.

At @stake the Application Center of Excellence used the terminology "design
flaw" and "implementation flaw".  It was well understood by our customers.

As Crispin said in an earlier post on the subject, the line is sometimes
blurry.  I am sure this is the case in manufacturing too.  Architecture
flaws can be folded into the design flaw category for simplicity.

My vote is for a less overloaded and clearer terminology.

-Chris

P.S. My father managed a non-destructive test lab at a jet engine
manufacturer.  They had about the highest quality requirements in the
world.  So for many hours I was regaled with tales about the benefits of
performing static analysis on individual components early in the
manufacturing cycle.

They would dip cast parts in a fluorescent liquid and look at them under
ultraviolet light to illuminate cracks caused during the casting process.
For critical parts which would receive more stress, such as the fan blades,
they would x-ray each part to inspect for internal cracks.  A more
expensive process, but warranted due to the increased risk of total system
failure for a defect in those parts.

The static testing was obviously much cheaper and delivered better quality
than just bolting the parts together and doing dynamic testing in a test
cell.  It's a wonder that it has taken the software security world so long
to catch on to the benefits of static testing of implementation.  I think
we can learn a lot more from the manufacturing world.

On Thu, 2 Feb 2006, Gary McGraw wrote:

> Hi all,
>
> When I introduced the "bugs" and "flaws" nomenclature into the
> literature, I did so in an article about the software security
> workshop I chaired in 2003 (see http://www.cigital.com/ssw/).  This
> was ultimately written up in an "On the Horizon" paper published by
> IEEE Security & Privacy.
>
> Nancy Mead and I queried the SWEBOK and looked around to see if the
> new usage caused collision.  It did not.  The reason I think it is
> important to distinguish the two ends of the rather slippery range
> (crispy is right about that) is that software security as a field is
> not paying enough attention to architecture.  By identifying flaws as
> a subcategory of defects (according to the SWEBOK), we can focus some
> attention on the problem.
>
> From the small glossary in "Software Security" (my new book out
> tomorrow):
>
> Bug-A bu