RE: [SC-L] RE: The role static analysis tools play in uncovering elements of design

2006-02-06 Thread Jeff Williams
Brian,

"Show me places in the program where property X holds"

Yes. That's it exactly. Current tools can answer this type of question to
some extent, but they're not really designed for it. The interaction
contemplated by most of the tools is more like "show me the line of code
the vulnerability is on." That doesn't really help verify the security of an
application, and it doesn't work at the design level. The property X approach
does both.
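A property-style query of this kind can be sketched in a few lines. The
following is a hypothetical illustration, not any shipping tool's interface:
it answers "show me every place where `encrypt` is called" over an invented
Python snippet, reporting the line and the enclosing function.

```python
# Hypothetical sketch of a "show me places where property X holds" query.
# The analyzed snippet and the function name "encrypt" are assumptions
# made for illustration only.
import ast

def find_call_sites(source, func_name):
    """Return (line, enclosing function name) for each call to func_name."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # Only look inside function bodies so we can report the context.
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for inner in ast.walk(node):
                if (isinstance(inner, ast.Call)
                        and isinstance(inner.func, ast.Name)
                        and inner.func.id == func_name):
                    hits.append((inner.lineno, node.name))
    return hits

example = """
def save(record):
    encrypt(record)

def export(record):
    return record  # encryption bypassed here
"""

print(find_call_sites(example, "encrypt"))  # → [(3, 'save')]
```

The interesting answer is as much what is absent as what is present:
`export` never appears in the result, which is exactly the "encryption
bypassed" path a property query should surface.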

 aiding in program understanding, it needs to allow you to easily
 add new rules of your own construction.

This is absolutely critical. In addition to creating new rules, we need to
be able to tag custom libraries and methods with their security properties.
This will allow existing rules to be applied in new contexts: for example,
tagging a custom validation method as "untaint" so that existing data
validation rules now include it.
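The idea can be sketched minimally. Everything below is hypothetical (the
rule, the sanitizer names, and the notion of a "flow" as an ordered list of
calls that tainted data passes through): tagging a custom validator makes an
existing rule cover it without writing a new rule.

```python
# Minimal sketch (all names are assumptions) of tagging a custom method
# with the "untaint" property so existing taint rules include it.
BUILTIN_SANITIZERS = {"html_escape"}
custom_sanitizers = set()

def tag_sanitizer(name):
    """Tag a custom validation method as untainting."""
    custom_sanitizers.add(name)

def flow_is_safe(flow):
    """Existing rule: a flow is safe if any step is a known sanitizer."""
    sanitizers = BUILTIN_SANITIZERS | custom_sanitizers
    return any(step in sanitizers for step in flow)

flow = ["read_param", "my_validate", "render"]
print(flow_is_safe(flow))     # False: my_validate is unknown, so flagged
tag_sanitizer("my_validate")  # tag the custom validator as "untaint"
print(flow_is_safe(flow))     # True: same rule, new context
```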

 Whether or not you want to see this path depends on how important
 it really is to you that encryption is absolutely never bypassed.
 Your tolerance for noise is dictated by the level of assurance
 you require.

Absolutely. The encryption example demonstrates your point well. Still, I
wouldn't want anyone to get the impression that there's a direct
relationship between the signal-to-noise setting on the tool and the level
of assurance one gets in an application. This is because the tools tend to
find the problems that are easiest for them to find, not the ones that
represent the biggest risk.

For example, access control problems in web applications are difficult to
find automatically, because the implementations are generally complex and
distributed across a software baseline. So even if I only want a typical
commercial level of assurance in a web application, I have to turn the
tools' sensitivity all the way up. And even that might not make these
problems visible.

--Jeff




___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] Bugs and flaws

2006-02-06 Thread Evans, Arian
Original message bounced due to address; I chopped it to remove WMF material
and rambling to focus on the subject of language standardization:

[...wmf...]
fyi// on attack surface: http://www-2.cs.cmu.edu/~wing/

Attack surface concepts fit hand in glove with threat modeling concepts,
which fit hand in glove with this equivocal design/implementation dialogue.

[...]
Q. What does the bug/flaw dialogue demonstrate the need for?

There are plenty of folks on this list smarter than I am, so it is
nice to see a majority agree on what I think the key issues are:
communicating (a) accurate and (b) actionable data; expanded:

1. Defect Definition
2. Defect Classification
3. Defect Identification
4. Defect Implication (effectively communicating defect implication)

By example I mean (numbers correspond to the list above):

1. Format String, weak crypto use: define what and why these are security defects.
2. Implementation Defect, Design Defect, bug, flaw
3. How do we identify these defects in software?
4. Implication: RTAWV (Risk, Threat, Attack, Weakness, Vuln) & communication
to both technical and non-technical audiences is the goal.

I added Weakness at the TRIKE group's suggestion, and it has significantly
helped classification, rather than using two confusing vuln categories.

There is obviously a many-to-one mapping between threat-attack-weakness
and even from vuln to weakness, depending on how we define vuln. (I have
defined vuln as a particular instance or attackable instance of a weakness).

This is *valuable* information to the person trying to solve issues in this
problem domain, but I rarely find it well understood by *non-appsec* folks.

(Valuable in the sense that it is easier for non-appsec folks to act on a
weakness, like insufficient output encoding standards/implementation, than
on a list of 10,000 exploitable URLs in a large templated site representing
4 XSS variants.)
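That collapse from per-URL findings to per-weakness items can be sketched
directly. The finding records below are invented for illustration:

```python
# Hedged sketch: group a flood of per-URL findings into per-weakness items
# that a non-appsec audience can act on. The data is invented.
from collections import defaultdict

findings = [
    {"url": "/search?q=x", "weakness": "insufficient output encoding"},
    {"url": "/profile?id=1", "weakness": "insufficient output encoding"},
    {"url": "/admin", "weakness": "missing access control"},
]

by_weakness = defaultdict(list)
for f in findings:
    by_weakness[f["weakness"]].append(f["url"])

# One actionable line per weakness instead of one line per URL.
for weakness, urls in by_weakness.items():
    print(f"{weakness}: {len(urls)} affected URL(s)")
```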

[...]

I continue to encounter equivocal uses of the words Threat, Attack,
Vulnerability, Flaw, Defect, Artifact (and associated phrases like
security-artifact), Fault, Bug, Error, Failure, Mistake, MFV (multi-factor
vulnerability) in our collective software security dialogue and literature.

What is the best way to work on establishing a common language? Is it reasonable
or realistic to expect such standardization?

OWASP and WASC have made strides in the webified space on defining attack
classes, and some weak patterns; Mitre has worked terminology in the
unmanaged code space.

Where to go from here?

Arian J. Evans
FishNet Security

816.421.6611 [fns office]
816.701.2045 [direct] --limited access
888.732.9406 [fns toll-free]
816.421.6677 [fns general fax]
913.710.7045 [mobile] --best bet
[EMAIL PROTECTED] [email]

http://www.fishnetsecurity.com





 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Crispin Cowan
 Sent: Friday, February 03, 2006 2:12 PM
 To: Gary McGraw
 Cc: Kenneth R. van Wyk; Secure Coding Mailing List
 Subject: Re: [SC-L] Bugs and flaws
 
 
 Gary McGraw wrote:
  To cycle this all back around to the original posting, let's talk about
  the WMF flaw in particular.  Do we believe that the best way for
  Microsoft to find similar design problems is to do code review?  Or
  should they use a higher level approach?
 
  Were they correct in saying (officially) that flaws such as WMF are hard
  to anticipate?

 I have heard some very insightful security researchers from Microsoft
 pushing an abstract notion of attack surface, which is the amount of
 code/data/API/whatever that is exposed to the attacker. To design for
 security, among other things, reduce your attack surface.
 
 The WMF design defect seems to be that IE has too large of an attack
 surface. There are way too many ways for unauthenticated remote web
 servers to induce the client to run way too much code with parameters
 provided by the attacker. The implementation flaw is that the WMF API
 in particular is vulnerable to malicious content.
 
 None of which strikes me as surprising, but maybe that's just me :)
 
 Crispin
 -- 
 Crispin Cowan, Ph.D.  
 http://crispincowan.com/~crispin/
 Director of Software Engineering, Novell  http://novell.com
   Olympic Games: The Bi-Annual Festival of Corruption
 
 
 



RE: [SC-L] Bugs and flaws

2006-02-06 Thread Gary McGraw
Hi all,

I'm afraid I don't concur with this definition.  Here's a (rather vague) flaw
example that may help clarify what I mean.  Think about an error of omission
where an API is exposed with no AA (authentication and authorization)
protection whatsoever.  This API may have been designed not to be exposed
originally, but somehow became exposed over time.

How do you find errors of omission with a static analysis tool?  

This is only one of Saltzer and Schroeder's principles in action.  What of the
other 9?
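Gary's question has a sketchable honest answer: a tool can only flag an
omission when it is handed a policy saying what should be present. The
handler code and the policy name below are invented for illustration:

```python
# Sketch of why errors of omission are hard for static tools: the scanner
# can only flag a missing check when given an explicit policy describing
# what *should* be there. The policy name and handlers are assumptions.
import ast

def missing_auth(source, policy="check_auth"):
    """Flag top-level functions that never call the required policy check."""
    flagged = []
    for node in ast.parse(source).body:
        if isinstance(node, ast.FunctionDef):
            calls = {c.func.id for c in ast.walk(node)
                     if isinstance(c, ast.Call)
                     and isinstance(c.func, ast.Name)}
            if policy not in calls:
                flagged.append(node.name)
    return flagged

exposed_api = """
def get_report(user):
    check_auth(user)
    return "report"

def get_admin_panel(user):
    return "admin"  # AA check omitted
"""

print(missing_auth(exposed_api))  # → ['get_admin_panel']
```

Without the `policy` argument, nothing in the source itself marks
`get_admin_panel` as wrong; the defect exists only relative to stated intent,
which is exactly why omissions resist purely code-level analysis.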

gem

P.s. Five points to whoever names the principle in question.

P.p.s. The book is out: www.swsec.com

 -Original Message-
From:   Brian Chess [mailto:[EMAIL PROTECTED]
Sent:   Sat Feb 04 00:56:16 2006
To: sc-l@securecoding.org
Subject:RE: [SC-L] Bugs and flaws

The best definition for "flaw" and "bug" I've heard so far is that a flaw is
a successful implementation of your intent, while a bug is unintentional.  I
think I've also heard "a bug is small, a flaw is big," but that definition
is awfully squishy.

If the difference between a bug and a flaw is indeed one of intent, then I
don't think it's a useful distinction.  Intent rarely brings with it other
dependable characteristics.

I've also heard "bugs are things that a static analysis tool can find," but
I don't think that really captures it either.  For example, it's easy for a
static analysis tool to point out that the following Java statement implies
that the program is using weak cryptography:

SecretKey key = KeyGenerator.getInstance("DES").generateKey();
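A check of the kind Brian describes really can be a plain pattern match over
the source text. A hedged sketch (the weak-algorithm list is an illustrative
assumption, not any particular tool's rule set):

```python
# Sketch: flag weak-crypto use in a Java line with a plain pattern match.
# The algorithm list is an assumption for illustration.
import re

WEAK_ALGORITHMS = ("DES", "RC4", "MD5")
PATTERN = re.compile(r'getInstance\(\s*"(%s)"' % "|".join(WEAK_ALGORITHMS))

java_line = 'SecretKey key = KeyGenerator.getInstance("DES").generateKey();'
match = PATTERN.search(java_line)
if match:
    print(f"weak cryptography: {match.group(1)}")  # → weak cryptography: DES
```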

Brian










RE: [SC-L] Bugs and flaws

2006-02-06 Thread Gary McGraw
I'm with you on this threat modeling thing...which is the process meant to lay
flaws bare.  I like to call it risk analysis of course (using American war
nomenclature instead of British/Australian).  STRIDE is an important step in
the right direction, but a checklist approach has essential creativity
constraints worth pondering.

My only point in making the distinction clear (bugs vs flaws) is to make sure 
that we don't forget design, requirements, and early lifecycle artifacts in our 
rush to analyze code.

Please do both (touchpoints 1 and 2 in Software Security).

gem

 -Original Message-
From:   Evans, Arian [mailto:[EMAIL PROTECTED]
Sent:   Fri Feb 03 18:29:29 2006
To: Crispin Cowan; Gary McGraw; Secure Coding Mailing List; Kenneth R. van 
Wyk
Subject:RE: [SC-L] Bugs and flaws

per WMF// Let's face it, this was legacy, possibly deprecated code that
was likely low on the security things-to-do list. I suspect MS, like the
rest of the world, has resource limitations regarding analyzing all their
various product/API entry points for security implications.

Which is one of the reasons I think threat modeling came into vogue, and I
think a threat model would flag this in bright red for review, but you
need resources with quite a bit of knowledge and time to build that model,
and again, since this was legacy functionality...

fyi// on attack surface: http://www-2.cs.cmu.edu/~wing/

There are several people who have done nice work here; it fits hand in glove
with threat modeling concepts, which fit hand in glove with this whole
equivocal dialogue about design/implementation verbiage.

This whole discussion underscores the real issue we have, which is the
lack of a common language.

So how to fix it? A taxonomy and terminology guide; simple, concise.

There are plenty of folks on this list a lot smarter than I am, so it is
nice to see that a majority agree on what I think the key issues are:
communicating (a) accurate and (b) actionable data, or expanded:

1. Defect Definition
2. Defect Classification
3. Defect Identification
4. Defect Implication (communicating defect implication as goal)

By example I mean:

1. Format String, weak crypto use: define what and why these are security defects.
2. Implementation Defect, Design Defect, bug, flaw, blah
3. How do we identify these defects in software?
4. Implication: RTAWV (Risk, Threat, Attack, Weakness, Vuln) & communication
to both technical and non-technical audiences.

I added Weakness at the TRIKE group's suggestion, and it has significantly
helped classification, rather than using two confusing vuln categories.

There is obviously a many-to-one mapping between threat-attack-weakness
and even from vuln to weakness, depending on how we define vuln. (I have
defined vuln as a particular instance or attackable instance of a weakness).

This is *valuable* information to the person trying to solve issues in this
problem domain, but I rarely find it well understood by non-appsec folks.

I have attempted to address and communicate this in a short paper titled
::Taxonomy of Software Security Analysis Types::

(Software Security Analysis == defined as == Software Analysis for Defects
with Security Implications, implications being contextual.)

The paper is significantly weakened if at the end of the day no one knows
what I mean by design weakness, implementation defect, goblins, etc. So I
will need all your help in shoring up the language.

My reason for distinguishing security as a defect implication is that
defects are sometimes clear; the implications are not always clear and do
not always follow from the defects. Defects are neither a necessary nor a
sufficient condition for security implications (obviously), but it is the
implications that most people solving problems care about, not defect
language.

Much of this is underscored in the IEEE software defect terminology, but
look at our current industry ambiguity between attacks and vulnerabilities!

I continue to encounter wildly equivocal uses of the words Threat, Attack,
Vulnerability, Flaw, Defect, Artifact (and associated phrases like
security-artifact), Fault, Bug, Error, Failure, Mistake, MFV (multi-factor
vulnerability) in our collective software security dialogue and literature.

I am *not* *married* to any particular verbiage; my goal is a common
language so we can have more effective dialogue.

Arian J. Evans
FishNet Security

816.421.6611 [fns office]
816.701.2045 [direct] --limited access
888.732.9406 [fns toll-free]
816.421.6677 [fns general fax]
913.710.7045 [mobile] --best bet
[EMAIL PROTECTED] [email]

http://www.fishnetsecurity.com





 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Crispin Cowan
 Sent: Friday, February 03, 2006 2:12 PM
 To: Gary McGraw
 Cc: Kenneth R. van Wyk; Secure Coding Mailing List
 Subject: Re: [SC-L] Bugs and flaws
 
 
 Gary McGraw wrote:
  To cycle this all back around to the original posting, let's talk about
  the WMF flaw in