RE: [SC-L] Bugs and flaws

2006-02-07 Thread Gunnar Peterson
Perhaps a useful distinction that we could use to assign responsibility is to
separate concerns in algorithms from the concerns of the system as a whole.
Butler Lampson describes how designing a computer system is different from
designing an algorithm:

"The external interface (that is, the requirement) is less precisely defined,
more complex, and more subject to change.
The system has much more internal structure, and hence many internal interfaces.
The measure of success is much less clear."

In the WMF case, was the system supposed to protect the algorithm, or should
the algorithm have been able to defend itself?

-gp


>  -Original Message-
> From: Brian Chess [mailto:[EMAIL PROTECTED]
> Sent: Sat Feb 04 00:56:16 2006
> To:   sc-l@securecoding.org
> Subject:  RE: [SC-L] Bugs and flaws
>
> The best definition for "flaw" and "bug" I've heard so far is that a flaw is
> a successful implementation of your intent, while a bug is unintentional.  I
> think I've also heard "a bug is small, a flaw is big", but that definition
> is awfully squishy.
>
> If the difference between a bug and a flaw is indeed one of intent, then I
> don't think it's a useful distinction.  Intent rarely brings with it other
> dependable characteristics.
>
> I've also heard "bugs are things that a static analysis tool can find", but
> I don't think that really captures it either.  For example, it's easy for a
> static analysis tool to point out that the following Java statement implies
> that the program is using weak cryptography:
>
> SecretKey key = KeyGenerator.getInstance("DES").generateKey();
>
> Brian
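
Brian's one-liner can be put next to the fix a tool would typically suggest; a minimal sketch (the class name and the choice of AES-128 are illustrative assumptions, not from the message):

```java
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch: the statement a static analysis tool would flag, alongside a
// stronger alternative it might suggest.
public class KeyDemo {
    public static void main(String[] args) throws Exception {
        // Flagged: DES has only a 56-bit effective key and is considered weak.
        SecretKey weak = KeyGenerator.getInstance("DES").generateKey();

        // Suggested replacement (illustrative): AES with a 128-bit key.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey strong = kg.generateKey();

        // DES keys encode to 8 bytes; AES-128 keys to 16.
        System.out.println(weak.getAlgorithm() + ": " + weak.getEncoded().length + " bytes");
        System.out.println(strong.getAlgorithm() + ": " + strong.getEncoded().length + " bytes");
    }
}
```

The point of the example is that the DES call is trivially recognizable from a single statement, which is what makes it a "bug" under the definition being debated.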
>
> ___
> Secure Coding mailing list (SC-L)
> SC-L@securecoding.org
> List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
> List charter available at - http://www.securecoding.org/list/charter.php
>
>
>
>
> 
> This electronic message transmission contains information that may be
> confidential or privileged.  The information contained herein is intended
> solely for the recipient and use by any other party is not authorized.  If
> you are not the intended recipient (or otherwise authorized to receive this
> message by the intended recipient), any disclosure, copying, distribution or
> use of the contents of the information is prohibited.  If you have received
> this electronic message transmission in error, please contact the sender by
> reply email and delete all copies of this message.  Cigital, Inc. accepts no
> responsibility for any loss or damage resulting directly or indirectly from
> the use of this email or its contents.
> Thank You.
> 

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bugs and flaws

2006-02-07 Thread Julie Ryan
8 principles with 2 more from physical security that "apply only imperfectly
to computer systems"


http://www.cap-lore.com/CapTheory/ProtInf/Basic.html


On Feb 7, 2006, at 9:59 AM, Jeff Williams wrote:

I'm not sure which of the three definitions in Brian's message you're not
concurring with, but I think he was only listing them as strawmen anyway.

In any case, there's no reason that static analysis tools shouldn't be able
to find errors of omission. We use our tools to find these 'dogs that didn't
bark' every day.

The tools can identify, for example, places where logging, input validation,
and error handling should have been done. With a little work teaching the
tool about your application, assets, and libraries, it's easy to find places
where encryption, access control, and authentication should have been done
but haven't.

In your hypothetical, if the API isn't ever invoked with an identity and a
secret, there can't be authentication. If there's no call to an access
control component, we know at least that there's no centralized mechanism.
In this case, the tool could check whether the code follows the project's
standard access control pattern. If not, it's an error of omission.

If I remember correctly, Saltzer and Schroeder only suggested 8 principles.
Your hypo is closest to complete mediation, but touches on several others.
But, in theory, there's no reason that static analysis can't help verify all
of them in an application.

--Jeff


___

RE: [SC-L] Bugs and flaws

2006-02-07 Thread Jeff Williams
I'm not sure which of the three definitions in Brian's message you're not
concurring with, but I think he was only listing them as strawmen anyway.

In any case, there's no reason that static analysis tools shouldn't be able
to find errors of omission. We use our tools to find these 'dogs that didn't
bark' every day.

The tools can identify, for example, places where logging, input validation,
and error handling should have been done. With a little work teaching the
tool about your application, assets, and libraries, it's easy to find places
where encryption, access control, and authentication should have been done
but haven't.

In your hypothetical, if the API isn't ever invoked with an identity and a
secret, there can't be authentication. If there's no call to an access
control component, we know at least that there's no centralized mechanism.
In this case, the tool could check whether the code follows the project's
standard access control pattern. If not, it's an error of omission.
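
A toy illustration of the kind of omission check described here; the required-call pattern, method names, and code snippets below are all invented for the sketch, not taken from any real tool:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of an "error of omission" check: flag any method body that
// never invokes the project's standard access-control entry point.
public class OmissionCheck {
    // Hypothetical project-standard call the tool is taught to expect.
    static final String REQUIRED_CALL = "AccessControl.check(";

    // Maps method name -> true if the required call is missing (a finding).
    public static Map<String, Boolean> audit(Map<String, String> methodBodies) {
        Map<String, Boolean> findings = new LinkedHashMap<>();
        for (Map.Entry<String, String> m : methodBodies.entrySet()) {
            findings.put(m.getKey(), !m.getValue().contains(REQUIRED_CALL));
        }
        return findings;
    }

    public static void main(String[] args) {
        Map<String, String> bodies = new LinkedHashMap<>();
        bodies.put("transferFunds",
                "AccessControl.check(user, \"transfer\"); doTransfer(from, to, amount);");
        bodies.put("exportReport", "writeCsv(report); // no check at all");
        audit(bodies).forEach((name, missing) ->
                System.out.println(name + ": " + (missing ? "MISSING access-control call" : "ok")));
    }
}
```

A real tool works on parsed code rather than strings, of course; the sketch only shows why a *missing* call is just as detectable as a bad one.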

If I remember correctly, Saltzer and Schroeder only suggested 8 principles.
Your hypo is closest to complete mediation, but touches on several others.
But, in theory, there's no reason that static analysis can't help verify all
of them in an application.

--Jeff

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Gary McGraw
Sent: Monday, February 06, 2006 11:13 PM
To: Brian Chess; sc-l@securecoding.org
Subject: RE: [SC-L] Bugs and flaws

Hi all,

I'm afraid I don't concur with this definition.  Here's a (rather vague)
flaw example that may help clarify what I mean.  Think about an error of
omission where an API is exposed with no A&A protection whatsoever.  This
API may have been designed not to have been exposed originally, but somehow
became exposed only over time.

How do you find errors of omission with a static analysis tool?  

This is only one of Saltzer and Schroeder's principles in action.  What of
the other 9?

gem

P.s. Five points to whoever names the principle in question.

P.p.s. The book is out www.swsec.com


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] Bugs and flaws

2006-02-07 Thread Crispin Cowan
Thanks for the very detailed and informative explanation.

However, I still think it sounds like IE has too large of an attack
surface :) It still seems to be the case that IE can be persuaded to
execute any of a large amount of code based on its raw (web) input, with
(fairly) arbitrary parameters, and this large attack surface allows
attackers to find vulnerabilities in any of the code that IE calls out to.

Crispin

Dana Epp wrote:
> I think I would word that differently. The design defect was when
> Microsoft decided to allow metadata to call GDI functions.
>  
> Around 1990 when this was introduced the threat profile was entirely
> different; the operating system could trust the metadata. Well,
> actually I would argue that they couldn't, but no one knew any better
> yet. At the time SetAbortProc() was an important function to allow for
> print cancellation in the co-operative multitasking environment that
> was Windows 3.0.
>  
> To be clear, IE was NOT DIRECTLY vulnerable to the WMF attack vector
> everyone likes to use as a test case for this discussion. IE actually
> refuses to process any type of metadata that supports META_ESCAPE
> records (which SetAbortProc relies on). Hence it's not possible to
> exploit the vulnerability by simply calling a WMF image via HTML. So
> how is IE vulnerable then? It's not, actually. The attack vector uses
> IE as a conduit to call out to secondary library code that
> will process it. In the case of the exploits that hit the Net,
> attackers used an IFRAME hack to call out to the shell to process it.
> The shell would look up the handler for WMF, which was the Windows
> Picture Viewer that did the processing in shimgvw.dll. When the DLL
> processed the WMF, it would convert it to a printable EMF format, and
> bam... we ran into problems.
>  
> With the design defect being the fact that metadata can call arbitrary GDI
> code, the implementation flaw is that applications like IE rely so
> heavily on calling out to secondary libraries that just can't be
> trusted. Even if IE has had a strong code review, it is extremely
> probable that most of the secondary library code has not had the same
> audit scrutiny. This is a weakness of all applications, not just IE.
> When you call out to untrusted code that you don't control, you put
> the application at risk. No different from any other operating system.
> The only problem is that Windows is riddled with these potential holes
> because it's sharing so much of the same codebase. And in the past the
> teams rarely talked to each other to figure this out.
>  
> Code reuse is one thing, but some of the components in Windows are
> carry-overs from 15 years ago, and will continue to put us at risk due
> to the implementation flaws that haven't yet been found. But with such
> a huge master source to begin with, it's not something that will be
> fixed overnight.
>  
> ---
> Regards,
> Dana Epp [Microsoft Security MVP]
> Blog: http://silverstr.ufies.org/blog/
>
> 
> *From:* [EMAIL PROTECTED] on behalf of Crispin Cowan
> *Sent:* Fri 2/3/2006 12:12 PM
> *To:* Gary McGraw
> *Cc:* Kenneth R. van Wyk; Secure Coding Mailing List
> *Subject:* Re: [SC-L] Bugs and flaws
>
> Gary McGraw wrote:
> > To cycle this all back around to the original posting, let's talk about
> > the WMF flaw in particular.  Do we believe that the best way for
> > Microsoft to find similar design problems is to do code review?  Or
> > should they use a higher level approach?
> >
> > Were they correct in saying (officially) that flaws such as WMF are hard
> > to anticipate?
> >  
> I have heard some very insightful security researchers from Microsoft
> pushing an abstract notion of "attack surface", which is the amount of
> code/data/API/whatever that is exposed to the attacker. To design for
> security, among other things, reduce your attack surface.
>
> The WMF design defect seems to be that IE has too large of an attack
> surface. There are way too many ways for unauthenticated remote web
> servers to induce the client to run way too much code with parameters
> provided by the attacker. The implementation flaw is that the WMF API in
> particular is vulnerable to malicious content.
>
> None of which strikes me as surprising, but maybe that's just me :)
>
> Crispin
> --
> Crispin Cowan, Ph.D.
> http://crispincowan.com/~crispin/
> Director of Software Engineering, Novell  http://novell.com
> Olympic Games: The Bi-Annual Festival of Corruption
>
>

RE: [SC-L] Bugs and flaws

2006-02-06 Thread Gary McGraw
I'm with you on this threat modeling thing...which is the process meant to lay
flaws bare.  I like to call it "risk analysis" of course (using American war
nomenclature instead of British/Australian).  STRIDE is an important step in
the right direction, but a checklist approach has inherent creativity
constraints worth pondering.

My only point in making the distinction clear (bugs vs flaws) is to make sure 
that we don't forget design, requirements, and early lifecycle artifacts in our 
rush to analyze code.

Please do both (touchpoints 1 and 2 in Software Security).

gem

 -Original Message-
From:   Evans, Arian [mailto:[EMAIL PROTECTED]
Sent:   Fri Feb 03 18:29:29 2006
To: Crispin Cowan; Gary McGraw; Secure Coding Mailing List; Kenneth R. van 
Wyk
Subject:        RE: [SC-L] Bugs and flaws

per WMF// Let's face it, this was legacy, possibly deprecated code that
was likely low on the security things-to-do list. I suspect MS, like the
rest of the world, has resource limitations regarding analyzing all their
various product/api entry points for security implications.

Which is one of the reasons I think threat modeling came in vogue, and I
think a threat model would flag this in bright red for review, but you
need resources with quite a bit of knowledge and time to build that model,
and again, since this was legacy functionality...

fyi// on attack surface: http://www-2.cs.cmu.edu/~wing/

There are several people that have done nice work here; it fits hand-in-glove
with threat modeling concepts, which fit hand-in-glove with this whole
equivocal dialogue about design/implementation verbiage.

This whole discussion underscores the real issue we have, which is
a common language.

So how to fix it? A taxonomy and terminology guide; simple, concise.

There's plenty of folks on this list a lot smarter than I am, so it is
nice to see that a majority agree on what I think the key issues are:
communicating (a) accurate and (b) actionable data, or expanded:

1. Defect Definition
2. Defect Classification
3. Defect Identification
4. Defect Implication (communicating defect implication as goal)

By example I mean:

1. Format String, weak crypto use, define what & why are these security defects?
2. Implementation Defect, Design Defect, bug, flaw, blah
3. How do we identify these defects in software?
4. Implication: RTAWV (Risk, Threat, Attack, Weakness, Vuln) & communication
to both technical, and non-technical audience.

I added Weakness at the TRIKE group's suggestion, and it has significantly
helped in classification instead of using two confusing vuln categories.

There is obviously a many-to-one mapping between threat->attack<-weakness
and even from vuln to weakness, depending on how we define vuln. (I have
defined vuln as "a particular instance or attackable instance of a weakness").

This is *valuable* information to the person trying to solve issues in this
problem domain, but I rarely find it well understood by non-appsec folks.

I have attempted to address and communicate this in a short paper titled
::Taxonomy of Software Security Analysis Types::

(Software Security Analysis == defined as == Software Analysis for Defects
with Security Implications, implications being contextual.)

The paper is significantly weakened if at the end of the day no one knows what
I mean by design weakness, implementation defect, goblins, etc. So I will need
all your help in shoring up the language.

My reason for distinguishing "security as a defect implication" is that
defects are sometimes clear; the implications are not always clear and do
not always follow from the defects. Defects are neither a necessary nor a
sufficient condition for security implications (obviously), but it is the
implications that most people solving problems care about, not defect language.

Much of this is underscored in the IEEE software defect terminology, but
look at our current industry ambiguity between attacks and vulnerabilities!

I continue to encounter wildly equivocal uses of the words Threat, Attack,
Vulnerability, Flaw, Defect, Artifact (and associated phrases like
"security-artifact"), Fault, Bug, Error, Failure, Mistake, MFV (multi-factor
vulnerability) in our collective software security dialogue and literature.

I am *not* *married* to any particular verbiage; my goal is a common
language so we can have more effective dialogue.

Arian J. Evans
FishNet Security

816.421.6611 [fns office]
816.701.2045 [direct] <--limited access
888.732.9406 [fns toll-free]
816.421.6677 [fns general fax]
913.710.7045 [mobile] <--best bet
[EMAIL PROTECTED] [email]

http://www.fishnetsecurity.com





> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Crispin Cowan
> Sent: Friday, February 03, 2006 2:12 PM
> To: Gary McGraw
> Cc: Kenneth R. van Wyk; Secure Coding Mailing List
> Subject: Re: [SC-L] Bugs an

RE: [SC-L] Bugs and flaws

2006-02-06 Thread Gary McGraw
Hi all,

I'm afraid I don't concur with this definition.  Here's a (rather vague) flaw 
example that may help clarify what I mean.  Think about an error of omission 
where an API is exposed with no A&A protection whatsoever.  This API may have 
been designed not to have been exposed originally, but somehow  became exposed 
only over time.

How do you find errors of omission with a static analysis tool?  

This is only one of Saltzer and Schroeder's principles in action.  What of
the other 9?

gem

P.s. Five points to whoever names the principle in question.

P.p.s. The book is out www.swsec.com



___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] Bugs and flaws

2006-02-06 Thread Evans, Arian
Original message bounced due to address; I chopped it to remove the WMF
content and rambling, to focus on the subject of language standardization:

[...wmf...]
fyi// on attack surface: http://www-2.cs.cmu.edu/~wing/

Attack surface concepts fit hand-in-glove with threat modeling concepts,
which fit hand in glove with this equivocal design/implementation dialogue.

[...]
Q. What does the bug/flaw dialogue demonstrate the need for?

There's plenty of folks on this list smarter than I am, so it is
nice to see a majority agree on what I think the key issues are:
communicating (a) accurate and (b) actionable data; expanded:

1. Defect Definition
2. Defect Classification
3. Defect Identification
4. Defect Implication (effectively communicating defect implication)

By example I mean (number corresponds to above):

1. Format String, weak crypto use, define what & why are these security defects?
2. Implementation Defect, Design Defect, bug, flaw
3. How do we identify these defects in software?
4. Implication: RTAWV (Risk, Threat, Attack, Weakness, Vuln) & communication
to both technical and non-technical audience is the goal.

I added Weakness at the TRIKE group's suggestion, and it has significantly
helped in classification instead of using two confusing vuln categories.

There is obviously a many-to-one mapping between threat->attack<-weakness
and even from vuln to weakness, depending on how we define vuln. (I have
defined vuln as "a particular instance or attackable instance of a weakness").
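
The many-to-one mapping can be pictured with a tiny model; the weakness names and vuln instances below are invented for illustration, not taken from Arian's taxonomy paper:

```java
import java.util.Map;

// Minimal sketch: several vulns ("attackable instances") collapsing onto
// fewer weaknesses, per the working definition quoted above.
public class WeaknessMap {
    enum Weakness { INSUFFICIENT_OUTPUT_ENCODING, MISSING_ACCESS_CONTROL }

    public static void main(String[] args) {
        // Hypothetical findings: a many-to-one vuln -> weakness mapping.
        Map<String, Weakness> vulns = Map.of(
                "XSS in /search?q=", Weakness.INSUFFICIENT_OUTPUT_ENCODING,
                "XSS in profile bio field", Weakness.INSUFFICIENT_OUTPUT_ENCODING,
                "unauthenticated /admin/export", Weakness.MISSING_ACCESS_CONTROL);

        long weaknesses = vulns.values().stream().distinct().count();
        System.out.println(vulns.size() + " vulns map onto " + weaknesses + " weaknesses");
    }
}
```

Reporting at the weakness level is what makes the finding actionable: two XSS instances collapse into one output-encoding weakness to fix.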

This is *valuable* information to the person trying to solve issues in this
problem domain, but I rarely find it well understood by *non-appsec* folks.

(Valuable in the sense that it is easier for non-appsec folks to act on a
weakness, like insufficient output encoding standards/implementation, than a
list of 10,000 exploitable URLs in a large templated site representing 4 XSS
variants.)

[...]

I continue to encounter equivocal uses of the words Threat, Attack,
Vulnerability, Flaw, Defect, Artifact (and associated phrases like
"security-artifact"), Fault, Bug, Error, Failure, Mistake, MFV (multi-factor
vulnerability) in our collective software security dialogue and literature.

What is the best way to work on establishing a common language? Is it reasonable
or realistic to expect such standardization?

OWASP and WASC have made strides in the webified space on defining attack
classes, and some weak patterns; Mitre has worked terminology in the
unmanaged code space.

Where to go from here?

Arian J. Evans
FishNet Security

816.421.6611 [fns office]
816.701.2045 [direct] <--limited access
888.732.9406 [fns toll-free]
816.421.6677 [fns general fax]
913.710.7045 [mobile] <--best bet
[EMAIL PROTECTED] [email]

http://www.fishnetsecurity.com






___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] Bugs and flaws

2006-02-06 Thread Evans, Arian

> Sent: Thursday, February 02, 2006 7:50 PM
> 
> I'm sorry, but it is just not possible to find design flaws 
> by staring at code.
> 
> gem

 
That is simply not true.

[...]
(I originally wrote out some anecdotal stuff here that, on reflection,
implied more organizational issues with providing poor design specifications,
...and leaving design up to implementation choices...than providing
examples of design choices that are obvious from implementation review.)



So while I politely disagree, my insufficient examples reinforce
that there is a slippery inference slope here.

This has been one of the more stimulating dialogues on SCL; thanks.

-ae

p.s.--the original response bounced the list due to being subscribed
under an older email alias.



>  -Original Message-
> From: Jeff Williams [mailto:[EMAIL PROTECTED]
> Sent: Thu Feb 02 20:32:29 2006
> To:   'Secure Coding Mailing List'
> Subject:  RE: [SC-L] Bugs and flaws
> 
> At the risk of piling on here, there's no question that it's critical to
> consider security problems across the continuum. While we're at it, the
> analysis should start back even further with the requirements or even the
> whole system concept.
> 
> All of the representations across the continuum (rqmts, arch, design, code)
> are just models of the same thing.  They start more abstract and end up as
> code.  A *single* problem could exist in all these models at the same time.
> 
> Higher-level representations of systems are generally eclipsed by lower
> level ones fairly rapidly.  For example, it's a rare group that updates
> their design docs as implementation progresses. So once you've got code, the
> architecture-flaws don't come from architecture documents (which lie). The
> best place to look for them (if you want truth) is to look in the code.
> 
> To me, the important thing here is to give software teams good advice about
> the level of effort they're going to have to put into fixing a problem. If
> it helps to give a security problem a label to let them know they're going
> to have to go back to the drawing board, I think saying 'architecture-flaw'
> or 'design-flaw' is fine. But I agree with others that saying 'flaw' alone
> doesn't help distinguish it from 'bug' in the minds of most developers or
> architects.
> 
> --Jeff
> 
> Jeff Williams, CEO
> Aspect Security
> http://www.aspectsecurity.com
> 
> 
> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED]
> On Behalf Of Crispin Cowan
> Sent: Wednesday, February 01, 2006 5:07 PM
> To: John Steven
> Cc: Will Kruse; Secure Coding Mailing List
> Subject: Re: [SC-L] Bugs and flaws
> 
> John Steven wrote:
> > I'm not sure there's any value in discussing this minutia 
> further, but
> here
> > goes:
> >   
> We'll let the moderator decide that :)
> 
> > 1) Crispin, I think you've nailed one thing. The continuum from:
> >
> > Architecture --> Design --> Low-level Design --> (to) Implementation
> >
> > is a blurry one, and certainly slippery as you move from 'left' to
> 'right'.
> >   
> Cool.
> 
> > But, we all should understand that there's commensurate blur in our
> analysis
> > techniques (aka architecture and code review) to assure 
> that as we sweep
> > over software that we uncover both bugs and architectural flaws.
> >   
> Also agreed.
> 
> 2) Flaws are different in important ways from bugs when it comes to
> presentation,
> > prioritization, and mitigation. Let's explore by physical 
> analog first.
> >   
> I disagree with the word usage. To me, "bug" and "flaw" are exactly
> synonyms. The distinction being drawn here is between "implementation
> flaws" vs. "design flaws". You are just creating confusing jargon to
> claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
> ::= bug. A vulnerability is a special subset of 
> flaws/defects/bugs that
> has the property of being exploitable.
> 
> > I nearly fell through one of my consultant's tables as I 
> leaned on it this
> > morning. We explored: "Bug or flaw?".
> >   
> The wording issue aside, at the implementation level you try to
> code/implement to prevent flaws, by doing things such as using higher
> quality steel (for bolts) and good coding practices (for software). At
> the design level, you try to design so as t

Re: [SC-L] Bugs and flaws

2006-02-03 Thread Nick FitzGerald
Al Eridani <[EMAIL PROTECTED]> wrote:

> If the design says "For each fund that the user owns, do X" and my
> code does X for
> all the funds but it skips the most recently acquired fund, I see it as a
> "manufacturing" error.
> 
> On the other hand, if a user sells all of her funds and the design
> does not properly
> contemplate the situation where no funds are owned and therefore the software
> misbehaves, I see it as a "design" error.

Maybe I'm confused, but...

If the design in your second case is still the same one -- "For each 
fund that the user owns, do X" -- then this second example, like your 
first, is NOT a design error but an implementation (or "manufacturing" 
if you prefer) error.  (Both are probably due to some form or other of 
improper bounds checking -- most likely naïve use of zero-based counters 
controlling a loop...  8-) )

The design "For each fund that the user owns, do X" clearly (well, to 
me -- am I odd in this?) says that NOTHING be done if the number of 
funds is zero, hence the second result is an implementation error.
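
The two behaviors under discussion can be sketched as follows (hypothetical Java; neither method is from a real funds system -- the fund list is just an int array for illustration):

```java
// Hypothetical illustration of the "for each fund the user owns, do X" case.
public class FundLoop {

    // Implementation ("manufacturing") error: the loop bound skips the
    // most recently acquired fund (the last element).
    static int buggyTotal(int[] funds) {
        int total = 0;
        for (int i = 0; i < funds.length - 1; i++) { // should be i < funds.length
            total += funds[i];
        }
        return total;
    }

    // Faithful implementation of the design: do X for each fund owned,
    // which naturally does NOTHING when zero funds are owned.
    static int total(int[] funds) {
        int total = 0;
        for (int i = 0; i < funds.length; i++) {
            total += funds[i];
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(buggyTotal(new int[]{10, 20, 30})); // 30: last fund skipped
        System.out.println(total(new int[]{10, 20, 30}));      // 60
        System.out.println(total(new int[]{}));                // 0: zero-funds case is fine
    }
}
```

Note that the correct loop handles the no-funds case with no extra code at all, which is why the zero-funds misbehavior points at the implementation rather than the design.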


Regards,

Nick FitzGerald


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


RE: [SC-L] Bugs and flaws

2006-02-03 Thread Brian Chess
The best definition for "flaw" and "bug" I've heard so far is that a flaw is
a successful implementation of your intent, while a bug is unintentional.  I
think I've also heard "a bug is small, a flaw is big", but that definition
is awfully squishy.

If the difference between a bug and a flaw is indeed one of intent, then I
don't think it's a useful distinction.  Intent rarely brings with it other
dependable characteristics.

I've also heard "bugs are things that a static analysis tool can find", but
I don't think that really captures it either.  For example, it's easy for a
static analysis tool to point out that the following Java statement implies
that the program is using weak cryptography:

SecretKey key = KeyGenerator.getInstance("DES").generateKey();
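
For contrast, here is a hedged sketch (my example, not Brian's; standard JCE API) of the same one-line call pattern using AES, which such a tool would not flag as weak cryptography:

```java
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch only: the same key-generation idiom, but with AES rather than
// single DES, whose 56-bit key is what makes the original line weak.
public class KeyDemo {
    static SecretKey makeAesKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128); // explicit key size; 256 where policy allows
        return kg.generateKey();
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = makeAesKey();
        System.out.println(key.getAlgorithm()); // AES
    }
}
```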

Brian



RE: [SC-L] Bugs and flaws

2006-02-03 Thread Nick FitzGerald
"Gary McGraw" <[EMAIL PROTECTED]> wrote:

> To cycle this all back around to the original posting, lets talk about
> the WMF flaw in particular.  Do we believe that the best way for
> Microsoft to find similar design problems is to do code review?  Or
> should they use a higher level approach?

I'll leave that to those with relevant specification/design/ 
implementation/review experiences...

> Were they correct in saying (officially) that flaws such as WMF are hard
> to anticipate? 

No.

That claim is totally bogus on its face.

It is a very well-established "rule" that you commingle code and data 
_at extreme risk_.

We have also known for a very long time that our historically preferred 
use of (simple) von Neumann architectures makes maintaining that 
distinction rather tricky.

However, neither absolves us of the duty of care to be aware of these 
issues and to take suitable measures to ensure we don't design systems 
apparently intent on shooting themselves in the feet.

I'd wager that even way back then, some designer and/or developer at 
MS, when doing the WMF design/implementation back in Win16 days (Win 
3.0, no?) experienced one of those "We really shouldn't be doing that 
like this..." moments, but then dismissed it as an unnecessary concern 
"because it's only for a personal computer" (or some other cosmically 
shallow reason -- "if I get this done by Friday I'll have a weekend for 
a change", "if I get this done by Friday I'll make a nice bonus", 
"usability is more important than anything else", "performance is more 
important than anything else", etc, etc, etc).

Given the intended userbase and extant computing environment at that 
time, the design probably was "quite acceptable".  The real fault is 
that it was then, repeatedly and apparently largely unquestioningly, 
ported into new implementations (Win 3.1, NT3x, Win9x, NT4, ME, Win2K, 
XP, XPSP2, W2K3) _including_ the ones done after Billy Boy's "security 
is now more important than anything" memo.  At some point in that 
evolution, several someones should have been raising their hands and 
saying, "You know, now is the time we should fix this...".  Someone on 
one of the IE teams obviously noticed and flagged the issue, but 
why didn't that flag get raised bigger, higher, brighter?

...

It is bogus for another reason too -- some of the people at MS making 
that official claim also said "this is the first such flaw of this 
kind", and that's just BS.  Long before WM/Concept.A (or its 
forerunner, the oft-forgotten WM/DMV.A) many security and antivirus 
folk were warning that embedding the more powerful, complex programming 
language and architecture macros (such as WordBasic, VBA and 
AccessBasic) into their associated "document" files was an inherently 
flawed design and would only lead to trouble.

So, not only have we long-understood the theoretical reasons for why 
the underlying causes of WMF are inherently bad design and best avoided 
if at all possible, BUT MS has had its own, self-inflicted stupidities 
of exactly the same kind.

If MS truly could not anticipate, at some point along the Win3x to W2K3 
development timeline earlier than 28 Dec 2005, that this WMF design 
"feature" would cause trouble, one has to ask if MS should be allowed 
to make software for general consumption...


Regards,

Nick FitzGerald



RE: [SC-L] Bugs and flaws

2006-02-03 Thread Dana Epp
I think I would word that differently. The design defect was when Microsoft
decided to allow metadata to call GDI functions.

Around 1990, when this was introduced, the threat profile was entirely
different; the operating system could trust the metadata. Well, actually I
would argue that it couldn't, but no one knew any better yet. At the time,
SetAbortProc() was an important function to allow for print cancellation in
the co-operative multitasking environment that was Windows 3.0.

To be clear, IE was NOT DIRECTLY vulnerable to the WMF attack vector
everyone likes to use as a test case for this discussion. IE actually
refuses to process any type of metadata that supported META_ESCAPE records
(which SetAbortProc relies on). Hence it's not possible to exploit the
vulnerability by simply calling a WMF image via HTML. So how is IE
vulnerable then? It's not, actually. The attack vector uses IE as a conduit
to call out to secondary library code that will process the file. In the
case of the exploits that hit the Net, attackers used an IFRAME hack to
call out to the shell to process it. The shell would look up the handler
for WMF, which was the Windows Picture Viewer that did the processing in
shimgvw.dll. When the DLL processed the WMF, it would convert it to a
printable EMF format, and bam... we ran into problems.

With the design defect being the fact that metadata can call arbitrary GDI
code, the implementation flaw is the fact that applications like IE rely so
heavily on calling out to secondary libraries that just can't be trusted.
Even if IE itself has had a strong code review, it is extremely probable
that most of the secondary library code has not had the same audit
scrutiny. This is a weakness of all applications, not just IE. When you
call out to untrusted code that you don't control, you put the application
at risk. No different from any other operating system. The only problem is
that Windows is riddled with these potential holes because it shares so
much of the same codebase, and in the past the teams rarely talked to each
other to figure this out.

Code reuse is one thing, but some of the components in Windows are
carry-overs from 15 years ago, and they will continue to put us at risk due
to implementation flaws that haven't yet been found. But with such a huge
master source to begin with, it's not something that will be fixed
overnight.
 


---
Regards,
Dana Epp [Microsoft Security MVP]
Blog: http://silverstr.ufies.org/blog/


From: [EMAIL PROTECTED] on behalf of Crispin Cowan
Sent: Fri 2/3/2006 12:12 PM
To: Gary McGraw
Cc: Kenneth R. van Wyk; Secure Coding Mailing List
Subject: Re: [SC-L] Bugs and flaws

Gary McGraw wrote:
> To cycle this all back around to the original posting, lets talk about
> the WMF flaw in particular.  Do we believe that the best way for
> Microsoft to find similar design problems is to do code review?  Or
> should they use a higher level approach?
>
> Were they correct in saying (officially) that flaws such as WMF are hard
> to anticipate?

I have heard some very insightful security researchers from Microsoft
pushing an abstract notion of "attack surface", which is the amount of
code/data/API/whatever that is exposed to the attacker. To design for
security, among other things, reduce your attack surface.

The WMF design defect seems to be that IE has too large of an attack
surface. There are way too many ways for unauthenticated remote web
servers to induce the client to run way too much code with parameters
provided by the attacker. The implementation flaw is that the WMF API in
particular is vulnerable to malicious content.

None of which strikes me as surprising, but maybe that's just me :)

Crispin
--
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption




Re: [SC-L] Bugs and flaws

2006-02-03 Thread Crispin Cowan
Gary McGraw wrote:
> To cycle this all back around to the original posting, lets talk about
> the WMF flaw in particular.  Do we believe that the best way for
> Microsoft to find similar design problems is to do code review?  Or
> should they use a higher level approach?
>
> Were they correct in saying (officially) that flaws such as WMF are hard
> to anticipate? 
>   
I have heard some very insightful security researchers from Microsoft
pushing an abstract notion of "attack surface", which is the amount of
code/data/API/whatever that is exposed to the attacker. To design for
security, among other things, reduce your attack surface.

The WMF design defect seems to be that IE has too large of an attack
surface. There are way too many ways for unauthenticated remote web
servers to induce the client to run way too much code with parameters
provided by the attacker. The implementation flaw is that the WMF API in
particular is vulnerable to malicious content.

None of which strikes me as surprising, but maybe that's just me :)

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption




Re: [SC-L] Bugs and flaws

2006-02-03 Thread Greg Beeley
Wietse Venema wrote:
> My experience is otherwise. Without detailed documentation I can
> usually see where in the life cycle the mistake was made: analysis
> (e.g., solving the wrong problem), design (e.g., using an inappropriate
> solution) or coding.

I tend to agree - for *many* design related problems.  But I think it
is only true for design flaws that are violations of well-recognized
approaches to things (for instance, putting too much trust in a source
IP address for authentication, or blatant misuse of cryptography), or
when the problem being "solved" by the software is self-evident enough
that the auditor essentially repeats much of the software engineering
process, albeit (possibly) very informally, just by auditing the code.

Other design related defects are hard to find if you don't have a
well-defined problem - the old "validation" vs "verification" issue.
When the problem being solved by the software is an uncommon one, or
unique to the software, it is more likely that a design flaw will go
undetected by an auditor (for instance, your average code auditor
won't catch a design flaw in how retinal scanning software authenticates
a person, without having studied how it is supposed to work in the
first place).

- Greg

03-Feb-2006



RE: [SC-L] Bugs and flaws

2006-02-03 Thread James Stibbards
Hi Gary,

In one of your prior posts you mentioned documentation.  I believe that the
problem with WMF was that someone had not examined WMF as a potential
source of vulnerabilities, since the embedded code was a legacy capability.

My belief is that one of the keys to finding flaws lies in the proper
capture of the requirements/contract of a software component, and then
examining and testing against that. Without the proper requirements that
speak clearly to security,  we can inspect and examine, but we won't know
what we're measuring against.  That doesn't solve the problem of knowing
when we're done, I realize.

See you at SSS.
- James

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Gary McGraw
Sent: Friday, February 03, 2006 11:13 AM
To: Kenneth R. van Wyk; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws

To cycle this all back around to the original posting, let's talk about the
WMF flaw in particular.  Do we believe that the best way for Microsoft to
find similar design problems is to do code review?  Or should they use a
higher level approach?

Were they correct in saying (officially) that flaws such as WMF are hard to
anticipate? 

gem




This electronic message transmission contains information that may be
confidential or privileged.  The information contained herein is intended
solely for the recipient and use by any other party is not authorized.  If
you are not the intended recipient (or otherwise authorized to receive this
message by the intended recipient), any disclosure, copying, distribution or
use of the contents of the information is prohibited.  If you have received
this electronic message transmission in error, please contact the sender by
reply email and delete all copies of this message.  Cigital, Inc. accepts no
responsibility for any loss or damage resulting directly or indirectly from
the use of this email or its contents.
Thank You.




Re: [SC-L] Bugs and flaws

2006-02-03 Thread Al Eridani
On 2/2/06, David Crocker <[EMAIL PROTECTED]> wrote:
> If some small bolt in my car fails because the bolt met its manufacturer's
> specification but was not strong enough to withstand the loads it was 
> subjected
> to, that is a low-level design error, not a manufacturing error.

I agree.

> Similarly, I view coding errors as low-level design errors.

I disagree.

If the design says "For each fund that the user owns, do X" and my
code does X for
all the funds but it skips the most recently acquired fund, I see it as a
"manufacturing" error.

On the other hand, if a user sells all of her funds and the design
does not properly
contemplate the situation where no funds are owned and therefore the software
misbehaves, I see it as a "design" error.



RE: [SC-L] Bugs and flaws

2006-02-03 Thread Gary McGraw
To cycle this all back around to the original posting, let's talk about
the WMF flaw in particular.  Do we believe that the best way for
Microsoft to find similar design problems is to do code review?  Or
should they use a higher level approach?

Were they correct in saying (officially) that flaws such as WMF are hard
to anticipate? 

gem








Re: [SC-L] Bugs and flaws

2006-02-03 Thread Kenneth R. van Wyk

This thread sure has opened up some lively debate...

Gary McGraw wrote:
> As a matter of practice, I usually use the terms that you suggested as
> modifiers and say:
>
> implementation bug
> design flaw
> software defect

FWIW, I like to use the nomenclature "security defect" as an 
all-encompassing term, irrespective of design vs. implementation.  Then, 
quite frankly, I think that the choice of "bug" or "flaw" is far less 
important than putting them into the appropriate _context_ -- which is 
why I also generally use the above "implementation bug" and "design flaw". 

I do think that the distinction is important, even though I agree with 
the thought that it's pretty much a continuum across the spectrum.  
From a pragmatic viewpoint, one of the important distinctions is how 
one would go about rectifying the defect.  An implementation bug can 
often be fixed in a couple of lines of code (e.g., strncpy vs. 
strcpy), whereas a design flaw may well require going "back to the 
drawing board" and fixing an underlying architectural weakness.  This 
is, of course, irrespective of how the problem was found.


I'll also point out that none of the three terms above even mentions 
security.  They could be functional defects as well as security defects, 
which is just fine, IMHO.


Cheers,

Ken van Wyk



Re: [SC-L] Bugs and flaws

2006-02-03 Thread Wietse Venema
Gary McGraw:
> I'm sorry, but it is just not possible to find design flaws by
> staring at code.

My experience is otherwise. Without detailed documentation I can
usually see where in the life cycle the mistake was made: analysis
(e.g., solving the wrong problem), design (e.g., using an inappropriate
solution) or coding.

Wietse


Re: [SC-L] Bugs and flaws

2006-02-03 Thread der Mouse
> I'm sorry, but it is just not possible to find design flaws by
> staring at code.

I strongly disagree with this, largely because I've done it myself.
It's the primary way I find design flaws in code, in fact.

Even if you add "unmotivated by a misbehaviour example", I've still
done it, though on only a few occasions.

/~\ The ASCII   der Mouse
\ / Ribbon Campaign
 X  Against HTML   [EMAIL PROTECTED]
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: [SC-L] Bugs and flaws

2006-02-03 Thread John Steven
Ah,

The age-old Gary vs. jOHN debate. I do believe along the continuum of
architecture-->design-->impl. that I've shown the ability to discern flawed
design from source code in source code reviews.

Cigital guys reading this thread have an advantage in that they know both
the shared and exclusive activities defined as part of our architectural and
code review processes. The bottom line is this: as you look at source code,
given enough gift for architecture, you can identify _some_ of the design
(whether intended or implemented) from the implementation, and find _some_
flaws. Before you get wound up and say, "Maybe you jOHN" tongue fully
in-cheek, the Struts example I gave is one case. Looking at a single class
file (the privileged Servlet definition), you can determine that the Lead
Developer/Architect has not paid enough attention to authorization when
he/she designed how the application's functionality was organized.
Admittedly, _some_ (other) architectural flaws do demand attention paid only
through activities confined to architectural analysis--not code review.
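
As a hedged illustration of the kind of single-class evidence John describes (hypothetical code -- this is not the actual Struts servlet he reviewed, and the request shape and names are invented), a missing authorization gate on privileged functionality is visible in one file:

```java
// Hypothetical sketch of a "privileged servlet" whose single class file
// reveals a design-level authorization gap. Names are illustrative only.
public class AdminAction {

    // Flawed version: any authenticated caller reaches the privileged
    // operation -- no authorization decision appears anywhere.
    static String handleFlawed(String role, String command) {
        return runPrivileged(command);
    }

    // What a reviewer would expect to see: an explicit authorization gate
    // in front of the privileged functionality.
    static String handleChecked(String role, String command) {
        if (!"admin".equals(role)) {
            return "DENIED";
        }
        return runPrivileged(command);
    }

    static String runPrivileged(String command) {
        return "EXECUTED:" + command;
    }

    public static void main(String[] args) {
        System.out.println(handleFlawed("user", "wipe"));  // EXECUTED:wipe -- the gap
        System.out.println(handleChecked("user", "wipe")); // DENIED
    }
}
```

The point of the sketch is John's: a reader with an eye for architecture can infer from this one class that authorization was not considered when the application's functionality was organized, even without the design documents.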
 
Think back again to my original email. The situations I present (both with
the physical table and Struts) present a 'mistake' (IEEE parlance) that can
manifest itself in terms of both an architectural flaw and implementation
bug (Cigital parlance).

I believe that the concept that Jeff (Payne), Cowan, Wysopal, and even
Peterson (if you bend it correctly) present is that the 'mistake' may
cross-cut the SDLC--manifesting itself in each of the phases' artifacts. IE:
If the mistake was in requirements, it will manifest itself in design
deficiency (flaw), as well as in the implementation (bug).

Jeff (Williams) indicates that, since progress rolls downstream in the SDLC,
you _could_ fix the 'mistake' in any of the phases it manifests itself, but
that an efficiency argument demands you look in the code. I implore the
reader to recall my original email. I mention that when characterized as a bug,
the level of effort required to fix the 'mistake' is probably less than if
it's characterized as a flaw. However, in doing so, you may miss other
instances of the mistake throughout the code.

I whole-heartedly agree with Jeff (Williams) that:

1) Look to the docs. for the 'right' answer.
2) Look to the code for the 'truth'.
3) Look to the deployed bins. for 'God's Truth'.
 
The variance in these artifacts is a key element in Cigital's architectural
analysis.

Second, (a point I made in my original email) the objective is to give the
most practical advice possible to developers for fixing the problem. I'll
just copy-paste it from the original:
-
Summarizing, my characterization of a vulnerability as a bug or a flaw has
important implications towards how it's mitigated. In the case of the Struts
example, the bug-based fix is easiest--but in so characterizing the problem
I may (or may not) miss other instances of this vulnerability within the
application's code base.

How do I know how to characterize a vulnerability along the continuum of
bugs-->flaws?  I don't know for sure, but I've taken to using my experience
over a number of assessments to "upcast" typically endemic problems as flaws
(and solve them in the design or architecture) and "downcast" those problems
that have glaring quick-fixes. In circumstances where both those heuristics
apply, I suggest a tactical fix to the bug, while prescribing that further
analysis take the tack of further fleshing out the flaw.
-

Where my opinion differs from the other posters is this: I believe:
"Where a 'mistake' manifests itself in multiple phases of the software
development lifecycle, you're most apt to completely MITIGATE its effects by
characterizing it as early in the lifecycle as possible, as design or even
requirements. As Williams indicates, to the contrary, you may FIND the
problem most easily later in the lifecycle. Perhaps in the code itself."

Look, 
McGraw put forth the 'bug' and 'flaw' nomenclature. It's useful because
there is value in explicitly pinning the vulnerability in architecture,
design, or code if it helps the dev. org. get things sorted out securely and
throughout their application. My experience is that this value is real.

The message of the  'defect'/'mistake' purist resonates with me as well:
it's all simply a mistake some human made along the path of developing the
application. But! I can assure you, to the extent that root-cause analysis
is valuable, telling a dev. team where to most effectively contend with a
vulnerability is also valuable.

In other words, "smart guys will always find the problems--by hook, or by
crook--but it takes classification to aid in efficient and thorough
mitigation".
 
-
John Steven
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908



Re: [SC-L] Bugs and flaws

2006-02-03 Thread Blue Boar

David Crocker wrote:
> I don't think this analogy between software development and manufacturing holds.
> There are no "manufacturing defects" in software construction


For software:
A design defect is when you correctly implement what you wanted, and you 
wanted the wrong thing.  A "manufacturing defect" is when you 
incorrectly implement what you wanted.
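
A compact way to see that distinction in code (my example, assuming BB's definitions; the classic signed-midpoint case). Here the *intent* -- the midpoint of two non-negative ints -- is correct, so the overflow in the first method is a "manufacturing" defect, not a design one:

```java
// Illustration of BB's distinction. The design (midpoint of two
// non-negative ints, lo <= hi) is what we wanted; the first method
// implements it incorrectly, so it is a "manufacturing" defect.
public class Midpoint {

    // Manufacturing defect: (lo + hi) can overflow int even though the
    // true midpoint is representable.
    static int buggyMidpoint(int lo, int hi) {
        return (lo + hi) / 2;
    }

    // Correct implementation of the same design.
    static int midpoint(int lo, int hi) {
        return lo + (hi - lo) / 2;
    }

    public static void main(String[] args) {
        System.out.println(buggyMidpoint(2_000_000_000, 2_100_000_000)); // negative: overflow
        System.out.println(midpoint(2_000_000_000, 2_100_000_000));      // 2050000000
    }
}
```

A design defect, by contrast, would be wanting the wrong thing in the first place -- e.g., specifying a rounding direction that the rest of the system does not expect -- and no amount of correct implementation would fix it.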


BB


RE: [SC-L] Bugs and flaws

2006-02-02 Thread Chris Wysopal

In the manufacturing world, manufacturing defects are defects that were
not intended by the design. With software, an implementation defect is a
defect that is not intended by the design.  That is where I see the
analogy. A factory worker forgetting to put on a washer or installing a
polarized capacitor backwards is similar to a programmer neglecting to
check a return code or being off by one in a length calculation.

In both disciplines, to increase quality you could say "don't do that",
you could add a quality process that tests for the correct implementation,
or best, you could make it impossible for the mistake to happen. So I
guess I see a lot of similarities between the manufacturing process and
the software implementation process.

Sure, it's not a perfect analogy.  Nothing seems to be between the physical
and digital worlds.  As you say, many of the flaws created during what is
traditionally known as implementation are low-level design errors but at
the very end of the continuum they are simply mistakes.

-Chris


On Thu, 2 Feb 2006, David Crocker wrote:

> I don't think this analogy between software development and
> manufacturing holds. There are no "manufacturing defects" in software
> construction, unless one counts a buggy chip (e.g. Pentium FPU or
> similar) or perhaps a buggy compiler. Software instructions execute
> predictably and are not subject to the problems of defective materials,
> difficulties in keeping dimensions within a precise tolerance, or wear
> and tear.
>
> If some small bolt in my car fails because the bolt met its
> manufacturer's specification but was not strong enough to withstand the
> loads it was subjected to, that is a low-level design error, not a
> manufacturing error. Similarly, I view coding errors as low-level design
> errors.
>
> David Crocker, Escher Technologies Ltd.
> Consultancy, contracting and tools for dependable software development
> www.eschertech.com
>
>
>
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
> Behalf Of Chris Wysopal
> Sent: 02 February 2006 21:35
> To: Gary McGraw
> Cc: William Kruse; Wall, Kevin; Secure Coding Mailing List
> Subject: RE: [SC-L] Bugs and flaws
>
>
>
> In the manufacturing world, which is far more mature than the software
> development world, they use the terminology of "design defect" and
> "manufacturing defect". So this distinction is useful and has stood the
> test of time.
>
> Flaw and defect are synonymous. We should just pick one. You could say
> that the term for manufacturing software is "implementation".
>
> So why do we need to change the terms for the software world? Wouldn't
> "design defect" and "implementation defect" be clearer and more in line
> with the manufacturing quality discipline, which the software quality
> discipline should be working towards emulating? (When do we get to Six
> Sigma?)
>
> I just don't see the usefulness of calling a "design defect" a "flaw".
> "Flaw" by itself is overloaded. And in the software world, "bug" can
> mean an implementation or design problem, so "bug" alone is overloaded
> for describing an implementation defect.
>
> At @stake the Application Center of Excellence used the terminology
> "design flaw" and "implementation flaw". It was well understood by our
> customers.
>
> As Crispin said in an earlier post on the subject, the line is sometimes
> blurry. I am sure this is the case in manufacturing too. Architecture
> flaws can be folded into the design flaw category for simplicity.
>
> My vote is for a less overloaded and clearer terminology.
>
> -Chris
>
> P.S. My father managed a non-destructive test lab at a jet engine
> manufacturer. They had about the highest quality requirements in the
> world. So for many hours I was regaled with tales about the benefits of
> performing static analysis on individual components early in the
> manufacturing cycle.
>
> They would dip cast parts in a fluorescent liquid and look at them under
> ultraviolet light to illuminate cracks caused during the casting
> process. For critical parts which would receive more stress, such as the
> fan blades, they would x-ray each part to inspect for internal cracks.
> A more expensive process, but warranted due to the increased risk of
> total system failure from a defect in those parts.
>
> The static testing was obviously much cheaper and delivered better
> quality than just bolting the parts together and doing dynamic testing
> in a test cell. It's a wonder that it has taken the software security
> world so long to catch onto the benefits of static testing of
> implementation. I think we can learn a lot more from the manufacturing
> world.

RE: [SC-L] Bugs and flaws

2006-02-02 Thread Jeff Williams
That's not my experience. I believe there are many design problems you can
find more quickly and, more importantly, accurately by using the code. I
find this to be true even when there is a documented design -- but there's
no question in the case where all you have is code.

In fact, if the design isn't fairly obvious in the code, then that's a
security problem in itself. Unless it's clear, developers won't understand
it and will make more mistakes.

Static analysis tools can help a lot here. Used properly, they can provide
design-level insight into a software baseline. The huge advantage is that,
unlike design documents, the code is always an accurate account of what the
system actually does.
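As a rough illustration of the kind of rule such a tool applies (a toy
sketch only, not any real product's rule set; the regex and function names
here are invented for this example), a check for the weak-crypto call
discussed in this thread might look like:

```python
import re

# Toy static-analysis rule: flag Java source that requests a weak
# cipher (e.g. DES) from KeyGenerator, as in the example cited in
# this thread. Real tools parse the code; a regex is only a sketch.
WEAK_CRYPTO = re.compile(r'KeyGenerator\.getInstance\(\s*"(DES|RC2|RC4)"\s*\)')

def scan(java_source: str) -> list[str]:
    """Return the weak algorithm names found in a Java snippet."""
    return [m.group(1) for m in WEAK_CRYPTO.finditer(java_source)]

snippet = 'SecretKey key = KeyGenerator.getInstance("DES").generateKey();'
print(scan(snippet))  # ['DES']
```

The point is that this kind of finding comes straight from the code, with
no reliance on design documents being up to date.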

--Jeff 

-Original Message-
From: Gary McGraw [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 02, 2006 9:06 PM
To: [EMAIL PROTECTED]; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws

Not unless you talk to the designer.  You might get lucky and find a design
problem or two by looking at code, but that usually doesn't work.

That's not to say that all systems have adequate documentation about design
(not to mention requirements that you correctly cited before)!  They don't.
When they don't, you have to try to construct them.  Doing them from code is
very difficult at best.

gem

 -Original Message-
From:   Jeff Williams [mailto:[EMAIL PROTECTED]
Sent:   Thu Feb 02 20:59:14 2006
To: Gary McGraw; 'Secure Coding Mailing List'
Subject:	RE: [SC-L] Bugs and flaws

Um, so if there is no documentation you can't find design flaws?

--Jeff

-Original Message-
From: Gary McGraw [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 02, 2006 8:50 PM
To: Jeff Williams; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws

I'm sorry, but it is just not possible to find design flaws by staring at
code.

gem

 -Original Message-
From:   Jeff Williams [mailto:[EMAIL PROTECTED]
Sent:   Thu Feb 02 20:32:29 2006
To: 'Secure Coding Mailing List'
Subject:	RE: [SC-L] Bugs and flaws

At the risk of piling on here, there's no question that it's critical to
consider security problems across the continuum. While we're at it, the
analysis should start back even further with the requirements or even the
whole system concept.

All of the representations across the continuum (rqmts, arch, design, code)
are just models of the same thing.  They start more abstract and end up as
code.  A *single* problem could exist in all these models at the same time.

Higher-level representations of systems are generally eclipsed by lower
level ones fairly rapidly.  For example, it's a rare group that updates
their design docs as implementation progresses. So once you've got code, the
architecture-flaws don't come from architecture documents (which lie). The
best place to look for them (if you want truth) is to look in the code.

To me, the important thing here is to give software teams good advice about
the level of effort they're going to have to put into fixing a problem. If
it helps to give a security problem a label to let them know they're going
to have to go back to the drawing board, I think saying 'architecture-flaw'
or 'design-flaw' is fine. But I agree with others that saying 'flaw' alone
doesn't help distinguish it from 'bug' in the minds of most developers or
architects.

--Jeff

Jeff Williams, CEO
Aspect Security
http://www.aspectsecurity.com


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Crispin Cowan
Sent: Wednesday, February 01, 2006 5:07 PM
To: John Steven
Cc: Will Kruse; Secure Coding Mailing List
Subject: Re: [SC-L] Bugs and flaws

John Steven wrote:
> I'm not sure there's any value in discussing this minutia further, but
here
> goes:
>   
We'll let the moderator decide that :)

> 1) Crispin, I think you've nailed one thing. The continuum from:
>
> Architecture --> Design --> Low-level Design --> (to) Implementation
>
> is a blurry one, and certainly slippery as you move from 'left' to
'right'.
>   
Cool.

> But, we all should understand that there's commensurate blur in our
analysis
> techniques (aka architecture and code review) to assure that as we sweep
> over software that we uncover both bugs and architectural flaws.
>   
Also agreed.

> 2) Flaws are different in important ways from bugs when it comes to
presentation,
> prioritization, and mitigation. Let's explore by physical analog first.
>   
I disagree with the word usage. To me, "bug" and "flaw" are exactly
synonyms. The distinction being drawn here is between "implementation
flaws" vs. "design flaws". You are just creating confusing jargon to
claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
::= bug. A vulnerability is a special subset of flaws/defects/bugs that
has the property of being exploitable.

RE: [SC-L] Bugs and flaws

2006-02-02 Thread Gary McGraw
Not unless you talk to the designer.  You might get lucky and find a design 
problem or two by looking at code, but that usually doesn't work.

That's not to say that all systems have adequate documentation about design 
(not to mention requirements that you correctly cited before)!  They don't.  
When they don't, you have to try to construct them.  Doing them from code is 
very difficult at best.

gem

 -Original Message-
From:   Jeff Williams [mailto:[EMAIL PROTECTED]
Sent:   Thu Feb 02 20:59:14 2006
To: Gary McGraw; 'Secure Coding Mailing List'
Subject:    RE: [SC-L] Bugs and flaws

Um, so if there is no documentation you can't find design flaws?

--Jeff

-Original Message-
From: Gary McGraw [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 02, 2006 8:50 PM
To: Jeff Williams; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws

I'm sorry, but it is just not possible to find design flaws by staring at
code.

gem

 -Original Message-
From:   Jeff Williams [mailto:[EMAIL PROTECTED]
Sent:   Thu Feb 02 20:32:29 2006
To: 'Secure Coding Mailing List'
Subject:	RE: [SC-L] Bugs and flaws

At the risk of piling on here, there's no question that it's critical to
consider security problems across the continuum. While we're at it, the
analysis should start back even further with the requirements or even the
whole system concept.

All of the representations across the continuum (rqmts, arch, design, code)
are just models of the same thing.  They start more abstract and end up as
code.  A *single* problem could exist in all these models at the same time.

Higher-level representations of systems are generally eclipsed by lower
level ones fairly rapidly.  For example, it's a rare group that updates
their design docs as implementation progresses. So once you've got code, the
architecture-flaws don't come from architecture documents (which lie). The
best place to look for them (if you want truth) is to look in the code.

To me, the important thing here is to give software teams good advice about
the level of effort they're going to have to put into fixing a problem. If
it helps to give a security problem a label to let them know they're going
to have to go back to the drawing board, I think saying 'architecture-flaw'
or 'design-flaw' is fine. But I agree with others that saying 'flaw' alone
doesn't help distinguish it from 'bug' in the minds of most developers or
architects.

--Jeff

Jeff Williams, CEO
Aspect Security
http://www.aspectsecurity.com


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Crispin Cowan
Sent: Wednesday, February 01, 2006 5:07 PM
To: John Steven
Cc: Will Kruse; Secure Coding Mailing List
Subject: Re: [SC-L] Bugs and flaws

John Steven wrote:
> I'm not sure there's any value in discussing this minutia further, but
here
> goes:
>   
We'll let the moderator decide that :)

> 1) Crispin, I think you've nailed one thing. The continuum from:
>
> Architecture --> Design --> Low-level Design --> (to) Implementation
>
> is a blurry one, and certainly slippery as you move from 'left' to
'right'.
>   
Cool.

> But, we all should understand that there's commensurate blur in our
analysis
> techniques (aka architecture and code review) to assure that as we sweep
> over software that we uncover both bugs and architectural flaws.
>   
Also agreed.

> 2) Flaws are different in important ways from bugs when it comes to
presentation,
> prioritization, and mitigation. Let's explore by physical analog first.
>   
I disagree with the word usage. To me, "bug" and "flaw" are exactly
synonyms. The distinction being drawn here is between "implementation
flaws" vs. "design flaws". You are just creating confusing jargon to
claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
::= bug. A vulnerability is a special subset of flaws/defects/bugs that
has the property of being exploitable.

> I nearly fell through one of my consultant's tables as I leaned on it this
> morning. We explored: "Bug or flaw?".
>   
The wording issue aside, at the implementation level you try to
code/implement to prevent flaws, by doing things such as using higher
quality steel (for bolts) and good coding practices (for software). At
the design level, you try to design so as to *mask* flaws by avoiding
single points of failure, doing things such as using 2 bolts (for
tables) and using access controls to limit privilege escalation (for
software).
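A minimal sketch of that two-level idea in code (all names here are
hypothetical, invented for illustration): the input check is the
implementation-level control, while the role check is the design-level
control that limits damage even when some other code path is buggy:

```python
# Hypothetical example of the two levels described above. The role
# check is a design-level control (least privilege masks failures
# elsewhere); the username check is an implementation-level control
# (prevent the defect locally).

ALLOWED_ROLES = {"admin"}  # assumed policy for this sketch

def delete_user(actor_role: str, username: str) -> str:
    if actor_role not in ALLOWED_ROLES:
        # Design level: even a bypassed input check cannot escalate.
        raise PermissionError("actor lacks privilege")
    if not username.isalnum():
        # Implementation level: reject malformed input outright.
        raise ValueError("invalid username")
    return f"deleted {username}"

print(delete_user("admin", "alice"))  # deleted alice
```

The design-level check is deliberately independent of the input check, so
a single coding mistake is not a single point of failure.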

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com

RE: [SC-L] Bugs and flaws

2006-02-02 Thread Jeff Williams
Um, so if there is no documentation you can't find design flaws?

--Jeff

-Original Message-
From: Gary McGraw [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 02, 2006 8:50 PM
To: Jeff Williams; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws

I'm sorry, but it is just not possible to find design flaws by staring at
code.

gem

 -Original Message-
From:   Jeff Williams [mailto:[EMAIL PROTECTED]
Sent:   Thu Feb 02 20:32:29 2006
To: 'Secure Coding Mailing List'
Subject:        RE: [SC-L] Bugs and flaws

At the risk of piling on here, there's no question that it's critical to
consider security problems across the continuum. While we're at it, the
analysis should start back even further with the requirements or even the
whole system concept.

All of the representations across the continuum (rqmts, arch, design, code)
are just models of the same thing.  They start more abstract and end up as
code.  A *single* problem could exist in all these models at the same time.

Higher-level representations of systems are generally eclipsed by lower
level ones fairly rapidly.  For example, it's a rare group that updates
their design docs as implementation progresses. So once you've got code, the
architecture-flaws don't come from architecture documents (which lie). The
best place to look for them (if you want truth) is to look in the code.

To me, the important thing here is to give software teams good advice about
the level of effort they're going to have to put into fixing a problem. If
it helps to give a security problem a label to let them know they're going
to have to go back to the drawing board, I think saying 'architecture-flaw'
or 'design-flaw' is fine. But I agree with others that saying 'flaw' alone
doesn't help distinguish it from 'bug' in the minds of most developers or
architects.

--Jeff

Jeff Williams, CEO
Aspect Security
http://www.aspectsecurity.com


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Crispin Cowan
Sent: Wednesday, February 01, 2006 5:07 PM
To: John Steven
Cc: Will Kruse; Secure Coding Mailing List
Subject: Re: [SC-L] Bugs and flaws

John Steven wrote:
> I'm not sure there's any value in discussing this minutia further, but
here
> goes:
>   
We'll let the moderator decide that :)

> 1) Crispin, I think you've nailed one thing. The continuum from:
>
> Architecture --> Design --> Low-level Design --> (to) Implementation
>
> is a blurry one, and certainly slippery as you move from 'left' to
'right'.
>   
Cool.

> But, we all should understand that there's commensurate blur in our
analysis
> techniques (aka architecture and code review) to assure that as we sweep
> over software that we uncover both bugs and architectural flaws.
>   
Also agreed.

> 2) Flaws are different in important ways from bugs when it comes to
presentation,
> prioritization, and mitigation. Let's explore by physical analog first.
>   
I disagree with the word usage. To me, "bug" and "flaw" are exactly
synonyms. The distinction being drawn here is between "implementation
flaws" vs. "design flaws". You are just creating confusing jargon to
claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
::= bug. A vulnerability is a special subset of flaws/defects/bugs that
has the property of being exploitable.

> I nearly fell through one of my consultant's tables as I leaned on it this
> morning. We explored: "Bug or flaw?".
>   
The wording issue aside, at the implementation level you try to
code/implement to prevent flaws, by doing things such as using higher
quality steel (for bolts) and good coding practices (for software). At
the design level, you try to design so as to *mask* flaws by avoiding
single points of failure, doing things such as using 2 bolts (for
tables) and using access controls to limit privilege escalation (for
software).

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php







RE: [SC-L] Bugs and flaws

2006-02-02 Thread Gary McGraw
I'm sorry, but it is just not possible to find design flaws by staring at code.

gem

 -Original Message-
From:   Jeff Williams [mailto:[EMAIL PROTECTED]
Sent:   Thu Feb 02 20:32:29 2006
To: 'Secure Coding Mailing List'
Subject:    RE: [SC-L] Bugs and flaws

At the risk of piling on here, there's no question that it's critical to
consider security problems across the continuum. While we're at it, the
analysis should start back even further with the requirements or even the
whole system concept.

All of the representations across the continuum (rqmts, arch, design, code)
are just models of the same thing.  They start more abstract and end up as
code.  A *single* problem could exist in all these models at the same time.

Higher-level representations of systems are generally eclipsed by lower
level ones fairly rapidly.  For example, it's a rare group that updates
their design docs as implementation progresses. So once you've got code, the
architecture-flaws don't come from architecture documents (which lie). The
best place to look for them (if you want truth) is to look in the code.

To me, the important thing here is to give software teams good advice about
the level of effort they're going to have to put into fixing a problem. If
it helps to give a security problem a label to let them know they're going
to have to go back to the drawing board, I think saying 'architecture-flaw'
or 'design-flaw' is fine. But I agree with others that saying 'flaw' alone
doesn't help distinguish it from 'bug' in the minds of most developers or
architects.

--Jeff

Jeff Williams, CEO
Aspect Security
http://www.aspectsecurity.com


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Crispin Cowan
Sent: Wednesday, February 01, 2006 5:07 PM
To: John Steven
Cc: Will Kruse; Secure Coding Mailing List
Subject: Re: [SC-L] Bugs and flaws

John Steven wrote:
> I'm not sure there's any value in discussing this minutia further, but
here
> goes:
>   
We'll let the moderator decide that :)

> 1) Crispin, I think you've nailed one thing. The continuum from:
>
> Architecture --> Design --> Low-level Design --> (to) Implementation
>
> is a blurry one, and certainly slippery as you move from 'left' to
'right'.
>   
Cool.

> But, we all should understand that there's commensurate blur in our
analysis
> techniques (aka architecture and code review) to assure that as we sweep
> over software that we uncover both bugs and architectural flaws.
>   
Also agreed.

> 2) Flaws are different in important ways from bugs when it comes to
presentation,
> prioritization, and mitigation. Let's explore by physical analog first.
>   
I disagree with the word usage. To me, "bug" and "flaw" are exactly
synonyms. The distinction being drawn here is between "implementation
flaws" vs. "design flaws". You are just creating confusing jargon to
claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
::= bug. A vulnerability is a special subset of flaws/defects/bugs that
has the property of being exploitable.

> I nearly fell through one of my consultant's tables as I leaned on it this
> morning. We explored: "Bug or flaw?".
>   
The wording issue aside, at the implementation level you try to
code/implement to prevent flaws, by doing things such as using higher
quality steel (for bolts) and good coding practices (for software). At
the design level, you try to design so as to *mask* flaws by avoiding
single points of failure, doing things such as using 2 bolts (for
tables) and using access controls to limit privilege escalation (for
software).

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php







RE: [SC-L] Bugs and flaws

2006-02-02 Thread Brian Chess
I spent Phase One of both my academic and professional careers
working on hardware fault models and design for testability.
In fact, the first static analysis tool I wrote was for hardware:
it analyzed Verilog looking for design mistakes that would make
it difficult or impossible to perform design verification or to
apply adequate manufacturing tests.  Some observations:

- The hardware guys are indeed ahead.  Chip designers budget for
test and verification from day one.  They also do a fair amount
of thinking about what's going to go wrong.  Somebody's going to
give you 5 volts instead of 3.3 volts.  What's going to happen?
The transistors are going to switch at a different rate when the
chip is cold.  What's going to happen?  A speck of dust is going
to fall on the wafer between the time the metal 2 layer goes down
and the time the metal 3 layer goes down.  What's going to happen?

- The difference between a manufacturing defect and a design
defect is not always immediately obvious.  Maybe two wires got
bridged because a piece of dust fell in exactly the right spot.
Maybe two wires got bridged because you made a mistake in your
process physics and you need 50 nm of tolerance instead of 0.5 nm.
You'd better figure it out before you go into full-swing
manufacturing, or big batches of defective chips could kill your
profit margins and drive your customers away at the same time.
For that reason, diagnosing the cause of failure is an important
topic.

Brian

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Chris Wysopal
Sent: 02 February 2006 21:35
To: Gary McGraw
Cc: William Kruse; Wall, Kevin; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws

In the manufacturing world, which is far more mature than the software
development world, they use the terminology of "design defect" and
"manufacturing defect". So this distinction is useful and has stood the
test of time.

Flaw and defect are synonymous. We should just pick one. You could say
that the term for manufacturing software is "implementation".

So why do we need to change the terms for the software world? Wouldn't
"design defect" and "implementation defect" be clearer and more in line
with the manufacturing quality discipline, which the software quality
discipline should be working towards emulating? (When do we get to Six
Sigma?)

I just don't see the usefulness of calling a "design defect" a "flaw".
"Flaw" by itself is overloaded. And in the software world, "bug" can mean
an implementation or design problem, so "bug" alone is overloaded for
describing an implementation defect.

At @stake the Application Center of Excellence used the terminology
"design flaw" and "implementation flaw". It was well understood by our
customers.

As Crispin said in an earlier post on the subject, the line is sometimes
blurry. I am sure this is the case in manufacturing too. Architecture
flaws can be folded into the design flaw category for simplicity.

My vote is for a less overloaded and clearer terminology.

-Chris

P.S. My father managed a non-destructive test lab at a jet engine
manufacturer. They had about the highest quality requirements in the
world. So for many hours I was regaled with tales about the benefits of
performing static analysis on individual components early in the
manufacturing cycle.

They would dip cast parts in a fluorescent liquid and look at them under
ultraviolet light to illuminate cracks caused during the casting process.
For critical parts which would receive more stress, such as the fan
blades, they would x-ray each part to inspect for internal cracks. A more
expensive process, but warranted due to the increased risk of total
system failure from a defect in those parts.

The static testing was obviously much cheaper and delivered better
quality than just bolting the parts together and doing dynamic testing
in a test cell. It's a wonder that it has taken the software security
world so long to catch onto the benefits of static testing of
implementation. I think we can learn a lot more from the manufacturing
world.

On Thu, 2 Feb 2006, Gary McGraw wrote:

> Hi all,
>
> When I introduced the "bugs" and "flaws" nomenclature into the
> literature, I did so in an article about the software security
> workshop I chaired in 2003 (see http://www.cigital.com/ssw/).  This
> was ultimately written up in an "On the Horizon" paper published by
> IEEE Security & Privacy.
>
> Nancy Mead and I queried the SWEBOK and looked around to see if the
> new usage caused collision.  It did not.  The reason I think it is
> important to distinguish the two ends of the rather slippery range
> (crispy is right about that) is that software security as a field is
> not paying enough attention to architecture.

RE: [SC-L] Bugs and flaws

2006-02-02 Thread Jeff Williams
At the risk of piling on here, there's no question that it's critical to
consider security problems across the continuum. While we're at it, the
analysis should start back even further with the requirements or even the
whole system concept.

All of the representations across the continuum (rqmts, arch, design, code)
are just models of the same thing.  They start more abstract and end up as
code.  A *single* problem could exist in all these models at the same time.

Higher-level representations of systems are generally eclipsed by lower
level ones fairly rapidly.  For example, it's a rare group that updates
their design docs as implementation progresses. So once you've got code, the
architecture-flaws don't come from architecture documents (which lie). The
best place to look for them (if you want truth) is to look in the code.

To me, the important thing here is to give software teams good advice about
the level of effort they're going to have to put into fixing a problem. If
it helps to give a security problem a label to let them know they're going
to have to go back to the drawing board, I think saying 'architecture-flaw'
or 'design-flaw' is fine. But I agree with others that saying 'flaw' alone
doesn't help distinguish it from 'bug' in the minds of most developers or
architects.

--Jeff

Jeff Williams, CEO
Aspect Security
http://www.aspectsecurity.com


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Crispin Cowan
Sent: Wednesday, February 01, 2006 5:07 PM
To: John Steven
Cc: Will Kruse; Secure Coding Mailing List
Subject: Re: [SC-L] Bugs and flaws

John Steven wrote:
> I'm not sure there's any value in discussing this minutia further, but
here
> goes:
>   
We'll let the moderator decide that :)

> 1) Crispin, I think you've nailed one thing. The continuum from:
>
> Architecture --> Design --> Low-level Design --> (to) Implementation
>
> is a blurry one, and certainly slippery as you move from 'left' to
'right'.
>   
Cool.

> But, we all should understand that there's commensurate blur in our
analysis
> techniques (aka architecture and code review) to assure that as we sweep
> over software that we uncover both bugs and architectural flaws.
>   
Also agreed.

> 2) Flaws are different in important ways from bugs when it comes to
presentation,
> prioritization, and mitigation. Let's explore by physical analog first.
>   
I disagree with the word usage. To me, "bug" and "flaw" are exactly
synonyms. The distinction being drawn here is between "implementation
flaws" vs. "design flaws". You are just creating confusing jargon to
claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
::= bug. A vulnerability is a special subset of flaws/defects/bugs that
has the property of being exploitable.

> I nearly fell through one of my consultant's tables as I leaned on it this
> morning. We explored: "Bug or flaw?".
>   
The wording issue aside, at the implementation level you try to
code/implement to prevent flaws, by doing things such as using higher
quality steel (for bolts) and good coding practices (for software). At
the design level, you try to design so as to *mask* flaws by avoiding
single points of failure, doing things such as using 2 bolts (for
tables) and using access controls to limit privilege escalation (for
software).

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption

___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php



RE: [SC-L] Bugs and flaws

2006-02-02 Thread David Crocker
I don't think this analogy between software development and manufacturing holds.
There are no "manufacturing defects" in software construction, unless one counts
a buggy chip (e.g. Pentium FPU or similar) or perhaps a buggy compiler. Software
instructions execute predictably and are not subject to the problems of
defective materials, difficulties in keeping dimensions within a precise
tolerance, or wear and tear.

If some small bolt in my car fails because the bolt met its manufacturer's
specification but was not strong enough to withstand the loads it was subjected
to, that is a low-level design error, not a manufacturing error. Similarly, I
view coding errors as low-level design errors.

David Crocker, Escher Technologies Ltd.
Consultancy, contracting and tools for dependable software development
www.eschertech.com



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Chris Wysopal
Sent: 02 February 2006 21:35
To: Gary McGraw
Cc: William Kruse; Wall, Kevin; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws



In the manufacturing world, which is far more mature than the software
development world, they use the terminology of "design defect" and
"manufacturing defect".  So this distinction is useful and has stood the test of
time.

Flaw and defect are synonymous. We should just pick one. You could say that the
term for manufacturing software is "implementation".

So why do we need to change the terms for the software world?  Wouldn't "design
defect" and "implementation defect" be clearer and more in line with the
manufacturing quality discipline, which the software quality discipline should
be working towards emulating? (When do we get to Six
Sigma?)

I just don't see the usefulness of calling a "design defect" a "flaw". "Flaw" by
itself is overloaded.  And in the software world, "bug" can mean an
implementation or design problem, so "bug" alone is overloaded for describing an
implementation defect.

At @stake the Application Center of Excellence used the terminology "design
flaw" and "implementation flaw".  It was well understood by our customers.

As Crispin said in an earlier post on the subject, the line is sometimes blurry.
I am sure this is the case in manufacturing too.  Architecture flaws can be
folded into the design flaw category for simplicity.

My vote is for a less overloaded and clearer terminology.

-Chris

P.S. My father managed a non-destructive test lab at a jet engine manufacturer.
They had about the highest quality requirements in the world. So for many hours
I was regaled with tales about the benefits of performing static analysis on
individual components early in the manufacturing cycle.

They would dip cast parts in a fluorescent liquid and look at them under
ultraviolet light to illuminate cracks caused during the casting process. For
critical parts which would receive more stress, such as the fan blades, they
would x-ray each part to inspect for internal cracks. A more expensive process
but warranted due to the increased risk of total system failure from a defect in
those parts.

The static testing was obviously much cheaper and delivered better quality than
just bolting the parts together and doing dynamic testing in a test cell.  It's
a wonder that it has taken the software security world so long to catch onto the
benefits of static testing of implementation.  I think we can learn a lot more
from the manufacturing world.

On Thu, 2 Feb 2006, Gary McGraw wrote:

> Hi all,
>
> When I introduced the "bugs" and "flaws" nomenclature into the 
> literature, I did so in an article about the software security 
> workshop I chaired in 2003 (see http://www.cigital.com/ssw/).  This 
> was ultimately written up in an "On the Horizon" paper published by 
> IEEE Security & Privacy.
>
> Nancy Mead and I queried the SWEBOK and looked around to see if the 
> new usage caused collision.  It did not.  The reason I think it is 
> important to distinguish the two ends of the rather slippery range 
> (crispy is right about that) is that software security as a field is 
> not paying enough attention to architecture.  By identifying flaws as 
> a subcategory of defects (according to the SWEBOK), we can focus some 
> attention on the problem.
>
> From the small glossary in "Software Security" (my new book out
> tomorrow):
>
> Bug-A bug is an implementation-level software problem. Bugs may exist 
> in code but never be executed. Though the term bug is applied quite 
> generally by many software practitioners, I reserve use of the term to 
> encompass fairly simple implementation errors. Bugs are 
> implementation-level problems that can be easily discovered and 
> remedied. See Chapter 1.

RE: [SC-L] Bugs and flaws

2006-02-02 Thread Gunnar Peterson
So from a countermeasure standpoint, a bug can and should be fixed locally,
while a flaw may require that the countermeasure exist at a different level of
abstraction. For example, I assume no one thinks (in OO at least) that input
validation should be resident in every method; rather, it is called externally.
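Gunnar's point about validation living outside individual methods can be sketched roughly as follows. This is a minimal illustration in Java; the class and method names are invented for the example, not drawn from any post:

```java
import java.util.regex.Pattern;

// Validation concentrated at one boundary component rather than
// duplicated inside every business method.
final class InputValidator {
    private static final Pattern USERNAME = Pattern.compile("[A-Za-z0-9_]{1,32}");

    static String requireUsername(String raw) {
        if (raw == null || !USERNAME.matcher(raw).matches()) {
            throw new IllegalArgumentException("invalid username");
        }
        return raw;
    }
}

final class AccountService {
    // Business logic assumes input already passed the boundary check;
    // the countermeasure lives one level of abstraction up.
    String greet(String validatedUsername) {
        return "hello, " + validatedUsername;
    }
}
```

The design choice is the one Gunnar describes: the fix for a flaw of this kind is architectural (where the check lives), not a local patch inside each method.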

-gp

Quoting Gary McGraw <[EMAIL PROTECTED]>:

> Hi all,
>
> When I introduced the "bugs" and "flaws" nomenclature into the
> literature, I did so in an article about the software security workshop
> I chaired in 2003 (see http://www.cigital.com/ssw/).  This was
> ultimately written up in an "On the Horizon" paper published by IEEE
> Security & Privacy.
>
> Nancy Mead and I queried the SWEBOK and looked around to see if the new
> usage caused collision.  It did not.  The reason I think it is important
> to distinguish the two ends of the rather slippery range (crispy is
> right about that) is that software security as a field is not paying
> enough attention to architecture.  By identifying flaws as a subcategory
> of defects (according the the SWEBOK), we can focus some attention on
> the problem.
>
> From the small glossary in "Software Security" (my new book out
> tomorrow):
>
> Bug-A bug is an implementation-level software problem. Bugs may exist in
> code but never be executed. Though the term bug is applied quite
> generally by many software practitioners, I reserve use of the term to
> encompass fairly simple implementation errors. Bugs are
> implementation-level problems that can be easily discovered and
> remedied. See Chapter 1.
>
> Flaw-A design-level or architectural software defect. High-level defects
> cause 50% of software security problems. See Chapter 1.
>
> In any case, I intend to still use these terms like this, and I would be
> very pleased if you would all join me.
>
> gem
>
>
>
> 
> This electronic message transmission contains information that may be
> confidential or privileged.  The information contained herein is intended
> solely for the recipient and use by any other party is not authorized.  If
> you are not the intended recipient (or otherwise authorized to receive this
> message by the intended recipient), any disclosure, copying, distribution or
> use of the contents of the information is prohibited.  If you have received
> this electronic message transmission in error, please contact the sender by
> reply email and delete all copies of this message.  Cigital, Inc. accepts no
> responsibility for any loss or damage resulting directly or indirectly from
> the use of this email or its contents.
> Thank You.
> 
>
> ___
> Secure Coding mailing list (SC-L)
> SC-L@securecoding.org
> List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
> List charter available at - http://www.securecoding.org/list/charter.php
>



Re: [SC-L] Bugs and flaws

2006-02-02 Thread Crispin Cowan
John Steven wrote:
> Re-reading my post, I realize that it came off as heavy support for
> additional terminology. Truth is, we've found that the easiest way to
> communicate this concept to our Consultants and Clients here at Cigital has
> been to build the two buckets (flaws and bugs).
>   
My main problem with this terminology is that I have only ever seen it
coming from Cigital people. The rest of the world seems to treat "flaw"
and "bug" as synonyms.

The distinction here is between "design flaw" and "implementation flaw".
There doesn't seem to be anything in these words that suggest one is
larger scale than the other.

From dictionary.com we have:

flaw^1 (flô)
/n./

   1. An imperfection, often concealed, that impairs soundness: /a flaw
  in the crystal that caused it to shatter./ See synonyms at blemish.
   2. A defect or shortcoming in something intangible: /They share the
  character flaw of arrogance./
   3. A defect in a legal document that can render it invalid.

"Bug" is a little more arcane, and the only relevant part is far down
the document where it discusses the history with Grace Hopper:

bug

An unwanted and unintended property of a program or piece of
hardware, esp. one that causes it to malfunction. Antonym of
/feature/. Examples: “There's a bug in the editor: it writes things
out backwards.” “The system crashed because of a hardware bug.”
“Fred is a winner, but he has a few bugs” (i.e., Fred is a good guy,
but he has a few personality problems).

Historical note: Admiral Grace Hopper (an early computing pioneer
better known for inventing /COBOL/) liked to tell a story in which a
technician solved a /glitch/ in the Harvard Mark II machine by pulling
an actual insect out from between the contacts of one of its relays,
and she subsequently promulgated /bug/ in its hackish sense as a joke
about the incident (though, as she was careful to admit, she was not
there when it happened). For many years the logbook associated with
the incident and the actual bug in question (a moth) sat in a display
case at the Naval Surface Warfare Center (NSWC). The entire story,
with a picture of the logbook and the moth taped into it, is recorded
in the /Annals of the History of Computing/, Vol. 3, No. 3 (July
1981), pp. 285--286.


> What I was really trying to present was that Security people could stand to
> be a bit more thorough about how they synthesize the results of their
> analysis before they communicate the vulnerabilities they've found, and what
> mitigating strategies they suggest.
>   
Definitely. I think there is a deep cultural problem that people who fix
bugs or flaws tend to over-focus on the micro issue, fixing the specific
coding vulnerability, and ignore the larger architectural error that
allows the coding defect to be exploitable and cause damage. In the case
at hand, the WMF bug would be much less dangerous if there were not so
many ways to induce IE to invoke WMF decoding without asking the user.

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption



RE: [SC-L] Bugs and flaws

2006-02-02 Thread Gavin, Michael
"Architecture" is also an overloaded term, often meaning either a design
(the output of architects) or the implementation of certain types of
design (the output of engineers). 

Hoping to clarify Chris's comment on architecture flaws: architecture
defects as in the defects in the output produced by architects are
"design flaws"; architecture defects as in the defects in the output of
programmers/coders/engineers are "implementation flaws".

FWIW, I agree with Chris: "design flaw" and "implementation flaw" seem
better/more descriptive/less confusing than revised definitions for
"flaw" and "bug". (Then again, I once worked at @stake...; on the other
hand, IIRC this terminology is more consistent with what you find in
Ross Anderson's classic "Security Engineering".)

Michael

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Chris Wysopal
Sent: Thursday, February 02, 2006 4:35 PM
To: Gary McGraw
Cc: William Kruse; Wall, Kevin; Secure Coding Mailing List
Subject: RE: [SC-L] Bugs and flaws


In the manufacturing world, which is far more mature than the software
development world, they use the terminology of "design defect" and
"manufacturing defect".  So this distinction is useful and has stood the
test of time.

Flaw and defect are synonymous. We should just pick one. You could say
that the term for manufacturing software is "implementation".

So why do we need to change the terms for the software world?  Wouldn't
"design defect" and "implementation defect" be clearer and more in line
with the manufacturing quality discipline, which the software quality
discipline should be working towards emulating? (When do we get to Six
Sigma?)

I just don't see the usefulness of calling a "design defect" a "flaw".
"Flaw" by itself is overloaded.  And in the software world, "bug" can
mean an implementation or design problem, so "bug" alone is overloaded
for describing an implementation defect.

At @stake the Application Center of Excellence used the terminology
"design flaw" and "implementation flaw".  It was well understood by our
customers.

As Crispin said in an earlier post on the subject, the line is sometimes
blurry.  I am sure this is the case in manufacturing too.  Architecture
flaws can be folded into the design flaw category for simplicity.

My vote is for a less overloaded and clearer terminology.

-Chris

P.S. My father managed a non-destructive test lab at a jet engine
manufacturer. They had about the highest quality requirements in the
world. So for many hours I was regaled with tales about the benefits of
performing static analysis on individual components early in the
manufacturing cycle.

They would dip cast parts in a fluorescent liquid and look at them under
ultraviolet light to illuminate cracks caused during the casting process.
For critical parts that would receive more stress, such as the fan
blades, they would x-ray each part to inspect for internal cracks. A more
expensive process, but warranted due to the increased risk of total
system failure for a defect in those parts.

The static testing was obviously much cheaper and delivered better
quality than just bolting the parts together and doing dynamic testing
in a test cell.  It's a wonder that it has taken the software security
world so long to catch onto the benefits of static testing of
implementation.  I think we can learn a lot more from the manufacturing
world.

On Thu, 2 Feb 2006, Gary McGraw wrote:

> Hi all,
>
> When I introduced the "bugs" and "flaws" nomenclature into the
> literature, I did so in an article about the software security
> workshop I chaired in 2003 (see http://www.cigital.com/ssw/).  This
> was ultimately written up in an "On the Horizon" paper published by
> IEEE Security & Privacy.
>
> Nancy Mead and I queried the SWEBOK and looked around to see if the
> new usage caused collision.  It did not.  The reason I think it is
> important to distinguish the two ends of the rather slippery range
> (crispy is right about that) is that software security as a field is
> not paying enough attention to architecture.  By identifying flaws as
> a subcategory of defects (according to the SWEBOK), we can focus some
> attention on the problem.
>
> From the small glossary in "Software Security" (my new book out
> tomorrow):
>
> Bug-A bug is an implementation-level software problem. Bugs may exist
> in code but never be executed. Though the term bug is applied quite
> generally by many software practitioners, I reserve use of the term
> to encompass fairly simple implementation errors. Bugs are
> implementation-level problems that can be easily discovered and
> remedied. See Chapter 1.

RE: [SC-L] Bugs and flaws

2006-02-02 Thread Gary McGraw
Hi Weld,

You make a very good point.  I think we have lots to learn from
manufacturing.  

As a matter of practice, I usually use the terms that you suggested as
modifiers and say:

implementation bug
design flaw

software defect

As long as there is a clear way to separate the two ends of the slippery
spectrum, I will be happy.  But I'm still gonna say "bug" and "flaw" for
short even when I mean the really long involved things above!  Simplify.

gem








RE: [SC-L] Bugs and flaws

2006-02-02 Thread Chris Wysopal

In the manufacturing world, which is far more mature than the software
development world, they use the terminology of "design defect" and
"manufacturing defect".  So this distinction is useful and has stood the
test of time.

Flaw and defect are synonymous. We should just pick one. You could say
that the term for manufacturing software is "implementation".

So why do we need to change the terms for the software world?  Wouldn't
"design defect" and "implementation defect" be clearer and more in line
with the manufacturing quality discipline, which the software quality
discipline should be working towards emulating? (When do we get to Six
Sigma?)

I just don't see the usefulness of calling a "design defect" a "flaw".
"Flaw" by itself is overloaded.  And in the software world, "bug" can mean
an implementation or design problem, so "bug" alone is overloaded for
describing an implementation defect.

At @stake the Application Center of Excellence used the terminology
"design flaw" and "implementation flaw".  It was well understood by our
customers.

As Crispin said in an earlier post on the subject, the line is sometimes
blurry.  I am sure this is the case in manufacturing too.  Architecture
flaws can be folded into the design flaw category for simplicity.

My vote is for a less overloaded and clearer terminology.

-Chris

P.S. My father managed a non-destructive test lab at a jet engine
manufacturer. They had about the highest quality requirements in the
world. So for many hours I was regaled with tales about the benefits of
performing static analysis on individual components early in the
manufacturing cycle.

They would dip cast parts in a fluorescent liquid and look at them under
ultraviolet light to illuminate cracks caused during the casting process.
For critical parts that would receive more stress, such as the fan
blades, they would x-ray each part to inspect for internal cracks. A more
expensive process, but warranted due to the increased risk of total
system failure for a defect in those parts.

The static testing was obviously much cheaper and delivered better quality
than just bolting the parts together and doing dynamic testing in a test
cell.  It's a wonder that it has taken the software security world so long
to catch onto the benefits of static testing of implementation.  I think
we can learn a lot more from the manufacturing world.

On Thu, 2 Feb 2006, Gary McGraw wrote:

> Hi all,
>
> When I introduced the "bugs" and "flaws" nomenclature into the
> literature, I did so in an article about the software security workshop
> I chaired in 2003 (see http://www.cigital.com/ssw/).  This was
> ultimately written up in an "On the Horizon" paper published by IEEE
> Security & Privacy.
>
> Nancy Mead and I queried the SWEBOK and looked around to see if the new
> usage caused collision.  It did not.  The reason I think it is important
> to distinguish the two ends of the rather slippery range (crispy is
> right about that) is that software security as a field is not paying
> enough attention to architecture.  By identifying flaws as a subcategory
> of defects (according to the SWEBOK), we can focus some attention on
> the problem.
>
> From the small glossary in "Software Security" (my new book out
> tomorrow):
>
> Bug-A bug is an implementation-level software problem. Bugs may exist in
> code but never be executed. Though the term bug is applied quite
> generally by many software practitioners, I reserve use of the term to
> encompass fairly simple implementation errors. Bugs are
> implementation-level problems that can be easily discovered and
> remedied. See Chapter 1.
>
> Flaw-A design-level or architectural software defect. High-level defects
> cause 50% of software security problems. See Chapter 1.
>
> In any case, I intend to still use these terms like this, and I would be
> very pleased if you would all join me.
>
> gem
>
>
>

RE: [SC-L] Bugs and flaws

2006-02-02 Thread Gary McGraw
Hi all,

When I introduced the "bugs" and "flaws" nomenclature into the
literature, I did so in an article about the software security workshop
I chaired in 2003 (see http://www.cigital.com/ssw/).  This was
ultimately written up in an "On the Horizon" paper published by IEEE
Security & Privacy.

Nancy Mead and I queried the SWEBOK and looked around to see if the new
usage caused collision.  It did not.  The reason I think it is important
to distinguish the two ends of the rather slippery range (crispy is
right about that) is that software security as a field is not paying
enough attention to architecture.  By identifying flaws as a subcategory
of defects (according to the SWEBOK), we can focus some attention on
the problem.

From the small glossary in "Software Security" (my new book out
tomorrow):

Bug-A bug is an implementation-level software problem. Bugs may exist in
code but never be executed. Though the term bug is applied quite
generally by many software practitioners, I reserve use of the term to
encompass fairly simple implementation errors. Bugs are
implementation-level problems that can be easily discovered and
remedied. See Chapter 1.

Flaw-A design-level or architectural software defect. High-level defects
cause 50% of software security problems. See Chapter 1.

In any case, I intend to still use these terms like this, and I would be
very pleased if you would all join me.

gem








Re: [SC-L] Bugs and flaws

2006-02-02 Thread John Steven
Kevin,

Jeff Payne and I were talking about this last night. Jeff's position was,
"...Or, you could just use the existing quality assurance terminology and
avoid the problem altogether." I agree with you and him; standardizing
terminology is a great start to obviating confusing discussions about what
type of problem the software faces.

Re-reading my post, I realize that it came off as heavy support for
additional terminology. Truth is, we've found that the easiest way to
communicate this concept to our Consultants and Clients here at Cigital has
been to build the two buckets (flaws and bugs).

What I was really trying to present was that Security people could stand to
be a bit more thorough about how they synthesize the results of their
analysis before they communicate the vulnerabilities they've found, and what
mitigating strategies they suggest.

I guess, in my mind, the most important thing with regard to classifying
the mistakes software people make that lead to vulnerability (the piety of
vulnerability taxonomies aside) is to support:

1) Selection of the most effective mitigating strategy -and-
2) Root cause analysis that will result in changes in software development
that prevent software folk from making the same mistake again.

-
John Steven
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.  | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

> From: "Wall, Kevin" <[EMAIL PROTECTED]>
> 
> John Steven wrote:
> ...
>> 2) Flaws are different from bugs in important ways when it comes to
>> presentation, prioritization, and mitigation. Let's explore by physical
>> analog first.
> 
> Crispin Cowan responded:
>> I disagree with the word usage. To me, "bug" and "flaw" are exactly
>> synonyms. The distinction being drawn here is between "implementation
>> flaws" vs. "design flaws". You are just creating confusing jargon to
>> claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
>> ::= bug. A vulnerability is a special subset of flaws/defects/bugs that
>> has the property of being exploitable.
> 
> I'm not sure if this will clarify things or further muddy the waters,
> but... partial definitions taken from the SWEBOK
> (http://www.swebok.org/ironman/pdf/Swebok_Ironman_June_23_%202004.pdf),
> which in turn were taken from the IEEE standard glossary
> (IEEE610.12-90), are:
> + Error: "A difference...between a computed result and the correct result"
> + Fault: "An incorrect step, process, or data definition
>   in a computer program"
> + Failure: "The [incorrect] result of a fault"
> + Mistake: "A human action that produces an incorrect result"
> 
> Not all faults are manifested as errors. I can't find an online
> version of the glossary anywhere, and the one I have is about 15-20 years old
> and buried somewhere deep under a score of other rarely used books.
> 
> My point is though, until we start with some standard terminology this
> field of information security is never going to mature. I propose that
> we build on the foundational definitions of the IEEE-CS (unless their
> definitions have "bugs" ;-).
> 
> -kevin
> ---
> Kevin W. Wall  Qwest Information Technology, Inc.
> [EMAIL PROTECTED] Phone: 614.215.4788
> "The reason you have people breaking into your software all
> over the place is because your software sucks..."
>  -- Former whitehouse cybersecurity advisor, Richard Clarke,
> at eWeek Security Summit








RE: [SC-L] Bugs and flaws

2006-02-02 Thread Wall, Kevin
John Steven wrote:
...
> 2) Flaws are different from bugs in important ways when it comes to
> presentation, prioritization, and mitigation. Let's explore by physical
> analog first.

Crispin Cowan responded:  
> I disagree with the word usage. To me, "bug" and "flaw" are exactly
> synonyms. The distinction being drawn here is between "implementation
> flaws" vs. "design flaws". You are just creating confusing jargon to
> claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
> ::= bug. A vulnerability is a special subset of flaws/defects/bugs that
> has the property of being exploitable.

I'm not sure if this will clarify things or further muddy the waters,
but... partial definitions taken from the SWEBOK
(http://www.swebok.org/ironman/pdf/Swebok_Ironman_June_23_%202004.pdf),
which in turn were taken from the IEEE standard glossary
(IEEE610.12-90), are:
+ Error: "A difference…between a computed result and the correct result"
+ Fault: "An incorrect step, process, or data definition
  in a computer program"
+ Failure: "The [incorrect] result of a fault"
+ Mistake: "A human action that produces an incorrect result"

Not all faults are manifested as errors. I can't find an online
version of the glossary anywhere, and the one I have is about 15-20 years old
and buried somewhere deep under a score of other rarely used books.
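Kevin's observation that "not all faults are manifested as errors" can be shown with a tiny Java illustration (my own example, not from the glossary): a fault can sit in the code indefinitely and only produce a failure for particular inputs.

```java
final class Averager {
    // Fault: an incorrect step -- (a + b) can overflow the int range.
    // For most inputs the fault is dormant and no failure is observed.
    static int average(int a, int b) {
        return (a + b) / 2;
    }

    // Corrected step, overflow-safe for the common case 0 <= a <= b.
    static int averageSafe(int a, int b) {
        return a + (b - a) / 2;
    }
}
```

average(2, 4) behaves correctly; average(Integer.MAX_VALUE, Integer.MAX_VALUE) wraps around and returns a negative number, which is the failure manifesting.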

My point is though, until we start with some standard terminology this
field of information security is never going to mature. I propose that
we build on the foundational definitions of the IEEE-CS (unless their
definitions have "bugs" ;-).

-kevin
---
Kevin W. Wall   Qwest Information Technology, Inc.
[EMAIL PROTECTED]   Phone: 614.215.4788
"The reason you have people breaking into your software all 
over the place is because your software sucks..."
 -- Former whitehouse cybersecurity advisor, Richard Clarke,
at eWeek Security Summit



Re: [SC-L] Bugs and flaws

2006-02-01 Thread Gunnar Peterson



Hi John,


Which of the following more aptly characterizes the problem?:

IMPL. BUG: Insufficient security-constraint existed on the admin Servlet
in the app's deployment descriptor.

ARCH. FLAW: No façade component gated privileged functionality
-alternatively-
ARCH. FLAW: Privileged functionality incapable of judging Principal's
entitlement (both fine, one user changing another's password, or coarse,
application functionality improperly accessed)
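For readers less familiar with the deployment-descriptor side of the IMPL. BUG above: the missing control is roughly a security-constraint stanza in web.xml. A sketch of the kind of entry whose absence is being described (the URL pattern and role name are invented for illustration):

```xml
<!-- Sketch only: restrict the admin servlet's URL space to an admin role. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Admin functions</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>admin</role-name>
  </auth-constraint>
</security-constraint>
```

If this stanza is simply missing, the fix is a one-line-ish local change (a bug); if no component anywhere judges entitlement, the fix is structural (a flaw).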


Clausewitz said to be strong, first in general, and then at the decisive
point. Assuming you consider authentication and authorization on admin
functions a decisive point, then this scenario is a failure in both
instances. The question you raise is locating the responsibility to deal
with this problem. In a distributed system, there are many potential
areas to locate those controls. Problems do not necessarily have to be
solved (and in some cases cannot be) at the same logical layer they were
created
(http://1raindrop.typepad.com/1_raindrop/2005/11/thinking_in_lay.html).
Would an authenticating reverse proxy have prevented this problem? How
about stronger identity protocols?
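The "where does the control live" question can be made concrete with a small sketch of the façade option from the list above (all names here are invented for illustration): privileged operations are reachable only through one component that judges the Principal's entitlement.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// A façade that gates privileged functionality: the entitlement check
// lives in one place, so callers cannot reach the raw operation
// without passing through it.
final class AdminFacade {
    private final Map<String, String> passwords = new HashMap<>();
    private final Set<String> admins;

    AdminFacade(Set<String> admins) { this.admins = admins; }

    void changePassword(String principal, String targetUser, String newPassword) {
        // Fine-grained rule: a user may change their own password;
        // only admins may change someone else's.
        if (!principal.equals(targetUser) && !admins.contains(principal)) {
            throw new SecurityException(principal + " may not change " + targetUser);
        }
        passwords.put(targetUser, newPassword);
    }

    String passwordOf(String user) { return passwords.get(user); }
}
```

An authenticating reverse proxy would move the same decision one layer further out; the façade keeps it inside the application, where it can judge fine-grained entitlement.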


-gp


Re: [SC-L] Bugs and flaws

2006-02-01 Thread Crispin Cowan
John Steven wrote:
> I'm not sure there's any value in discussing this minutia further, but here
> goes:
>   
We'll let the moderator decide that :)

> 1) Crispin, I think you've nailed one thing. The continuum from:
>
> Architecture --> Design --> Low-level Design --> (to) Implementation
>
> is a blurry one, and certainly slippery as you move from 'left' to 'right'.
>   
Cool.

> But, we all should understand that there's commensurate blur in our analysis
> techniques (aka architecture and code review) to assure that as we sweep
> over software that we uncover both bugs and architectural flaws.
>   
Also agreed.

> 2) Flaws are different from bugs in important ways when it comes to
> presentation, prioritization, and mitigation. Let's explore by physical
> analog first.
>   
I disagree with the word usage. To me, "bug" and "flaw" are exactly
synonyms. The distinction being drawn here is between "implementation
flaws" vs. "design flaws". You are just creating confusing jargon to
claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
::= bug. A vulnerability is a special subset of flaws/defects/bugs that
has the property of being exploitable.

> I nearly fell through one of my consultant's tables as I leaned on it this
> morning. We explored: "Bug or flaw?".
>   
The wording issue aside, at the implementation level you try to
code/implement to prevent flaws, by doing things such as using higher
quality steel (for bolts) and good coding practices (for software). At
the design level, you try to design so as to *mask* flaws by avoiding
single points of failure, doing things such as using 2 bolts (for
tables) and using access controls to limit privilege escalation (for
software).
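Crispin's "mask flaws by avoiding single points of failure" can be sketched in a few lines (an invented illustration, not anyone's actual design): the privileged action runs only if two independently implemented checks both pass, so one broken check -- the sheared bolt -- does not by itself expose the operation.

```java
import java.util.function.Predicate;

// Defense in depth: two independent gates in front of a privileged
// action; the design fails closed if either check is wrong or broken.
final class GuardedAction {
    private final Predicate<String> sessionCheck;  // e.g. is the session authenticated?
    private final Predicate<String> aclCheck;      // e.g. does an ACL grant the action?

    GuardedAction(Predicate<String> sessionCheck, Predicate<String> aclCheck) {
        this.sessionCheck = sessionCheck;
        this.aclCheck = aclCheck;
    }

    boolean run(String principal, Runnable privileged) {
        // Both layers must independently agree before the action runs.
        if (sessionCheck.test(principal) && aclCheck.test(principal)) {
            privileged.run();
            return true;
        }
        return false;
    }
}
```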

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption



Re: [SC-L] Bugs and flaws

2006-02-01 Thread John Steven
I'm not sure there's any value in discussing this minutia further, but here
goes:

1) Crispin, I think you've nailed one thing. The continuum from:

Architecture --> Design --> Low-level Design --> (to) Implementation

is a blurry one, and certainly slippery as you move from 'left' to 'right'.
But, we all should understand that there's commensurate blur in our analysis
techniques (aka architecture and code review) to assure that as we sweep
over software that we uncover both bugs and architectural flaws.
  
2) Flaws are different from bugs in important ways when it comes to
presentation, prioritization, and mitigation. Let's explore by physical
analog first.

I nearly fell through one of my consultant's tables as I leaned on it this
morning. We explored: "Bug or flaw?".

A bolt's head had been sheared. The shear had allowed the bolt to wiggle
loose and made the table top wobble.

IMPL. BUG: The bolt's head was weak enough to shear*
* Some readers will complain that the bolt manufacturer should have made the
bolt stronger. Don't get bogged down, we're building a table 'system' here,
and we can't control the bolt any more than we can control IBM's
implementation of Websphere. With respect to this table 'system', we have a
bug.

Was there a masked bug? Was the truss's metal cut but not buffed in a way
that caused an otherwise strong-enough bolt to score over time and
eventually shear under normal load?

ARCH. FLAW: [Some aspect of the table's design] caused the bolt to shear.
ARCH. FLAW: The table's design is not resilient to a sheared bolt.

As Crispin, Steve, and I would likely agree... Good application of the
techniques involved in common architectural and code-based analyses would
likely have found all of these problems. Remember, my thesis is that the
real difference is what we do w/ the vulns., not how they're identified.

In my experience, where overlap between bugs and flaws exist, mitigating the
flaw is almost always warranted.

As a table architect, I could have calculated the forces the table would
face (through misuse case def.) and realized that my requirements warranted
a stronger bolt. I can't control the strength of the bolt, but I could pick
a stronger one (Maybe Jboss resists an attack that Websphere doesn't).

Alternatively, I could have designed my system so that the bolt's relative
weakness wasn't exposed. Specifically, I could have introduced more support
trusses into the legs, used additional bolts, pegs, or [whatever] at the
same interface, or scrapped my table design, and started over in an attempt
to avoid that weakness.

'Wondering why a Cigitalite is spending so much time on a table? I'm
beginning to as well; on to Struts:

A valid user accesses a web-app, recognizes that the URLs are predictable,
and tries out www.victim.com/this/that/admin ... It works! He uses the
available interface to change another user's password, then impersonates the
other user's identity.

Regardless of how we unravel this, we know that there was a failure in
authorization of authenticated users.

Which of the following more aptly characterizes the problem?:

IMPL. BUG: Insufficient security-constraint existed on the admin Servlet in
the app's deployment descriptor.

ARCH. FLAW: No façade component gated privileged functionality
-alternatively- 
ARCH. FLAW: Privileged functionality incapable of judging Principal's
entitlement (both fine, one user changing another's password, or coarse,
application functionality improperly accessed)

All of the above statements are ostensibly true... The difference in the
characterization is bias--each implying a different fix. At a workflow
level, I always verify that applications I'm reviewing do, in-fact, possess
the abilities to evaluate Principal identity, and authorize their access to
functionality and data. So, I might be prone to reporting the problem as the
last flaw.
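
The flaw-level fix can be sketched in a few lines. This is a minimal,
hedged illustration of the "facade gating privileged functionality" idea
(the class and method names are hypothetical, not Struts or Servlet API):
all privileged operations funnel through one gate that checks both coarse
entitlement (does the Principal hold the admin role?) and fine entitlement
(is the Principal acting on their own record?).

```java
import java.util.Set;

// Illustrative authorization facade, not a real framework API.
class AuthzFacade {
    /**
     * A user may change a password only if they are an admin (coarse
     * entitlement) or are changing their own (fine entitlement).
     */
    static boolean mayChangePassword(String actor, Set<String> actorRoles,
                                     String targetUser) {
        boolean coarse = actorRoles.contains("admin");
        boolean fine = actor.equals(targetUser);
        return coarse || fine;
    }

    public static void main(String[] args) {
        // An ordinary user changing their own password: allowed.
        System.out.println(mayChangePassword("alice", Set.of("user"), "alice"));   // true
        // The scenario above: one user changing another's password: denied.
        System.out.println(mayChangePassword("mallory", Set.of("user"), "alice")); // false
    }
}
```

The point of the facade is that this check lives in exactly one place, so a
missing `security-constraint` on any one servlet no longer exposes the
privileged operation.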

In the case of this example, the bigot in me might get wound up with
labeling the problem as the first architectural flaw. Does the façade
pattern (for security purposes or not) break the Struts pattern of
parameterized dispatch of ActionController Servlets?  I can't say
conclusively at THIS level of analysis.

Certainly, the implied fix is easiest if I'd characterized the problem as
the security-constraint bug.
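
For concreteness, the bug-level fix would look something like the following
deployment-descriptor fragment. This is a hedged sketch: the URL pattern is
taken from the example above, but the resource name and role name are
assumptions.

```xml
<!-- Illustrative web.xml fragment: constrain the admin servlet to the
     admin role. Resource and role names are assumed for the example. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Admin functions</web-resource-name>
    <url-pattern>/this/that/admin/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>admin</role-name>
  </auth-constraint>
</security-constraint>
```

Note how this fix is scoped to a single URL pattern; any other path that
reaches the same privileged functionality is left unprotected, which is
exactly the risk of the bug-level characterization.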

Summarizing, my characterization of a vulnerability as a bug or a flaw has
important implications towards how it's mitigated. In the case of the Struts
example, the bug-based fix is easiest--but in so characterizing the problem
I may (or may not) miss other instances of this vulnerability within the
application's code base.

How do I know how to characterize a vulnerability along the continuum of
bugs-->flaws?  I don't know for sure, but I've taken to using my experience
over a number of assessments to "upcast" typically endemic problems as flaws
(and solve them in the design or architecture) and "downcast" those problems
that have glaring quick-fixes. In circumstances where both those heuristics

Re: [SC-L] Bugs and flaws

2006-02-01 Thread Steven M. Bellovin
In message <[EMAIL PROTECTED]>, Crispin Cowan writes:
> Unfortunately, this safety feature is nearly useless, because if you
>take an infected whatever.doc file, and just *rename* it to whatever.rtf
>and send it, then MS Word will cheerfully open the file for you when you
>double click on the attachment, ignore the mismatch between the file
>extension and the actual file type, and run the fscking VB embedded within.
>

That actually illustrates a different principle: don't have two 
different ways of checking for the same thing.

--Steve Bellovin, http://www.stevebellovin.com




Re: [SC-L] Bugs and flaws

2006-02-01 Thread Crispin Cowan
Gary McGraw wrote:
> If the WMF vulnerability teaches us anything, it teaches us that we need
> to pay more attention to flaws.
The "flaw" in question seems to be a failure to "validate inputs", i.e.,
don't just trust network input (esp. from an untrusted source) to be
well-formed.

Of special importance to the Windows family of platforms seems to be the
propensity to apply security controls based on the file type extension (the
letters after the dot in the file name, such as .wmf) but to choose the
application that interprets the data via magic file typing of the content.

My favorite ancient form of this flaw: .rtf files are much safer than
doc files, because the RTF standard does not allow you to attach
VBscript (where "VB" stands for "Virus Broadcast" :) while .doc files
do. Unfortunately, this safety feature is nearly useless, because if you
take an infected whatever.doc file, and just *rename* it to whatever.rtf
and send it, then MS Word will cheerfully open the file for you when you
double click on the attachment, ignore the mismatch between the file
extension and the actual file type, and run the fscking VB embedded within.

I am less familiar with the WMF flaw, but it smells like the same thing.

Validate your inputs.

There are automatic tools (taint and equivalent) that will check whether
you have validated your inputs. But they do *not* check the *quality* of
your validation of the input. Doing a consistency check on the file name
extension and the data interpreter type for the file is beyond (most?)
such checkers.
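
The kind of consistency check such tools miss can be sketched directly: a
hedged illustration (the extension-to-signature table is illustrative, not
exhaustive, and the class name is hypothetical) that trusts neither the
extension nor the content alone, but requires them to agree.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Illustrative check that a file's claimed extension matches its leading
// "magic" bytes, so a renamed whatever.doc does not pass as .rtf.
class TypeCheck {
    private static final Map<String, byte[]> MAGIC = new HashMap<>();
    static {
        MAGIC.put("rtf", new byte[] { '{', '\\', 'r', 't', 'f' });          // "{\rtf"
        MAGIC.put("doc", new byte[] { (byte) 0xD0, (byte) 0xCF, 0x11, (byte) 0xE0 }); // OLE2 header
    }

    /** True only when the extension is known and the content starts with its signature. */
    static boolean consistent(String extension, byte[] content) {
        byte[] sig = MAGIC.get(extension.toLowerCase());
        if (sig == null || content.length < sig.length) return false;
        return Arrays.equals(sig, Arrays.copyOfRange(content, 0, sig.length));
    }

    public static void main(String[] args) {
        byte[] oleHeader = { (byte) 0xD0, (byte) 0xCF, 0x11, (byte) 0xE0, (byte) 0xA1 };
        // A .doc renamed to .rtf fails the consistency check:
        System.out.println(consistent("rtf", oleHeader)); // false
        System.out.println(consistent("doc", oleHeader)); // true
    }
}
```

A checker that only verifies "input was validated somewhere" cannot tell
whether the validation enforces this agreement; that is the quality gap the
paragraph above describes.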

>   We spend lots of time talking about
> bugs in software security (witness the perpetual flogging of the buffer
> overflow), but architectural problems are just as important and deserve
> just as much airplay.
>   
IMHO the difference between "bugs" and "architecture" is just a
continuous grey scale of degree.

Crispin
-- 
Crispin Cowan, Ph.D.  http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com
Olympic Games: The Bi-Annual Festival of Corruption
