The age-old Gary vs. jOHN debate. I do believe that, along the continuum of
architecture-->design-->implementation, I've shown the ability to discern
flawed design from source code in source code reviews.

Cigital guys reading this thread have an advantage in that they know both
the shared and exclusive activities defined as part of our architectural and
code review processes. The bottom line is this: as you look at source code,
given enough of a gift for architecture, you can identify _some_ of the design
(whether intended or implemented) from the implementation, and find _some_
flaws. Before you get wound up and say, "Maybe *you* can, jOHN" (tongue fully
in cheek), the Struts example I gave is one case. Looking at a single class
file (the privileged Servlet definition), you can determine that the Lead
Developer/Architect has not paid enough attention to authorization when
he/she designed how the application's functionality was organized.
Admittedly, _some_ (other) architectural flaws do demand attention paid only
through activities confined to architectural analysis--not code review.
Think back again to my original email. The situations I described (both the
physical table and Struts) present a 'mistake' (IEEE parlance) that can
manifest itself as both an architectural flaw and an implementation bug
(Cigital parlance).

I believe that the concept that Jeff (Payne), Cowan, Wysopal, and even
Peterson (if you bend it correctly) present is that the 'mistake' may
cross-cut the SDLC--manifesting itself in each phase's artifacts. That is:
if the mistake was in requirements, it will manifest itself as a design
deficiency (flaw), as well as in the implementation (bug).

Jeff (Williams) indicates that, since progress rolls downstream in the SDLC,
you _could_ fix the 'mistake' in any of the phases where it manifests itself,
but that an efficiency argument demands you look in the code. I implore the
reader to recall my original email. I mention that, when characterized as a
bug, the level of effort required to fix the 'mistake' is probably less than
if it's characterized as a flaw. However, in doing so, you may miss other
instances of the mistake throughout the code.

I wholeheartedly agree with Jeff (Williams) that:

1) Look to the docs for the 'right' answer.
2) Look to the code for the 'truth'.
3) Look to the deployed binaries for 'God's Truth'.

The variance in these artifacts is a key element in Cigital's architectural
analysis.

Second (a point I made in my original email), the objective is to give the
most practical advice possible to developers for fixing the problem. I'll
just copy-paste it from the original:
Summarizing, my characterization of a vulnerability as a bug or a flaw has
important implications towards how it's mitigated. In the case of the Struts
example, the bug-based fix is easiest--but in so characterizing the problem
I may (or may not) miss other instances of this vulnerability within the
application's code base.

How do I know how to characterize a vulnerability along the continuum of
bugs-->flaws?  I don't know for sure, but I've taken to using my experience
over a number of assessments to "upcast" typically endemic problems as flaws
(and solve them in the design or architecture) and "downcast" those problems
that have glaring quick-fixes. In circumstances where both those heuristics
apply, I suggest a tactical fix to the bug, while prescribing that further
analysis take the tack of further fleshing out the flaw.

Where my opinion differs from the other posters is this: I believe:
"Where a 'mistake' manifests itself in multiple phases of the software
development lifecycle, you're most apt to completely MITIGATE its effects by
characterizing it as early in the lifecycle as possible--as a design or even
requirements problem. As Williams indicates, to the contrary, you may FIND
the problem most easily later in the lifecycle--perhaps in the code itself."

McGraw put forth the 'bug' and 'flaw' nomenclature. It's useful because
there is value in explicitly pinning the vulnerability in architecture,
design, or code if it helps the dev. org. get things sorted out securely
throughout their application. My experience is that this value is real.

The message of the 'defect'/'mistake' purist resonates with me as well:
it's all simply a mistake some human made along the path of developing the
application. But! I can assure you, to the extent that root-cause analysis
is valuable, telling a dev. team where to most effectively contend with a
vulnerability is also valuable.

In other words, "smart guys will always find the problems--by hook or by
crook--but it takes classification to aid in efficient and thorough
mitigation."

John Steven        
Principal, Software Security Group
Technical Director, Office of the CTO
703 404 5726 - Direct | 703 727 4034 - Cell
Cigital Inc.          | [EMAIL PROTECTED]

4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

> From: Gary McGraw <[EMAIL PROTECTED]>
> I'm sorry, but it is just not possible to find design flaws by staring at
> code.
>  -----Original Message-----
> From:  Jeff Williams [mailto:[EMAIL PROTECTED]
> At the risk of piling on here, there's no question that it's critical to
> consider security problems across the continuum. While we're at it, the
> analysis should start back even further with the requirements or even the
> whole system concept.
> All of the representations across the continuum (rqmts, arch, design, code)
> are just models of the same thing.  They start more abstract and end up as
> code.  A *single* problem could exist in all these models at the same time.
> Higher-level representations of systems are generally eclipsed by lower
> level ones fairly rapidly.  For example, it's a rare group that updates
> their design docs as implementation progresses. So once you've got code, the
> architecture-flaws don't come from architecture documents (which lie). The
> best place to look for them (if you want truth) is to look in the code.
> To me, the important thing here is to give software teams good advice about
> the level of effort they're going to have to put into fixing a problem. If
> it helps to give a security problem a label to let them know they're going
> to have to go back to the drawing board, I think saying 'architecture-flaw'
> or 'design-flaw' is fine. But I agree with others that saying 'flaw' alone
> doesn't help distinguish it from 'bug' in the minds of most developers or
> architects.
> -----Original Message-----
> John Steven wrote:
>> I'm not sure there's any value in discussing this minutia further, but here
>> goes:
> We'll let the moderator decide that :)
>> 1) Crispin, I think you've nailed one thing. The continuum from:
>> Architecture --> Design --> Low-level Design --> (to) Implementation
>> is a blurry one, and certainly slippery as you move from 'left' to 'right'.
> Cool.
>> But, we all should understand that there's commensurate blur in our analysis
>> techniques (aka architecture and code review) to assure that as we sweep
>> over software we uncover both bugs and architectural flaws.
> Also agreed.
>> 2) Flaws are different in important ways from bugs when it comes to
>> presentation, prioritization, and mitigation. Let's explore by physical
>> analog first.
> I disagree with the word usage. To me, "bug" and "flaw" are exactly
> synonyms. The distinction being drawn here is between "implementation
> flaws" vs. "design flaws". You are just creating confusing jargon to
> claim that "flaw" is somehow more abstract than "bug". Flaw ::= defect
> ::= bug. A vulnerability is a special subset of flaws/defects/bugs that
> has the property of being exploitable.
>> I nearly fell through one of my consultant's tables as I leaned on it this
>> morning. We explored: "Bug or flaw?".
> The wording issue aside, at the implementation level you try to
> code/implement to prevent flaws, by doing things such as using higher
> quality steel (for bolts) and good coding practices (for software). At
> the design level, you try to design so as to *mask* flaws by avoiding
> single points of failure, doing things such as using 2 bolts (for
> tables) and using access controls to limit privilege escalation (for
> software).
> Crispin
> -- 
> Crispin Cowan, Ph.D.                      http://crispincowan.com/~crispin/
> Director of Software Engineering, Novell  http://novell.com
> Olympic Games: The Bi-Annual Festival of Corruption
> _______________________________________________
> Secure Coding mailing list (SC-L)
> SC-L@securecoding.org
> List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
> List charter available at - http://www.securecoding.org/list/charter.php


