Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Crispin Cowan
Gary McGraw wrote:
 Though I don't quite understand computer science theory in the same way that 
 Crispin does, I do think it is worth pointing out that there are two major 
 kinds of security defects in software: bugs at the implementation level, and 
 flaws at the design/spec level.  I think Crispin is driving at that point.
   
Kind of. I'm saying that specification and implementation are
relative to each other: at one level, a spec can say "put an iterative
loop here" and the implementation is a bunch of x86 instructions. At
another level, the specification says "initialize this array" and the
implementation says "for (i = 0; i < ARRAY_SIZE; i++) {". At yet
another level, the specification says "get a contractor to write an
air traffic control system" and the implementation is a contract :)
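Crispin's middle example, written out as a small C sketch (the array
name, element type, and size are invented for illustration):

```c
#include <stddef.h>

#define ARRAY_SIZE 16   /* illustrative size */

/* The specification says "initialize this array";
 * this loop is one implementation of it. */
void init_array(int a[ARRAY_SIZE])
{
    for (size_t i = 0; i < ARRAY_SIZE; i++) {
        a[i] = 0;
    }
}
```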

So when you advocate automating the implementation and focusing on
specification, you are just moving the game up. You *do* change
properties when you move the game up, some for the better, some for the
worse. Some examples:

* If you move up to type safe languages, then the compiler can prove
  some nice safety properties about your program for you. It does
  not prove total correctness, does not prove halting, just some
  nice safety properties.
* If you move further up to purely declarative languages (PROLOG,
  strict functional languages) you get a bunch more analyzability.
  But they are still Turing-complete (the lambda calculus is
  equivalent in power to Turing machines, per the Church-Turing
  thesis), so you still can't have total correctness.
* If you moved up to some specification form that was no longer
  Turing complete, e.g. something weaker like predicate logic, then
  you are asking the compiler to contrive algorithmic solutions to
  nominally NP-hard problems. Of course they mostly aren't NP-hard,
  because humans can create algorithms to solve them, but now you
  want the computer to do it. Which raises the question of the
  correctness of a compiler so powerful it can devise general-purpose
  algorithms.


 If we assumed perfection at the implementation level (through better 
 languages, say), then we would end up solving roughly 50% of the software 
 security problem.
   
The 50% being rather squishy, but yes, this is true. It's only vaguely
what I was talking about, really, but it is true.

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


[SC-L] The Specifications of the Thing

2007-06-12 Thread Michael S Hines
So - aren't a lot of the Internet security issues errors or omissions in the
IETF standards - leaving things unspecified which get implemented in
different ways - some of which can be exploited due to implementation flaws
(due to specification flaws)?

Mike H.
-
Michael S Hines
[EMAIL PROTECTED]


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Crispin Cowan
Sent: Monday, June 11, 2007 5:50 PM
To: Gary McGraw
Cc: SC-L@securecoding.org; Blue Boar
Subject: Re: [SC-L] Harvard vs. von Neumann





Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Steven M. Christey

On Mon, 11 Jun 2007, Crispin Cowan wrote:

 Gary McGraw wrote:
  Though I don't quite understand computer science theory in the same way 
  that Crispin does, I do think it is worth pointing out that there are two 
  major kinds of security defects in software: bugs at the implementation 
  level, and flaws at the design/spec level.  I think Crispin is driving at 
  that point.
 
 Kind of. I'm saying that specification and implementation are
 relative to each other: at one level, a spec can say "put an iterative
 loop here" and the implementation is a bunch of x86 instructions.

I agree with this notion.  They can overlap at what I call "design
limitations": strcpy() being overflowable (and C itself being
overflowable) is a design limitation that enables programmers to make
implementation errors.  I suspect I'm just rephrasing a tautology, but
I've theorized that all implementation errors require at least one design
limitation.  No high-level language that I know of has a built-in
mechanism for implicitly containing files to a limited directory (barring
chroot-style jails), which is a design limitation that enables a wide
variety of directory traversal attacks.
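Christey's strcpy() point as a minimal C sketch (buffer names and
sizes are invented): the language compiles the overflowable call
without complaint; the bounded alternative exists only if the
programmer reaches for it.

```c
#include <stdio.h>
#include <string.h>

/* Design limitation: strcpy() copies until NUL with no bounds check,
 * so a long src silently overruns dst -- an implementation error the
 * language does nothing to prevent. Never call this with untrusted
 * input; it is shown only to illustrate the point. */
void copy_unsafe(char dst[8], const char *src)
{
    strcpy(dst, src);            /* overflows if strlen(src) >= 8 */
}

/* The bounded alternative must be chosen deliberately. */
void copy_bounded(char *dst, size_t dstsz, const char *src)
{
    snprintf(dst, dstsz, "%s", src);  /* truncates; always NUL-terminates */
}
```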

If you have a standard authentication algorithm with a required step that
ensures integrity, then a product that doesn't perform this step has an
implementation bug at the algorithm's level - but if the developers didn't
even bother putting this requirement into the design, then at the product
level, it's a design problem.  Or something like that.

  If we assumed perfection at the implementation level (through better
  languages, say), then we would end up solving roughly 50% of the
  software security problem.
 
 The 50% being rather squishy, but yes, this is true. It's only vaguely
 what I was talking about, really, but it is true.

For whatever it's worth, I think I agree with this, with the caveat that I
don't think we collectively have a solid understanding of design issues,
so the 50% guess is quite squishy.  For example, the terminology for
implementation issues is much more mature than terminology for design
issues.

One sort-of side note: in our vulnerability type distributions paper
[1], which we've updated to include all of 2006, I mention how major Open
vs. Closed source vendor advisories have different types of
vulnerabilities in their top 10 (see table 4 analysis in the paper).
While this discrepancy could be due to researcher/tool bias, it's probably
also at least partially due to development practices or language/IDE
design.  Might be interesting for someone to pursue *why* such differences
occur.

- Steve

[1] http://cwe.mitre.org/documents/vuln-trends/index.html


Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Crispin Cowan
Steven M. Christey wrote:
 On Mon, 11 Jun 2007, Crispin Cowan wrote:
   
 Kind of. I'm saying that specification and implementation are
 relative to each other: at one level, a spec can say "put an iterative
 loop here" and the implementation is a bunch of x86 instructions.
 
 I agree with this notion.  They can overlap at what I call "design
 limitations": strcpy() being overflowable (and C itself being
 overflowable) is a design limitation that enables programmers to make
 implementation errors.  I suspect I'm just rephrasing a tautology, but
 I've theorized that all implementation errors require at least one design
 limitation.  No high-level language that I know of has a built-in
 mechanism for implicitly containing files to a limited directory (barring
 chroot-style jails), which is a design limitation that enables a wide
 variety of directory traversal attacks.
   
I thought that the Java 2 security container stuff let you specify file
accesses? Similarly, I thought that Microsoft .Net managed code could
have an access specification?

AppArmor provides exactly that kind of access specification, but it is
an OS feature rather than a high level language, unless you want to view
AA policies as high level specifications.
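An AppArmor policy of the kind Crispin describes looks roughly like
this (the program path and file rules are invented for illustration;
the syntax is AppArmor's profile language):

```
# Illustrative AppArmor profile: confine /usr/bin/frobnicate to a
# small set of file accesses (all paths are made up for this example).
/usr/bin/frobnicate {
  #include <abstractions/base>

  /etc/frobnicate.conf      r,
  /var/lib/frobnicate/**    rw,
  /var/log/frobnicate.log   w,
}
```

Read as a specification, the profile says what files the program may
touch; the OS, not the language, enforces it.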

 If we assumed perfection at the implementation level (through better
 languages, say), then we would end up solving roughly 50% of the
 software security problem.
   
 The 50% being rather squishy, but yes, this is true. It's only vaguely
 what I was talking about, really, but it is true.
 
 For whatever it's worth, I think I agree with this, with the caveat that I
 don't think we collectively have a solid understanding of design issues,
 so the 50% guess is quite squishy.  For example, the terminology for
 implementation issues is much more mature than terminology for design
 issues.
   
I don't agree with that. I think it is a community gap. The academic
security community has a very mature nomenclature for design issues. The
hax0r community has a mature nomenclature for implementation issues.
That these communities are barely aware of each other's existence, never
mind talking to each other, is a problem :)

 One sort-of side note: in our vulnerability type distributions paper
 [1], which we've updated to include all of 2006, I mention how major Open
 vs. Closed source vendor advisories have different types of
 vulnerabilities in their top 10 (see table 4 analysis in the paper).
 While this discrepancy could be due to researcher/tool bias, it's probably
 also at least partially due to development practices or language/IDE
 design.  Might be interesting for someone to pursue *why* such differences
 occur.
   
Do you suppose it is because of the different techniques researchers use
to detect vulnerabilities in source code vs. binary-only code? Or is
that a bad assumption because the hax0rs have Microsoft's source code
anyway? :-)

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor



Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Steven M. Christey

I agree with Ryan, at the top skill levels anyway.  Binary reverse
engineering seems to have evolved to the point where I refer to binary as
"source-equivalent", and I was told by some well-known applied researcher
that some vulns are easier to find in binary than source.

But the bulk of public disclosures are not by top researchers, so I'd
suspect that in the general field, source inspection is more accessible
than binary.  So with closed source, people are more likely to use black
box tools, which might not be as effective in finding things like format
string issues, which often hide in rarely triggered error conditions but
are easy to grep for in source.  And maybe the people who have source code
aren't going to be as likely to use black box testing, which means that
obscure malformed-input issues might not be detected.  This is probably
the general researcher; the top researcher is more likely to do both.
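Christey's format-string example in miniature (function and buffer
names are invented; only well-defined calls are exercised): the unsafe
variant passes user data as the format, which is trivial to spot by
grepping source for printf-family calls whose format argument is not a
string literal, but hard to hit with black-box input unless the rare
error path fires.

```c
#include <stdio.h>

/* BUG: user data used as the format string, so input like "%x%x"
 * gets interpreted as directives. Easy to grep for in source;
 * obscure to trigger from outside. */
int log_unsafe(char *out, size_t n, const char *user)
{
    return snprintf(out, n, user);
}

/* Fix: a constant format keeps data as data. */
int log_safe(char *out, size_t n, const char *user)
{
    return snprintf(out, n, "%s", user);
}
```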

Since techniques vary so widely across individuals and researcher bias is
not easily measurable, it's hard to get a conclusive answer about whether
there's a fundamental difference in the *latent* vulns in open vs. closed
(modulo OS-specific vulns), but the question is worth exploring.

On Tue, 12 Jun 2007, Blue Boar wrote:

 Crispin Cowan wrote:
  Do you suppose it is because of the different techniques researchers use
  to detect vulnerabilities in source code vs. binary-only code? Or is
  that a bad assumption because the hax0rs have Microsoft's source code
  anyway? :-)

 I'm in the process of hiring an outside firm for security review of the
 product for the day job. They didn't seem particularly interested in the
 source, the binaries are sufficient. It appears to me that the
 distinction between source and object is becoming a bit moot nowadays.


   Ryan



Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Blue Boar
Crispin Cowan wrote:
 Do you suppose it is because of the different techniques researchers use
 to detect vulnerabilities in source code vs. binary-only code? Or is
 that a bad assumption because the hax0rs have Microsoft's source code
 anyway? :-)

I'm in the process of hiring an outside firm for security review of the
product for the day job. They didn't seem particularly interested in the
source, the binaries are sufficient. It appears to me that the
distinction between source and object is becoming a bit moot nowadays.


Ryan


Re: [SC-L] The Specifications of the Thing

2007-06-12 Thread Steven M. Christey

On Tue, 12 Jun 2007, Michael S Hines wrote:

 So - aren't a lot of the Internet security issues errors or omissions in the
 IETF standards - leaving things unspecified which get implemented in
 different ways - some of which can be exploited due to implementation flaws
 (due to specification flaws)?

This happens a lot in interpretation conflicts [1] that occur in
intermediaries - proxies, IDSes, firewalls, etc. - where they have to
interpret traffic/data according to how the end system is expected to
treat that data.  Incomplete specifications, or those that leave details
for an implementation, will often result in end systems that have
different behaviors based on the same input data.  nmap's OS detection
capability is an obvious example; Ptacek/Newsham's classic IDS evasion
paper is another.
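A toy interpretation conflict of the Ptacek/Newsham flavor (the header
name and both parsing policies are invented for illustration): two
endpoints that disagree on which of two duplicate length headers wins
will parse the same message differently, so an intermediary must guess
which endpoint behavior to model.

```c
#include <stdlib.h>
#include <string.h>

/* Endpoint A honors the first occurrence of the header... */
int length_first_wins(const char *msg)
{
    const char *p = strstr(msg, "Length:");
    return p ? atoi(p + 7) : -1;
}

/* ...endpoint B honors the last. Same bytes, different messages. */
int length_last_wins(const char *msg)
{
    int v = -1;
    const char *p = msg;
    while ((p = strstr(p, "Length:")) != NULL) {
        v = atoi(p + 7);
        p += 7;
    }
    return v;
}
```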

Many of the anti-virus or spam bypass vulns being reported are of this
flavor (although lately, researchers have realized that they don't always
have to bother with interpretation conflicts when the products have
obvious overflows).

Non-standard implementations make the problem even worse, because then
they're not even acting like they're expected to, as we often see in
esoteric XSS variants.

- Steve

[1] interpretation conflict is my current term for
http://cwe.mitre.org/data/definitions/436.html