Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Crispin Cowan
I strongly disagree with this.
Rigorous professional standards for mechanical and structural 
engineering came about only *after* a well-defined cookbook of how to 
properly engineer things was agreed upon. Only after such standards are 
established and *proven effective* is there any utility in enforcing the 
standards upon the practitioners.

Software is *not* yet at that stage. There is no well-established 
cookbook for reliably producing reliable software (both of those 
"reliably"s mean something :)  There are *kludges* like the SEI model, 
but they are not reliable. People can faithfully follow the SEI model 
and still produce crap. Other people can violate the SEI model wholesale 
and produce highly reliable software.

It is *grossly* premature to start imposing standards on software 
engineers. We have not a clue what those standards should be.

Crispin
Edward Rohwer wrote:
 In my humble opinion, the bridge example gets to the heart of the
matter. In the bridge example the bridge would have been designed and
engineered by licensed professionals, while we in the software business
sometimes call ourselves engineers but fall far short of the real,
professional, licensed engineers other professions depend upon.  Until 
we as
a profession are willing to put up with that sort of rigorous examination
and certification process, we will always fall short in many areas and of
many expectations.

Ed. Rohwer CISSP

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of [EMAIL PROTECTED]
Sent: Friday, April 08, 2005 10:54 PM
To: Margus Freudenthal
Cc: Secure Coding Mailing List
Subject: [SC-L] Re: Application Insecurity --- Who is at Fault?


Margus Freudenthal wrote:
Consider the bridge example brought up earlier. If your bridge builder
finished the job but said: "ohh, the bridge isn't secure, though. If
someone tries to push it at a certain angle, it will fall."
Ultimately it is a matter of economics. Sometimes releasing something
earlier
is worth more than the cost of later patches. And managers/customers are
aware
of it.
Unlike in the world of commercial software, I'm pretty sure you don't
see a whole lot of construction contracts which absolve the architect of
liability for design flaws.  I think that is at the root of our
problems.  We know how to write secure software; there's simply precious
little economic incentive to do so.
--
David Talkington
[EMAIL PROTECTED]

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Dave Paris
Michael Silk wrote:
Ed,
[...]
 Back to the bridge or house example, would you allow the builder to
leave off 'security' of the structure? Allow them to introduce some
design flaws to get it done earlier? Hopefully not ... so why is it
allowed for programming? Why can people cut out 'security' ? It's not
extra! It's fundamental to 'programming' (imho anyway).
-- Michael
This paragraph contains the core dichotomy of this discussion.
The builder and the programmer are synonymous.
The builder is neither the architect, nor the engineer for the 
structure.  If the architect and engineer included security for the 
structure and the builder failed to build to specification, then the 
builder is at fault.

The programmer is neither the application architect nor the system 
engineer.  If the architect and engineer fail to include (or include 
faulty) security features (as though it were an add-on, right) then 
the programmer is simply coding to the supplied specifications.  If 
security is designed into the system and the programmer fails to code to 
the specification, then the programmer is at fault.

While there are cases that the programmer is indeed at fault (as can 
builders be), it is _far_ more often the case that the security flaw (or 
lack of security) was designed into the system by the architect and/or 
engineer.  It's also much more likely that the foreman (aka 
programming manager) told the builder (programmer) to take shortcuts to 
meet time and budget - rather than the programmer taking it upon 
themselves to be sloppy and not follow the specifications.

In an earlier message, it was postulated that programmers are, by and 
large, a lazy, sloppy lot who will take shortcuts at every possible turn 
and therefore are the core problem vis-a-vis lousy software.  It's been 
my experience that while these people exist, they wash out fairly 
quickly and most programmers take pride in their work and are highly 
frustrated with management cutting their legs out from under them, 
nearly _forcing_ them to appear to fit into the described mold.  Ever 
read Dilbert?  Why do you think so many programmers can relate?

I think the easiest summary of my position would be "don't shoot the 
messenger" - and that's all the programmer is in the bulk of the cases.

Respectfully,
-dsp



RE: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Chris Matthews
Dave Paris wrote:

It's also much more likely that the foreman (aka
programming manager) told the builder (programmer) to take shortcuts to

meet time and budget - rather than the programmer taking it upon
themselves to be sloppy and not follow the specifications.

I'd note that there is the question of whether, if the programmer were
given an unlimited time period in which to deliver said software, they
would be able to deliver code that is free of 'mechanical' bugs (buffer
overflows, pointer math bugs, etc.).

Additionally, as an industry, we will only really have the answer to the
above question when the programming managers allocate a programmer the
time to truly implement specifications in a mechanically secure way.

But I agree with the premise that a programmer cannot be held
accountable for (design) decisions that were out of his control.  He can
only be accountable for producing mechanically correct behaviour.

-Chris

(Note that references to mechanical bugs are ones that really are
within the programmer's realm to avoid, and include language specific
and language agnostic programming techniques.)




Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Michael Silk
Dave,

On Apr 11, 2005 9:58 PM, Dave Paris [EMAIL PROTECTED] wrote:
 The programmer is neither the application architect nor the system
 engineer.

In some cases he is. Either way, it doesn't matter. I'm not asking the
programmer to re-design the application, I'm asking them to just
program the design 'correctly' rather than 'with bugs' (or - security
problems). Sometimes they leave 'bugs' because they don't know any
better, so sure, train them. [oops, I'm moving off the point again].
All I mean is that they don't need to be the architect or engineer to
have their decisions impact the security of the work.


 If
 security is designed into the system and the programmer fails to code to
 the specification, then the programmer is at fault.

Security can be designed into the system in many ways: maybe the manager
was vague in describing it, etc., etc. I would question you if you
suggested to me that you always assume _NOT_ to include 'security' and
only _DO_ include security if someone asks. For me, it's the other way
round - when receiving a design or whatever.


 While there are cases that the programmer is indeed at fault (as can
 builders be), it is _far_ more often the case that the security flaw (or
 lack of security) was designed into the system by the architect and/or
 engineer.

So your opinion is that most security flaws are from bad design?
That's not my experience at all...

What are you classifying under that?


 It's also much more likely that the foreman (aka
 programming manager) told the builder (programmer) to take shortcuts to
 meet time and budget -

Maybe, but the programmer should not allow 'security' to be one of
these short-cuts. It's just as crucial to the finished application as
implementing that method to calculate the Net Proceeds or something.
The manager wouldn't allow you to skip that; why allow them to
remove so-called 'security' (in reality, just common sense:
validating inputs, etc.)?
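The "common sense of validating inputs" mentioned here can be made concrete with a minimal Python sketch. The field name and accepted format below are hypothetical, chosen purely for illustration; the point is that untrusted input is checked against an explicit whitelist pattern before use:

```python
import re

def validate_amount(raw: str) -> float:
    """Reject any monetary input that does not match a strict,
    explicitly stated format. (Hypothetical helper for illustration.)"""
    # Whitelist: up to ten digits, optionally a decimal point and two digits.
    if not re.fullmatch(r"\d{1,10}(\.\d{1,2})?", raw.strip()):
        raise ValueError(f"rejected input: {raw!r}")
    return float(raw)
```

Anything outside the stated format is refused outright, so hostile input (e.g. an injection payload) never reaches the calculation that uses the value.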

-- Michael




RE: [SC-L] Theoretical question about vulnerabilities

2005-04-11 Thread David Crocker
Pascal Meunier wrote:


Do you think it is possible to enumerate all the ways all vulnerabilities can be
created?  Is the set of all possible exploitable programming mistakes bounded?


No. It's not so much a programming problem, more a specification problem.

Tools now exist that make it possible to develop single-threaded programs that
are mathematically proven to meet their specifications. The problem is knowing
what should be included in the specifications. Let me give you some examples:

1. Buffer overflow. Even if nobody realised that buffer overflows could be used
to bypass security, it is an implied specification of any program that no array
should ever be accessed via an out-of-bounds index. All the tools out there for
proving software correctness take this as a given. So buffer overflows can
always be avoided, because if there is ANY input whatsoever that can produce a
buffer overflow, the proofs will fail and the problem will be identified. You
don't even need to write a specification for the software in this case - the
implied specification is enough.
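As an illustration of that implied specification, here is a hedged Python sketch (the helper is invented for this example; Python itself raises an exception rather than overflowing, so the explicit check stands in for the proof obligation a verification tool would discharge for *all* inputs):

```python
def read_field(buf: bytes, offset: int, length: int) -> bytes:
    """Return buf[offset:offset+length], enforcing the implied
    specification that every access stays within the buffer."""
    # A proof tool would require this to hold for all inputs; here it
    # is an explicit runtime check (Python slices silently truncate,
    # so the check must be written out).
    if offset < 0 or length < 0 or offset + length > len(buf):
        raise IndexError("access outside buffer bounds")
    return buf[offset:offset + length]
```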

2. SQL injection. If the required behaviour of the application is correctly
specified, and the behaviour of the SQL server involved is also correctly
specified (or at least the correct constraints are specified for SQL query
commands), then it will be impossible to prove the software is correct if it has
any SQL injection vulnerabilities. So again, this would be picked up by the
proof process, even if nobody knew that SQL injection can be used to breach
security.
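A minimal sketch of the constraint in practice, using Python's sqlite3 (the table and data are invented for illustration): parameter binding keeps query text and data strictly separate, which is exactly the kind of property a correctness proof would rely on. An injection attempt arrives as inert data, not as part of the command:

```python
import sqlite3

# Toy database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # The '?' placeholder binds `name` as data; it can never alter
    # the structure of the SQL command itself.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()
```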

3. Cross-site scripting. This is a particular form of HTML injection and would
be caught by the proof process in a similar way to SQL injection, provided that
the specification included a notion of the generated HTML being well-formed. If
that was missing from the specification, then HTML injection would not be
caught.
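For example, a sketch of enforcing that well-formedness notion at the output boundary (the render_comment helper is hypothetical): escaping attacker-supplied text means the generated HTML stays well-formed, so injected markup is displayed as text rather than interpreted.

```python
import html

def render_comment(comment: str) -> str:
    # Escape at the point where untrusted text meets generated HTML,
    # so the output is well-formed regardless of the input.
    return f"<p>{html.escape(comment)}</p>"
```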

4. Tricks to make the browser display an address in the address bar that is not
the address of the current HTML page. To catch these, you would need to include
in the specification: "the address bar shall always show the address of the
current page." This is easy to state once you know it is a requirement; but
until last year it would probably not have been an obvious requirement.

In summary: If you can state what you mean by secure in terms of what must
happen and what must not happen, then by using precise specifications and
automatic proof, you can achieve complete security for all possible inputs -
until the definition of secure needs to be expanded.


This should have consequences for source code vulnerability analysis software.
It should make it impossible to write software that detects all of the mistakes
themselves.  Is it enough to look for violations of some invariants (rules)
without knowing how they happened?


The problem is that while you can enumerate the set of invariants that you
currently know are important, you don't know how the set may need to be expanded
in the future.

David Crocker, Escher Technologies Ltd.
Consultancy, contracting and tools for dependable software development
www.eschertech.com



Re: [SC-L] Theoretical question about vulnerabilities

2005-04-11 Thread Nash
Pascal Meunier wrote:
 Do you think it is possible to enumerate all the ways
 all vulnerabilities can be created? Is the set of all
 possible exploitable programming mistakes bounded?

By bounded I take you to mean finite. In particular with reference
to your taxonomy below. By enumerate I take you to mean list out in
a finite way. Please note, these are not the standard mathematical
meanings for these terms. Though, they may be standard for CS folks.

If I interpreted you correctly, then the answer is, no, as Crispin
indicated.

However, let's take enumerate to mean list out, one by one and allow
ourselves to consider infinite enumerations as acceptable. In this case,
the answer becomes, yes.

This proof is abbreviated, but should be recognizable as a pretty
standard argument by those familiar with computable functions and/or
recursive function theory.

   Thm. The set of exploits for a program is enumerable.

   Pf.

   Let P(x) be a program computing the n-ary, partially computable
   function F(x). Let an exploit be a natural number input, y, such
   that at some time, t, during the computation performed by P(y) the
   fixed memory address, Z, contains the number k.**

   Then, there exists a computable function G(x,t) such that:

- G(x, t) = 1 if and only if P(x) gives value k to address Z at
some time less than or equal to t.

- G(x, t) = 0 otherwise.

   The values of x for which G(x,t) = 1 is effectively enumerable (in
   the infinite sense) because it is the domain of a computable function.

Q.E.D.
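The enumeration in the proof can be sketched as a dovetailing search. The predicate below is a total, computable stand-in for G(x,t) (a real version would run P(x) under a step-bounded interpreter); dovetailing over step bounds guarantees that a nonterminating computation never blocks the enumeration:

```python
def enumerate_exploits(reaches_condition, limit):
    """Enumerate inputs x whose computation reaches the monitored
    state (Z == k in the theorem) within some step bound t.
    `reaches_condition(x, t)` plays the role of G(x, t)."""
    found, out = set(), []
    for t in range(limit):          # increasing step bound
        for x in range(t + 1):      # every input examined so far
            if x not in found and reaches_condition(x, t):
                found.add(x)        # x is enumerated exactly once
                out.append(x)
    return out
```

With a toy G where input x reaches the condition after x steps iff x is divisible by 3, the enumerator lists 0, 3, 6, 9, ... in order, exactly as the theorem promises.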

You can look up the relevant theory behind this proof in [Davis].

So, where does this leave us? Well, what we don't have is a computable
predicate, Exploit(p,y), that always tells us if y is an exploit for
the program p. That's what Crispin was saying about Turing. This predicate
is as hard as Halt(p,y), which is not computable.

However, we can enumerate all the inputs that eventually result in the
computer's state satisfying the (Z == k) condition. I suspect this is
probably all you really need for a given program, as a practical matter.
Since, for example, most attackers probably will not wait for hours and
hours while an exploit develops.*

I think the real issue here is complexity, not computability. It takes a
long time to come up with the exploits. Maybe the time it takes is too
long for the amount of real economic value gained by the knowledge of
what's in that set. That seems to be part of Crispin's objection (more or
less).


 I would think that what makes it possible to talk about design patterns and
 attack patterns is that they reflect intentional actions towards desirable
 (for the perpetrator) goals, and the set of desirable goals is bounded at
 any given time (assuming infinite time then perhaps it is not bounded).

I think this is a very reasonable working assumption. It seems
consistent with my experience that given any actual system at any actual
point in time there are only finitely many desirable objectives in
play. There are many more theoretical objectives, though, so how you
choose to pare down the list could determine whether you end up with a
useful scheme, or not.


 All we can hope is to come reasonably close and produce something useful,
 but not theoretically strong and closed.

I think that there's lots of work going on in proof theory and semantics
that makes me hopeful we'll eventually get tools that are both useful
and strong. Model checking is one approach and it seems to have a lot of
promise. It's relatively fast, e.g., and unlike deductive approaches it
doesn't require a mathematician to drive it. See [Clarke] for details.
[Clarke] is very interesting, I think. He explicitly argues that model
checking beats other formal methods at dealing with the state space
explosion problem.
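A toy sketch of what explicit-state model checking does, assuming a finite transition system (this illustrates the idea only; real tools like the ones [Clarke] describes use far more sophisticated state representations):

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Exhaustively explore all reachable states; return a
    counterexample trace if any reachable state violates the
    invariant, or None if the invariant holds everywhere."""
    seen = {initial}
    queue = deque([(initial, [initial])])
    while queue:
        state, trace = queue.popleft()
        if not invariant(state):
            return trace                  # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [nxt]))
    return None                           # invariant holds
```

On a toy counter that cycles mod 8, checking the invariant "state is never 5" yields the counterexample trace 0, 1, 2, 3, 4, 5 rather than a mere yes/no: that concrete trace is a large part of why model checking is practical to use.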

Those with a more practical mind-set are probably laughing that beating
the other formal methods isn't really saying much because they are all
pretty awful. ;-)

 Is it enough to look for violations of some
 invariants (rules) without knowing how they happened?

In the static checking sense, I don't see how this could be done.


 Any thoughts on this?  Any references to relevant theories of failures and
 errors, or to explorations of this or similar ideas, would be welcome.

There are academics active in this field of research. Here's a few
links:

http://cm.bell-labs.com/cm/cs/what/spin2005/


http://www.google.com/search?q=international+SPIN+workshop&start=0&ie=utf-8&oe=utf-8&client=firefox-a&rls=org.mozilla:en-US:official


ciao,

 -nash

Notes:

** This definition of exploit is chosen more or less arbitrarily. It
seems reasonable to me. It might not be. I would conjecture that any
reasonable definition of exploit would be equivalent for this purpose, though.

 Halt(x,y) is not computable, but it is enumerable. That is, I can
list out, one by one, all the inputs y on which program x 

Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Carl G. Alphonce
on Monday April 11, 2005, Damir Rajnovic wrote:
  On Mon, Apr 11, 2005 at 12:21:30PM +1000, Michael Silk wrote:
Back to the bridge or house example, would you allow the builder to
   leave off 'security' of the structure? Allow them to introduce some
   design flaws to get it done earlier? Hopefully not ... so why is it
   allowed for programming? Why can people cut out 'security' ? It's not
   extra! It's fundamental to 'programming' (imho anyway).
 
  Even builders and architects do experiment and introduce new things.
  Not all of these are outright successes. We have a wobbly bridge in the UK
  and there is (was) a new terminal at Charles de Gaulle airport in Paris.
 
  Every profession makes mistakes. Some are more obvious and some not. I am
  almost certain that architects can tell you many more stories where
  things were not done as securely as they should have been.
 
  Comparisons can be misleading.

Indeed.  I am fairly certain that there are numerous examples of
buildings which were properly designed yet were built differently.  I
can't believe that builders never use different materials than are
called for in the plans, and that they never make on-site adjustments
to the plans to accommodate last-minute customer requests ("we really
want a double sink in the master bath"), etc.


   ()  ascii ribbon campaign - against html e-mail
   /\

Carl Alphonce[EMAIL PROTECTED]
Dept of Computer Science and Engineering (716) 645-3180 x115 (tel)
University at Buffalo(716) 645-3464  (fax)
Buffalo, NY 14260-2000   www.cse.buffalo.edu/~alphonce




Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Dave Aronson

Dave Paris [EMAIL PROTECTED] wrote:

  The builder and the programmer are synonomous.
 
  The builder is neither the architect, nor the engineer for the
  structure.  If the architect and engineer included security for the
  structure and the builder failed to build to specification, then the
  builder is at fault.
 
  The programmer is neither the application architect nor the system
  engineer.

This is often not true, even on some things that stretch a single
programmer's productivity to the limits (which makes it even worse).

Programmers work within the specs they are given.  That can (NOT SHOULD!)
be anything from "use this language on this platform to implement this
algorithm in this style" to "we need something that will help us
accomplish this goal."  The latter cries out for a requirements analyst
to delve into it MUCH further, before an architect, let alone a
programmer, is allowed anywhere NEAR it!  However, sometimes that's all
you get, from a customer who is then NOT reasonably easily available to
refine his needs any further, relayed via a manager who is clueless
enough not to realize that refinement is needed, to a programmer who is
afraid to say so lest he get sacked for insubordination, and who will also
have to architect it.

If this has not happened at your company, you work for a company with far
more clue about software development than, I would guess, easily 90% of
the companies that do it.

-Dave



Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-11 Thread Dave Paris
Joel Kamentz wrote:
Re: bridges and stuff.
I'm tempted to argue (though not with certainty) that it seems that the
bridge analogy is flawed in another way -- that of the environment.  While
many programming languages have similarities and many things apply to all
programming, there are many things which do not translate (or at least not
readily).  Isn't this like trying to engineer a bridge with a brand new
substance, or when the gravitational constant changes?  And even the
physical disciplines collide with the unexpected -- corrosion, resonance,
metal fatigue, etc.  To their credit, they appear far better at dispersing
and applying the knowledge from past failures than the software world.
Corrosion, resonance, and metal fatigue all have counterparts in the
software world: glibc flaws, kernel flaws, compiler flaws.  Each of
these is an outside influence on the application - just as environmental
stressors are on a physical structure.
Engineering problems disperse faster because of the lawsuits that happen
when a bridge fails.  I'm still waiting for a certain firm located in
Redmond to be hauled into court - and until that happens, nobody is
going to make security an absolute top priority.
Let's use an example someone else already brought up -- cross site
scripting.  How many people feel that, before it was ever known or had
ever occurred the first time, good programming practices should have
prevented any such vulnerability from ever happening?  I actually think
that would have been possible for the extremely skilled and extremely
paranoid.  However, we're asking people to protect against the unknown.
Hardly unknowns.  Not every possibility has been enumerated, but then
again, not every physical phenomenon has been experienced w/r/t
construction either.
I don't have experience with the formal methods, but I can see that,
supposing this were NASA, etc., formal approaches might lead to perfect
protection.  However, all of that paranoia, formality or whatever takes a
lot of time and effort, and therefore has huge economic impact.  I guess
my personal opinion is that unit testing, etc. are great shortcuts
(compared to perfect) which help reduce flaws, but with lesser expense.
Unit testing is fine, but it tests inside the box and doesn't view your
system through the eyes of an attacker.
All of this places me in the camp that thinks there isn't enough yet to
standardize.  Perhaps a new programming environment (language, VM,
automation of various sorts, direct neural interfaces) is required before
the art of software is able to match the reliability and predictability
of other fields?
You're tossing tools at the problem.  The problem is inherently human
and economically driven.  A hammer doesn't cause a building to be
constructed poorly.
Is software more subject to unintended consequences than physical engineering?
Not more subject, just subject differently.
Respectfully,
-dsp