Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread der Mouse
> Like it or not, the Web doesn't work right without Javascript now.

Depends on what you mean by "the Web" and "work right".  Fortunately,
for at least some people's values of those, this is not true.

/~\ The ASCII   der Mouse
\ / Ribbon Campaign
 X  Against HTML   [EMAIL PROTECTED]
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B
___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread David Crocker
Crispin Cowan wrote:


> IMHO, all this hand wringing is for naught. To get systems that never fail
> requires total correctness. Turing tells us that total correctness is not
> decidable, so you simply never will get it completely, you will only get
> approximations at best.


What Turing actually tells us is that it is possible to construct programs that
may be correct but whose correctness is not decidable. This is a far cry from
saying that it is not possible to build well-structured programs whose
correctness _is_ decidable.
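
As a minimal illustration (not taken from the original post), the two situations
can be seen in a pair of toy Python programs: deciding whether the first one is
correct would settle Goldbach's conjecture, while the second carries its own
straightforward termination and correctness argument.

    def is_prime(n):
        # Trial division; good enough for illustration.
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    def goldbach_search():
        # Correct (i.e. never prints) if and only if Goldbach's conjecture
        # holds; no general procedure decides correctness of programs like this.
        n = 4
        while True:
            if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
                print("counterexample:", n)
                return
            n += 2

    def list_max(xs):
        # Well-structured and decidably correct: the bounded loop plus the
        # invariant "m == max(xs[:i + 1])" gives a total-correctness proof.
        assert len(xs) > 0
        m = xs[0]
        for i in range(1, len(xs)):
            if xs[i] > m:
                m = xs[i]
        return m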


> Having humans write specifications and leaving programming to computers is
> similarly a lost cause. At a sufficiently high level, that is asking the
> computer to map NP to P, and that isn't going to happen.


I don't understand what you are getting at here. If you are saying that humans
can map NP to P, but that it is impossible in principle to have computers do the
same thing, then that sounds like a religious argument to me. If you are saying
that you can write a specification that is unsatisfiable (or whose
satisfiability is undecidable) and that therefore cannot be implemented as code,
then this applies equally to human programmers. Incidentally, I have heard of a
few cases in which programming teams have wasted effort trying to implement sets
of requirements which, when the requirements were formalised, turned out to be
contradictory.
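
A hedged sketch of how such a contradiction can surface once requirements are
formalised: the tool choice (the Z3 SMT solver's Python bindings) and the toy
timeout requirements below are illustrative assumptions of mine, not details
from the projects mentioned above.

    # Hypothetical requirements about session handling:
    #   R1: every session expires within 15 minutes of inactivity.
    #   R2: an idle "remember me" session stays valid for at least 30 minutes.
    from z3 import Int, Solver, unsat   # requires the z3-solver package

    idle_timeout = Int("idle_timeout_minutes")
    s = Solver()
    s.add(idle_timeout <= 15)        # R1, formalised
    s.add(idle_timeout >= 30)        # R2, formalised
    assert s.check() == unsat        # no value satisfies both: contradiction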


> At a less abstract level, you are just asking the human to code in a higher
> level language. This will help, but will not eliminate the problem that you just
> cannot have total correctness.


The higher the level in which the human codes, the fewer mistakes there are to
be made, assuming equal familiarity with the language etc. And you are just
repeating the same fallacious proposition by saying you cannot have total
correctness. Had you instead said that you can never be sure that you have
established that the requirements capture the users' needs, I would have had to
agree with you.


> Programmable Turing machines are great, they do wonderful things, but total
> correctness for software simply isn't feasible. People need to understand that
> programs are vastly more complex than any other class of man-made artifact ever,
> and therefore can never achieve the reliability of, say, steam engines.


Same old flawed proposition. And, in software, the behaviour of the components
we build programs out of (i.e. machine instructions) is much better defined
and more reliable than that of the materials steam engines are built from.


> The complexity of software is beginning to approach living organisms. People at
> least understand that living things are not totally predictable or reliable, and
> s**t will happen, and so you cannot count on a critter or a plant to do exactly
> what you want. When computer complexity clearly exceeds organism complexity,
> perhaps people will come to recognize software for what it is: beyond definitive
> analyzability.


Sure, if you develop a complex software system without a good design process
that carefully refines requirements to specifications to design to code - and
propagates changes in requirements down the chain too - then it may be
impossible to meaningfully analyse that software. That is why my approach is to
formalise requirements, write specifications that are proven to satisfy them,
then refine the specification to code - automatically where possible, but with
manual assistance where e.g. a data structure or an algorithm needs to be made
more efficient.


> We can never solve this problem. At best we can make it better.


We can never solve the problem of being certain that we have got the
requirements right. I think that one implication for security is that there may
be whole new classes of threats out there that nobody has thought of yet, and
which we therefore can't refer to in the requirements. But we _can_ solve the
problem of ensuring that software meets the stated requirements, as long as
these are well-defined.

David Crocker, Escher Technologies Ltd.
Consultancy, contracting and tools for dependable software development
www.eschertech.com






Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread Gary McGraw
Hi all,

Though I don't quite understand computer science theory in the same way that 
Crispin does, I do think it is worth pointing out that there are two major 
kinds of security defects in software: bugs at the implementation level, and 
flaws at the design/spec level.  I think Crispin is driving at that point.
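
To make the bug/flaw distinction concrete, here is a small, purely hypothetical 
Python sketch (neither example is taken from the book): the first defect is a 
coding slip that better languages or libraries can catch, while the second is 
wrong however cleanly it is coded.

    import html

    # Implementation-level bug: the page is built by string concatenation, so a
    # user-supplied name such as "<script>...</script>" lands in the output.
    def greeting_buggy(user_name):
        return "<p>Hello, " + user_name + "</p>"

    # The same function with the bug fixed at the implementation level.
    def greeting_fixed(user_name):
        return "<p>Hello, " + html.escape(user_name) + "</p>"

    # Design-level flaw: the specified behaviour is itself insecure. Mailing a
    # user their stored password implies recoverable storage and discloses the
    # secret; no amount of careful coding repairs the design.
    def recover_password(email, lookup_password, send_mail):
        send_mail(email, "Your password is: " + lookup_password(email))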

If we assumed perfection at the implementation level (through better languages, 
say), then we would end up solving roughly 50% of the software security problem.

Clearly we need to make some progress at the architecture/design level to 
attain reasonable levels of software security.  I don't hold out much hope for 
formal approaches to design (though I continue to watch the UK types with 
interest).  Our approach to analysis and design at the architecture level at 
Cigital is ad hoc and based on experience, but it works.  (For more on that, 
see Software Security Chapter 5 which I think you can get a free copy of if 
you poke around here [registration required] 
http://searchsoftwarequality.techtarget.com/qna/0,289202,sid92_gci1187360,00.html.)

Perfect languages won't solve the software security problem.

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Crispin Cowan
Sent: Monday, June 11, 2007 2:33 AM
To: Blue Boar
Cc: SC-L@securecoding.org
Subject: Re: [SC-L] Harvard vs. von Neumann

IMHO, all this hand wringing is for naught. To get systems that never
fail requires total correctness. Turing tells us that total correctness
is not decidable, so you simply never will get it completely, you will
only get approximations at best.

Having humans write specifications and leaving programming to computers
is similarly a lost cause. At a sufficiently high level, that is asking
the computer to map NP to P, and that isn't going to happen. At a less
abstract level, you are just asking the human to code in a higher level
language. This will help, but will not eliminate the problem that you
just cannot have total correctness.

Programmable Turing machines are great, they do wonderful things, but
total correctness for software simply isn't feasible. People need to
understand that programs are vastly more complex than any other class of
man-made artifact ever, and therefore can never achieve the
reliability of, say, steam engines.

The complexity of software is beginning to approach living organisms.
People at least understand that living things are not totally
predictable or reliable, and s**t will happen, and so you cannot count
on a critter or a plant to do exactly what you want. When computer
complexity clearly exceeds organism complexity, perhaps people will come
to recognize software for what it is: beyond definitive analyzability.

We can never solve this problem. At best we can make it better.

Crispin

--
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor



Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread der Mouse
> What Turing actually tells us is that it is possible to construct
> programs that may be correct but whose correctness is not decidable.
> This is a far cry from saying that it is not possible to build
> well-structured programs whose correctness _is_ decidable.

True as far as it goes - but don't forget that you also haven't shown
the latter to be possible for programs of nontrivial size.

> The higher the level in which the human codes, the [fewer] mistakes
> there are to be made, assuming equal familiarity with the language
> etc.

...but the more complex the compiler, and the greater the likelihood
of bugs in it causing the resulting binary to fail to implement what
the human wrote.

> And you are just repeating the same fallacious proposition by saying
> you cannot have total correctness.

It simply has not been formally established.  This does not make it
wrong, just unproven.  (Personally, I don't think it is wrong.)

> Had you instead said you can never be sure that you have established
> that the requirements capture the users' needs, I would have had to
> agree with you.

That too.

There are three places where problems can appear: (1) the
specifications can express something other than what the users
want/need; (2) the coders can make mistakes translating those
specifications to code; (3) the translation from code to binary can
introduce bugs.  (No, step (2) cannot be eliminated; at most you can
push around who the coders are.  Writing specifications in a formal,
compilable language is just another form of programming.)

I don't think any of these steps can ever be rendered flawless, except
possibly when they are vacuous (as, for example, step 3 is when coders
write in machine code).

>> People need to understand that programs are vastly more complex than
>> any other class of man-made artifact ever, and therefore can never
>> achieve the reliability of, say, steam engines.
> Same old flawed proposition.

Same old *unproven* proposition.  Again, that doesn't make it wrong
(and, again, I don't think it *is* wrong).

>> We can never solve this problem. At best we can make it better.
> We can never solve the problem of being certain that we have got the
> requirements right.

We also can never solve the problem of being certain the conversion
from high-level language (specifications, even) to executable code is
right, either.  Ultimately, everything comes down to "a lot of smart
people have looked at this and think it's right" - whether this is
code, a proof, prover software, whatever - and people make mistakes.

We're still finding bugs in C compilers.  Do you really think the
(vastly more complex) compilers for very-high-level specification
languages will be any better?

/~\ The ASCII   der Mouse
\ / Ribbon Campaign
 X  Against HTML   [EMAIL PROTECTED]
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread ljknews
At 9:00 AM -0400 6/11/07, Gary McGraw wrote:

> If we assumed perfection at the implementation level (through better
> languages, say), then we would end up solving roughly 50% of the
> software security problem.
>
> Clearly we need to make some progress at the architecture/design level
> to attain reasonable levels of software security.
>
> Perfect languages won't solve the software security problem.

And neither will perfect designs.

Both approaches needed.

But a large percentage of failures that result from weak languages are
already categorized in standard terms like buffer overflow.
-- 
Larry Kilgallen


Re: [SC-L] What's the next tech problem to be solved in software security?

2007-06-11 Thread McGovern, James F (HTSC, IT)
The next problem to be solved is moving higher up the food chain by teaching 
architects secure architecture principles. Would love to see Gary McGraw tackle 
this subject in his next book...



From: [EMAIL PROTECTED] on behalf of Kenneth Van Wyk
Sent: Sun 6/10/2007 9:37 AM
To: Secure Coding
Subject: Re: [SC-L] What's the next tech problem to be solved in software security?



First off, many thanks to all who've contributed to this thread.  The 
responses and range of opinions I find fascinating, and I hope that 
others have found value in it as well.  Great stuff, keep it coming.

That said, I see us going towards that favorite of rat-holes here,
namely the "my programming language is better than yours, nyeah!"
path.  Let's please avoid that.  I'm confident that we've seen it
enough times to know that it ends with no clear winners (but plenty 
of losers).

Cheers,

Ken
-
Kenneth R. van Wyk
SC-L Moderator
KRvW Associates, LLC
http://www.KRvW.com


Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread David Crocker
der Mouse wrote:


>> What Turing actually tells us is that it is possible to construct
>> programs that may be correct but whose correctness is not decidable.
>> This is a far cry from saying that it is not possible to build
>> well-structured programs whose correctness _is_ decidable.

> True as far as it goes - but don't forget that you also haven't shown the latter
> to be possible for programs of nontrivial size.


Well, if you consider 86,000 lines of Ada a nontrivial size, this was shown back
in 1998 by the METEOR project. See "Meteor: A Successful Application of B in a
Large Project" by Patrick Behm, Paul Benoit, Alain Faivre, and Jean-Marc
Meynadier. The key to verifying large programs is to
structure them well, so that the verification can be done in a highly modular
fashion. The Design-by-Contract paradigm is an example of this.
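
A minimal Design-by-Contract sketch in Python, with run-time asserts standing in
for the proof obligations that a tool such as B or SPARK would discharge
statically; the function and its contract are invented for illustration. Because
callers are checked against the precondition and the body against the
postcondition, each side can be verified in isolation.

    def binary_search(xs, key):
        # Precondition: xs is sorted ascending. A verifier checks every caller
        # against this, so the body can be proved correct in isolation.
        assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))
        lo, hi = 0, len(xs)
        while lo < hi:
            # Loop invariant: any occurrence of key lies within xs[lo:hi].
            mid = (lo + hi) // 2
            if xs[mid] < key:
                lo = mid + 1
            else:
                hi = mid
        found = lo < len(xs) and xs[lo] == key
        # Postcondition: the result is True exactly when key occurs in xs.
        assert found == (key in xs)
        return found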


>> The higher the level in which the human codes, the [fewer] mistakes
>> there are to be made, assuming equal familiarity with the language
>> etc.

> ...but the more complex the compiler, and the greater the likelihood of bugs
> in it causing the resulting binary to fail to implement what the human wrote.


This is potentially true, but is mitigated by a number of factors. First,
specification languages (and, for that matter, programming languages) do not
need to be as complicated as some existing programming languages (C++ springs to
mind). Second,
the semantics of the language you are starting from are formally defined (unlike
almost all programming languages), so the problem is much better defined than it
is for typical compilers. Third, what you call the compiler is in fact a
refiner (which refines specifications to algorithms) followed by a code
translator. The code translator is essentially a compiler for a fairly minimal
but well-defined programming language and is therefore well-known technology.
The difficult part is the refiner, but this can use mathematical rules to ensure
correctness - provided of course that the rules are correctly implemented. It
can also generate verification conditions to be passed to a prover in order to
ensure that the generated code really does meet the specification. If there is
any doubt as to the correctness of the theorem prover, the proofs can be passed
to an independent proof checker for verification.
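
A rough sketch of what the refiner/prover split can look like, with all names
invented rather than taken from any real tool: the specification is a
postcondition, the refined code is an ordinary loop, and the verification
conditions a prover would have to discharge are written out here as asserts.

    def sum_spec(xs):
        # The specification: an abstract (here trivially executable) description.
        return sum(xs)

    def sum_refined(xs):
        # The refinement chosen by the "refiner": left-to-right accumulation.
        acc, i = 0, 0
        # VC1: the invariant "acc == sum(xs[:i])" holds on entry (0 == sum([])).
        while i < len(xs):
            assert acc == sum(xs[:i])     # invariant before the body
            acc, i = acc + xs[i], i + 1
            assert acc == sum(xs[:i])     # VC2: the body preserves the invariant
        # VC3: the invariant plus the exit condition (i == len(xs)) imply the
        # postcondition acc == sum_spec(xs).
        assert acc == sum_spec(xs)
        return acc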

In our own product, which does a limited amount of refinement of specifications
to code, the part that does the refinement was much easier to specify than the
code translator and we have found it highly reliable.

[snip]


> There are three places where problems can appear: (1) the specifications can
> express something other than what the users want/need; (2) the coders can make
> mistakes translating those specifications to code; (3) the translation from code
> to binary can introduce bugs.  (No, step (2) cannot be eliminated; at most you
> can push around who the coders are.  Writing specifications in a formal,
> compilable language is just another form of programming.)


Writing executable specifications can indeed be viewed as a form of high-level
programming, but it has a number of benefits:

- it is more concise, which leaves less room for some types of error;
- it relates much more directly to the requirements;
- requirements can be added to the specification, and it can then be proven that the
specification meets the requirements. For example, you might wish to express the
requirement "The HTML generated by this component will not include the text
'script' except in these specific instances". Then you can prove that the
specification satisfies that requirement, or identify reasons why it does not.
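
One hypothetical way such a requirement could be written down as a checkable
predicate; the function name and the whitelist of allowed instances below are
mine, purely for illustration.

    ALLOWED = {'<script src="/static/app.js"></script>'}   # the permitted instances

    def requirement_no_script(html_text, allowed=ALLOWED):
        # Formalised requirement: every occurrence of the text "script" in the
        # generated HTML sits inside one of the explicitly allowed fragments.
        remaining = html_text
        for fragment in allowed:
            remaining = remaining.replace(fragment, "")
        return "script" not in remaining.lower()

    # A prover would try to show the generator satisfies this for all inputs;
    # as a plain function it can at least be exercised in tests:
    assert requirement_no_script('<p>static page, no active content</p>')
    assert not requirement_no_script('<p><script>alert(1)</script></p>')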


> I don't think any of these steps can ever be rendered flawless, except possibly
> when they are vacuous (as, for example, step 3 is when coders write in machine
> code).


I'm not claiming that we can prove with absolute certainty that the process is
flawless. What I am saying is that we can get to the situation in which steps
(2) and (3) can be done with a probability of error of less than one in 10^n for
some sufficiently large and growing n. This is not where we are now with manual
coding at step (2). Step (1) is harder to get right, but by formalising
requirements and verifying that the specification, design and ultimately
implementation satisfy them, we can make some progress.

[snip]


> We also can never solve the problem of being certain the conversion from
> high-level language (specifications, even) to executable code is right,
> either.  Ultimately, everything comes down to "a lot of smart people have looked
> at this and think it's right" - whether this is code, a proof, prover
> software, whatever - and people make mistakes.


I'm not looking for absolute certainty of correctness, just a very low
probability of error - which is not what any kind of manual coding process
delivers.


> We're still finding bugs in C compilers.  Do you really think the (vastly more
> complex) compilers for very-high-level specification languages will be any
> better?


As I have tried to 

Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread Blue Boar
der Mouse wrote:
>> Like it or not, the Web doesn't work right without Javascript now.

> Depends on what you mean by "the Web" and "work right".  Fortunately,
> for at least some people's values of those, this is not true.

Obviously, I'm oversimplifying. I claim that there are enough web sites
that require active content to function right (in other words, that's a
big part of the reason they are popular) and that there is a big enough
critical mass of users who want to use those that it's going to stay.

Or, another way to put it: the first browser that drops support for
Javascript commits market suicide.

(Actually, the ratio of people who want a flashy website to those who care
about disabling Javascript is probably 10,000:1, but I digress.)

Ryan