Re: [SC-L] Harvard vs. von Neumann

2007-06-15 Thread Kenneth Van Wyk

On Jun 14, 2007, at 3:51 PM, Gary McGraw wrote:
I am in complete agreement with your thinking, which is why one of  
the touchpoints (and chapter 9 of Software Security) is about  
operations.  Ken knows more about this than any of us, but he's on  
a plane now...right Ken?


Wow, I'd stop far short of such strong words, but I have spent a  
great deal of time in operations land, and I am convinced we're (all)  
missing out on significant opportunities to enhance our software  
security by better making use of "deployment security," for lack of a  
better term.  I've seen far too many one-size-fits-all approaches  
to software deployments that fall far short of adequately protecting  
the app, much less enabling detection of and response to issues when  
they come up.


Cheers,

Ken

P.S. And yes, I was on a plane.  Greetings from Lisbon, en route to  
Sevilla, Spain for the FIRST conference.  I'll again toss out the  
offer to meet with any SC-Lers who are at the conference.

-
Kenneth R. van Wyk
SC-L Moderator
KRvW Associates, LLC
http://www.KRvW.com






___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
___


Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Crispin Cowan
Gary McGraw wrote:
 Though I don't quite understand computer science theory in the same way that 
 Crispin does, I do think it is worth pointing out that there are two major 
 kinds of security defects in software: bugs at the implementation level, and 
 flaws at the design/spec level.  I think Crispin is driving at that point.
   
Kind of. I'm saying that "specification" and "implementation" are
relative to each other: at one level, a spec can say "put an iterative
loop here" and the implementation is a bunch of x86 instructions. At another
level, the specification says "initialize this array" and the implementation
says "for (i=0; i<ARRAY_SIZE; i++) {". At yet another level, the
specification says "get a contractor to write an air traffic control
system" and the implementation is a contract :)
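Crispin's middle level can be made concrete. A minimal sketch (the names ARRAY_SIZE and init_array are illustrative, not from any real codebase) of the specification "initialize this array" and one C implementation of it:

```c
#include <stddef.h>

#define ARRAY_SIZE 16   /* illustrative size for the sketch */

/* Specification (one level up): "initialize this array to zero".
 * Implementation (one level down): a concrete C loop, which the
 * compiler in turn refines into x86 instructions. */
void init_array(int a[ARRAY_SIZE])
{
    for (size_t i = 0; i < ARRAY_SIZE; i++) {
        a[i] = 0;
    }
}
```

Each layer is a specification to the layer below it and an implementation of the layer above.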

So when you advocate automating the implementation and focusing on
specification, you are just moving the game up. You *do* change
properties when you move the game up, some for the better, some for the
worse. Some examples:

* If you move up to type safe languages, then the compiler can prove
  some nice safety properties about your program for you. It does
  not prove total correctness, does not prove halting, just some
  nice safety properties.
* If you move further up to purely declarative languages (PROLOG,
  strict functional languages) you get a bunch more analyzability.
  But they are still Turing-complete (per the Church-Turing thesis) so
  you still can't have total correctness.
* If you moved up to some specification form that was no longer
  Turing complete, e.g. something weaker like predicate logic, then
  you are asking the compiler to contrive algorithmic solutions to
  nominally NP-hard problems. Of course they mostly aren't NP-hard
  because humans can create algorithms to solve them, but now you
want the computer to do it. Which raises the question of the
correctness of a compiler so powerful it can devise general-purpose
algorithms.


 If we assumed perfection at the implementation level (through better 
 languages, say), then we would end up solving roughly 50% of the software 
 security problem.
   
The 50% being rather squishy, but yes, this is true. It's only vaguely
what I was talking about, really, but it is true.

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor



Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Steven M. Christey

On Mon, 11 Jun 2007, Crispin Cowan wrote:

 Gary McGraw wrote:
  Though I don't quite understand computer science theory in the same way 
  that Crispin does, I do think it is worth pointing out that there are two 
  major kinds of security defects in software: bugs at the implementation 
  level, and flaws at the design/spec level.  I think Crispin is driving at 
  that point.
 
 Kind of. I'm saying that "specification" and "implementation" are
 relative to each other: at one level, a spec can say "put an iterative
 loop here" and the implementation is a bunch of x86 instructions.

I agree with this notion.  They can overlap at what I call "design
limitations": strcpy() being overflowable (and C itself being
overflowable) is a design limitation that enables programmers to make
implementation errors.  I suspect I'm just rephrasing a tautology, but
I've theorized that all implementation errors require at least one design
limitation.  No high-level language that I know of has a built-in
mechanism for implicitly containing files to a limited directory (barring
chroot-style jails), which is a design limitation that enables a wide
variety of directory traversal attacks.
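Both halves of the strcpy() point fit in a few lines; a hedged sketch (the function names are illustrative), contrasting the design limitation with the per-call discipline it forces on implementers:

```c
#include <stdio.h>
#include <string.h>

/* Design limitation: strcpy()'s interface carries no destination size,
 * so nothing in the language stops an over-long source from overflowing
 * dst -- the implementation error the design makes possible. */
void unsafe_copy(char *dst, const char *src)
{
    strcpy(dst, src);  /* overflows dst whenever strlen(src) >= its size */
}

/* Working within the limitation: the bound must be passed manually on
 * every call, which is exactly the step programmers forget. */
void bounded_copy(char *dst, size_t dstsize, const char *src)
{
    snprintf(dst, dstsize, "%s", src);  /* truncates; always NUL-terminates */
}
```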

If you have a standard authentication algorithm with a required step that
ensures integrity, then a product that doesn't perform this step has an
implementation bug at the algorithm's level - but if the developers didn't
even bother putting this requirement into the design, then at the product
level, it's a design problem.  Or something like that.

  If we assumed perfection at the implementation level (through better
  languages, say), then we would end up solving roughly 50% of the
  software security problem.
 
 The 50% being rather squishy, but yes, this is true. It's only vaguely
 what I was talking about, really, but it is true.

For whatever it's worth, I think I agree with this, with the caveat that I
don't think we collectively have a solid understanding of design issues,
so the 50% guess is quite squishy.  For example, the terminology for
implementation issues is much more mature than terminology for design
issues.

One sort-of side note: in our vulnerability type distributions paper
[1], which we've updated to include all of 2006, I mention how major Open
vs. Closed source vendor advisories have different types of
vulnerabilities in their top 10 (see table 4 analysis in the paper).
While this discrepancy could be due to researcher/tool bias, it's probably
also at least partially due to development practices or language/IDE
design.  Might be interesting for someone to pursue *why* such differences
occur.

- Steve

[1] http://cwe.mitre.org/documents/vuln-trends/index.html


Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Crispin Cowan
Steven M. Christey wrote:
 On Mon, 11 Jun 2007, Crispin Cowan wrote:
   
 Kind of. I'm saying that "specification" and "implementation" are
 relative to each other: at one level, a spec can say "put an iterative
 loop here" and the implementation is a bunch of x86 instructions.
 
 I agree with this notion.  They can overlap at what I call "design
 limitations": strcpy() being overflowable (and C itself being
 overflowable) is a design limitation that enables programmers to make
 implementation errors.  I suspect I'm just rephrasing a tautology, but
 I've theorized that all implementation errors require at least one design
 limitation.  No high-level language that I know of has a built-in
 mechanism for implicitly containing files to a limited directory (barring
 chroot-style jails), which is a design limitation that enables a wide
 variety of directory traversal attacks.
   
I thought that the Java 2 security container stuff let you specify file
accesses? Similarly, I thought that Microsoft .Net managed code could
have an access specification?

AppArmor provides exactly that kind of access specification, but it is
an OS feature rather than a high level language, unless you want to view
AA policies as high level specifications.
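For concreteness, an AppArmor profile of the kind described here might confine a program's file accesses like this (a hypothetical sketch; the program name and paths are illustrative):

```
# Hypothetical profile confining /usr/bin/webapp to its own files
/usr/bin/webapp {
  #include <abstractions/base>

  /var/www/webapp/** r,    # read-only document root
  /var/log/webapp.log w,   # writable log file
  # any path not listed here is denied by default
}
```

Whether that counts as a high-level specification or an OS policy is exactly the relativity point made earlier in the thread.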

 If we assumed perfection at the implementation level (through better
 languages, say), then we would end up solving roughly 50% of the
 software security problem.
   
 The 50% being rather squishy, but yes, this is true. It's only vaguely
 what I was talking about, really, but it is true.
 
 For whatever it's worth, I think I agree with this, with the caveat that I
 don't think we collectively have a solid understanding of design issues,
 so the 50% guess is quite squishy.  For example, the terminology for
 implementation issues is much more mature than terminology for design
 issues.
   
I don't agree with that. I think it is a community gap. The academic
security community has a very mature nomenclature for design issues. The
hax0r community has a mature nomenclature for implementation issues.
That these communities are barely aware of each other's existence, never
mind talking to each other, is a problem :)

 One sort-of side note: in our vulnerability type distributions paper
 [1], which we've updated to include all of 2006, I mention how major Open
 vs. Closed source vendor advisories have different types of
 vulnerabilities in their top 10 (see table 4 analysis in the paper).
 While this discrepancy could be due to researcher/tool bias, it's probably
 also at least partially due to development practices or language/IDE
 design.  Might be interesting for someone to pursue *why* such differences
 occur.
   
Do you suppose it is because of the different techniques researchers use
to detect vulnerabilities in source code vs. binary-only code? Or is
that a bad assumption because the hax0rs have Microsoft's source code
anyway? :-)

Crispin

-- 
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor



Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Steven M. Christey

I agree with Ryan, at the top skill levels anyway.  Binary reverse
engineering seems to have evolved to the point where I refer to binary as
"source-equivalent", and I was told by a well-known applied researcher
that some vulns are easier to find in binary than in source.

But the bulk of public disclosures are not by top researchers, so I'd
suspect that in the general field, source inspection is more accessible
than binary.  So with closed source, people are more likely to use black
box tools, which might not be as effective in finding things like format
string issues, which often hide in rarely triggered error conditions but
are easy to grep for in source.  And maybe the people who have source code
aren't going to be as likely to use black box testing, which means that
obscure malformed-input issues might not be detected.  This is probably
the general researcher; the top researcher is more likely to do both.
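The format-string case illustrates the asymmetry: the bug hides behind rarely triggered inputs in black-box testing, yet is trivial to spot in source. A minimal sketch (the function names are illustrative):

```c
#include <stdio.h>

/* Vulnerable pattern: attacker-influenced text used AS the format
 * string, so %n, %s, etc. inside it are interpreted.  Greppable in
 * source as a printf-family call with a non-literal format. */
void log_bad(char *out, size_t n, const char *user)
{
    snprintf(out, n, user);
}

/* Fixed pattern: constant format string, user data as an argument. */
void log_good(char *out, size_t n, const char *user)
{
    snprintf(out, n, "%s", user);
}
```

With -Wformat-security, gcc flags log_bad at compile time, which is exactly the source-level detectability described above.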

Since techniques vary so widely across individuals and researcher bias is
not easily measurable, it's hard to get a conclusive answer about whether
there's a fundamental difference in the *latent* vulns in open vs. closed
(modulo OS-specific vulns), but the question is worth exploring.

On Tue, 12 Jun 2007, Blue Boar wrote:

 Crispin Cowan wrote:
  Do you suppose it is because of the different techniques researchers use
  to detect vulnerabilities in source code vs. binary-only code? Or is
  that a bad assumption because the hax0rs have Microsoft's source code
  anyway? :-)

 I'm in the process of hiring an outside firm for security review of the
 product for the day job. They didn't seem particularly interested in the
 source, the binaries are sufficient. It appears to me that the
 distinction between source and object is becoming a bit moot nowadays.


   Ryan



Re: [SC-L] Harvard vs. von Neumann

2007-06-12 Thread Blue Boar
Crispin Cowan wrote:
 Do you suppose it is because of the different techniques researchers use
 to detect vulnerabilities in source code vs. binary-only code? Or is
 that a bad assumption because the hax0rs have Microsoft's source code
 anyway? :-)

I'm in the process of hiring an outside firm for security review of the
product for the day job. They didn't seem particularly interested in the
source, the binaries are sufficient. It appears to me that the
distinction between source and object is becoming a bit moot nowadays.


Ryan


Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread der Mouse
 Like it or not, the Web doesn't work right without Javascript now.

Depends on what you mean by "the Web" and "work right".  Fortunately,
for at least some people's values of those, this is not true.

/~\ The ASCII   der Mouse
\ / Ribbon Campaign
 X  Against HTML   [EMAIL PROTECTED]
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread David Crocker
Crispin Cowan wrote:


IMHO, all this hand wringing is for naught. To get systems that never fail
requires total correctness. Turing tells us that total correctness is not
decidable, so you simply never will get it completely, you will only get
approximations at best.


What Turing actually tells us is that it is possible to construct programs that
may be correct but whose correctness is not decidable. This is a far cry from
saying that it is not possible to build well-structured programs whose
correctness _is_ decidable.


Having humans write specifications and leaving programming to computers is
similarly a lost cause. At a sufficiently high level, that is asking the
computer to map NP to P, and that isn't going to happen.


I don't understand what you are getting at here. If you are saying that humans
can map NP to P, but that it is impossible in principle to have computers do the
same thing, then that sounds like a religious argument to me. If you are saying
that you can write a specification that is unsatisfiable (or whose
satisfiability is undecidable) and that therefore cannot be implemented as code,
then this applies equally to human programmers. Incidentally, I have heard of a
few cases in which programming teams have wasted effort trying to implement sets
of requirements which, when the requirements were formalised, turned out to be
contradictory.


At a less abstract level, you are just asking the human to code in a higher
level language. This will help, but will not eliminate the problem that you just
cannot have total correctness.


The higher the level in which the human codes, the fewer mistakes there are to
be made, assuming equal familiarity with the language etc. And you are just
repeating the same fallacious proposition by saying "you cannot have total
correctness". Had you instead said "you can never be sure that you have
established that the requirements capture the users' needs", I would have had to
agree with you.


Programmable Turing machines are great, they do wonderful things, but total
correctness for software simply isn't feasible. People need to understand that
programs are vastly more complex than any other class of man-made artifact
ever, and therefore can never achieve the reliability of, say, steam engines.


Same old flawed proposition. And, in software, the behaviour of the components
we build programs out of (i.e. machine instructions) is much better defined
and more reliable than the materials that steam engines are built out of.


The complexity of software is beginning to approach living organisms. People at
least understand that living things are not totally predictable or reliable, and
s**t will happen, and so you cannot count on a critter or a plant to do exactly
what you want. When computer complexity clearly exceeds organism complexity,
perhaps people will come to recognize software for what it is: beyond definitive
analyzability.


Sure, if you develop a complex software system without a good design process
that carefully refines requirements to specifications to design to code - and
propagates changes in requirements down the chain too - then it may be
impossible to meaningfully analyse that software. That is why my approach is to
formalise requirements, write specifications that are proven to satisfy them,
then refine the specification to code - automatically where possible, but with
manual assistance where e.g. a data structure or an algorithm needs to be made
more efficient.


We can never solve this problem. At best we can make it better.


We can never solve the problem of being certain that we have got the
requirements right. I think that one implication for security is that there may
be whole new classes of threats out there that nobody has thought of yet, and
which we therefore can't refer to in the requirements. But we _can_ solve the
problem of ensuring that software meets the stated requirements, as long as
these are well-defined.

David Crocker, Escher Technologies Ltd.
Consultancy, contracting and tools for dependable software development
www.eschertech.com






Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread Gary McGraw
Hi all,

Though I don't quite understand computer science theory in the same way that 
Crispin does, I do think it is worth pointing out that there are two major 
kinds of security defects in software: bugs at the implementation level, and 
flaws at the design/spec level.  I think Crispin is driving at that point.

If we assumed perfection at the implementation level (through better languages, 
say), then we would end up solving roughly 50% of the software security problem.

Clearly we need to make some progress at the architecture/design level to 
attain reasonable levels of software security.  I don't hold out much hope for 
formal approaches to design (though I continue to watch the UK types with 
interest).  Our approach to analysis and design at the architecture level at 
Cigital is ad hoc and based on experience, but it works.  (For more on that, 
see Software Security Chapter 5 which I think you can get a free copy of if 
you poke around here [registration required] 
http://searchsoftwarequality.techtarget.com/qna/0,289202,sid92_gci1187360,00.html.)

Perfect languages won't solve the software security problem.

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Crispin Cowan
Sent: Monday, June 11, 2007 2:33 AM
To: Blue Boar
Cc: SC-L@securecoding.org
Subject: Re: [SC-L] Harvard vs. von Neumann

IMHO, all this hand wringing is for naught. To get systems that never
fail requires total correctness. Turing tells us that total correctness
is not decidable, so you simply never will get it completely, you will
only get approximations at best.

Having humans write specifications and leaving programming to computers
is similarly a lost cause. At a sufficiently high level, that is asking
the computer to map NP to P, and that isn't going to happen. At a less
abstract level, you are just asking the human to code in a higher level
language. This will help, but will not eliminate the problem that you
just cannot have total correctness.

Programmable Turing machines are great, they do wonderful things, but
total correctness for software simply isn't feasible. People need to
understand that programs are vastly more complex than any other class of
man-made artifact ever, and therefore can never achieve the
reliability of, say, steam engines.

The complexity of software is beginning to approach living organisms.
People at least understand that living things are not totally
predictable or reliable, and s**t will happen, and so you cannot count
on a critter or a plant to do exactly what you want. When computer
complexity clearly exceeds organism complexity, perhaps people will come
to recognize software for what it is: beyond definitive analyzability.

We can never solve this problem. At best we can make it better.

Crispin

--
Crispin Cowan, Ph.D.   http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
AppArmor Chat: irc.oftc.net/#apparmor



Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread der Mouse
 What Turing actually tells us is that it is possible to construct
 programs that may be correct but whose correctness is not decidable.
 This is a far cry from saying that it is not possible to build
 well-structured programs whose correctness _is_ decidable.

True as far as it goes - but don't forget that you also haven't shown
the latter to be possible for programs of nontrivial size.

 The higher the level in which the human codes, the [fewer] mistakes
 there are to be made, assuming equal familiarity with the language
 etc.

...but the more complex the compiler, and the greater the likelihood
of bugs in it causing the resulting binary to fail to implement what
the human wrote.

 And you are just repeating the same fallacious proposition by saying
 you cannot have total correctness.

It simply has not been formally established.  This does not make it
wrong, just unproven.  (Personally, I don't think it is wrong.)

 Had you instead said you can never be sure that you have established
 that the requirements capture the users' needs, I would have had to
 agree with you.

That too.

There are three places where problems can appear: (1) the
specifications can express something other than what the users
want/need; (2) the coders can make mistakes translating those
specifications to code; (3) the translation from code to binary can
introduce bugs.  (No, step (2) cannot be eliminated; at most you can
push around who the coders are.  Writing specifications in a formal,
compilable language is just another form of programming.)

I don't think any of these steps can ever be rendered flawless, except
possibly when they are vacuous (as, for example, step 3 is when coders
write in machine code).

 People need to understand that programs are vastly more complex than
 any other class of man-made artifact ever, and therefore can never
 achieve the reliability of, say, steam engines.
 Same old flawed proposition.

Same old *unproven* proposition.  Again, that doesn't make it wrong
(and, again, I don't think it *is* wrong).

 We can never solve this problem. At best we can make it better.
 We can never solve the problem of being certain that we have got the
 requirements right.

We also can never solve the problem of being certain the conversion
from high-level language (specifications, even) to executable code is
right, either.  Ultimately, everything comes down to "a lot of smart
people have looked at this and think it's right" - whether "this" is
code, a proof, prover software, whatever - and people make mistakes.

We're still finding bugs in C compilers.  Do you really think the
(vastly more complex) compilers for very-high-level specification
languages will be any better?

/~\ The ASCII   der Mouse
\ / Ribbon Campaign
 X  Against HTML   [EMAIL PROTECTED]
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread ljknews
At 9:00 AM -0400 6/11/07, Gary McGraw wrote:

 If we assumed perfection at the implementation level (through better
 languages, say), then we would end up solving roughly 50% of the
 software security problem.
 
 Clearly we need to make some progress at the architecture/design level
 to attain reasonable levels of software security.

 Perfect languages won't solve the software security problem.

And neither will perfect designs.

Both approaches are needed.

But a large percentage of failures that result from weak languages are
already categorized in standard terms like buffer overflow.
-- 
Larry Kilgallen


Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread David Crocker
der Mouse wrote:


 What Turing actually tells us is that it is possible to construct 
 programs that may be correct but whose correctness is not decidable. 
 This is a far cry from saying that it is not possible to build 
 well-structured programs whose correctness _is_ decidable.

True as far as it goes - but don't forget that you also haven't shown the latter
to be possible for programs of nontrivial size.


Well, if you consider 86,000 lines of Ada a nontrivial size, this was shown back
in 1998 by the METEOR project. See "Meteor: A Successful Application of B in a
Large Project" by Patrick Behm, Paul Benoit, Alain Faivre, and Jean-Marc
Meynadier. The key to verifying large programs is to structure them well, so
that the verification can be done in a highly modular fashion. The
Design-by-Contract paradigm is an example of this.


 The higher the level in which the human codes, the [fewer] mistakes 
 there are to be made, assuming equal familiarity with the language 
 etc.

...but the more complex the compiler, and the greater the likelihood of bugs
in it causing the resulting binary to fail to implement what the human wrote.


This is potentially true, but is mitigated by a number of factors. First,
specification languages (and, for that matter, programming languages) do not
need to be as complicated as some existing programming languages (C++ springs to
mind). Second,
the semantics of the language you are starting from are formally defined (unlike
almost all programming languages), so the problem is much better defined than it
is for typical compilers. Third, what you call "the compiler" is in fact a
"refiner" (which refines specifications to algorithms) followed by a code
translator. The code translator is essentially a compiler for a fairly minimal
but well-defined programming language and is therefore well-known technology.
The difficult part is the refiner, but this can use mathematical rules to ensure
correctness - provided of course that the rules are correctly implemented. It
can also generate verification conditions to be passed to a prover in order to
ensure that the generated code really does meet the specification. If there is
any doubt as to the correctness of the theorem prover, the proofs can be passed
to an independent proof checker for verification.

In our own product, which does a limited amount of refinement of specifications
to code, the part that does the refinement was much easier to specify than the
code translator and we have found it highly reliable.

[snip]


There are three places where problems can appear: (1) the specifications can
express something other than what the users want/need; (2) the coders can make
mistakes translating those specifications to code; (3) the translation from code
to binary can introduce bugs.  (No, step (2) cannot be eliminated; at most you
can push around who the coders are.  Writing specifications in a formal,
compilable language is just another form of programming.)


Writing executable specifications can indeed be viewed as a form of high-level
programming, but it has a number of benefits:

- it is more concise, which leaves less room for some types of error;
- it relates much more directly to the requirements;
- requirements can be added to the specification, and it can then be proven that
the specification meets the requirements. For example, you might wish to express
the requirement "The HTML generated by this component will not include the text
'<script' except in these specific instances."  Then you can prove that the
specification satisfies that requirement, or identify reasons why it does not.
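The example requirement (that generated HTML contain no script tags outside specific instances) can at least be written down as a checkable predicate; a runtime check is the weakest form of it, and a prover would establish it for all possible outputs. A hypothetical sketch:

```c
#include <stdbool.h>
#include <string.h>

/* Requirement, stated as a predicate over generated HTML:
 * "the output does not contain the text <script".
 * A refinement tool of the kind described here would prove this holds
 * for every output of the generator, rather than testing single cases. */
bool satisfies_no_script(const char *html)
{
    return strstr(html, "<script") == NULL;
}
```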


I don't think any of these steps can ever be rendered flawless, except possibly
when they are vacuous (as, for example, step 3 is when coders write in machine
code).


I'm not claiming that we can prove with absolute certainty that the process is
flawless. What I am saying is that we can get to the situation in which steps
(2) and (3) can be done with a probability of error of less than one in 10^n for
some sufficiently large and growing n. This is not where we are now with manual
coding at step (2). Step (1) is harder to get right, but by formalising
requirements and verifying that the specification, design and ultimately
implementation satisfy them, we can make some progress.

[snip]


We also can never solve the problem of being certain the conversion from
high-level language (specifications, even) to executable code is right,
either.  Ultimately, everything comes down to a lot of smart people have looked
at this and think it's right - whether this is code, a proof, prover
software, whatever - and people make mistakes.


I'm not looking for absolute certainty of correctness, just a very low
probability of error - which is not what any kind of manual coding process
delivers.


We're still finding bugs in C compilers.  Do you really think the (vastly more
complex) compilers for very-high-level specification languages will be any
better?


As I have tried to 

Re: [SC-L] Harvard vs. von Neumann

2007-06-11 Thread Blue Boar
der Mouse wrote:
 Like it or not, the Web doesn't work right without Javascript now.
 
 Depends on what you mean by "the Web" and "work right".  Fortunately,
 for at least some people's values of those, this is not true.

Obviously, I'm oversimplifying. I claim that there are enough web sites
that require active content to function right (in other words, that's a
big part of the reason they are popular) and that there is a big enough
critical mass of users who want to use those that it's going to stay.

Or another way to put it: the first browser that drops support for
Javascript commits market suicide.

(Actually, the ratio of people who want a flashy website to those who care
about disabling Javascript is probably 10,000:1, but I digress.)

Ryan