der Mouse wrote:

>>
> What Turing actually tells us is that it is possible to construct 
> programs that may be correct but whose correctness is not decidable. 
> This is a far cry from saying that it is not possible to build 
> well-structured programs whose correctness _is_ decidable.

True as far as it goes - but don't forget that you also haven't shown the latter
to be possible for programs of nontrivial size.
<<

Well, if you consider 86,000 lines of Ada a nontrivial size, this was shown back
in 1998 by the METEOR project - the automatic train-control software for Paris
Métro Line 14. See "Météor: A Successful Application of B in a Large Project" by
Patrick Behm, Paul Benoit, Alain Faivre, and Jean-Marc Meynadier. The key to
verifying large programs is to structure them well, so that the verification can
be done in a highly modular fashion. The Design-by-Contract paradigm is an
example of this.
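
To illustrate what Design-by-Contract buys the verifier, here is a minimal
sketch in C++ (chosen only because C++ comes up again below; the function and
the assertion style are my own illustration, not the notation used by B or by
METEOR). Once a body has been verified against its contract, every caller can
be verified against the contract alone, and that is what makes verification of
a large program modular:

    #include <cassert>

    // Contract: requires n >= 0 and a[0..n-1] sorted ascending; ensures that
    // a non-negative result is an index at which 'key' occurs, and that -1
    // means 'key' does not occur.  The asserts state the contract as runtime
    // checks; a verifier would discharge them statically, once, for all inputs.
    int binary_search(const int *a, int n, int key)
    {
        assert(n >= 0);                       // precondition, part 1
        for (int i = 1; i < n; ++i)
            assert(a[i - 1] <= a[i]);         // precondition, part 2: sorted

        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;     // avoids overflow of lo + hi
            if (a[mid] == key) {
                assert(a[mid] == key);        // postcondition, found case
                return mid;
            }
            if (a[mid] < key)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
        return -1;                            // postcondition: key absent
    }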

>>
> The higher the level in which the human "codes", the [fewer] mistakes 
> there are to be made, assuming equal familiarity with the language 
> etc.

...but the more complex the "compiler", and the greater the likelihood of bugs
in it causing the resulting binary to fail to implement what the human wrote.
<<

This is potentially true, but is mitigated by a number of factors. First,
specification languages (and, for that matter, programming languages) do not
need to be as complicated as some existing programming languages (C++ springs to
mind). Second,
the semantics of the language you are starting from are formally defined (unlike
almost all programming languages), so the problem is much better defined than it
is for typical compilers. Third, what you call the "compiler" is in fact a
refiner (which refines specifications to algorithms) followed by a code
translator. The code translator is essentially a compiler for a fairly minimal
but well-defined programming language and is therefore well-known technology.
The difficult part is the refiner, but this can use mathematical rules to ensure
correctness - provided of course that the rules are correctly implemented. It
can also generate verification conditions to be passed to a prover in order to
ensure that the generated code really does meet the specification. If there is
any doubt as to the correctness of the theorem prover, the proofs can be passed
to an independent proof checker for verification.
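
To make the refiner/code-translator split concrete, here is a hypothetical
miniature example, again in C++, with the specification and the verification
conditions written as comments. No real tool works in exactly this notation;
the function and the wording of the conditions are my own invention:

    // Specification (written as a comment here; a real tool would use a
    // formal notation):  given n > 0, result is the maximum of a[0..n-1].
    int max_of(const int *a, int n)
    {
        // precondition: n > 0
        int m = a[0];
        for (int i = 1; i < n; ++i) {
            // loop invariant chosen by the refiner: m == max of a[0..i-1]
            if (a[i] > m)
                m = a[i];
        }
        // Verification conditions the refiner would emit for the prover:
        //   1. the invariant holds on entry:       m == max of a[0..0]
        //   2. the loop body preserves it:         new m == max of a[0..i]
        //   3. invariant at exit implies the spec: i == n implies
        //      m == max of a[0..n-1]
        return m;
    }

The code translator then has only to compile this small, well-defined
imperative core; it never needs to understand the specification.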

In our own product, which does a limited amount of refinement of specifications
to code, the part that does the refinement was much easier to specify than the
code translator, and we have found it highly reliable.

[snip]

>>
There are three places where problems can appear: (1) the specifications can
express something other than what the users want/need; (2) the coders can make
mistakes translating those specifications to code; (3) the translation from code
to binary can introduce bugs.  (No, step (2) cannot be eliminated; at most you
can push around who "the coders" are.  Writing specifications in a formal,
compilable language is just another form of programming.)
<<

Writing "executable specifications" can indeed be viewed as a form of high-level
programming, but it has a number of benefits:

- it is more concise, which leaves less room for some types of error;
- it relates much more directly to the requirements
- requirements can be added to the specification, then it can be proven that the
specification meets the requirements. For example, you might wish to express the
requirement "The HTML generated by this component will not include the text
"<script>" except in these specific instances ...". Then you can prove that the
specification satisfies that requirement, or identify reasons why it does not.
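
As a rough sketch of the shape such a requirement might take (C++ again for
continuity): in a specification tool this would be a proof obligation
discharged for every possible output of the component, not a runtime check,
and every name below is my own invention.

    #include <cstddef>
    #include <string>
    #include <vector>

    // Requirement: the generated HTML contains no occurrence of the text
    // "<script>" except at explicitly permitted positions.  A specification
    // tool would prove this for every possible output of the component; this
    // function merely checks one concrete output, to show the property's shape.
    bool meets_no_script_requirement(const std::string &html,
                                     const std::vector<std::size_t> &permitted)
    {
        const std::string needle = "<script>";
        for (std::size_t pos = html.find(needle);
             pos != std::string::npos;
             pos = html.find(needle, pos + 1)) {
            bool allowed = false;
            for (std::size_t p : permitted)
                if (p == pos) { allowed = true; break; }
            if (!allowed)
                return false;   // "<script>" outside the permitted instances
        }
        return true;
    }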

>>
I don't think any of these steps can ever be rendered flawless, except possibly
when they are vacuous (as, for example, step 3 is when coders write in machine
code).
<<

I'm not claiming that we can prove with absolute certainty that the process is
flawless. What I am saying is that we can get to the situation in which steps
(2) and (3) can be done with a probability of error of less than one in 10^n for
some sufficiently large and growing n. This is not where we are now with manual
coding at step (2). Step (1) is harder to get right, but by formalising
requirements and verifying that the specification, design and ultimately
implementation satisfy them, we can make some progress.

[snip]

>>
We also can never solve the problem of being certain the conversion from
high-level language ("specifications", even) to executable code is right,
either.  Ultimately, everything comes down to "a lot of smart people have looked
at this and think it's right" - whether "this" is code, a proof, prover
software, whatever - and people make mistakes.
<<

I'm not looking for absolute certainty of correctness, just a very low
probability of error - which is not what any kind of manual coding process
delivers.

>>
We're still finding bugs in C compilers.  Do you really think the (vastly more
complex) compilers for very-high-level specification languages will be any
better?
<<

As I have tried to outline, the additional complexity of automatic refinement of
specifications to code can be mitigated, for example through the use of
automated verification. As for compiler reliability: it is now eight years since
I last had to work around bugs in the C++ compilers that are the primary targets
for our
code generator. That is not to say that there are none, just that the
probability that we will hit one is small enough that we don't have to worry
about it outside the critical software markets. People who use low-volume
specialist compilers for some embedded processors may have a different
experience; but the reality for many (most?) of us is that finding a bug in our
executable code that is caused by a faulty compiler is a rare event.

More than forty years ago, we started the move from assembly-language
programming to a higher level of abstraction: "high-level programming
languages". This raised productivity, and almost certainly reduced error rates.
I think the time has come for a shift to an even higher level of abstraction.

This posting has drifted a little further off-topic than I intended; however, I
do think it has a direct bearing on security. In order to get secure software,
we need to:

1. Identify security as a requirement of the software right from the start; and
2. Use a development process that ensures that the finished software meets the
requirements.

David Crocker, Escher Technologies Ltd.
Consultancy, contracting and tools for dependable software development
www.eschertech.com




