…what the Orange Book talked about as "beyond A1".
The most interesting next step would be to take the seL4 microkernel
(with its proofs) and conduct an EAL7 Common Criteria evaluation of it
in a real system context, and to determine what new requirements could
be met in a future EAL8 evaluation.
>Subject: Re: [SC-L] GCC and pointer overflows [LWN.net]
>
>Ken, a good example. For those of you who want to reach much further
>back, Paul Karger told me of a similar problem in the compiler (I don't
>remember the language) used for compiling the A1 VAX VMM kernel, …
[EMAIL PROTECTED] writes
>
>Extended criteria to the above would involve designs/code that are
>formally specified or verified (using formal methods/logic). Think
>Orange Book (TCSEC, ITSEC, Common Criteria, et al) and you'll know
>what I mean by this. The interesting thing about Orange Book is
My comments are interleaved below:
Crispin Cowan writes:
>
>
>PGN cites many of the things that Multics did right and history did not
>follow. Most of these issues are sufficiently entrenched in legacy
>hardware and software that there is little chance to change them any
>time soon.
>
>Of partic…
Steve Bellovin <[EMAIL PROTECTED]> writes:
>I've heard that claim before; I've always thought it overrated. In an
>upward-growing stack, if you overflow a parameter passed to a
>subroutine it will overwrite the subroutine's return address. Since
>many buffer overflows happen as a result of library…
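Bellovin's point can be sketched with a toy model (all layout choices and names here are illustrative, not any real ABI): with an upward-growing stack, the saved return address is pushed after the arguments, so writing past the end of a buffer parameter still moves toward a return-address slot.

```python
# Toy model of an upward-growing stack: addresses increase as frames are
# pushed. A buffer argument sits at lower addresses than the return
# address saved after it, so an overflow (which writes toward higher
# addresses) still clobbers a saved return address.

STACK = ["?"] * 16

def push_call_upward(sp, buf_size):
    """Lay out one call: the buffer argument first, then the return address."""
    buf_start = sp
    sp += buf_size          # the buffer argument occupies [buf_start, sp)
    ret_slot = sp           # the saved return address is pushed right after it
    STACK[ret_slot] = "RET"
    return buf_start, ret_slot, sp + 1

buf_start, ret_slot, sp = push_call_upward(0, buf_size=4)

# Overflow: write 6 cells into a 4-cell buffer, moving toward higher addresses.
for i in range(6):
    STACK[buf_start + i] = "A"

print(STACK[ret_slot])  # the return-address slot was overwritten
```

Running this prints `A`: the overflow reaches the return-address slot even though the stack grows upward, which is the substance of the objection.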
…system out of memory - only yourself.
Note - that is not to say that Multics was free of all security
vulnerabilities. It certainly was not, but it was dramatically better
than almost anything in widespread use today.
The paper to look at is:
Karger, P.A. and R.R. Schell. Thirty Years Later: Lessons from the Multics Security Evaluation. ACSAC 2002.
Pascal Meunier <[EMAIL PROTECTED]> writes
>Do you think it is possible to enumerate all the ways all vulnerabilities
>can be created? Is the set of all possible exploitable programming mistakes
>bounded?
I believe that one can make a Turing machine halting argument to show
that this is impossible.
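The shape of that halting argument can be sketched as follows (the construction and all names are illustrative, not from the post): given any program P, build a wrapper that runs P and, only if P halts, reaches a known-unsafe operation. A perfect decider for "is this program exploitable" applied to the wrapper would therefore decide whether P halts, which is undecidable.

```python
# Sketch of the reduction: the wrapper below is exploitable iff the
# embedded program halts, so deciding exploitability decides halting.
# The wrapper source is only constructed, never executed.

def make_wrapper(program_source: str) -> str:
    """Return source for a program whose unsafe operation is reachable
    exactly when `program_source` halts."""
    return (
        program_source + "\n"
        "import ctypes\n"
        "ctypes.string_at(0)  # unsafe null dereference, reached only if P halted\n"
    )

wrapper = make_wrapper("x = 1  # a trivially halting P")
print("string_at(0)" in wrapper)
```

Running this prints `True`; the point is not the toy wrapper itself but that any sound-and-complete vulnerability enumerator would have to solve the halting problem for the embedded P.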
…from either executing
untrusted code or from reading untrusted data that could facilitate a
data-driven attack. Biba constrains such a process to only executing
trusted code and reading trusted data. Of course, deciding which code
and data should be trusted is a much harder problem! See this paper for
some ideas…
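The Biba constraints described above can be sketched in a few lines (a minimal model, assuming two integrity levels; the names and level encoding are illustrative): a subject may not read, or execute code, below its own integrity level, and may not write above it.

```python
# Minimal sketch of Biba strict-integrity checks.
# Higher number = more trusted; levels here are illustrative.

HIGH, LOW = 2, 1

def can_read(subject_level: int, object_level: int) -> bool:
    """No read down: a subject may only read (or execute) objects at or
    above its own integrity level."""
    return object_level >= subject_level

def can_write(subject_level: int, object_level: int) -> bool:
    """No write up: a subject may only write objects at or below its
    own integrity level."""
    return object_level <= subject_level

print(can_read(HIGH, LOW))   # trusted process reading untrusted data
print(can_write(LOW, HIGH))  # untrusted process writing trusted data
```

Both checks print `False`: the trusted process is confined to trusted code and data, which is exactly the constraint the text describes; the hard part, as noted, is assigning the levels in the first place.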
"Paco Hope" writes:
>
>Date: Mon, 20 Dec 2004 13:35:23 -0500
>From: "Paco Hope" <[EMAIL PROTECTED]>
>Subject: Re: [SC-L] Re: DJB's students release 44 poorly-worded, overblown advisories
>
>On 12/20/04 1:03 PM, "Crispin Cowan" <[EMAIL PROTECTED]> wrote:
>>> If they exploited notepad.exe when the