Interesting!!!
--- Begin Message ---
Hi,
For those out there who missed the publicity (incl. /.): NICTA
announced last week that the formal verification of our seL4 kernel
has been completed. This makes seL4 the first ever general-purpose OS
kernel with a formal proof that the implementation is consistent with
the specification.
A paper on the project will appear in this year's SOSP.
Details at http://ertos.nicta.com.au/news/home.pml#verified
Gernot
--- End Message ---
--- Begin Message ---
>>>>> On Sun, 16 Aug 2009 22:34:27 -0400, "John van V." <john.va...@gmail.com>
>>>>> said:
JvV> It would be really nice to create some plain-language text for the
JvV> general public so that it understands the significance of this L4 milestone.
For an outsider's view, see this article:
http://www.computerworld.com.au/article/314817/nicta_wins_race_secure_l4
... and this podcast:
http://itradio.com.au/networking/?p=98
Gernot
--- End Message ---
--- Begin Message ---
Hi.
Are there any indications of which properties of seL4 have been verified?
{{{
Heiser said one of the primary research objectives was to come up with a
kernel that would not only be verified but whose performance would be
suitable for use in real-life systems.
“My briefing was that we must not *lose more than 10 per cent
performance through the verification process*. Right from the start,
this was not just an academic exercise but something that was for
real-world use.”
}}}
I just can't understand how a verification process can undermine
performance.
Gernot Heiser wrote:
For an outsiders' view see this article:
http://www.computerworld.com.au/article/314817/nicta_wins_race_secure_l4
--- End Message ---
--- Begin Message ---
On Mon, 17 Aug 2009, Roman Beslik wrote:
> Hi.
> Are there any indications of which properties of seL4 have been verified?
>
> {{{
> Heiser said one of the primary research objectives was to come up with a
> kernel that would not only be verified but whose performance would be
> suitable for use in real-life systems.
>
> “My briefing was that we must not *lose more than 10 per cent
> performance through the verification process*. Right from the start,
> this was not just an academic exercise but something that was for
> real-world use.”
> }}}
>
> I just can't understand how a verification process can undermine
> performance.
Simply by putting constraints on the permitted code constructs and thus
potentially preventing optimization in some cases. Most C verification
requires using only a subset of C (something like C-light), putting
constraints on switch statements, goto, and the use of pointers.
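To make that concrete, here is a hypothetical sketch (not seL4 code; all names
are invented) contrasting a dispatch loop written in such a restricted,
structured style with the same loop hand-optimised through a function-pointer
table, which a typical verification subset would reject:
{{{
/* Hypothetical sketch: decode_structured() stays inside a typical
 * verification-friendly C subset (plain loops, a switch, no goto, no
 * function pointers).  decode_indirect() computes the same result but
 * dispatches through a function-pointer table, a hand-optimisation that
 * such a subset would usually reject. */
#include <stdint.h>
#include <stdio.h>

static uint32_t decode_structured(const uint8_t *ops, int n)
{
    uint32_t acc = 0;
    for (int i = 0; i < n; i++) {
        switch (ops[i]) {
        case 0:  acc += 1;       break;
        case 1:  acc *= 3;       break;
        default: acc ^= ops[i];  break;
        }
    }
    return acc;
}

typedef uint32_t (*op_fn)(uint32_t acc, uint8_t op);
static uint32_t op_add(uint32_t acc, uint8_t op) { (void)op; return acc + 1; }
static uint32_t op_mul(uint32_t acc, uint8_t op) { (void)op; return acc * 3; }
static uint32_t op_xor(uint32_t acc, uint8_t op) { return acc ^ op; }

static uint32_t decode_indirect(const uint8_t *ops, int n)
{
    static const op_fn table[2] = { op_add, op_mul };
    uint32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc = (ops[i] < 2) ? table[ops[i]](acc, ops[i]) : op_xor(acc, ops[i]);
    return acc;
}

int main(void)
{
    const uint8_t prog[] = { 0, 1, 7, 0, 1 };
    /* Both variants print the same value; only the second uses constructs
     * that a restricted verification subset would typically forbid. */
    printf("%u %u\n", decode_structured(prog, 5), decode_indirect(prog, 5));
    return 0;
}
}}}
Both variants compute the same result; the point is only that the hand-optimised
form uses constructs the verification tooling may simply refuse to accept, so
the verified code has to live with whatever the structured form costs.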
hofrat
--- End Message ---
--- Begin Message ---
On Mon, 17 Aug 2009, Gernot Heiser wrote:
> Hi,
>
> For those out there who missed the publicity (incl ./): NICTA
> announced last week that the formal verification of our seL4 kernel
> has been completed. This makes seL4 the first ever general-purpose OS
> kernel with a formal proof that the implementation is consistent with
> the specification.
>
> A paper on the project will appear in this year's SOSP.
>
> Details at http://ertos.nicta.com.au/news/home.pml#verified
>
This link might help:
http://www.theengineer.co.uk/Articles/312631/Safersoftware.htm?nl=TE_NL&dep=webops&dte=140809
for some general comments (including on this project) on complex software
and safety-related verification. I would also like to point you to
http://www.abnormaldistribution.org/ from Peter Bernard Ladkin, as he raises
some interesting questions related to this work.
What did surprise me a bit is that:
7500 LoC yield 200k lines of formal proof
7500 LoC yield more than 10k intermediate theorems
I wonder how scalable such an approach is... And of course it raises (from a
safety perspective) the question of whether the complexity and potential
failures of 7.5k lines of code were not replaced by an even higher complexity
(of course it is a diverse representation, which takes out a lot of this
immediately). The main concern I would have is that while this initial core
might be very reliable (see the concerns raised at
http://www.abnormaldistribution.org/), it is a question of how changes to such
a system can be handled, and how the interaction of the applications/drivers
can be handled in such a system. The isolation of the core (microkernel) is
not the issue (provided the full range of the interface is covered by the
formal proof) - but how far does that actually help with respect to the actual
safety-related code, which lives in a user-space task/driver?
Nevertheless, this is a really impressive feat - and I do hope more will be
made available on the verification of this project!
thx!
hofrat
--- End Message ---
--- Begin Message ---
Hi Nicholas,
Nicholas Mc Guire wrote:
> What did supprise me a bit is that:
>
> 7500 LoC yield 200k Lines of formal proof
> 7500 LoC yield "more than 10k intermediate theorems
>
> I wonder how scalable such an approach is... And of course raising (from a
> safety perspective) the question if the complexity and potential failures of
> 7.5k of Code were not replaced by a even higher complexity (of course its
> a diverse representation - which takes out a lot of this imediately)
>
The question is whether lines of Isabelle code, lines of Isabelle proof
script, and the number of intermediate theorems are really a good metric for
judging the trustworthiness of the model and the corresponding proofs.
In Robin (sources are here:
http://robin.tudos.org/robin-hw-model-2008-05-15.tgz), we ended up with
approx. 20K LOC of PVS code for the somewhat detailed C++ / Hardware
model. However, most of this code does not describe the model but rather
establishes higher-level results. There is, for example, a rather lengthy
proof establishing that, under a number of preconditions, virtual memory
behaves like normal unaliased memory. That is, reading and writing have the
desired properties and do not cause side effects (e.g., due to
virtual-memory aliases).
The specification of the page-table walker is only about 50 LOC (plus
another 50-100 LOC for reading and writing page-table entries rather
than bytes). The actual lemma boils down to a number of results such as:
data(read (a1) ; read (a2)) = data(read (a2))
which means the first read to address a1 has no side effects on the data
returned by the second read.
The auxiliary lemmas that were necessary to produce these results, however,
come to on the order of 1500 LOC (proof commands not included).
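As a toy illustration of the shape of such a result, here is a minimal Lean
sketch of my own (not the Robin PVS model; memory is idealised as a pure
function from addresses to data, and all names are invented):
{{{
-- Toy model: memory is a pure function from addresses to data, and a read
-- returns the value together with the (unchanged) memory state.
abbrev Mem := Nat → Nat

def read (m : Mem) (a : Nat) : Nat × Mem := (m a, m)

-- Analogue of data(read (a1) ; read (a2)) = data(read (a2)): a preceding
-- read of a1 has no side effect on the data returned by the read of a2.
theorem read_no_side_effect (m : Mem) (a1 a2 : Nat) :
    (read (read m a1).2 a2).1 = (read m a2).1 := rfl
}}}
The real lemma of course carries the non-trivial preconditions about aliasing
and page-table state that make the Robin proof lengthy; the sketch only shows
the form of the statement.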
If you trust PVS to work correctly (a critical assumption, given some
reported soundness bugs), all you need to do is agree with the roughly 100
lines of specification and with the lemma statements. The intermediate
stuff you can just skip.
A second point is that proof commands in PVS (and also in Isabelle) differ
in power. For example, the command expand just replaces a name with its
definition --- a step you can easily follow --- whereas the command grind
applies almost all known simplification techniques at once. Thus you have a
1-line proof if grind works and a 100+ line proof if you apply the
respective simplification steps by hand.
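For what it's worth, here is a Lean analogue of that point (my own
illustration, not PVS syntax): the same trivial fact proved once with a single
call to general-purpose automation and once with every rewrite spelled out:
{{{
-- One-shot automation, in the spirit of grind: a single powerful command.
example (n : Nat) : n + 0 + 0 = n := by simp

-- The same fact with the simplification steps applied one at a time,
-- which is what a hand-written, expand-style proof looks like.
example (n : Nat) : n + 0 + 0 = n := by
  rw [Nat.add_zero]  -- n + 0 + 0 = n  becomes  n + 0 = n
  rw [Nat.add_zero]  -- n + 0 = n      becomes  n = n, closed by rfl
}}}
The two scripts establish the same fact, but their line counts say very
different things about the effort behind them.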
I would therefore very much like to see NICTA's Isabelle code, at least for
the model and for the final lemmas.
Best Regards
Marcus
--- End Message ---
--- Begin Message ---
The size of the proof is irrelevant as far as its trustworthiness is
concerned; it is only a measure of the amount of effort it took. The
proof can be extracted and independently checked, e.g. by a small,
special-purpose, independently-developed proof checker. This is what
evaluation labs would presumably do.
The size definitely tells you something about the scalability of what can
be verified. Basically, verification is out of the question for an OS
kernel that isn't a microkernel.
This doesn't mean you cannot verify bigger systems. Once the kernel is
verified, verifying user-level bits of the trusted computing base is
easier, as everything in userland is subject to the (verified) kernel
mechanisms. We're working on a real-world system where the whole TCB
is verifiable (and may very well be verified).
Gernot
--- End Message ---
--- Begin Message ---
On Tue, August 18, 2009 10:18 am, John van V. wrote:
> Doesn't load.
Apologies, that was the link to the internal staging site. The correct
link is http://ertos.nicta.com.au/research/l4.verified/
Gernot
--- End Message ---