On 3/22/06, [EMAIL PROTECTED] (Victor Duchovni) wrote:

>On Wed, Mar 22, 2006 at 02:31:37PM -0800, Bill Frantz wrote:
>> One of my pet peeves: The idea that the "user" is the proper atom of
>> protection in an OS.
>> My threat model includes different programs run by one (human) user.  If
>> a Trojan, running as part of my userID, can learn something about the
>> random numbers harvested by my browser/gpg/ssh etc., then it can start
>> to attack the keys used by those applications, even if the OS does a
>> good job of keeping the memory spaces separate and protected.
>Why would a trojan running in your security context bother with attacking
>a PRNG? It can just read your files, record your keystrokes, change your
>browser proxy settings, ...

Why should any program (except for the "power box"
<http://plash.beasts.org/powerbox.html>) run in my security context?
They should all run in their own security contexts.  Plash
<http://plash.beasts.org/> and Polaris
<http://www.hpl.hp.com/techreports/2004/HPL-2004-221.html> are examples
of systems where programs do not run in the security context of the
user.

>If the trojan is a sand-box of some sort, the sand-box is a different
>security context, and in that case, perhaps a different RNG view is

I would agree.  Remember, however, that (user == security context)
should not be the norm.  The norm should be to run each program in its
own security context, and to allow programs to run parts of themselves
in other security contexts.  (One example of the last would be running
the programs which implement a web site in separate security contexts
from the web server that parses the URL to find out which program to
run.)
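To make the web-site example concrete, here is a minimal sketch (the
names and the subprocess approach are my own illustration, not from
Plash or Polaris): the "server" parses the request and runs each
handler as a separate child process with a scrubbed environment, so a
compromised handler does not share the dispatcher's context.  Real
isolation would also change UID, use namespaces, or use a system like
Plash; a bare subprocess is only the skeleton of the idea.

```python
import subprocess
import sys

def dispatch(handler_argv, request):
    """Run one site's handler in its own process, feeding it the
    request on stdin and returning its stdout.  The handler inherits
    nothing from the dispatcher's environment."""
    result = subprocess.run(
        handler_argv,
        input=request.encode(),
        capture_output=True,
        timeout=5,
        env={},  # empty environment: no inherited secrets
    )
    return result.stdout
```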

The general requirement here is to have authorities that are derived
from the call, along with authorities that are carried with the
program.  Think of setuid programs in Unix.  They get the ability to
read/write the user's terminal, along with a bunch of privileges the
user doesn't have.  (IMHO, setuid is a kludge, but a useful one, given
that Unix offers little better.)
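The two kinds of authority show up directly in the Unix process model
as the real and effective UIDs.  A small sketch (not a setuid program
itself, just an illustration of the two identities):

```python
import os

# A setuid program runs with two identities: the real UID identifies
# the caller (authority derived from the call), while the effective
# UID identifies the file's owner (authority carried with the
# program).  Permission checks use the effective UID.
real_uid = os.getuid()        # who invoked the program
effective_uid = os.geteuid()  # whose privileges it exercises

# A careful setuid program drops its extra authority when it is not
# needed, e.g. by switching its effective UID back to the real UID
# with os.seteuid(real_uid).
print(real_uid, effective_uid)
```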

>Some applications that consume a steady stream of RNG data, maintain
>their own random pool, and use the public pool to periodically mix in
>some fresh state. These are less vulnerable to snooping/exhaustion of
>the public stream.

This suggestion is a good way of kludging around the basic problem that
most systems equate user with security context, so you, as the writer of
the program, may be sharing a flaky RNG with some other program you
don't trust.
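The private-pool idea can be sketched as follows.  This is my own
toy illustration of the mixing scheme, not a vetted DRBG; os.urandom
stands in for the shared "public pool", and the hash-based state
update is only meant to show the shape of the kludge.

```python
import hashlib
import os

class PrivatePool:
    """Per-application random pool: outputs are derived from private
    state, and fresh public-pool entropy is folded in periodically."""

    def __init__(self):
        self.state = os.urandom(32)  # initial seed from the public pool
        self.counter = 0

    def reseed(self):
        # Mix fresh public-pool entropy into the private state, so
        # compromise of one source alone does not expose the pool.
        self.state = hashlib.sha256(self.state + os.urandom(32)).digest()

    def get_bytes(self, n, reseed_interval=1024):
        out = b""
        while len(out) < n:
            if self.counter % reseed_interval == 0:
                self.reseed()
            self.counter += 1
            # Derive output from state plus a counter, rather than
            # handing out the state itself.
            out += hashlib.sha256(
                self.state + self.counter.to_bytes(8, "big")).digest()
        return out[:n]
```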

Cheers - Bill

Bill Frantz        | The first thing you need   | Periwinkle 
(408)356-8506      | when using a perimeter     | 16345 Englewood Ave
www.pwpconsult.com | defense is a perimeter.    | Los Gatos, CA 95032

The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
