RE: Linux RNG paper

2006-05-05 Thread Kuehn, Ulrich

 From: Travis H. [mailto:[EMAIL PROTECTED]
 
 On 5/4/06, markus reichelt [EMAIL PROTECTED] wrote:
  Agreed; but regarding unix systems, I know of no crypto
  implementation that does integrity checking: not just de/encrypting the
  data, but verifying that the encrypted data has not been tampered with.
 
 Are you sure?  There's an aes-cbc-essiv:sha256 cipher in dm-crypt.
 Are they using sha256 for something other than integrity?
 
That sounds like they are applying sha256 to the passphrase/key to derive the 
IVs (ESSIV), not for adding redundancy to the data. The two big problems with 
disk encryption are achieving nondeterministic encryption and authenticated 
encryption.

Nondeterminism requires a fresh IV each time you encrypt a sector (I am talking 
here of sectors, to avoid confusion with the blocks of a block cipher). The 
reason for nondeterminism is that otherwise at least common prefixes of the 
old and new contents show up in the ciphertext. 
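As a toy illustration (a hashlib keystream stands in for the real cipher; the construction and constants are invented for the example, not dm-crypt's actual mode), a deterministic per-sector IV leaks exactly which bytes changed between rewrites:

```python
import hashlib
import os

def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    # Toy keystream: chained sha256 over key||iv (illustration only).
    out = b""
    block = hashlib.sha256(key + iv).digest()
    while len(out) < n:
        out += block
        block = hashlib.sha256(block).digest()
    return out[:n]

def encrypt_sector(key: bytes, iv: bytes, pt: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(pt, keystream(key, iv, len(pt))))

key = b"k" * 32
# Deterministic IV derived only from the sector number:
det_iv = hashlib.sha256((7).to_bytes(8, "little")).digest()[:16]

old = b"shared prefix: secret v1"
new = b"shared prefix: secret v2"
c_old = encrypt_sector(key, det_iv, old)
c_new = encrypt_sector(key, det_iv, new)
# The unchanged plaintext prefix is visible across rewrites:
assert c_old[:23] == c_new[:23] and c_old[23] != c_new[23]

# A fresh random IV per write hides the relationship (but the IV
# must then be stored somewhere):
c1 = encrypt_sector(key, os.urandom(16), old)
c2 = encrypt_sector(key, os.urandom(16), old)
assert c1 != c2
```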

Authenticated encryption prevents tampering with the ciphertext from going 
unnoticed. However, some while ago I read a paper pointing out that the lower 
block layer, at least in Linux, is not capable of dealing with data errors 
caused by authentication failures. If there is interest, I could try to dig 
out the reference.

Nevertheless, if you want to add extra IVs (not derived deterministically 
from the sector number) and authentication tags, you could store them in extra 
sectors which do not show up in the plaintext device. Caching should not be too 
difficult. However, if the authentication tags / IVs can no longer be read due 
to a bug in the code, disk failure, etc., you might run into serious problems, 
as multiple sectors are now affected.
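A sketch of the sector bookkeeping this implies, assuming a hypothetical layout of one metadata sector (IVs + tags) in front of every 32 data sectors; the group size is arbitrary:

```python
SECTORS_PER_GROUP = 32  # hypothetical constant, chosen for illustration

def physical_data_sector(logical: int) -> int:
    # Each group of 32 logical sectors occupies 33 physical sectors,
    # with the group's metadata sector (IVs + auth tags) placed first,
    # so the metadata sectors never show up in the plaintext device.
    group, offset = divmod(logical, SECTORS_PER_GROUP)
    return group * (SECTORS_PER_GROUP + 1) + 1 + offset

def metadata_sector(logical: int) -> int:
    # The metadata sector serving this logical sector's whole group.
    return (logical // SECTORS_PER_GROUP) * (SECTORS_PER_GROUP + 1)

assert physical_data_sector(0) == 1
assert physical_data_sector(31) == 32
assert metadata_sector(32) == 33        # next group's metadata sector
assert physical_data_sector(32) == 34   # data resumes right after it
```

The failure coupling is visible in the layout: losing metadata sector 33 invalidates logical sectors 32 through 63 at once.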

Maybe there are other solutions?

Regards,
Ulrich

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


encrypted filesystem integrity threat-model (Re: Linux RNG paper)

2006-05-05 Thread Adam Back
I think an encrypted file system with built-in integrity is somewhat
interesting; however, the threat model is a bit broken if you are going
to boot off a potentially tampered-with disk.

I mean the attacker doesn't have to tamper with the proposed
encrypted+MACed data; he just tampers with the boot sector / OS boot,
gets your password, and modifies your data at will using the MAC keys.

I think you'd be better off building a boot USB key using DSL or some
other small live distro, checksumming your encrypted data (and the
rest of the disk) at boot, and having a feature to store the
keyed checksum of the disk on shutdown some place MACed such that the
USB key can verify it.  Then boot the real OS if that succeeds.

(Or better yet buy yourself one of those 32GB usb keys for $3,000 and
remove the hard disk, and just keep your encrypted disk on your
keyring :)
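The keyed-checksum step could be a streaming HMAC-SHA256 over the raw device; a minimal sketch, with an in-memory image standing in for the device (the key handling and sizes are illustrative):

```python
import hashlib
import hmac
import io

def keyed_checksum(dev, key: bytes, chunk_size: int = 1 << 20) -> bytes:
    # Stream the raw (encrypted) device through HMAC-SHA256; the USB
    # key holds `key`, so the disk itself cannot forge a matching tag.
    mac = hmac.new(key, digestmod=hashlib.sha256)
    while True:
        chunk = dev.read(chunk_size)
        if not chunk:
            break
        mac.update(chunk)
    return mac.digest()

key = b"key-material-kept-on-the-usb-key"
image = io.BytesIO(b"ciphertext sectors " * 1000)  # stand-in for the raw disk
tag = keyed_checksum(image, key)

# Any modification anywhere on the disk changes the tag:
tampered = io.BytesIO(b"ciphertext sectorz " + b"ciphertext sectors " * 999)
assert not hmac.compare_digest(tag, keyed_checksum(tampered, key))
```

At boot the USB key would recompute the tag over the real device and compare it (with `hmac.compare_digest`) against the value MACed away at shutdown before handing control to the real OS.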


Of course an encrypted network filesystem has other problems... if
you're trying to actively use an encrypted filesystem backed by an
unsecured network file system, then you're going to need MACs and
replay protection and other things normal encrypted file system modes
don't provide.

Adam

On Thu, May 04, 2006 at 01:44:48PM -0500, Travis H. wrote:
 On 5/4/06, markus reichelt [EMAIL PROTECTED] wrote:
 Agreed; but regarding unix systems, I know of no crypto
 implementation that does integrity checking: not just de/encrypting the
 data, but verifying that the encrypted data has not been tampered with.
 
 Are you sure?  There's an aes-cbc-essiv:sha256 cipher in dm-crypt.
 Are they using sha256 for something other than integrity?
 
 I guess perhaps the reason they don't do integrity checking is that it
 involves redundant data, so the encrypted volume would be smaller, or
 the block offsets don't line up, and perhaps that's trickier to handle
 than a 1:1 correspondence.



Re: Linux RNG paper

2006-05-05 Thread Victor Duchovni
On Thu, May 04, 2006 at 01:44:48PM -0500, Travis H. wrote:

 I guess perhaps the reason they don't do integrity checking is that it
 involves redundant data, so the encrypted volume would be smaller, or
 the block offsets don't line up, and perhaps that's trickier to handle
 than a 1:1 correspondence.

Exactly. Many file systems rely on block devices with atomic single-block
(sector) writes. If sector updates are not atomic, the file system needs
to be substantially more complex (unavoidable transaction logs to support
roll-back and roll-forward). Encrypted block device implementations that
are file-system agnostic cannot violate block update atomicity, and so
MUST NOT offer integrity.

-- 

 /\ ASCII RIBBON  NOTICE: If received in error,
 \ / CAMPAIGN Victor Duchovni  please destroy and notify
  X AGAINST   IT Security, sender. Sender does not waive
 / \ HTML MAILMorgan Stanley   confidentiality or privilege,
   and use is prohibited.



Re: Linux RNG paper

2006-05-05 Thread leichter_jerrold
|  I guess perhaps the reason they don't do integrity checking is that it
|  involves redundant data, so the encrypted volume would be smaller, or
|  the block offsets don't line up, and perhaps that's trickier to handle
|  than a 1:1 correspondence.
| 
| Exactly, many file systems rely on block devices with atomic single block
| (sector) writes. If sector updates are not atomic, the file system needs
| to be substantially more complex (unavoidable transaction logs to support
| roll-back and roll-forward). Encrypted block device implementations that
| are file system agnostic, cannot violate block update atomicity and so
| MUST not offer integrity.
That's way too strong.  Here's an implementation that preserves
block-level atomicity while providing integrity:  Corresponding to each
block, there are *two* checksums, A and B.

Read algorithm:  Read Block, A and B.  If checksum matches
either of A or B, return the value of the block;
otherwise, declare the block invalid.

Write algorithm:  Read current value of block.  If its
checksum matches A, write the checksum of the
new data to B; otherwise, write the checksum of
the new value to A.  After the checksum data is
known to be on the disk, write the data block.

Writes to a given block must be atomic with respect to each other.
(No synchronization is needed between reads and writes.)

Granted, this algorithm has other problems.  But it shows that the three
requirements - user block size matches disk block size; block level
atomicity; and authentication - are not mutually exclusive.  (Actually,
I suppose one should add a fourth requirement, which this scheme also
realizes:  The size of a user block identifier is the same as the size
of the block id passed to disk.  Otherwise, one can keep the checksum
with each block identifier.)
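A sketch of this scheme in Python, with an in-memory block standing in for the disk and sha256 standing in for the (presumably keyed) checksum:

```python
import hashlib

def cksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Block:
    def __init__(self, data: bytes):
        self.data = data
        self.a = cksum(data)  # checksum slot A
        self.b = None         # checksum slot B

    def read(self) -> bytes:
        # Valid if the data matches either stored checksum.
        if cksum(self.data) in (self.a, self.b):
            return self.data
        raise IOError("block failed integrity check")

    def write(self, new_data: bytes) -> None:
        # Put the new checksum in whichever slot does NOT validate the
        # current contents, so the old (data, checksum) pair survives
        # until the data write itself lands.
        if cksum(self.data) == self.a:
            self.b = cksum(new_data)
        else:
            self.a = cksum(new_data)
        # ...checksum now durably on disk; only then write the data:
        self.data = new_data

blk = Block(b"old contents")
blk.write(b"new contents")
assert blk.read() == b"new contents"

# Torn write: crash after the checksum update, before the data write.
blk2 = Block(b"old contents")
blk2.b = cksum(b"lost new contents")
assert blk2.read() == b"old contents"  # old data still validates via A
```

The torn-write case shows the atomicity claim: a crash between the checksum write and the data write leaves the old contents still validating against the untouched slot.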
-- Jerry




Re: Linux RNG paper

2006-05-04 Thread markus reichelt
* Travis H. [EMAIL PROTECTED] wrote:

 1) In the paper, he mentions that the state file could be altered
 by an attacker, and then he'd know the state when it first came up. 
 Of course, if he could do that, he could simply install a trojan in
 the OS itself, so this is not really that much of a concern.  If
 your hard drives might be altered by malicious parties, you should
 be using some kind of cryptographic integrity check on the contents
 before using them.  This often comes for free when encrypting the
 contents.

Agreed; but regarding unix systems, I know of no crypto
implementation that does integrity checking: not just de/encrypting the
data, but verifying that the encrypted data has not been tampered with.

An admittedly unlikely and far-fetched example would be someone altering
an encrypted root fs so that e.g. /etc/hosts.deny would decrypt
differently. Such things tend to stay unnoticed unless some kind of
IDS is used, for the very fact that all the common (more or less
skillfully crafted) crypto implementations simply fail to do
integrity checking: dm-crypt, loop-aes, mainline cryptoloop,
truecrypt, bestcrypt, CrossCrypt, ...

However, though preventing unnoticed modification of an encrypted
device is undoubtedly a goal worth striving for, it is not what those
crypto implementations try to achieve. They just work towards safely
and reliably de/encrypting one's data; some more, some less.

-- 
left blank, right bald
still, loop-aes is the way to go.




Re: Linux RNG paper

2006-05-04 Thread Steven M. Bellovin
On Thu, 04 May 2006 18:14:09 +0200, markus reichelt [EMAIL PROTECTED]
wrote:

 * Travis H. [EMAIL PROTECTED] wrote:
 
  1) In the paper, he mentions that the state file could be altered
  by an attacker, and then he'd know the state when it first came up. 
  Of course, if he could do that, he could simply install a trojan in
  the OS itself, so this is not really that much of a concern.  If
  your hard drives might be altered by malicious parties, you should
  be using some kind of cryptographic integrity check on the contents
  before using them.  This often comes for free when encrypting the
  contents.
 
 Agreed; but regarding unix systems, I know of no crypto
 implementation that does integrity checking: not just de/encrypting the
 data, but verifying that the encrypted data has not been tampered with.
 
See "Space-Efficient Block Storage Integrity", Alina Oprea, Mike Reiter,
Ke Yang, NDSS 2005,
http://www.isoc.org/isoc/conferences/ndss/05/proceedings/papers/storageint.pdf


--Steven M. Bellovin, http://www.cs.columbia.edu/~smb



Re: Linux RNG paper

2006-05-04 Thread Jason Holt



On Thu, 04 May 2006 18:14:09 +0200, markus reichelt [EMAIL PROTECTED]
wrote:

Agreed; but regarding unix systems, I know of no crypto
implementation that does integrity checking: not just de/encrypting the
data, but verifying that the encrypted data has not been tampered with.


There's also ecryptfs:

http://ecryptfs.sourceforge.net/

-J



Re: Linux RNG paper

2006-04-10 Thread William Allen Simpson

Had a bit of time waiting for a file to download, and just read the paper
that's been sitting on my desktop.  The analysis of the weakness is new,
but sadly many of the problems were already known, and several were
previously discussed on this list!

The forward secrecy problem was identified circa 1995 by Phil Karn, who
therefore saved the changed state after generating each random key --
something similar to the paper's suggestion.

The lack of jitter in millisecond event time was also identified by Karn,
and he developed i386 code to determine microseconds from processor timing.
Sorry, I cannot remember whether it worked only on the 386 and above, or also
on the 186/286 we were using in cell phones at the time.  But I certainly used
it in a number of routers over the years.

We also noticed that the event jitter was more important for unpredictability
than the actual event values, and all my code just added the value to the
microsecond time.  The code was fast enough to handle very rapid interrupt-
time events by leaving complex functions for later.  This assumes that a
cryptographically strong output function will sufficiently hash the bits,
so that calculating and saving the jitter itself is a waste of effort.

We also always used any network checksum that came across the transom,
including packets, IP, UDP, and TCP.  Yes, it is externally visible, but
the microsecond time is not, and adding them makes the actual pool values
less predictable (although within a constrained range).

Also, rather than deciding the pool was full of entropy, we just kept
XOR'ing the new values with the old, as a circular buffer (again similar
to the paper's suggestion).

Finally, a lot of this was discussed in public, and both Karn's and my
code variants were publicly available.  I don't have my old email
backups online, but I'm sure it was discussed at places such as the
tcp-group and ipsec circa 1995.

After the first Yarrow draft, it was discussed on the old linux-ipsec list
circa 1999 April 22, and on this list circa 1999 August 17.

After much discussion, Theodore Y. Ts'o wrote ([EMAIL PROTECTED]),
quoting my message of Sun, 15 Aug 1999 10:00:01 -0400:

Catching up, and after talking with John Kelsey and Sandy Harris at
SAC'99, it seems clear that there is some consensus on these lists that
the semantics of /dev/urandom need improvement, and that some principles
of Yarrow should be incorporated.  I think that most posters can be
satisfied by making the functionality of /dev/random and /dev/urandom
more orthogonal.

and replying:

 Bill, you're not the IETF working group chairman on /dev/random, and
 /dev/random isn't a working group subject to consensus.  I'm the author,
 with the sole responsibility to make decisions about what's best for the
 device driver.  Of course, if someone else wants to make an alternative
 /dev/random driver, they're free to use it in their system.  They can
 even petition Linus Torvalds to replace theirs with mine, although I
 doubt they'd get very far.

Unfortunately, the fact that Linux remains vulnerable to the iterative
guessing attack was really due to Ted's intransigence, and some personal
relationship that he enjoys with Linus.

Thank you for the independent analysis once again bringing this topic to
everybody's attention.  Hard to believe that another 7 years have passed.

--
William Allen Simpson
Key fingerprint =  17 40 5E 67 15 6F 31 26  DD 0D B9 9B 6A 15 2C 32



Re: Linux RNG paper

2006-03-24 Thread leichter_jerrold
| Min-entropy of a probability distribution is 
| 
| -lg ( P[max] ), 
| 
| minus the base-two log of the maximum probability.  
| 
| The nice thing about min-entropy in the PRNG world is that it leads to
| a really clean relationship between how many bits of entropy we need
| to seed the PRNG, and how many bits of security (in terms of
| resistance to brute force guessing attack) we can get.
Interesting; I hadn't seen this definition before.  It's related to a
concept in traditional probability theory:  The probability of ruin.  If
I play some kind of gambling game, the usual analysis looks at the
value of the game strictly as my long-term expectation value.  If,
however, I have finite resources, it may be that I lose all of them
before I get to play long enough to make long-term a useful notion.
The current TV game show, Deal Or No Deal, is based on this:  I've yet
to see a banker's offer that equals, much less exceeds, the expected
value of the board.  However, given a player's finite resources - they
only get to play one game - the offers eventually become worth taking,
since the alternative is that you walk away with very little.  (For
that matter, insurance makes sense only because of this kind of
analysis:  The long-term expectation value of buying insurance *must*
be negative, or the insurance companies would go out of business -
but insurance can still be worth buying.)
-- Jerry



RE: Linux RNG paper

2006-03-24 Thread Benny Pinkas
Following Travis' message, let me first describe the main results of the
paper.
The paper provides a concise algorithmic description of the Linux random
number generator (LRNG), which is quite complex and is based on the use of
shift registers and of several SHA-1 operations. Identifying the algorithm
was no simple task, due to the length of the code and the lack of
documentation.
 
The paper shows that the forward security of the LRNG algorithm is
susceptible to attack, although at a cost of 2^64 to 2^96 operations.
The attack makes it possible, given the current state of the LRNG, to
compute several previous outputs of the generator. This means that if you
break into a Linux machine you can learn about past keys which were
generated by the LRNG, which could affect SSL or SSH connections, etc.
(BTW, even without this attack, the algorithm used by the LRNG makes it
trivial to compute the last output of the generator given the current state,
even if that output was computed a while ago.) 

In addition, the paper contains a simple analysis of the amount of entropy
which is added to the generator in one typical implementation, an analysis
of the security of an implementation on a disk-less system, and a
description of some vulnerabilities of the current implementation (including
an easy denial of service attack).


As for Travis' first comment, the paper mentions that an attacker who has
access to the hard disk can change the LRNG state file which is stored there
between reboots. This attack might be relevant to scenarios where the system
checks during reboot that device drivers etc. were not changed, but does not
perform a similar check of the LRNG state file. 
As for the second comment, it seems to reiterate the observation given in
the paper that if the data used to refresh the generator has little
entropy, then an attacker who knows the state of the generator at a
certain time and observes later LRNG outputs can find out what values were
used to refresh it. 

Benny

-Original Message-
From: Travis H. [mailto:[EMAIL PROTECTED] 
Sent: Thursday, March 23, 2006 9:56 AM
To: Heyman, Michael
Cc: cryptography@metzdowd.com; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Linux RNG paper

Re: Linux RNG paper

2006-03-23 Thread Travis H.
I have examined the LRNG paper and have a few comments.

CC'd to the authors so mind the followups.

1) In the paper, he mentions that the state file could be altered by
an attacker, and then he'd know the state when it first came up.  Of
course, if he could do that, he could simply install a trojan in the
OS itself, so this is not really that much of a concern.  If your hard
drives might be altered by malicious parties, you should be using some
kind of cryptographic integrity check on the contents before using
them.  This often comes for free when encrypting the contents.

2) His objection against using keyboard data is perhaps just an
indication that reseeding of the pool should occur with sufficient
entropy that the values cannot efficiently be guessed via brute force
search and forward operation of the PRNG.  If the reseeding is
insufficient to deter a brute-force input-space search, other bad things
can happen.  For example, in the next paragraph the author mentions
that random events may reseed the secondary pool directly if the
primary pool is full.  If an attacker were to learn the contents of
the secondary pool, he could guess the incremental updates to its
contents and compare results with the real PRNG, resulting in an
incremental state-tracking attack breaking backward security until a
reseed from the primary is generated (which appears to have a minimum
of 8 bytes, also perhaps too low).  The answer is more input, not
less.

It's annoying that the random number generator code calls the
unpredictable stuff "entropy".  It's unpredictability that we're
concerned with, and Shannon entropy is just an upper bound on the
unpredictability.  Unpredictability cannot be measured based on outputs
of sources; it must be based on models of the source and attacker
themselves.  But we all know that.  Maybe we should invent a term?
Untropy?

And now a random(3) tangent:

While we're on the subject of randomness, I was hoping that random(3)
used the old (TYPE_0) implementation by default... lots of DoS tools
use it to fill spoofed packet fields, and one 32-bit output defines
the entire state of the generator --- meaning that I could distinguish
DoS packets which had at least 32 bits of state in them from other
packets.  However, it appears that Linux and BSD both use a TYPE_3
pool, which makes such simple techniques invalid, and would probably
require identification of a packet stream, instead of testing packets
one by one.  Since use of a real pool has put it beyond my interest
and perhaps my ability, I'm giving the idea away.  Email me if you
find a really good use for PRNG analysis of this sort.

For a TYPE_0 generator, the recurrence is:
i' = (i * 1103515245 + 12345) & 0x7fffffff

As far as low-hanging fruit goes, the higher generator types still
never set the highest-order bit (RAND_MAX is 0x7fffffff), and the
outputs are unaltered pool contents.
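A sketch of why a single output is fatal for TYPE_0: the output is the entire 31-bit state, so anyone who sees one value can run the generator forward (seed and counts are arbitrary):

```python
def type0_next(state: int) -> int:
    # glibc TYPE_0 recurrence; the returned value IS the new state.
    return (state * 1103515245 + 12345) & 0x7fffffff

# Victim: generate a stream from a seed the attacker never sees.
state = 20060323
victim = []
for _ in range(10):
    state = type0_next(state)
    victim.append(state)

# Attacker: observe one output mid-stream (say, in a packet field)...
observed = victim[3]

# ...and predict every later value exactly.
predicted = []
s = observed
for _ in range(6):
    s = type0_next(s)
    predicted.append(s)
assert predicted == victim[4:]
```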
--
Security Guru for Hire http://www.lightconsulting.com/~travis/ --
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066  151D 0A6B 4098 0C55 1484



Re: Linux RNG paper

2006-03-23 Thread David Malone
On Thu, Mar 23, 2006 at 01:55:30AM -0600, Travis H. wrote:
 It's annoying that the random number generator code calls the
 unpredictable stuff entropy.  It's unpredictability that we're
 concerned with, and Shannon entropy is just an upper bound on the
 predictability.  Unpredictability cannot be measured based on outputs
 of sources, it must be based on models of the source and attacker
 themselves.  But we all know that.  Maybe we should invent a term? 
 Untropy?

The problem is that we have to decide what our metric is before we
can give it a name. Shannon entropy is about the asymptotic amount
of data needed to encode an average message. Kolmogorov's entropy
(which got a mention here today) is about the shortest program that
can produce a given string. These things aren't often important for
a PRNG or a /dev/random-like device.

One metric might be guessability (the mean number of guesses required,
or moments thereof).  As you point out, Arikan and Massey have
shown that Shannon entropy is not a particularly good estimate of
guessability. Generalisations of entropy, like Rényi entropy, do seem
to have meaning. The min-entropy mentioned in RFC 4086 seems
reasonable, though I don't think the rationale given in the RFC is
actually correct.

David.



Re: Linux RNG paper

2006-03-23 Thread John Kelsey
From: David Malone [EMAIL PROTECTED]
Sent: Mar 23, 2006 3:43 PM
To: Travis H. [EMAIL PROTECTED]
Cc: Heyman, Michael [EMAIL PROTECTED], cryptography@metzdowd.com,
[EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: Re: Linux RNG paper

...
One metric might be guessability (the mean number of guesses required,
or moments thereof).  As you point out, Arikan and Massey have
shown that Shannon entropy is not a particularly good estimate of
guessability. Generalisations of entropy, like Rényi entropy, do seem
to have meaning. The min-entropy mentioned in RFC 4086 seems
reasonable, though I don't think the rationale given in the RFC is
actually correct.

Starting clarification:  

Min-entropy of a probability distribution is 

-lg ( P[max] ), 

minus the base-two log of the maximum probability.  

The nice thing about min-entropy in the PRNG world is that it leads to
a really clean relationship between how many bits of entropy we need
to seed the PRNG, and how many bits of security (in terms of
resistance to brute force guessing attack) we can get.

Suppose I have a string S with 128 bits of min-entropy.  That means
that the highest-probability guess of S has probability 2^{-128}.  I
somehow hash S to derive a 128-bit key.  The question to ask is: could
you guess S more cheaply than you guess the key?  When the min-entropy
is 128, it can't be any cheaper to guess S than to guess the key.
That's true whether we're making one guess, two guesses, ten guesses,
or 2^{127} guesses.

To see why this is so, consider the best case for an attacker: S is a
128-bit uniform random string.  Now, all possible values have the same
probability, and guessing S is exactly the same problem as guessing
the key.  (I'm ignoring any bad things that happen when we hash down
to a key, but those can be important to think about in a different
context.)  

Now, why is this the best case for an attacker?  Because it gives the
highest probability of guessing right on the nth guess.  If the
min-entropy is 128, then the highest-probability symbol must have
probability 2^{-128}.  If the next-highest-probability symbol has
probability lower than 2^{-128}, then his second guess has a lower
probability of success.  And the next-highest-probability symbol is
constrained in the same way.

This makes it really easy, once you're working in min-entropy terms,
to answer questions like do I have enough entropy in this string to
initialize a PRNG based on running AES-128 in counter mode?
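A small numeric illustration (the four-outcome distribution is chosen arbitrarily for the example):

```python
import math

def shannon_entropy(dist):
    # Average surprise, in bits.
    return -sum(p * math.log2(p) for p in dist if p > 0)

def min_entropy(dist):
    # -lg(P[max]): determined solely by the most likely outcome.
    return -math.log2(max(dist))

dist = [0.5, 0.25, 0.125, 0.125]

assert abs(shannon_entropy(dist) - 1.75) < 1e-9
assert abs(min_entropy(dist) - 1.0) < 1e-9
# Shannon credits 1.75 bits, yet an attacker's first guess hits with
# probability 1/2; only the 1 bit of min-entropy bounds guessing cost.
```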


--John Kelsey, NIST




Re: Linux RNG paper

2006-03-22 Thread Bill Frantz
On 3/21/06, [EMAIL PROTECTED] (Heyman, Michael) wrote:

Gutterman, Pinkas, and Reinman have produced a nice as-built specification
and analysis of the Linux random number generator.

From http://eprint.iacr.org/2006/086.pdf:

...

Since randomness is often consumed in a multi-user environment, it makes
sense to generalize the BH model to such environments. Ideally, each user
should have its own random-number generator, and these generators should
be refreshed with different data which is all derived from the entropy
sources available to the system (perhaps after going through an additional
PRNG). This architecture should prevent denial-of-service attacks, and
prevent one user from learning about the randomness used by other users.

One of my pet peeves: The idea that the user is the proper atom of
protection in an OS.

My threat model includes different programs run by one (human) user.  If
a Trojan, running as part of my userID, can learn something about the
random numbers harvested by my browser/gpg/ssh etc., then it can start
to attack the keys used by those applications, even if the OS does a
good job of keeping the memory spaces separate and protected.

Cheers - Bill

-
Bill Frantz| The first thing you need   | Periwinkle 
(408)356-8506  | when using a perimeter | 16345 Englewood Ave
www.pwpconsult.com | defense is a perimeter.| Los Gatos, CA 95032



Re: Linux RNG paper

2006-03-22 Thread Victor Duchovni
On Wed, Mar 22, 2006 at 02:31:37PM -0800, Bill Frantz wrote:

 One of my pet peeves: The idea that the user is the proper atom of
 protection in an OS.
 
 My threat model includes different programs run by one (human) user.  If
 a Trojan, running as part of my userID, can learn something about the
 random numbers harvested by my browser/gpg/ssh etc., then it can start
 to attack the keys used by those applications, even if the OS does a
 good job of keeping the memory spaces separate and protected.
 

Why would a trojan running in your security context bother with attacking
a PRNG? It can just read your files, record your keystrokes, change your
browser proxy settings, ...

If the trojan is in a sand-box of some sort, the sand-box is a different
security context, and in that case perhaps a different RNG view is
justified.

Some applications that consume a steady stream of RNG data maintain
their own random pool and use the public pool to periodically mix in
some fresh state. These are less vulnerable to snooping/exhaustion of
the public stream.

The Postfix tlsmgr(8) process proxies randomness for the rest of the
system in this fashion...

