Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-26 Thread Phillip Susi

Marc Haber wrote:

> On Tue, Dec 11, 2007 at 10:42:49AM -0500, Bill Davidsen wrote:
> > The original point was that urandom draws entropy from random, and that
> > it is an inobvious and unintentional drain on the entropy pool. At
> > least that's how I read it.
>
> And you are reading it correctly. At least one of the major TLS
> libraries does it this way, putting unnecessary stress on the kernel
> entropy pool. While I now consider this a bug in the library, there
> surely are gazillions of similarly flawed applications out there in
> the wild.


It seems to me that reading from (u)random disturbs the entropy pool, so 
the more consumers reading from the pool in unpredictable ways, the 
better.  As it is currently implemented, reading lowers the entropy 
estimate, but the pool will have MORE entropy if several applications 
keep reading /dev/random periodically when they need random bytes 
instead of just reading it once to seed their own PRNG.  IMHO, it is 
the entropy estimate that is broken, not the TLS library.
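[Editor's note: the accounting being argued about can be observed directly. A minimal sketch, assuming Linux and Python 3; note that on modern kernels (5.18 and later) the estimate is pinned at 256 bits and reads no longer visibly deplete it, so the 2007-era behaviour discussed in this thread may not reproduce.]

```python
# Watch the kernel's entropy estimate around a /dev/urandom read.
def entropy_avail() -> int:
    # The kernel's current entropy estimate, in bits.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        return int(f.read())

before = entropy_avail()
with open("/dev/urandom", "rb") as f:
    data = f.read(4096)  # a sizeable read from the non-blocking device
after = entropy_avail()

print(f"estimate before: {before} bits, after: {after} bits, "
      f"read {len(data)} bytes")
```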



--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-20 Thread Marc Haber
On Tue, Dec 11, 2007 at 10:42:49AM -0500, Bill Davidsen wrote:
> The original point was that urandom draws entropy from random, and that
> it is an inobvious and unintentional drain on the entropy pool. At
> least that's how I read it.

And you are reading it correctly. At least one of the major TLS
libraries does it this way, putting unnecessary stress on the kernel
entropy pool. While I now consider this a bug in the library, there
surely are gazillions of similarly flawed applications out there in
the wild.

Greetings
Marc

-- 
-
Marc Haber | "I don't trust Computers. They | Mailadresse im Header
Mannheim, Germany  |  lose things." - Winona Ryder | Fon: *49 621 72739834
Nordisch by Nature |  How to make an American Quilt | Fax: *49 3221 2323190


RE: Why does reading from /dev/urandom deplete entropy so much?

2007-12-11 Thread David Schwartz

Phillip Susi wrote:

> What good does using multiple levels of RNG do?  Why seed one RNG from
> another?  Wouldn't it be better to have just one RNG that everybody
> uses?  Doesn't the act of reading from the RNG add entropy to it, since
> no one reader has any idea how often and at what times other readers are
> stirring the pool?

No, unfortunately. The problem is that while in most typical cases that
may be true, the estimate of how much entropy we have has to be based on
the assumption that everything we've done up to that point has been
carefully orchestrated by the mortal enemy of whatever is currently
asking us for entropy.

While I don't have any easy solutions with obvious irrefutable technical
brilliance or that will make everyone happy, I do think that one of the
problems is that neither /dev/random nor /dev/urandom are guaranteed to
provide what most people want. In the most common use case, you want
cryptographically strong randomness even under the assumption that all
previous activity is orchestrated by the enemy. Unfortunately, /dev/urandom
will happily give you randomness worse than this while /dev/random will
block even when you have it.

DS




Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-11 Thread Ray Lee
On Dec 11, 2007 11:46 AM, Phillip Susi <[EMAIL PROTECTED]> wrote:
> Theodore Tso wrote:
> > Note that even paranoid applicatons should not be using /dev/random
> > for session keys; again, /dev/random isn't magic, and entropy isn't
> > unlimited. Instead, such an application should pull 16 bytes or so,
> > and then use it to seed a cryptographic random number generator.
>
> What good does using multiple levels of RNG do?  Why seed one RNG from
> another?  Wouldn't it be better to have just one RNG that everybody
> uses?

Not all applications need cryptographically secure random numbers.
Sometimes, you just want a random number to seed your game RNG or a
monte carlo simulator.
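[Editor's note: Ray's distinction can be sketched concretely. A minimal example of the non-cryptographic case: pull a few seed bytes from the kernel once, then let a userspace PRNG (Python's Mersenne Twister here) generate everything else.]

```python
import os
import random

# One small 8-byte draw on the kernel pool, used only as a seed.
seed = int.from_bytes(os.urandom(8), "big")
rng = random.Random(seed)

# Monte Carlo estimate of pi -- no cryptographic strength required.
n = 100_000
inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
print("pi is roughly", 4 * inside / n)
```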


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-11 Thread Phillip Susi

Theodore Tso wrote:

> Note that even paranoid applications should not be using /dev/random
> for session keys; again, /dev/random isn't magic, and entropy isn't
> unlimited. Instead, such an application should pull 16 bytes or so,
> and then use it to seed a cryptographic random number generator.


What good does using multiple levels of RNG do?  Why seed one RNG from 
another?  Wouldn't it be better to have just one RNG that everybody 
uses?  Doesn't the act of reading from the RNG add entropy to it, since 
no one reader has any idea how often and at what times other readers are 
stirring the pool?





Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-11 Thread Bill Davidsen

Adrian Bunk wrote:

> On Thu, Dec 06, 2007 at 02:32:05PM -0500, Bill Davidsen wrote:
> > ...
> > Sounds like a local DoS attack point to me...
>
> As long as /dev/random is readable for all users there's no reason to
> use /dev/urandom for a local DoS...


The original point was that urandom draws entropy from random, and that 
it is an inobvious and unintentional drain on the entropy pool. At 
least that's how I read it. I certainly have programs which draw on 
urandom simply because it's a convenient source of meaningless data. I 
have several fewer since this discussion started, though, now that I 
have looked at the easy alternatives.


--
Bill Davidsen <[EMAIL PROTECTED]>
 "Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over..." Otto von Bismarck 





Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-11 Thread Pavel Machek
On Wed 2007-12-05 09:49:34, Theodore Tso wrote:
> On Wed, Dec 05, 2007 at 08:26:19AM -0600, Mike McGrath wrote:
> >
> > Ok, what's going on here is an issue with how the smolt RPM installs the 
> > UUID and how Fedora's Live CD does an install.  It's a complete false alarm 
> > on the kernel side, sorry for the confusion.
> 
> BTW, You may be better off using "uuidgen -t" to generate the UUID in
> the smolt RPM, since that will use 12 bits of randomness from
> /dev/random, plus the MAC address and timestamp.  So even if there is
> zero randomness in /dev/random, and the time is January 1, 1970, at
> least the MAC will contribute some uniqueness to the UUID.

I thought that /dev/random blocks when 0 entropy is available...? I'd
expect uuid generation to block, not to issue duplicates.
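[Editor's note: the scheme Ted describes is a version-1 (time-based) UUID, which is what "uuidgen -t" emits. A hedged sketch using the Python stdlib analogue, uuid.uuid1(), which mixes the host's MAC address, a 60-bit timestamp, and a clock sequence, so even with poor kernel randomness two hosts (or two moments) still get distinct UUIDs.]

```python
import uuid

# Two time-based UUIDs generated back to back: the timestamp/clock-sequence
# fields guarantee they differ even within the same process.
u1 = uuid.uuid1()
u2 = uuid.uuid1()
print(u1, "version:", u1.version)
print("distinct even in the same process:", u1 != u2)
```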

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-10 Thread Theodore Tso
On Mon, Dec 10, 2007 at 05:35:25PM -0600, Matt Mackall wrote:
> > I must have missed this. Can you please explain again? For a layman it
> > looks like a paranoid application cannot read 500 Bytes from
> > /dev/random without blocking if some other application has previously
> > read 10 Kilobytes from /dev/urandom.
> 
> /dev/urandom always leaves enough entropy in the input pool for
> /dev/random to reseed. Thus, as long as entropy is coming in, it is
> not possible for /dev/urandom readers to starve /dev/random readers.
> But /dev/random readers may still block temporarily and they should
> damn well expect to block if they read 500 bytes out of a 512 byte
> pool.

A paranoid application should only need to read ~500 bytes if it is
generating a long-term RSA private key, and in that case, it would do
well to use a non-blocking read, and if it can't get enough bytes, it
should prompt the user to move the mouse around or bang on the
keyboard.  /dev/random is *not* magic where you can assume that you
will always get an unlimited amount of good randomness.  Applications
that assume this are broken, and it has nothing to do with DoS attacks.
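[Editor's note: the non-blocking pattern Ted describes can be sketched as follows, assuming Linux and Python 3. On modern kernels /dev/random no longer blocks once the pool is initialized, so the EAGAIN branch is mostly of historical interest.]

```python
import errno
import os

def read_random_nonblocking(nbytes: int) -> bytes:
    # Open /dev/random without blocking; gather as many bytes as the
    # pool will give us right now.
    fd = os.open("/dev/random", os.O_RDONLY | os.O_NONBLOCK)
    buf = b""
    try:
        while len(buf) < nbytes:
            try:
                buf += os.read(fd, nbytes - len(buf))
            except OSError as e:
                if e.errno == errno.EAGAIN:
                    # Pool exhausted: a real application would now prompt
                    # the user to move the mouse or bang on the keyboard.
                    break
                raise
    finally:
        os.close(fd)
    return buf

got = read_random_nonblocking(16)
print(f"obtained {len(got)} of 16 requested bytes")
```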

Note that even paranoid applications should not be using /dev/random
for session keys; again, /dev/random isn't magic, and entropy isn't
unlimited. Instead, such an application should pull 16 bytes or so,
and then use it to seed a cryptographic random number generator.
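[Editor's note: a toy sketch of the "pull 16 bytes, then seed a userspace generator" idea. This illustrates only the shape of the technique; the HashDRBG class below is a made-up illustration, not a vetted DRBG, and real applications should use a reviewed implementation (e.g. a NIST SP 800-90A DRBG from a crypto library).]

```python
import hashlib
import os

class HashDRBG:
    """Toy hash-based generator seeded once from the kernel."""
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()
        self.counter = 0

    def generate(self, nbytes: int) -> bytes:
        out = b""
        while len(out) < nbytes:
            self.counter += 1
            out += hashlib.sha256(
                self.state + self.counter.to_bytes(8, "big")
            ).digest()
        # Ratchet the state so earlier outputs cannot be recomputed later.
        self.state = hashlib.sha256(b"ratchet" + self.state).digest()
        return out[:nbytes]

drbg = HashDRBG(os.urandom(16))  # a single small draw on the kernel pool
session_key = drbg.generate(32)  # further keys never touch /dev/random
print(f"{len(session_key)}-byte session key generated in userspace")
```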

- Ted


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-10 Thread Matt Mackall
On Tue, Dec 11, 2007 at 12:06:43AM +0100, Marc Haber wrote:
> On Sun, Dec 09, 2007 at 10:16:05AM -0600, Matt Mackall wrote:
> > On Sun, Dec 09, 2007 at 01:42:00PM +0100, Marc Haber wrote:
> > > On Wed, Dec 05, 2007 at 03:26:47PM -0600, Matt Mackall wrote:
> > > > The distinction between /dev/random and /dev/urandom boils down to one
> > > > word: paranoia. If you are not paranoid enough to mistrust your
> > > > network, then /dev/random IS NOT FOR YOU. Use /dev/urandom.
> > > 
> > > But currently, people who use /dev/urandom to obtain low-quality
> > > entropy do a DoS for the paranoid people.
> > 
> > Not true, as I've already pointed out in this thread.
> 
> I must have missed this. Can you please explain again? For a layman it
> looks like a paranoid application cannot read 500 Bytes from
> /dev/random without blocking if some other application has previously
> read 10 Kilobytes from /dev/urandom.

/dev/urandom always leaves enough entropy in the input pool for
/dev/random to reseed. Thus, as long as entropy is coming in, it is
not possible for /dev/urandom readers to starve /dev/random readers.
But /dev/random readers may still block temporarily and they should
damn well expect to block if they read 500 bytes out of a 512 byte
pool.
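[Editor's note: Matt's figures can be checked against /proc. The historical input pool was 4096 bits, i.e. the "512 byte pool" above; a sketch assuming Linux. Modern kernels report a fixed 256-bit pool, so the exact numbers in this thread no longer apply.]

```python
# /proc/sys/kernel/random reports pool sizes and estimates in bits.
def read_random_proc(name: str) -> int:
    with open(f"/proc/sys/kernel/random/{name}") as f:
        return int(f.read())

poolsize = read_random_proc("poolsize")    # pool capacity, in bits
avail = read_random_proc("entropy_avail")  # current estimate, in bits
print(f"pool: {poolsize} bits ({poolsize // 8} bytes), "
      f"estimate: {avail} bits")
```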

-- 
Mathematics is the supreme nostalgia of our time.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-10 Thread Marc Haber
On Sun, Dec 09, 2007 at 10:16:05AM -0600, Matt Mackall wrote:
> On Sun, Dec 09, 2007 at 01:42:00PM +0100, Marc Haber wrote:
> > On Wed, Dec 05, 2007 at 03:26:47PM -0600, Matt Mackall wrote:
> > > The distinction between /dev/random and /dev/urandom boils down to one
> > > word: paranoia. If you are not paranoid enough to mistrust your
> > > network, then /dev/random IS NOT FOR YOU. Use /dev/urandom.
> > 
> > But currently, people who use /dev/urandom to obtain low-quality
> > entropy do a DoS for the paranoid people.
> 
> Not true, as I've already pointed out in this thread.

I must have missed this. Can you please explain again? For a layman it
looks like a paranoid application cannot read 500 Bytes from
/dev/random without blocking if some other application has previously
read 10 Kilobytes from /dev/urandom.

Greetings
Marc

-- 
-
Marc Haber | "I don't trust Computers. They | Mailadresse im Header
Mannheim, Germany  |  lose things." - Winona Ryder | Fon: *49 621 72739834
Nordisch by Nature |  How to make an American Quilt | Fax: *49 3221 2323190


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-09 Thread Matt Mackall
On Sun, Dec 09, 2007 at 01:42:00PM +0100, Marc Haber wrote:
> On Wed, Dec 05, 2007 at 03:26:47PM -0600, Matt Mackall wrote:
> > The distinction between /dev/random and /dev/urandom boils down to one
> > word: paranoia. If you are not paranoid enough to mistrust your
> > network, then /dev/random IS NOT FOR YOU. Use /dev/urandom.
> 
> But currently, people who use /dev/urandom to obtain low-quality
> entropy do a DoS for the paranoid people.

Not true, as I've already pointed out in this thread.

-- 
Mathematics is the supreme nostalgia of our time.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-09 Thread Ismail Dönmez
On Sunday, 09 December 2007 14:31:47, Theodore Tso wrote:
> On Sun, Dec 09, 2007 at 08:21:16AM +0200, Ismail Dönmez wrote:
> > My understanding was that if you drain entropy from /dev/urandom, any
> > further reads from /dev/urandom will result in data which is not random at
> > all. Is that wrong?
>
> Past a certain point /dev/urandom will start returning results which
> are cryptographically random.  At that point, you are depending on the
> strength of the SHA hash algorithm; breaking it would require not just
> finding hash collisions, but trivially finding all or most possible
> pre-images for a particular SHA hash.  If that were to happen, it's
> highly likely that all digital signatures and openssh would be totally
> broken.

That's very good news, thanks for the detailed explanation. Time to update 
common misconceptions.

Regards,
ismail

-- 
Never learn by your mistakes, if you do you may never dare to try again.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-09 Thread Marc Haber
On Wed, Dec 05, 2007 at 03:26:47PM -0600, Matt Mackall wrote:
> The distinction between /dev/random and /dev/urandom boils down to one
> word: paranoia. If you are not paranoid enough to mistrust your
> network, then /dev/random IS NOT FOR YOU. Use /dev/urandom.

But currently, people who use /dev/urandom to obtain low-quality
entropy do a DoS for the paranoid people.

Greetings
Marc

-- 
-
Marc Haber | "I don't trust Computers. They | Mailadresse im Header
Mannheim, Germany  |  lose things." - Winona Ryder | Fon: *49 621 72739834
Nordisch by Nature |  How to make an American Quilt | Fax: *49 3221 2323190


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-09 Thread Theodore Tso
On Sun, Dec 09, 2007 at 08:21:16AM +0200, Ismail Dönmez wrote:
> My understanding was that if you drain entropy from /dev/urandom, any further 
> reads from /dev/urandom will result in data which is not random at all. Is 
> that wrong?

Past a certain point /dev/urandom will start returning results which
are cryptographically random.  At that point, you are depending on the
strength of the SHA hash algorithm; breaking it would require not just
finding hash collisions, but trivially finding all or most possible
pre-images for a particular SHA hash.  If that were to happen, it's
highly likely that all digital signatures and openssh would be totally
broken.

- Ted


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-09 Thread Theodore Tso
On Sun, Dec 09, 2007 at 08:21:16AM +0200, Ismail Dönmez wrote:
 My understanding was if you can drain entropy from /dev/urandom any futher 
 reads from /dev/urandom will result in data which is not random at all. Is 
 that wrong?

Past a certain point /dev/urandom will stat returning results which
are cryptographically random.  At that point, you are depending on the
strength of the SHA hash algorithm, and actually being able to not
just to find hash collisions, but being able to trivially find all or
most possible pre-images for a particular SHA hash algorithm.  If that
were to happen, it's highly likely that all digital signatures and
openssh would be totally broken.

- Ted
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-09 Thread Marc Haber
On Wed, Dec 05, 2007 at 03:26:47PM -0600, Matt Mackall wrote:
 The distinction between /dev/random and /dev/urandom boils down to one
 word: paranoia. If you are not paranoid enough to mistrust your
 network, then /dev/random IS NOT FOR YOU. Use /dev/urandom.

But currently, people who use /dev/urandom to obtain low-quality
entropy do a DoS for the paranoid people.

Greetings
Marc

-- 
-
Marc Haber | I don't trust Computers. They | Mailadresse im Header
Mannheim, Germany  |  lose things.Winona Ryder | Fon: *49 621 72739834
Nordisch by Nature |  How to make an American Quilt | Fax: *49 3221 2323190


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-09 Thread Ismail Dönmez
On Sunday 09 December 2007 14:31:47, Theodore Tso wrote:
 On Sun, Dec 09, 2007 at 08:21:16AM +0200, Ismail Dönmez wrote:
  My understanding was that if you can drain entropy from /dev/urandom, any
  further reads from /dev/urandom will result in data which is not random at
  all. Is that wrong?

 Past a certain point /dev/urandom will start returning results which
 are cryptographically random.  At that point, you are depending on the
 strength of the SHA hash algorithm; an attacker would need not just
 to find hash collisions, but to trivially find all or most possible
 pre-images for a particular SHA hash output.  If that were to happen,
 it's highly likely that all digital signatures and openssh would be
 totally broken.

That's very good news, thanks for the detailed explanation. Time to update 
common misconceptions.

Regards,
ismail

-- 
Never learn by your mistakes, if you do you may never dare to try again.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-09 Thread Matt Mackall
On Sun, Dec 09, 2007 at 01:42:00PM +0100, Marc Haber wrote:
 On Wed, Dec 05, 2007 at 03:26:47PM -0600, Matt Mackall wrote:
  The distinction between /dev/random and /dev/urandom boils down to one
  word: paranoia. If you are not paranoid enough to mistrust your
  network, then /dev/random IS NOT FOR YOU. Use /dev/urandom.
 
 But currently, people who use /dev/urandom to obtain low-quality
 entropy mount a DoS against the paranoid people.

Not true, as I've already pointed out in this thread.

-- 
Mathematics is the supreme nostalgia of our time.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Jon Masters
On Sun, 2007-12-09 at 06:21 +0100, Willy Tarreau wrote:

> Wouldn't it be possible to mix the data with the pid+uid of the reading
> process so that even if another one tries to collect data from urandom,
> he cannot predict what another process will get ? BTW, I think that the
> tuple (pid,uid,timestamp of open) is unpredictable and uncontrollable
> enough to provide one or even a few bits of entropy by itself.

Timestamp perhaps, but pid/uid are trivially guessable in automated
environments, such as LiveCDs. And if you're also running on an embedded
system without a RTC (common, folks like to save a few cents) then it's
all pretty much "trivially" guessable on some level.

Jon.





Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Ismail Dönmez
On Sunday 09 December 2007 01:46:12, Theodore Tso wrote:
> On Sun, Dec 09, 2007 at 12:10:10AM +0200, Ismail Dönmez wrote:
> > > As long as /dev/random is readable for all users there's no reason to
> > > use /dev/urandom for a local DoS...
> >
> > Draining entropy in /dev/urandom means that insecure and possibly not
> > random data will be used and, well, that's a security bug if not a DoS bug.
>
> Actually in modern 2.6 kernels there are two separate output entropy
> pools for /dev/random and /dev/urandom.  So assuming that the
> adversary doesn't know the contents of the current state of the
> entropy pool (i.e., the RNG is well seeded with entropy), you can read
> all you want from /dev/urandom and that won't give an adversary
> successful information to attack /dev/random.

My understanding was that if you can drain entropy from /dev/urandom, any further 
reads from /dev/urandom will result in data which is not random at all. Is 
that wrong?

Regards,
ismail

-- 
Never learn by your mistakes, if you do you may never dare to try again.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Willy Tarreau
On Sat, Dec 08, 2007 at 06:46:12PM -0500, Theodore Tso wrote:
> On Sun, Dec 09, 2007 at 12:10:10AM +0200, Ismail Dönmez wrote:
> > > As long as /dev/random is readable for all users there's no reason to
> > > use /dev/urandom for a local DoS...
> > 
> > Draining entropy in /dev/urandom means that insecure and possibly not
> > random data will be used and, well, that's a security bug if not a DoS bug.
> 
> Actually in modern 2.6 kernels there are two separate output entropy
> pools for /dev/random and /dev/urandom.  So assuming that the
> adversary doesn't know the contents of the current state of the
> entropy pool (i.e., the RNG is well seeded with entropy), you can read
> all you want from /dev/urandom and that won't give an adversary
> successful information to attack /dev/random.

Wouldn't it be possible to mix the data with the pid+uid of the reading
process so that even if another one tries to collect data from urandom,
he cannot predict what another process will get ? BTW, I think that the
tuple (pid,uid,timestamp of open) is unpredictable and uncontrollable
enough to provide one or even a few bits of entropy by itself.

Regards,
Willy



Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Jon Masters

On Sat, 2007-12-08 at 18:47 -0500, Theodore Tso wrote:
> On Sat, Dec 08, 2007 at 09:42:39PM +0100, Willy Tarreau wrote:
> > I remember having installed openssh on an AIX machine years ago, and
> > being amazed by the number of sources it collected entropy from. Simple
> > commands such as "ifconfig -a", "netstat -i" and "du -a", "ps -ef", "w"
> > provided a lot of entropy.
> 
> Well not as many bits of entropy as you might think.  But every
> little bit helps, especially if some of it is not available to
> adversary.

I was always especially fond of the "du" entropy source with Solaris
installations of OpenSSH (the PRNG used commands like "du" too). It was
always amusing that a single network outage at the University would
prevent anyone from ssh'ing into the "UNIX" machines. So yeah, if we
want to take a giant leap backwards, I suggest jumping at this.

Lots of these are not actually random - you can guess the free space on
a network drive in certain cases, you know what processes are
likely to be created on a LiveCD, and many dmesg outputs are very
similar, especially when there aren't precise timestamps included.

But I do think it's time some of this got addressed :-)

Cheers,

Jon.




Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Theodore Tso
On Sat, Dec 08, 2007 at 09:42:39PM +0100, Willy Tarreau wrote:
> I remember having installed openssh on an AIX machine years ago, and
> being amazed by the number of sources it collected entropy from. Simple
> commands such as "ifconfig -a", "netstat -i" and "du -a", "ps -ef", "w"
> provided a lot of entropy.

Well not as many bits of entropy as you might think.  But every
little bit helps, especially if some of it is not available to
adversary.

- Ted


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Theodore Tso
On Sun, Dec 09, 2007 at 12:10:10AM +0200, Ismail Dönmez wrote:
> > As long as /dev/random is readable for all users there's no reason to
> > use /dev/urandom for a local DoS...
> 
> > Draining entropy in /dev/urandom means that insecure and possibly not
> > random data will be used and, well, that's a security bug if not a DoS bug.

Actually in modern 2.6 kernels there are two separate output entropy
pools for /dev/random and /dev/urandom.  So assuming that the
adversary doesn't know the contents of the current state of the
entropy pool (i.e., the RNG is well seeded with entropy), you can read
all you want from /dev/urandom and that won't give an adversary
successful information to attack /dev/random.

- Ted


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Ismail Dönmez
On Sunday 09 December 2007 00:03:45, Adrian Bunk wrote:
> On Thu, Dec 06, 2007 at 02:32:05PM -0500, Bill Davidsen wrote:
> >...
> > Sounds like a local DoS attack point to me...
>
> As long as /dev/random is readable for all users there's no reason to
> use /dev/urandom for a local DoS...

Draining entropy in /dev/urandom means that insecure and possibly not random 
data will be used and, well, that's a security bug if not a DoS bug.

And yes this is by design, sigh.

-- 
Never learn by your mistakes, if you do you may never dare to try again.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Adrian Bunk
On Thu, Dec 06, 2007 at 02:32:05PM -0500, Bill Davidsen wrote:
>...
> Sounds like a local DoS attack point to me...

As long as /dev/random is readable for all users there's no reason to 
use /dev/urandom for a local DoS...

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Willy Tarreau
On Sat, Dec 08, 2007 at 02:19:54PM -0600, Matt Mackall wrote:
> On Sat, Dec 08, 2007 at 03:04:32PM -0500, Jeff Garzik wrote:
> > Matt Mackall wrote:
> > >On Sat, Dec 08, 2007 at 02:36:33PM -0500, Jeff Garzik wrote:
> > >>As an aside...
> > >>
> > >>Speaking as the maintainer of rng-tools, which is the home of the hardware 
> > >>RNG entropy gathering daemon...
> > >>
> > >>I wish somebody (not me) would take rngd and several other projects, and 
> > >>combine them into a single actively maintained "entropy gathering" 
> > >>package.
> > >
> > >I think we should re-evaluate having an internal path from the hwrngs
> > >to /dev/[u]random, which will reduce the need for userspace config
> > >that can go wrong.
> > 
> > That's a bit of a tangent on a tangent.  :)  Most people don't have a 
> > hardware RNG.
> > 
> > But as long as there are adequate safeguards against common hardware 
> > failures (read: FIPS testing inside the kernel), go for it.
> 
> We can do some internal whitening and some other basic tests
> (obviously not the full FIPS battery). The basic von Neumann whitening
> will do a great job of shutting off the spigot when an RNG fails in a
> non-nefarious way. And FIPS stuff is no defense against the nefarious
> failures anyway.
> 
> But I think simply dividing our entropy estimate by 10 or so will go
> an awfully long way.

Agreed. The example program you posted does a very good job. I intuitively
thought that it would show best results where CPU clock >>> system clock,
but even with a faster clock (gettimeofday()) and a few tricks, it provides
an excellent whitened output even at high speed. In fact, it has the advantage
of automatically adjusting its speed to the source clock resolution, which
ensures we don't return long runs of zeroes or ones. Here's my slightly
modified version to extract large amounts of data from gettimeofday(),
followed by test results :

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int get_clock()
{
	struct timeval tv;
	unsigned i;

	gettimeofday(&tv, NULL);

	/* fold the timestamp down to a single parity bit */
	i = tv.tv_usec ^ tv.tv_sec;
	i = (i ^ (i >> 16)) & 0xffff;
	i = (i ^ (i >> 8)) & 0xff;
	i = (i ^ (i >> 4)) & 0xf;
	i = (i ^ (i >> 2)) & 0x3;
	i = (i ^ (i >> 1)) & 0x1;
	return i;
}

int get_raw_timing_bit(void)
{
	int parity = 0;
	int start = get_clock();

	while (start == get_clock()) {
		parity++;
	}
	return parity & 1;
}

int get_whitened_timing_bit(void)
{
	int a, b;

	while (1) {
		// ensure we restart without the time offset from the
		// failed tests.
		get_raw_timing_bit();
		a = get_raw_timing_bit();
		b = get_raw_timing_bit();
		if (a > b)
			return 1;
		if (b > a)
			return 0;
	}
}

int main(void)
{
	int i;

	while (1) {
		for (i = 0; i < 64; i++) {
			int j, k;
			// variable-length eating 2N values per bit, looking
			// for changing values.
			do {
				j = get_whitened_timing_bit();
				k = get_whitened_timing_bit();
			} while (j == k);
			printf("%d", j);
		}

		printf("\n");
	}
}

On my athlon 1.5 GHz with HZ=250, it produces about 40 kb/second. On an
IXP420 at 266 MHz with HZ=100, it produces about 6 kb/s. On a VAX VLC4000
at around 60 MHz under openbsd, it produces about 6 bits/s. In all cases,
the output data looks well distributed :

[EMAIL PROTECTED]:~$ for i in entropy.out.*; do echo $i :; z=$(tr -cd '0' <$i|wc -c); o=$(tr -cd '1' <$i|wc -c); echo $z zeroes, $o ones; done
entropy.out.k7 :
159811 zeroes, 166861 ones
entropy.out.nslu2 :
23786 zeroes, 24610 ones
entropy.out.vax :
687 zeroes, 657 ones

And there are very few long runs, the data is not compressible :

[EMAIL PROTECTED]:~$ for i in entropy.out.*; do echo -n "$i : "; u=$(tr -d '01' -c <$i|wc -c); c=$(tr -d '01' -c <$i | gzip -c9|wc -c); echo $(echo $u/$c|bc -l) digits/gzip byte; done
entropy.out.k7 : 6.67672246407913830809 digits/gzip byte
entropy.out.nslu2 : 6.27460132244262932711 digits/gzip byte
entropy.out.vax : 4.74911660777385159010 digits/gzip byte

Here are the 4 first output lines of the k7 version :
010001001100111011100100100011011101010000001000
10110101011010101110111010001011010001101101000101011011
110000100101111101001110101001010110110001111000
01110010100110100010010010111000100100101100011010011100

I found no unique line out of 1. I think the fact that the
clock source used by gettimeofday() is not completely coupled
with the TSC makes this possible. If we had used rdtsc() instead
of gettimeofday(), we might have gotten really strange patterns.
It's possible that 

Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Jeff Garzik

Theodore Tso wrote:

I think the userspace config problems were mainly due to the fact that
there wasn't a single official userspace utility package for the
random number package.  Comments in drivers/char/random.c for how to
set up /etc/init.d/random is Just Not Enough.


Absolutely.



If we had a single, official random number generator package that
contained the configuration, init.d script, as well as the daemon that
can do all sorts of different things that you really, Really, REALLY
want to do in userspace, including:

  * FIPS testing (as Jeff suggested --- making sure what you think is 
randomness isn't 60Hz hum is a Really Good Idea :-)

  * access to TPM (if available --- I have a vague memory that you may
need access to the TPM key to access any of its functions, and the
TPM key is stored in the filesystem)


+1 agreed

(not volunteering, but I will cheer on the hearty soul who undertakes 
this endeavor...)




Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Willy Tarreau
On Sat, Dec 08, 2007 at 02:36:33PM -0500, Jeff Garzik wrote:
> 
> As an aside...
> 
> Speaking as the maintainer of rng-tools, which is the home of the hardware 
> RNG entropy gathering daemon...
> 
> I wish somebody (not me) would take rngd and several other projects, and 
> combine them into a single actively maintained "entropy gathering" package.
> 
> IMO entropy gathering has been a long-standing need for headless network 
> servers (and now virtual machines).
> 
> In addition to rngd for hardware RNGs, I've seen daemons out there that 
> gather from audio and video sources (generally open wires/channels with 
> nothing plugged in), thermal sources, etc.  There is a lot of entropy 
> that could be gathered via userland, if you think creatively.

I remember having installed openssh on an AIX machine years ago, and
being amazed by the number of sources it collected entropy from. Simple
commands such as "ifconfig -a", "netstat -i" and "du -a", "ps -ef", "w"
provided a lot of entropy.

Regards,
Willy



Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Theodore Tso
On Sat, Dec 08, 2007 at 03:04:32PM -0500, Jeff Garzik wrote:
> That's a bit of a tangent on a tangent.  :)  Most people don't have a 
> hardware RNG.

Actually, most business-class laptops from IBM/Lenovo, HP, Dell,
Fujitsu, and Sony *do* have TPM chips that, among other things,
contain a slow but (supposedly, if the TPM microprocessors are to be
believed) secure hardware random number generator for use as a session
key generator.  This is thanks to various US legal mandates, such as
HIPAA for the medical industry, and not just the paranoid ravings of
the MPAA and RIAA.  :-)

The problem is enabling the TPM isn't trivial, and life gets harder if
you want the TPM chip to simultaneously work on dual-boot machines for
both Windows and Linux, but it is certainly doable.

>> I think we should re-evaluate having an internal path from the hwrngs
>> to /dev/[u]random, which will reduce the need for userspace config
>> that can go wrong.

I think the userspace config problems were mainly due to the fact that
there wasn't a single official userspace utility package for the
random number package.  Comments in drivers/char/random.c for how to
set up /etc/init.d/random is Just Not Enough.

If we had a single, official random number generator package that
contained the configuration, init.d script, as well as the daemon that
can do all sorts of different things that you really, Really, REALLY
want to do in userspace, including:

  * FIPS testing (as Jeff suggested --- making sure what you think is 
randomness isn't 60Hz hum is a Really Good Idea :-)
  * access to TPM (if available --- I have a vague memory that you may
need access to the TPM key to access any of its functions, and the
TPM key is stored in the filesystem)

So  anyone interested in belling the metaphorical cat?   :-)

  - Ted


RE: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread David Schwartz

> heh, along those lines you could also do
>
>   dmesg > /dev/random
>
> 
>
> dmesg often has machine-unique identifiers of all sorts (including the
> MAC address, if you have an ethernet driver loaded)
>
>   Jeff

A good three-part solution would be:

1) Encourage distributions to do "dmesg > /dev/random" in their startup
scripts. This could even be added to the kernel (as a one-time dump of the
kernel message buffer just before init is started).

2) Encourage drivers to output any unique information to the kernel log. I
believe all/most Ethernet drivers already do this with MAC addresses.
Perhaps we can get the kernel to include CPU serial numbers and we can get
the IDE/SATA drivers to include hard drive serial numbers. We can also use
the TSC, where available, in early bootup, which measures exactly how long
it took to get the kernel going, which should have some entropy in it.

3) Add more entropy to the kernel's pool at early startup, even if the
quality of that entropy is low. Track it appropriately, of course.

This should be enough to get cryptographically-strong random numbers that
would hold up against anyone who didn't have access to the 'dmesg' output.

DS




Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Matt Mackall
On Sat, Dec 08, 2007 at 03:04:32PM -0500, Jeff Garzik wrote:
> Matt Mackall wrote:
> >On Sat, Dec 08, 2007 at 02:36:33PM -0500, Jeff Garzik wrote:
> >>As an aside...
> >>
> >>Speaking as the maintainer of rng-tools, which is the home of the hardware 
> >>RNG entropy gathering daemon...
> >>
> >>I wish somebody (not me) would take rngd and several other projects, and 
> >>combine them into a single actively maintained "entropy gathering" 
> >>package.
> >
> >I think we should re-evaluate having an internal path from the hwrngs
> >to /dev/[u]random, which will reduce the need for userspace config
> >that can go wrong.
> 
> That's a bit of a tangent on a tangent.  :)  Most people don't have a 
> hardware RNG.
> 
> But as long as there are adequate safeguards against common hardware 
> failures (read: FIPS testing inside the kernel), go for it.

We can do some internal whitening and some other basic tests
(obviously not the full FIPS battery). The basic von Neumann whitening
will do a great job of shutting off the spigot when an RNG fails in a
non-nefarious way. And FIPS stuff is no defense against the nefarious
failures anyway.

But I think simply dividing our entropy estimate by 10 or so will go
an awfully long way.

--
Mathematics is the supreme nostalgia of our time.


Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Jeff Garzik

Matt Mackall wrote:

On Sat, Dec 08, 2007 at 02:36:33PM -0500, Jeff Garzik wrote:

As an aside...

Speaking as the maintainer of rng-tools, which is the home of the hardware 
RNG entropy gathering daemon...


I wish somebody (not me) would take rngd and several other projects, and 
combine them into a single actively maintained "entropy gathering" package.


I think we should re-evaluate having an internal path from the hwrngs
to /dev/[u]random, which will reduce the need for userspace config
that can go wrong.


That's a bit of a tangent on a tangent.  :)  Most people don't have a 
hardware RNG.


But as long as there are adequate safeguards against common hardware 
failures (read: FIPS testing inside the kernel), go for it.


Jeff





Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Matt Mackall
On Sat, Dec 08, 2007 at 02:36:33PM -0500, Jeff Garzik wrote:
> 
> As an aside...
> 
> Speaking as the maintainer of rng-tools, which is the home of the hardware 
> RNG entropy gathering daemon...
> 
> I wish somebody (not me) would take rngd and several other projects, and 
> combine them into a single actively maintained "entropy gathering" package.

I think we should re-evaluate having an internal path from the hwrngs
to /dev/[u]random, which will reduce the need for userspace config
that can go wrong.

-- 
Mathematics is the supreme nostalgia of our time.


entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Jeff Garzik


As an aside...

Speaking as the maintainer of rng-tools, which is the home of the hardware 
RNG entropy gathering daemon...


I wish somebody (not me) would take rngd and several other projects, and 
combine them into a single actively maintained "entropy gathering" package.


IMO entropy gathering has been a long-standing need for headless network 
servers (and now virtual machines).


In addition to rngd for hardware RNGs, I've seen daemons out there that 
gather from audio and video sources (generally open wires/channels with 
nothing plugged in), thermal sources, etc.  There is a lot of entropy 
that could be gathered via userland, if you think creatively.


Jeff





Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Jeff Garzik

Theodore Tso wrote:

On Sat, Dec 08, 2007 at 11:33:57AM -0600, Mike McGrath wrote:

Huh?  What's the concern?  All you are submitting is a list of
hardware devices in your system.  That's hardly anything sensitive
We actually had a very vocal minority about all of that which ended up 
putting us in the unfortunate position of generating a random UUID instead 
of using a hardware UUID from hal :-/


Tinfoil hat responses indeed!  Ok, if those folks are really that
crazy, my suggestion then would be to do a "ifconfig -a > /dev/random"


heh, along those lines you could also do

dmesg > /dev/random



dmesg often has machine-unique identifiers of all sorts (including the 
MAC address, if you have an ethernet driver loaded)


Jeff





Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Theodore Tso
On Sat, Dec 08, 2007 at 12:15:25PM -0600, Matt Mackall wrote:
> 
> It might be better for us to just improve the pool initialization.
> That'll improve the out of the box experience for everyone.
> 

Yeah, I agree.  Although keep in mind, doing things like mixing in MAC
address and DMI information (which we can either do in the kernel or
by trying to get all of the distros to add that into their
/etc/init.d/random script --- all several hundred or thousand distros
in the world :-) will help improve things like UUID uniqueness, but it
doesn't necessarily guarantee /dev/urandom and UUID
*unpredictability*.  In order to do that we really do need to improve
the amount of hardware entropy we can mix into the system.  This is a
hard problem, but as more people are relying on these facilities, it's
something we need to think about quite a bit more!

   - Ted


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Matt Mackall
On Sat, Dec 08, 2007 at 12:49:08PM -0500, Theodore Tso wrote:
> On Sat, Dec 08, 2007 at 11:33:57AM -0600, Mike McGrath wrote:
> >> Huh?  What's the concern?  All you are submitting is a list of
> >> hardware devices in your system.  That's hardly anything sensitive
> >
> > We actually had a very vocal minority about all of that which ended up 
> > putting us in the unfortunate position of generating a random UUID instead 
> > of using a hardware UUID from hal :-/
> 
> Tinfoil hat responses indeed!  Ok, if those folks are really that
> crazy, my suggestion then would be to do a "ifconfig -a > /dev/random"
> before generating the UUID, and/or waiting until you are just about to
> send the first profile, and/or if you don't yet have a UUID,
> generating it at that very moment.  The first will mix in the MAC
> address into the random pool, which will help guarantee uniqueness,
> and waiting until just before you send the result will mean it is much
> more likely that the random pool will have collected some entropy from
> user I/O, thus making the random UUID not only unique, but also
> unpredictable.

It might be better for us to just improve the pool initialization.
That'll improve the out of the box experience for everyone.

-- 
Mathematics is the supreme nostalgia of our time.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Theodore Tso
On Sat, Dec 08, 2007 at 11:43:43AM -0600, Matt Mackall wrote:
> > Huh?  What's the concern?  All you are submitting is a list of
> > hardware devices in your system.  That's hardly anything sensitive
> 
> Using MAC addresses -does- de-anonymize things though and presumably
> anonymous collection is a stated goal.

True, but for many machines, the MAC address is enough for someone
knowledgeable to (at least) determine what the manufacturer of your
machine is, and in many cases, the model number of your laptop (since
MAC addresses are assigned sequentially) and thus people can have a
very good idea of the contents of your PCI tree... if for some
reason anyone would even care, of course!

- Ted


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Jon Masters

On Sat, 2007-12-08 at 12:49 -0500, Theodore Tso wrote:
> On Sat, Dec 08, 2007 at 11:33:57AM -0600, Mike McGrath wrote:
> >> Huh?  What's the concern?  All you are submitting is a list of
> >> hardware devices in your system.  That's hardly anything sensitive
> >
> > We actually had a very vocal minority about all of that which ended up 
> > putting us in the unfortunate position of generating a random UUID instead 
> > of using a hardware UUID from hal :-/
> 
> Tinfoil hat responses indeed!  Ok, if those folks are really that
> crazy, my suggestion then would be to do a "ifconfig -a > /dev/random"
> before generating the UUID, and/or waiting until you're just about to
> send the first profile, and/or if you don't yet have a UUID,
> generating it at that very moment.  The first will mix in the MAC
> address into the random pool, which will help guarantee uniqueness,
> and waiting until just before you send the result will mean it is much
> more likely that the random pool will have collected some entropy from
> user I/O, thus making the random UUID not only unique, but also
> unpredictable.

I do like that idea, and it could be combined with the DMI data for the
system containing things like asset tracking numbers, etc. Could use HAL
to generate a UUID based on hardware IDs and feed that in as entropy ;-)

Jon.




Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Theodore Tso
On Sat, Dec 08, 2007 at 11:33:57AM -0600, Mike McGrath wrote:
>> Huh?  What's the concern?  All you are submitting is a list of
>> hardware devices in your system.  That's hardly anything sensitive
>
> We actually had a very vocal minority about all of that which ended up 
> putting us in the unfortunate position of generating a random UUID instead 
> of using a hardware UUID from hal :-/

Tinfoil hat responses indeed!  Ok, if those folks are really that
crazy, my suggestion then would be to do a "ifconfig -a > /dev/random"
before generating the UUID, and/or waiting until you're just about to
send the first profile, and/or if you don't yet have a UUID,
generating it at that very moment.  The first will mix in the MAC
address into the random pool, which will help guarantee uniqueness,
and waiting until just before you send the result will mean it is much
more likely that the random pool will have collected some entropy from
user I/O, thus making the random UUID not only unique, but also
unpredictable.

- Ted


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Jon Masters

On Sat, 2007-12-08 at 11:43 -0600, Matt Mackall wrote:
> On Sat, Dec 08, 2007 at 12:32:04PM -0500, Theodore Tso wrote:
> > On Sat, Dec 08, 2007 at 02:37:57AM -0500, Jon Masters wrote:
> > > > BTW, You may be better off using "uuidgen -t" to generate the UUID in
> > > > the smolt RPM, since that will use 12 bits of randomness from
> > > /dev/random, plus the MAC address and timestamp.  So even if there is
> > > > zero randomness in /dev/random, and the time is January 1, 1970, at
> > > > least the MAC will contribute some uniqueness to the UUID.
> > > 
> > > I haven't checked how uuidgen uses the MAC, but I would suggest that
> > > that is not something Fedora should jump at doing - although it would
> > > help ensure unique UUIDs, it also contributes to the tinfoil hat
> > > responses that usually come up with things like smolt.
> > 
> > Huh?  What's the concern?  All you are submitting is a list of
> > hardware devices in your system.  That's hardly anything sensitive
> 
> Using MAC addresses -does- de-anonymize things though and presumably
> anonymous collection is a stated goal.

Right. And the more I think about it, the more I think the solution is
going to be for the smolt server to generate the UUID and tell the
client about it. Anything else seems to be based on hope, especially
when you're installing via kickstart or similar type of process.

Jon.




Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Matt Mackall
On Sat, Dec 08, 2007 at 12:32:04PM -0500, Theodore Tso wrote:
> On Sat, Dec 08, 2007 at 02:37:57AM -0500, Jon Masters wrote:
> > > BTW, You may be better off using "uuidgen -t" to generate the UUID in
> > > the smolt RPM, since that will use 12 bits of randomness from
> > > /dev/random, plus the MAC address and timestamp.  So even if there is
> > > zero randomness in /dev/random, and the time is January 1, 1970, at
> > > least the MAC will contribute some uniqueness to the UUID.
> > 
> > I haven't checked how uuidgen uses the MAC, but I would suggest that
> > that is not something Fedora should jump at doing - although it would
> > help ensure unique UUIDs, it also contributes to the tinfoil hat
> > responses that usually come up with things like smolt.
> 
> Huh?  What's the concern?  All you are submitting is a list of
> hardware devices in your system.  That's hardly anything sensitive

Using MAC addresses -does- de-anonymize things though and presumably
anonymous collection is a stated goal.

-- 
Mathematics is the supreme nostalgia of our time.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Jon Masters

On Sat, 2007-12-08 at 12:32 -0500, Theodore Tso wrote:
> On Sat, Dec 08, 2007 at 02:37:57AM -0500, Jon Masters wrote:
> > > BTW, You may be better off using "uuidgen -t" to generate the UUID in
> > > the smolt RPM, since that will use 12 bits of randomness from
> > > /dev/random, plus the MAC address and timestamp.  So even if there is
> > > zero randomness in /dev/random, and the time is January 1, 1970, at
> > > least the MAC will contribute some uniqueness to the UUID.
> > 
> > I haven't checked how uuidgen uses the MAC, but I would suggest that
> > that is not something Fedora should jump at doing - although it would
> > help ensure unique UUIDs, it also contributes to the tinfoil hat
> > responses that usually come up with things like smolt.
> 
> Huh?  What's the concern?  All you are submitting is a list of
> hardware devices in your system.  That's hardly anything sensitive

Right, but the MAC is globally unique (well, that's normally true) so it
is an identifiable user characteristic, moreso than just a list of PCI
device IDs sitting on a particular bus. It's silly, but there it is.

Jon.




Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Mike McGrath

Theodore Tso wrote:
> On Sat, Dec 08, 2007 at 02:37:57AM -0500, Jon Masters wrote:
> > > BTW, You may be better off using "uuidgen -t" to generate the UUID in
> > > the smolt RPM, since that will use 12 bits of randomness from
> > > /dev/random, plus the MAC address and timestamp.  So even if there is
> > > zero randomness in /dev/random, and the time is January 1, 1970, at
> > > least the MAC will contribute some uniqueness to the UUID.
> > 
> > I haven't checked how uuidgen uses the MAC, but I would suggest that
> > that is not something Fedora should jump at doing - although it would
> > help ensure unique UUIDs, it also contributes to the tinfoil hat
> > responses that usually come up with things like smolt.
> 
> Huh?  What's the concern?  All you are submitting is a list of
> hardware devices in your system.  That's hardly anything sensitive

We actually had a very vocal minority about all of that which ended up 
putting us in the unfortunate position of generating a random UUID 
instead of using a hardware UUID from hal :-/


   -Mike


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Theodore Tso
On Sat, Dec 08, 2007 at 02:37:57AM -0500, Jon Masters wrote:
> > BTW, You may be better off using "uuidgen -t" to generate the UUID in
> > the smolt RPM, since that will use 12 bits of randomness from
> > /dev/random, plus the MAC address and timestamp.  So even if there is
> > zero randomness in /dev/random, and the time is January 1, 1970, at
> > least the MAC will contribute some uniqueness to the UUID.
> 
> I haven't checked how uuidgen uses the MAC, but I would suggest that
> that is not something Fedora should jump at doing - although it would
> help ensure unique UUIDs, it also contributes to the tinfoil hat
> responses that usually come up with things like smolt.

Huh?  What's the concern?  All you are submitting is a list of
hardware devices in your system.  That's hardly anything sensitive

 - Ted
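
As an editorial illustration of the time-based UUIDs discussed above (a sketch using Python's standard library, not smolt's actual code): uuid.uuid1() builds exactly this kind of identifier from a timestamp, a clock sequence, and a 48-bit node field that is normally the MAC address.

```python
import uuid

# Version-1 (time-based) UUID: 60-bit timestamp + clock sequence + node.
# The node field defaults to the host's MAC address, so even with a dead
# clock and an empty entropy pool the UUID stays unique across machines.
u = uuid.uuid1()

print(u.version)    # prints 1
print(hex(u.node))  # 48-bit node field; on most machines, the MAC address
```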


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Theodore Tso
On Sat, Dec 08, 2007 at 12:15:25PM -0600, Matt Mackall wrote:
> It might be better for us to just improve the pool initialization.
> That'll improve the out of the box experience for everyone.

Yeah, I agree.  Although keep in mind that while mixing in the MAC
address and DMI information (which we can either do in the kernel or
by trying to get all of the distros to add that into their
/etc/init.d/random script --- all several hundred or thousand distros
in the world :-) will help improve things like UUID uniqueness, it
doesn't necessarily guarantee /dev/urandom and UUID
*unpredictability*.  In order to do that we really do need to improve
the amount of hardware entropy we can mix into the system.  This is a
hard problem, but as more people are relying on these facilities, it's
something we need to think about quite a bit more!

   - Ted
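
Ted's distinction between uniqueness and unpredictability can be made concrete with a toy sketch (illustrative Python with a made-up MAC address; nothing here is the kernel's actual derivation): seeding a deterministic PRNG with a machine identifier gives values that differ across machines yet are trivially recomputable by anyone who knows the identifier.

```python
import random

def toy_uuid(machine_id: str) -> int:
    """Derive a 128-bit value from a machine identifier alone:
    unique per machine, but fully predictable given the identifier."""
    rng = random.Random(machine_id)  # deterministic seeding
    return rng.getrandbits(128)

a = toy_uuid("00:16:3e:00:00:01")  # hypothetical MAC address
b = toy_uuid("00:16:3e:00:00:02")
assert a != b                              # unique across machines...
assert a == toy_uuid("00:16:3e:00:00:01")  # ...but anyone can recompute it
```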


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Jeff Garzik

Theodore Tso wrote:
> On Sat, Dec 08, 2007 at 11:33:57AM -0600, Mike McGrath wrote:
> > > Huh?  What's the concern?  All you are submitting is a list of
> > > hardware devices in your system.  That's hardly anything sensitive
> > 
> > We actually had a very vocal minority about all of that which ended up 
> > putting us in the unfortunate position of generating a random UUID instead 
> > of using a hardware UUID from hal :-/
> 
> Tinfoil hat responses indeed!  Ok, if those folks are really that
> crazy, my suggestion then would be to do a "ifconfig -a > /dev/random"

heh, along those lines you could also do

	dmesg > /dev/random

grin

dmesg often has machine-unique identifiers of all sorts (including the 
MAC address, if you have an ethernet driver loaded)

	Jeff





entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Jeff Garzik


As an aside...

Speaking as the maintainer of rng-tools, which is the home of the hardware 
RNG entropy gathering daemon...

I wish somebody (not me) would take rngd and several other projects, and 
combine them into a single actively maintained entropy gathering package.

IMO entropy gathering has been a long-standing need for headless network 
servers (and now virtual machines).

In addition to rngd for hardware RNGs, I've seen daemons out there that 
gather from audio and video sources (generally open wires/channels with 
nothing plugged in), thermal sources, etc.  There is a lot of entropy 
that could be gathered via userland, if you think creatively.

	Jeff





Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Matt Mackall
On Sat, Dec 08, 2007 at 02:36:33PM -0500, Jeff Garzik wrote:
> As an aside...
> 
> Speaking as the maintainer of rng-tools, which is the home of the hardware 
> RNG entropy gathering daemon...
> 
> I wish somebody (not me) would take rngd and several other projects, and 
> combine them into a single actively maintained entropy gathering package.

I think we should re-evaluate having an internal path from the hwrngs
to /dev/[u]random, which will reduce the need for userspace config
that can go wrong.

-- 
Mathematics is the supreme nostalgia of our time.


Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Jeff Garzik

Matt Mackall wrote:
> On Sat, Dec 08, 2007 at 02:36:33PM -0500, Jeff Garzik wrote:
> > As an aside...
> > 
> > Speaking as the maintainer of rng-tools, which is the home of the hardware 
> > RNG entropy gathering daemon...
> > 
> > I wish somebody (not me) would take rngd and several other projects, and 
> > combine them into a single actively maintained entropy gathering package.
> 
> I think we should re-evaluate having an internal path from the hwrngs
> to /dev/[u]random, which will reduce the need for userspace config
> that can go wrong.

That's a bit of a tangent on a tangent.  :)  Most people don't have a 
hardware RNG.

But as long as there are adequate safeguards against common hardware 
failures (read: FIPS testing inside the kernel), go for it.

	Jeff





Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Matt Mackall
On Sat, Dec 08, 2007 at 03:04:32PM -0500, Jeff Garzik wrote:
> Matt Mackall wrote:
> > On Sat, Dec 08, 2007 at 02:36:33PM -0500, Jeff Garzik wrote:
> > > As an aside...
> > > 
> > > Speaking as the maintainer of rng-tools, which is the home of the hardware 
> > > RNG entropy gathering daemon...
> > > 
> > > I wish somebody (not me) would take rngd and several other projects, and 
> > > combine them into a single actively maintained entropy gathering package.
> > 
> > I think we should re-evaluate having an internal path from the hwrngs
> > to /dev/[u]random, which will reduce the need for userspace config
> > that can go wrong.
> 
> That's a bit of a tangent on a tangent.  :)  Most people don't have a 
> hardware RNG.
> 
> But as long as there are adequate safeguards against common hardware 
> failures (read: FIPS testing inside the kernel), go for it.

We can do some internal whitening and some other basic tests
(obviously not the full FIPS battery). The basic von Neumann whitening
will do a great job of shutting off the spigot when an RNG fails in a
non-nefarious way. And FIPS stuff is no defense against the nefarious
failures anyway.

But I think simply dividing our entropy estimate by 10 or so will go
an awfully long way.

--
Mathematics is the supreme nostalgia of our time.
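
The von Neumann whitening Matt mentions is simple enough to sketch (an illustration of the technique itself, not the kernel's code): bits are consumed in pairs, 01 emits 0, 10 emits 1, and 00/11 emit nothing, so a source that fails by getting stuck at a constant value simply stops producing output.

```python
def von_neumann(bits):
    """Von Neumann extractor: debias independent coin flips.
    Pair (0,1) emits 0, pair (1,0) emits 1; (0,0) and (1,1) are dropped."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

print(von_neumann([0, 1, 1, 0, 1, 1, 0, 0]))  # [0, 1]
print(von_neumann([1] * 100))                 # [] -- a stuck RNG goes silent
```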


RE: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread David Schwartz

> heh, along those lines you could also do
> 
> 	dmesg > /dev/random
> 
> grin
> 
> dmesg often has machine-unique identifiers of all sorts (including the
> MAC address, if you have an ethernet driver loaded)
> 
> 	Jeff

A good three-part solution would be:

1) Encourage distributions to do "dmesg > /dev/random" in their startup
scripts. This could even be added to the kernel (as a one-time dump of the
kernel message buffer just before init is started).

2) Encourage drivers to output any unique information to the kernel log. I
believe all/most Ethernet drivers already do this with MAC addresses.
Perhaps we can get the kernel to include CPU serial numbers and we can get
the IDE/SATA drivers to include hard drive serial numbers. We can also use
the TSC, where available, in early bootup, which measures exactly how long
it took to get the kernel going, which should have some entropy in it.

3) Add more entropy to the kernel's pool at early startup, even if the
quality of that entropy is low. Track it appropriately, of course.

This should be enough to get cryptographically-strong random numbers that
would hold up against anyone who didn't have access to the 'dmesg' output.

DS
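
Steps 1) and 3) might be sketched in userspace as follows (an illustration, not an existing init script; it assumes a Linux system where /dev/urandom is writable, which is the norm, and relies on the fact that writes mix data into the pool without crediting any entropy):

```python
import glob
import subprocess

def seed_pool_from_boot_identifiers(pool_path="/dev/urandom"):
    """Mix boot-time machine identifiers into the kernel pool.

    Writing to /dev/urandom never increases the kernel's entropy
    estimate, so feeding it low-quality or attacker-visible data is
    harmless -- it can only add to what is already in the pool.
    Returns the number of bytes written."""
    chunks = []
    try:
        # dmesg output: MAC addresses, serial numbers, boot timings.
        chunks.append(subprocess.run(["dmesg"], capture_output=True).stdout)
    except OSError:
        pass  # dmesg binary missing; skip it
    for path in glob.glob("/sys/class/net/*/address"):
        with open(path, "rb") as f:
            chunks.append(f.read())  # interface MAC addresses
    data = b"".join(chunks)
    with open(pool_path, "wb") as f:
        f.write(data)
    return len(data)
```

Crediting entropy for such data (step 3's "track it appropriately") would instead require the privileged RNDADDENTROPY ioctl.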




Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Theodore Tso
On Sat, Dec 08, 2007 at 03:04:32PM -0500, Jeff Garzik wrote:
> That's a bit of a tangent on a tangent.  :)  Most people don't have a 
> hardware RNG.

Actually, most Business class laptops from IBM/Lenovo, HP, Dell,
Fujitsu, and Sony laptops *do* have TPM chips that among other things,
contain a slow but (supposedly, if the TPM microprocessors are to be
believed) secure hardware random number generator for use as a session
key generator.  This is thanks to various US legal mandates, such as
HIPAA for the medical industry, and not just the paranoid ravings of
the MPAA and RIAA.  :-)

The problem is enabling the TPM isn't trivial, and life gets harder if
you want the TPM chip to simultaneously work on dual-boot machines for
both Windows and Linux, but it is certainly doable.

> I think we should re-evaluate having an internal path from the hwrngs
> to /dev/[u]random, which will reduce the need for userspace config
> that can go wrong.

I think the userspace config problems were mainly due to the fact that
there wasn't a single official userspace utility package for the
random number package.  Comments in drivers/char/random.c for how to
set up /etc/init.d/random is Just Not Enough.

If we had a single, official random number generator package that
contained the configuration, init.d script, as well as the daemon that
can do all sorts of different things that you really, Really, REALLY
want to do in userspace, including:

  * FIPS testing (as Jeff suggested --- making sure what you think is 
randomness isn't 60Hz hum is a Really Good Idea :-)
  * access to TPM (if available --- I have a vague memory that you may
need access to the TPM key to access any of its functions, and the
TPM key is stored in the filesystem)

So ... anyone interested in belling the metaphorical cat?  :-)

  - Ted


Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Jeff Garzik

Theodore Tso wrote:

I think the userspace config problems were mainly due to the fact that
there wasn't a single official userspace utility package for the
random number package.  Comments in drivers/char/random.c for how to
set up /etc/init.d/random is Just Not Enough.


Absolutely.



If we had a single, official random number generator package that
contained the configuration, init.d script, as well as the daemon that
can do all sorts of different things that you really, Really, REALLY
want to do in userspace, including:

  * FIPS testing (as Jeff suggested --- making sure what you think is 
randomness isn't 60Hz hum is a Really Good Idea :-)

  * access to TPM (if available --- I have a vague memory that you may
  * access to TPM (if available --- I have a vague memory that you may
need access to the TPM key to access any of its functions, and the
TPM key is stored in the filesystem)


+1 agreed

(not volunteering, but I will cheer on the hearty soul who undertakes 
this endeavor...)




Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Willy Tarreau
On Sat, Dec 08, 2007 at 02:36:33PM -0500, Jeff Garzik wrote:
 
 As an aside...
 
 Speaking as the maintainer rng-tools, which is the home of the hardware 
 RNG entropy gathering daemon...
 
 I wish somebody (not me) would take rngd and several other projects, and 
 combine them into a single actively maintained entropy gathering package.
 
 IMO entropy gathering has been a long-standing need for headless network 
 servers (and now virtual machines).
 
 In addition to rngd for hardware RNGs, I've seen daemons out there that 
 gather from audio and video sources (generally open wires/channels with 
 nothing plugged in), thermal sources, etc.  There is a lot of entropy 
 that could be gathered via userland, if you think creatively.

I remember having installed openssh on an AIX machine years ago, and
being amazed by the number of sources it collected entropy from. Simple
commands such as "ifconfig -a", "netstat -i", "du -a", "ps -ef" and "w"
provided a lot of entropy.

Regards,
Willy



Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Willy Tarreau
On Sat, Dec 08, 2007 at 02:19:54PM -0600, Matt Mackall wrote:
 On Sat, Dec 08, 2007 at 03:04:32PM -0500, Jeff Garzik wrote:
  Matt Mackall wrote:
  On Sat, Dec 08, 2007 at 02:36:33PM -0500, Jeff Garzik wrote:
  As an aside...
  
  Speaking as the maintainer rng-tools, which is the home of the hardware 
  RNG entropy gathering daemon...
  
  I wish somebody (not me) would take rngd and several other projects, and 
  combine them into a single actively maintained entropy gathering 
  package.
  
  I think we should re-evaluate having an internal path from the hwrngs
  to /dev/[u]random, which will reduce the need for userspace config
  that can go wrong.
  
  That's a bit of a tangent on a tangent.  :)  Most people don't have a 
  hardware RNG.
  
  But as long as there are adequate safeguards against common hardware 
  failures (read: FIPS testing inside the kernel), go for it.
 
 We can do some internal whitening and some other basic tests
 (obviously not the full FIPS battery). The basic von Neumann whitening
 will do a great job of shutting off the spigot when an RNG fails in a
 non-nefarious way. And FIPS stuff is no defense against the nefarious
 failures anyway.
 
 But I think simply dividing our entropy estimate by 10 or so will go
 an awfully long way.

Agreed. The example program you posted does a very good job. I intuitively
thought that it would show best results where CPU clock  system clock,
but even with a faster clock (gettimeofday()) and a few tricks, it provides
an excellent whitened output even at high speed. In fact, it has the advantage
of automatically adjusting its speed to the source clock resolution, which
ensures we don't return long runs of zeroes or ones. Here's my slightly
modified version to extract large amounts of data from gettimeofday(),
followed by test results :

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

int get_clock()
{
        struct timeval tv;
        unsigned i;

        gettimeofday(&tv, NULL);

        i = tv.tv_usec ^ tv.tv_sec;
        i = (i ^ (i >> 16)) & 0xffff;
        i = (i ^ (i >> 8)) & 0xff;
        i = (i ^ (i >> 4)) & 0xf;
        i = (i ^ (i >> 2)) & 0x3;
        i = (i ^ (i >> 1)) & 0x1;
        return i;
}

int get_raw_timing_bit(void)
{
        int parity = 0;
        int start = get_clock();

        while (start == get_clock()) {
                parity++;
        }
        return parity & 1;
}

int get_whitened_timing_bit(void) {
        int a, b;

        while (1) {
                // ensure we restart without the time offset from the
                // failed tests.
                get_raw_timing_bit();
                a = get_raw_timing_bit();
                b = get_raw_timing_bit();
                if (a > b)
                        return 1;
                if (b > a)
                        return 0;
        }
}

int main(void)
{
        int i;

        while (1) {
                for (i = 0; i < 64; i++) {
                        int j, k;
                        // variable-length eating 2N values per bit, looking
                        // for changing values.
                        do {
                                j = get_whitened_timing_bit();
                                k = get_whitened_timing_bit();
                        } while (j == k);
                        printf("%d", j);
                }

                printf("\n");
        }
}

On my athlon 1.5 GHz with HZ=250, it produces about 40 kb/second. On an
IXP420 at 266 MHz with HZ=100, it produces about 6 kb/s. On a VAX VLC4000
at around 60 MHz under openbsd, it produces about 6 bits/s. In all cases,
the output data looks well distributed :

[EMAIL PROTECTED]:~$ for i in entropy.out.*; do echo $i :; z=$(tr -cd '0' < $i | wc -c); o=$(tr -cd '1' < $i | wc -c); echo $z zeroes, $o ones; done
entropy.out.k7 :
159811 zeroes, 166861 ones
entropy.out.nslu2 :
23786 zeroes, 24610 ones
entropy.out.vax :
687 zeroes, 657 ones

And there are very few long runs, the data is not compressible :

[EMAIL PROTECTED]:~$ for i in entropy.out.*; do echo -n "$i : "; u=$(tr -dc '01' < $i | wc -c); c=$(tr -dc '01' < $i | gzip -c9 | wc -c); echo $(echo $u/$c | bc -l) digits/gzip byte; done
entropy.out.k7 : 6.67672246407913830809 digits/gzip byte
entropy.out.nslu2 : 6.27460132244262932711 digits/gzip byte
entropy.out.vax : 4.74911660777385159010 digits/gzip byte

Here are the 4 first output lines of the k7 version :
010001001100111011100100100011011101010000001000
10110101011010101110111010001011010001101101000101011011
110000100101111101001110101001010110110001111000
01110010100110100010010010111000100100101100011010011100

I found no unique line out of 1. I think the fact that the
clock source used by gettimeofday() is not completely coupled
with the TSC makes this possible. If we had used rdtsc() instead
of gettimeofday(), we might have gotten really strange patterns.
It's possible that peeking around out-of-cache memory data would
add random bus latency to the 

Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Adrian Bunk
On Thu, Dec 06, 2007 at 02:32:05PM -0500, Bill Davidsen wrote:
...
 Sounds like a local DoS attack point to me...

As long as /dev/random is readable for all users there's no reason to 
use /dev/urandom for a local DoS...

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Ismail Dönmez
On Sunday 09 December 2007 00:03:45, Adrian Bunk wrote:
 On Thu, Dec 06, 2007 at 02:32:05PM -0500, Bill Davidsen wrote:
 ...
  Sounds like a local DoS attack point to me...

 As long as /dev/random is readable for all users there's no reason to
 use /dev/urandom for a local DoS...

Draining entropy in /dev/urandom means that insecure and possibly non-random 
data will be used, and, well, that's a security bug if not a DoS bug.

And yes this is by design, sigh.

-- 
Never learn by your mistakes, if you do you may never dare to try again.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Theodore Tso
On Sun, Dec 09, 2007 at 12:10:10AM +0200, Ismail Dönmez wrote:
  As long as /dev/random is readable for all users there's no reason to
  use /dev/urandom for a local DoS...
 
 Draining entropy in /dev/urandom means that insecure and possibly not random 
 data will be used and well thats a security bug if not a DoS bug.

Actually in modern 2.6 kernels there are two separate output entropy
pools for /dev/random and /dev/urandom.  So assuming that the
adversary doesn't know the contents of the current state of the
entropy pool (i.e., the RNG is well seeded with entropy), you can read
all you want from /dev/urandom and that won't give an adversary
successful information to attack /dev/random.

- Ted


Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Theodore Tso
On Sat, Dec 08, 2007 at 09:42:39PM +0100, Willy Tarreau wrote:
 I remember having installed openssh on an AIX machines years ago, and
 being amazed by the number of sources it collected entropy from. Simple
 commands such as ifconfig -a, netstat -i and du -a, ps -ef, w
 provided a lot of entropy.

Well, not as many bits of entropy as you might think.  But every
little bit helps, especially if some of it is not available to the
adversary.

- Ted


Re: entropy gathering (was Re: Why does reading from /dev/urandom deplete entropy so much?)

2007-12-08 Thread Jon Masters

On Sat, 2007-12-08 at 18:47 -0500, Theodore Tso wrote:
 On Sat, Dec 08, 2007 at 09:42:39PM +0100, Willy Tarreau wrote:
  I remember having installed openssh on an AIX machines years ago, and
  being amazed by the number of sources it collected entropy from. Simple
  commands such as ifconfig -a, netstat -i and du -a, ps -ef, w
  provided a lot of entropy.
 
 Well, not as many bits of entropy as you might think.  But every
 little bit helps, especially if some of it is not available to the
 adversary.

I was always especially fond of the du entropy source with Solaris
installations of OpenSSH (the PRNG used commands like du too). It was
always amusing that a single network outage at the University would
prevent anyone from ssh'ing into the UNIX machines. So yeah, if we
want to take a giant leap backwards, I suggest jumping at this.

Lots of these are not actually random - you can guess the free space on
a network drive in some certain cases, you know what processes are
likely to be created on a LiveCD, and many dmesg outputs are very
similar, especially when there aren't precise timestamps included.

But I do think it's time some of this got addressed :-)

Cheers,

Jon.




Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Willy Tarreau
On Sat, Dec 08, 2007 at 06:46:12PM -0500, Theodore Tso wrote:
 On Sun, Dec 09, 2007 at 12:10:10AM +0200, Ismail Dönmez wrote:
   As long as /dev/random is readable for all users there's no reason to
   use /dev/urandom for a local DoS...
  
  Draining entropy in /dev/urandom means that insecure and possibly not 
  random 
  data will be used and well thats a security bug if not a DoS bug.
 
 Actually in modern 2.6 kernels there are two separate output entropy
 pools for /dev/random and /dev/urandom.  So assuming that the
 adversary doesn't know the contents of the current state of the
 entropy pool (i.e., the RNG is well seeded with entropy), you can read
 all you want from /dev/urandom and that won't give an adversary
 successful information to attack /dev/random.

Wouldn't it be possible to mix the data with the pid+uid of the reading
process so that even if another one tries to collect data from urandom,
he cannot predict what another process will get ? BTW, I think that the
tuple (pid,uid,timestamp of open) is unpredictable and uncontrollable
enough to provide one or even a few bits of entropy by itself.

Regards,
Willy



Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Ismail Dönmez
On Sunday 09 December 2007 01:46:12, Theodore Tso wrote:
 On Sun, Dec 09, 2007 at 12:10:10AM +0200, Ismail Dönmez wrote:
   As long as /dev/random is readable for all users there's no reason to
   use /dev/urandom for a local DoS...
 
  Draining entropy in /dev/urandom means that insecure and possibly not
  random data will be used and well thats a security bug if not a DoS bug.

 Actually in modern 2.6 kernels there are two separate output entropy
 pools for /dev/random and /dev/urandom.  So assuming that the
 adversary doesn't know the contents of the current state of the
 entropy pool (i.e., the RNG is well seeded with entropy), you can read
 all you want from /dev/urandom and that won't give an adversary
 successful information to attack /dev/random.

My understanding was that if you can drain entropy from /dev/urandom, any 
further reads from /dev/urandom will result in data which is not random at 
all. Is that wrong?

Regards,
ismail

-- 
Never learn by your mistakes, if you do you may never dare to try again.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-08 Thread Jon Masters
On Sun, 2007-12-09 at 06:21 +0100, Willy Tarreau wrote:

 Wouldn't it be possible to mix the data with the pid+uid of the reading
 process so that even if another one tries to collect data from urandom,
 he cannot predict what another process will get ? BTW, I think that the
 tuple (pid,uid,timestamp of open) is unpredictable and uncontrollable
 enough to provide one or even a few bits of entropy by itself.

Timestamp perhaps, but pid/uid are trivially guessable in automated
environments, such as LiveCDs. And if you're also running on an embedded
system without a RTC (common, folks like to save a few cents) then it's
all pretty much trivially guessable on some level.

Jon.





Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-07 Thread Jon Masters

On Wed, 2007-12-05 at 09:49 -0500, Theodore Tso wrote:
> On Wed, Dec 05, 2007 at 08:26:19AM -0600, Mike McGrath wrote:
> >
> > Ok, whats going on here is an issue with how the smolt RPM installs the 
> > UUID and how Fedora's Live CD does an install.  It's a complete false alarm 
> > on the kernel side, sorry for the confusion.
> 
> BTW, You may be better off using "uuidgen -t" to generate the UUID in
> the smolt RPM, since that will use 12 bits of randomness from
> /dev/random, plus the MAC address and timestamp.  So even if there is
> zero randomness in /dev/random, and the time is January 1, 1970, at
> least the MAC will contribute some uniqueness to the UUID.

I haven't checked how uuidgen uses the MAC, but I would suggest that
that is not something Fedora should jump at doing - although it would
help ensure unique UUIDs, it also contributes to the tinfoil hat
responses that usually come up with things like smolt.

Jon.






Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-06 Thread Bill Davidsen

Matt Mackall wrote:

On Tue, Dec 04, 2007 at 08:54:52AM -0800, Ray Lee wrote:

(Why hasn't anyone been cc:ing Matt on this?)

On Dec 4, 2007 8:18 AM, Adrian Bunk <[EMAIL PROTECTED]> wrote:

On Tue, Dec 04, 2007 at 12:41:25PM +0100, Marc Haber wrote:


While debugging Exim4's GnuTLS interface, I recently found out that
reading from /dev/urandom depletes entropy as much as reading from
/dev/random would. This has somehow surprised me since I have always
believed that /dev/urandom has lower quality entropy than /dev/random,
but lots of it.

man 4 random


This also means that I can "sabotage" applications reading from
/dev/random just by continuously reading from /dev/urandom, even not
meaning to do any harm.

Before I file a bug on bugzilla,
...

The bug would be closed as invalid.

No matter what you consider as being better, changing a 12 years old and
widely used userspace interface like /dev/urandom is simply not an
option.

You seem to be confused. He's not talking about changing any userspace
interface, merely how the /dev/urandom data is generated.

For Matt's benefit, part of the original posting:


Before I file a bug on bugzilla, can I ask why /dev/urandom wasn't
implemented as a PRNG which is periodically (say, every 1024 bytes or
even more) seeded from /dev/random? That way, /dev/random has a much
higher chance of holding enough entropy for applications that really
need "good" entropy.

A PRNG is clearly unacceptable. But roughly restated, why not have
/dev/urandom supply merely cryptographically strong random numbers,
rather than a mix between the 'true' random of /dev/random down to the
cryptographically strong stream it'll provide when /dev/random is
tapped? In principle, this'd leave more entropy available for
applications that really need it, especially on platforms that don't
generate a lot of entropy in the first place (servers).


The original /dev/urandom behavior was to use all the entropy that was
available, and then degrade into a pure PRNG when it was gone. The
intent is for /dev/urandom to be precisely as strong as /dev/random
when entropy is readily available.

The current behavior is to deplete the pool when there is a large
amount of entropy, but to always leave enough entropy for /dev/random
to be read. This means we never completely starve the /dev/random
side. The default amount is twice the read wakeup threshold (128
bits), settable in /proc/sys/kernel/random/.

In another post I suggested having a minimum bound (use no entropy) and 
a maximum bound (grab some entropy) with the idea that between these 
values some limited entropy could be used. I have to wonder if the 
entropy available is at least as unpredictable as the entropy itself.



But there's really not much point in changing this threshold. If
you're reading the /dev/random side at the same rate or more often
than entropy is appearing, you'll run out regardless of how big your
buffer is.

Right, my thought is to throttle user + urandom use such that the total 
stays below the available entropy. I had forgotten that that was a lower 
bound, although it's kind of an on-off toggle rather than proportional. 
Clearly if you care about this a *lot* you will use a hardware RNG.


Thanks for the reminder on read_wakeup.

--
Bill Davidsen <[EMAIL PROTECTED]>
  "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-06 Thread Bill Davidsen

Adrian Bunk wrote:

On Tue, Dec 04, 2007 at 12:41:25PM +0100, Marc Haber wrote:


While debugging Exim4's GnuTLS interface, I recently found out that
reading from /dev/urandom depletes entropy as much as reading from
/dev/random would. This has somehow surprised me since I have always
believed that /dev/urandom has lower quality entropy than /dev/random,
but lots of it.


man 4 random


This also means that I can "sabotage" applications reading from
/dev/random just by continuously reading from /dev/urandom, even not
meaning to do any harm.

Before I file a bug on bugzilla,
...


The bug would be closed as invalid.

No matter what you consider as being better, changing a 12 years old and 
widely used userspace interface like /dev/urandom is simply not an 
option.


I don't see that he is proposing to change the interface, just how it 
gets the data it provides. Any program which depends on the actual data 
values it gets from urandom is pretty broken, anyway. I think that 
getting some entropy from network is a good thing, even if it's used 
only in urandom, and I would like a rational discussion of checking the 
random pool available when urandom is about to get random data, and 
perhaps having a lower and upper bound for pool size.


That is, if there is more than Nmax random data urandom would take some, 
if there was less than Nmin it wouldn't, and between them it would take 
data, but less often. This would improve the urandom quality in the best 
case, and protect against depleting the /dev/random entropy in low 
entropy systems. Where's the downside?


There has also been a lot of discussion over the years about improving 
the quality of urandom data, I don't personally think making the quality 
higher constitutes "changing a 12 years old and widely used userspace 
interface like /dev/urandom" either.


Sounds like a local DoS attack point to me...

--
Bill Davidsen <[EMAIL PROTECTED]>
  "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-06 Thread Matt Mackall
On Thu, Dec 06, 2007 at 08:02:33AM +0100, Eric Dumazet wrote:
> Matt Mackall a écrit :
> >On Tue, Dec 04, 2007 at 07:17:58PM +0100, Eric Dumazet wrote:
> >>Alan Cox a écrit :
> No matter what you consider as being better, changing a 12 years old 
> and widely used userspace interface like /dev/urandom is simply not an 
> option.
>    
> >>>Fixing it to be more efficient in its use of entropy and also fixing the
> >>>fact its not actually a good random number source would be worth looking
> >>>at however.
> >>> 
> >>Yes, since current behavior on network irq is very pessimistic.
> >
> >No, it's very optimistic. The network should not be trusted.
> 
> You keep saying that. I am referring to your previous attempts last year to 
> remove net drivers from sources of entropy. No real changes were done.

Dave and I are both a bit stubborn on this point. I've been meaning to
respin those patches..

> If the network should not be trusted, then a patch should make sure network 
> interrupts feed /dev/urandom but not /dev/random at all. (ie not calling 
> credit_entropy_store() at all)

Yes. My plan is to change the interface from SA_SAMPLE_RANDOM to
add_network_entropy. The SA_SAMPLE_RANDOM interface sucks because it
doesn't tell the core what kind of source it's dealing with.

> There is a big difference on get_cycles() and jiffies. You should try to 
> measure it on a typical x86_64 platform.

I'm well aware of that. We'd use get_cycles() exclusively, but it
returns zero on lots of platforms. We used to use sched_clock(), I
can't remember why that got changed.

> >Also, for future reference, patches for /dev/random go through me, not
> >through Dave.
> 
> Why ? David is the network maintainer, and he was the one who rejected your 
> previous patches.

Because I'm the /dev/random maintainer and it's considered the polite
thing to do, damnit.

-- 
Mathematics is the supreme nostalgia of our time.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-06 Thread Matt Mackall
On Thu, Dec 06, 2007 at 08:02:33AM +0100, Eric Dumazet wrote:
 Matt Mackall a ?crit :
 On Tue, Dec 04, 2007 at 07:17:58PM +0100, Eric Dumazet wrote:
 Alan Cox a ?crit :
 No matter what you consider as being better, changing a 12 years old 
 and widely used userspace interface like /dev/urandom is simply not an 
 option.

 Fixing it to be more efficient in its use of entropy and also fixing the
 fact its not actually a good random number source would be worth looking
 at however.
  
 Yes, since current behavior on network irq is very pessimistic.
 
 No, it's very optimistic. The network should not be trusted.
 
 You keep saying that. I am refering to your previous attempts last year to 
 remove net drivers from sources of entropy. No real changes were done.

Dave and I are both a bit stubborn on this point. I've been meaning to
respin those patches..

 If the network should not be trusted, then a patch should make sure network 
 interrupts feed /dev/urandom but not /dev/random at all. (ie not calling 
 credit_entropy_store() at all)

Yes. My plan is to change the interface from SA_SAMPLE_RANDOM to
add_network_entropy. The SA_SAMPLE_RANDOM interface sucks because it
doesn't tell the core what kind of source it's dealing with.

 There is a big difference on get_cycles() and jiffies. You should try to 
 measure it on a typical x86_64 platform.

I'm well aware of that. We'd use get_cycles() exclusively, but it
returns zero on lots of platforms. We used to use sched_clock(), I
can't remember why that got changed.

 Also, for future reference, patches for /dev/random go through me, not
 through Dave.
 
 Why ? David is the network maintainer, and he was the one who rejected your 
 previous patches.

Because I'm the /dev/random maintainer and it's considered the polite
thing to do, damnit.

-- 
Mathematics is the supreme nostalgia of our time.
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-06 Thread Bill Davidsen

Adrian Bunk wrote:

On Tue, Dec 04, 2007 at 12:41:25PM +0100, Marc Haber wrote:


While debugging Exim4's GnuTLS interface, I recently found out that
reading from /dev/urandom depletes entropy as much as reading from
/dev/random would. This has somehow surprised me since I have always
believed that /dev/urandom has lower quality entropy than /dev/random,
but lots of it.


man 4 random


This also means that I can sabotage applications reading from
/dev/random just by continuously reading from /dev/urandom, even not
meaning to do any harm.

Before I file a bug on bugzilla,
...


The bug would be closed as invalid.

No matter what you consider as being better, changing a 12 years old and 
widely used userspace interface like /dev/urandom is simply not an 
option.


I don't see that he is proposing to change the interface, just how it 
gets the data it provides. Any program which depends on the actual data 
values it gets from urandom is pretty broken, anyway. I think that 
getting some entropy from network is a good thing, even if it's used 
only in urandom, and I would like a rational discussion of checking the 
random pool available when urandom is about to get random data, and 
perhaps having a lower and upper bound for pool size.


That is, if there is more than Nmax random data urandom would take some, 
if there was less than Nmin it wouldn't, and between them it would take 
data, but less often. This would improve the urandom quality in the best 
case, and protect against depleting the /dev/random entropy in low 
entropy systems. Where's the downside?


There has also been a lot of discussion over the years about improving 
the quality of urandom data. I don't personally think making the quality 
higher constitutes changing a 12-year-old and widely used userspace 
interface like /dev/urandom either.


Sounds like a local DoS attack point to me...

--
Bill Davidsen [EMAIL PROTECTED]
  We have more to fear from the bungling of the incompetent than from
the machinations of the wicked.  - from Slashdot


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-06 Thread Bill Davidsen

Matt Mackall wrote:

On Tue, Dec 04, 2007 at 08:54:52AM -0800, Ray Lee wrote:

(Why hasn't anyone been cc:ing Matt on this?)

On Dec 4, 2007 8:18 AM, Adrian Bunk [EMAIL PROTECTED] wrote:

On Tue, Dec 04, 2007 at 12:41:25PM +0100, Marc Haber wrote:


While debugging Exim4's GnuTLS interface, I recently found out that
reading from /dev/urandom depletes entropy as much as reading from
/dev/random would. This has somehow surprised me since I have always
believed that /dev/urandom has lower quality entropy than /dev/random,
but lots of it.

man 4 random


This also means that I can sabotage applications reading from
/dev/random just by continuously reading from /dev/urandom, even not
meaning to do any harm.

Before I file a bug on bugzilla,
...

The bug would be closed as invalid.

No matter what you consider as being better, changing a 12-year-old and
widely used userspace interface like /dev/urandom is simply not an
option.

You seem to be confused. He's not talking about changing any userspace
interface, merely how the /dev/urandom data is generated.

For Matt's benefit, part of the original posting:


Before I file a bug on bugzilla, can I ask why /dev/urandom wasn't
implemented as a PRNG which is periodically (say, every 1024 bytes or
even more) seeded from /dev/random? That way, /dev/random has a much
higher chance of holding enough entropy for applications that really
need good entropy.

A PRNG is clearly unacceptable. But roughly restated, why not have
/dev/urandom supply merely cryptographically strong random numbers,
rather than a mix between the 'true' random of /dev/random down to the
cryptographically strong stream it'll provide when /dev/random is
tapped? In principle, this'd leave more entropy available for
applications that really need it, especially on platforms that don't
generate a lot of entropy in the first place (servers).


The original /dev/urandom behavior was to use all the entropy that was
available, and then degrade into a pure PRNG when it was gone. The
intent is for /dev/urandom to be precisely as strong as /dev/random
when entropy is readily available.

The current behavior is to deplete the pool when there is a large
amount of entropy, but to always leave enough entropy for /dev/random
to be read. This means we never completely starve the /dev/random
side. The default amount is twice the read wakeup threshold (128
bits), settable in /proc/sys/kernel/random/.
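The accounting Matt describes is visible from userspace via /proc. A minimal sketch (the file names are the real sysctl entries; the helper function itself is hypothetical):

```python
from pathlib import Path

def random_pool_status(base: str = "/proc/sys/kernel/random") -> dict:
    """Read the kernel's entropy accounting, where the files exist."""
    status = {}
    for name in ("poolsize", "entropy_avail", "read_wakeup_threshold"):
        path = Path(base) / name
        if path.exists():
            status[name] = int(path.read_text())
    return status

# Per the description above, urandom stops drawing credited entropy once the
# pool falls to roughly 2 * read_wakeup_threshold bits, so /dev/random
# readers are never completely starved.
print(random_pool_status())
```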

In another post I suggested having a minimum bound (use no entropy) and 
a maximum bound (grab some entropy), with the idea that between these 
values some limited entropy could be used. I have to wonder whether the 
amount of entropy available is at least as unpredictable as the entropy itself.



But there's really not much point in changing this threshold. If
you're reading the /dev/random side at the same rate or more often
than entropy is appearing, you'll run out regardless of how big your
buffer is.

Right, my thought is to throttle user + urandom use such that the total 
stays below the available entropy. I had forgotten that there was a lower 
bound, although it's kind of an on-off toggle rather than proportional. 
Clearly if you care about this a *lot* you will use a hardware RNG.


Thanks for the reminder on read_wakeup.

--
Bill Davidsen [EMAIL PROTECTED]
  We have more to fear from the bungling of the incompetent than from
the machinations of the wicked.  - from Slashdot


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-05 Thread Eric Dumazet

Matt Mackall wrote:

On Tue, Dec 04, 2007 at 07:17:58PM +0100, Eric Dumazet wrote:

Alan Cox wrote:
No matter what you consider as being better, changing a 12 years old and 
widely used userspace interface like /dev/urandom is simply not an 
option.
   

Fixing it to be more efficient in its use of entropy and also fixing the
fact its not actually a good random number source would be worth looking
at however.
 

Yes, since current behavior on network irq is very pessimistic.


No, it's very optimistic. The network should not be trusted.


You keep saying that. I am referring to your previous attempts last year to 
remove net drivers from the sources of entropy. No real changes were done.


If the network should not be trusted, then a patch should make sure network 
interrupts feed /dev/urandom but not /dev/random at all (i.e., not calling 
credit_entropy_store() at all).




The distinction between /dev/random and /dev/urandom boils down to one
word: paranoia. If you are not paranoid enough to mistrust your
network, then /dev/random IS NOT FOR YOU. Use /dev/urandom. Do not
send patches to make /dev/random less paranoid, kthxbye.


I have many tg3 adapters on my servers, receiving thousands of interrupts per 
second and calling add_timer_randomness(). I would like to either:

- Make sure this stuff is doing useful work.
- Make improvements to reduce the CPU time used.

I do not use /dev/urandom and/or /dev/random, but I know David won't accept a 
patch to remove IRQF_SAMPLE_RANDOM from tg3.c.


Currently, I see that the implementation is suboptimal because it calls 
credit_entropy_store() with nbits=0 forever.




If you have some traffic (i.e., more than HZ/2 interrupts per second), 
then add_timer_randomness() feeds some entropy but gives no credit 
(calling credit_entropy_store() with nbits=0).

This is because we take into account only the jiffies difference, and 
not get_cycles(), which should give us more entropy on most platforms.


If we cannot measure a difference, we should nonetheless assume there
is one?


There is a big difference between get_cycles() and jiffies. You should try to 
measure it on a typical x86_64 platform.
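Eric's point can be checked from userspace with an analogous pair of clocks. A rough sketch (a coarse ~10 ms tick standing in for jiffies, a nanosecond counter standing in for get_cycles(); the analogy is an assumption of this sketch, not kernel code):

```python
import time

# Take 1000 back-to-back samples from each clock and count distinct values.
coarse = {int(time.monotonic() * 100) for _ in range(1000)}  # ~10 ms "jiffies"
fine = {time.perf_counter_ns() for _ in range(1000)}         # ns "cycle counter"

# The fine clock yields far more distinct readings per event, i.e. more
# usable timing entropy; the coarse one collapses to a handful of values.
print(len(coarse), "coarse values vs", len(fine), "fine values")
```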


 
In this patch, I suggest that we feed only one u32 word of entropy, a 
combination of the previously distinct words (with some of them being 
constant or so), so that the nbits estimation is less pessimistic, but 
also to avoid injecting false entropy.


Umm.. no, that's not how it works at all.

Also, for future reference, patches for /dev/random go through me, not
through Dave.



Why? David is the network maintainer, and he was the one who rejected your 
previous patches.



Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-05 Thread Matt Mackall
On Tue, Dec 04, 2007 at 07:17:58PM +0100, Eric Dumazet wrote:
> Alan Cox wrote:
> >>No matter what you consider as being better, changing a 12 years old and 
> >>widely used userspace interface like /dev/urandom is simply not an 
> >>option.
> >>
> >
> >Fixing it to be more efficient in its use of entropy and also fixing the
> >fact its not actually a good random number source would be worth looking
> >at however.
> >  
> Yes, since current behavior on network irq is very pessimistic.

No, it's very optimistic. The network should not be trusted.

The distinction between /dev/random and /dev/urandom boils down to one
word: paranoia. If you are not paranoid enough to mistrust your
network, then /dev/random IS NOT FOR YOU. Use /dev/urandom. Do not
send patches to make /dev/random less paranoid, kthxbye.

> If you have some traffic (i.e., more than HZ/2 interrupts per second), 
> then add_timer_randomness() feeds
> some entropy but gives no credit (calling credit_entropy_store() with 
> nbits=0)
> 
> This is because we take into account only the jiffies difference, and 
> not the get_cycles() that should give
> us more entropy on most platforms.

If we cannot measure a difference, we should nonetheless assume there
is one?
 
> In this patch, I suggest that we feed only one u32 word of entropy, 
> combination of the previous distinct
> words (with some of them being constant or so), so that the nbits 
> estimation is less pessimistic, but also to
> avoid injecting false entropy.

Umm.. no, that's not how it works at all.

Also, for future reference, patches for /dev/random go through me, not
through Dave.

-- 
Mathematics is the supreme nostalgia of our time.


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-05 Thread Marc Haber
On Wed, Dec 05, 2007 at 08:33:20AM -0500, Theodore Tso wrote:
> BTW, note that it would be a polite thing for GnuTLS, when it is
> encrypting data (which represents information that might not be
> available to an adversary), to SHA1-hash it (out of paranoia) and feed
> it to /dev/random.
> 
> This won't give any "credits" to the random entropy counter, but to
> the extent that it is information that isn't available to the adversary,
> it adds additional uncertainty to the random pool.

I have filed this as https://savannah.gnu.org/support/index.php?106113

Thanks for suggesting.

Greetings
Marc

-- 
-
Marc Haber | "I don't trust Computers. They | Mailadresse im Header
Mannheim, Germany  |  lose things."Winona Ryder | Fon: *49 621 72739834
Nordisch by Nature |  How to make an American Quilt | Fax: *49 3221 2323190


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-05 Thread Theodore Tso
On Wed, Dec 05, 2007 at 08:26:19AM -0600, Mike McGrath wrote:
>
> Ok, what's going on here is an issue with how the smolt RPM installs the 
> UUID and how Fedora's Live CD does an install.  It's a complete false alarm 
> on the kernel side; sorry for the confusion.

BTW, you may be better off using "uuidgen -t" to generate the UUID in
the smolt RPM, since that will use 12 bits of randomness from
/dev/random, plus the MAC address and timestamp.  So even if there is
zero randomness in /dev/random, and the time is January 1, 1970, at
least the MAC will contribute some uniqueness to the UUID.
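For reference, Python's `uuid.uuid1()` builds the same kind of version-1 (time-based) UUID as `uuidgen -t`: a 60-bit timestamp, a clock sequence, and the host's MAC as the node field:

```python
import uuid

# Generate a time-based (version 1) UUID, as "uuidgen -t" does.
u = uuid.uuid1()
print(u)
print("version:", u.version)  # 1 = time-based
print("node:", hex(u.node))   # normally derived from a MAC address
```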

 - Ted


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-05 Thread Mike McGrath

Matt Mackall wrote:

On Tue, Dec 04, 2007 at 04:23:12PM -0600, Mike McGrath wrote:
  

Matt Mackall wrote:


On Tue, Dec 04, 2007 at 03:18:27PM -0600, Mike McGrath wrote:
 
  

Matt Mackall wrote:
   


which would have been in v2.6.22-rc4 through the normal CVE process.
The only other bits in there are wall time and utsname, so systems
with no CMOS clock would behave repeatably. Can we find out what
kernels are affected?


 
  
We can but it will likely take a few weeks to get a good sampling. UUID 
is unique in the db so when someone checks in with the same UUID, the 
old one gets overwritten.
   


We can probably assume that for whatever reason the two things with
duplicate UUID had the same seed. If not, we've got -much- bigger
problems.
 
  
Ok, I think I see what's going on here. I have some further investigation 
to do, but it seems that the way our Live CD installer works is causing 
these issues. I'm going to try to grab some live CDs and hardware to 
confirm, but at this point it seems that's what's going on.



Alright, keep me posted. We probably need a scheme to make the initial
seed more robust regardless of what you find out.


Ok, what's going on here is an issue with how the smolt RPM installs the 
UUID and how Fedora's Live CD does an install.  It's a complete false 
alarm on the kernel side; sorry for the confusion.


   -Mike



Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-05 Thread Theodore Tso
On Wed, Dec 05, 2007 at 01:29:12PM +0100, Marc Haber wrote:
> On Tue, Dec 04, 2007 at 05:18:11PM +0100, Adrian Bunk wrote:
> > On Tue, Dec 04, 2007 at 12:41:25PM +0100, Marc Haber wrote:
> > > While debugging Exim4's GnuTLS interface, I recently found out that
> > > reading from /dev/urandom depletes entropy as much as reading from
> > > /dev/random would. This has somehow surprised me since I have always
> > > believed that /dev/urandom has lower quality entropy than /dev/random,
> > > but lots of it.
> > 
> > man 4 random
> 
> Thanks for this pointer, I was not aware of the documentation. After
> reading this thread and the docs, I am now convinced that GnuTLS
> should seed a PRNG from /dev/(u)random instead of using the entropy
> directly. I will go filing a bug against GnuTLS.

BTW, note that it would be a polite thing for GnuTLS, when it is
encrypting data (which represents information that might not be
available to an adversary), to SHA1-hash it (out of paranoia) and feed
it to /dev/random.

This won't give any "credits" to the random entropy counter, but to
the extent that it is information that isn't available to the adversary,
it adds additional uncertainty to the random pool.
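Ted's suggestion can be sketched from userspace. The helper name is hypothetical, but the mechanism is real: an unprivileged write() to /dev/random stirs the input pool without crediting the entropy counter (crediting requires the root-only RNDADDENTROPY ioctl):

```python
import hashlib

def feed_pool(data: bytes, dev: str = "/dev/random") -> None:
    """Mix application data into the kernel pool without crediting it.

    Hashing first (the "out of paranoia" step) means the pool never sees
    the raw data, so nothing secret can leak back out of it.
    """
    digest = hashlib.sha1(data).digest()
    try:
        with open(dev, "wb", buffering=0) as f:
            f.write(digest)
    except OSError:
        pass  # device absent (non-Linux): mixing is best-effort anyway
```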

 - Ted


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-05 Thread Marc Haber
On Tue, Dec 04, 2007 at 05:18:11PM +0100, Adrian Bunk wrote:
> On Tue, Dec 04, 2007 at 12:41:25PM +0100, Marc Haber wrote:
> > While debugging Exim4's GnuTLS interface, I recently found out that
> > reading from /dev/urandom depletes entropy as much as reading from
> > /dev/random would. This has somehow surprised me since I have always
> > believed that /dev/urandom has lower quality entropy than /dev/random,
> > but lots of it.
> 
> man 4 random

Thanks for this pointer; I was not aware of the documentation. After
reading this thread and the docs, I am now convinced that GnuTLS
should seed a PRNG from /dev/(u)random instead of using the entropy
directly. I will go file a bug against GnuTLS.

Greetings
Marc

-- 
-
Marc Haber | "I don't trust Computers. They | Mailadresse im Header
Mannheim, Germany  |  lose things."Winona Ryder | Fon: *49 621 72739834
Nordisch by Nature |  How to make an American Quilt | Fax: *49 3221 2323190


Re: Why does reading from /dev/urandom deplete entropy so much?

2007-12-05 Thread Marc Haber
On Tue, Dec 04, 2007 at 08:54:52AM -0800, Ray Lee wrote:
> (Why hasn't anyone been cc:ing Matt on this?)

I didn't because I am not a regular enough visitor of this mailing
list to know who is in charge of what.

> > On Tue, Dec 04, 2007 at 12:41:25PM +0100, Marc Haber wrote:
> A PRNG is clearly unacceptable.

Would a PRNG that is frequently re-seeded from true entropy be
unacceptable as well?
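A sketch of the kind of generator Marc is asking about, with illustrative names and interval (this is not GnuTLS code): a deterministic SHA-256 counter generator that draws on the kernel pool only once per kilobyte of output, instead of on every read:

```python
import hashlib
import os

class ReseedingPRNG:
    """Deterministic generator, periodically re-keyed from the kernel pool."""
    RESEED_INTERVAL = 1024  # bytes of output per draw on the kernel pool

    def __init__(self) -> None:
        self._reseed()

    def _reseed(self) -> None:
        self._key = os.urandom(32)  # the only load placed on the kernel
        self._counter = 0
        self._emitted = 0

    def read(self, n: int) -> bytes:
        out = bytearray()
        while len(out) < n:
            if self._emitted >= self.RESEED_INTERVAL:
                self._reseed()  # fresh true entropy, once per interval
            block = hashlib.sha256(
                self._key + self._counter.to_bytes(8, "big")).digest()
            self._counter += 1
            take = min(len(block), n - len(out),
                       self.RESEED_INTERVAL - self._emitted)
            out += block[:take]
            self._emitted += take
        return bytes(out)
```

Under this scheme, an application consuming megabytes of pseudorandom data costs the kernel pool only 32 bytes per kilobyte of output, rather than depleting it byte for byte.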

Greetings
Marc

-- 
-
Marc Haber | "I don't trust Computers. They | Mailadresse im Header
Mannheim, Germany  |  lose things."Winona Ryder | Fon: *49 621 72739834
Nordisch by Nature |  How to make an American Quilt | Fax: *49 3221 2323190

