Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-21 Thread Arnold G. Reinhold

At 11:39 AM -0500 8/13/99, Jim Thompson wrote:
  This thread started over concerns about diskless nodes that want to
 run IPsec.  Worst case, these boxes would not have any slots or other
 expansion capability. The only source of entropy would be network
 transactions, which makes me nervous...

 An interesting alternative, I think, is an add-on RNG which could go on a
 serial or parallel port.  The bandwidth achievable without loading down
 the machine is limited, but we don't need tremendous speeds, and many PCs
 used as routers, firewalls, etc. have such ports sitting idle.  Even
 semi-dedicated diskless boxes would *often* have one of those.

Of course, such a box already exists.  The complete details of its design
are available, and purchasing the box gives you the right to reproduce
the design (once) such that you can, indeed, verify that you're getting
random bits out of the box.

I spent some time searching the Web for hardware randomness sources 
and I have summarized what I found at 
http://www.world.std.com/~reinhold/truenoise.html.  I located several 
serial port RNG devices and some good sources of white noise that can 
be plugged into a sound port. I don't think I found the box Mr. 
Thompson refers to, but I would be glad to add it to the list.  I 
also included serial and USB video cameras, which may be a good 
source of randomness due to digitization noise, if nothing else.

I still feel strongly that diskless machines that are likely to use 
IPsec or other security software (e.g. SSL) should have a built-in 
source of randomness, a la the Pentium III. If the other 
microprocessor manufacturers won't follow suit, a TRNG should be included 
on one of the support chips. Randomness generation is so critical to 
public key cryptography that we should insist it be engineered in, 
not pasted on.

Arnold Reinhold




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-21 Thread David Honig

At 09:11 PM 8/17/99 -0700, Nick Szabo wrote:
how it was prepared.  There simply *cannot* be an all-purpose statistical
test.

Quite so.  I'd like to see what Maurer's "universal" test
says about the entropy of completely predictable sequences
like the following:

(1) pi
(2) Champernowne's number (0.12345678901011121314151617181920...)


Look, no test can distinguish between an arbitrarily
large-state PRNG and a 'real' RNG.  Pi's digits will 
appear fully entropic, under MUST, Diehard, etc.  Even
though its Kolmogorov/Chaitin complexity is simple (i.e., the program that
computes Pi is short).  Pi is not random,
though its digits (and all N-tuples of digits, etc.) are evenly
distributed.  This *is* a profound point.

Dunno about C's number; I suspect it's the same.

Maurer, BTW, points out that his test is only 
useful if you know you have a real random bitstream
generator.  (Any faults in this exposition are my own.
I have never met or corresponded with Maurer, in fact.)

But if you cut through the philosophical boolsheet,
and elegant computation-theory definitions of complexity, 
you are left with a problem: how to measure the entropy
of a sample of a source, e.g., /dev/random's input.  And it comes down to F
log F no matter what algorithm you use to approximate it.
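
For concreteness, a toy frequency-count version of that F log F
measurement might look like the following (my own sketch, not taken
from any driver; it only sees zeroth-order byte statistics, which is
exactly its weakness -- it will report near-maximal entropy for the
binary expansion of pi):

    #include <stdio.h>
    #include <math.h>

    /* Toy "F log F" estimate: count byte frequencies in a sample and sum
     * -f*log2(f).  This is an upper bound on the entropy, never a
     * guarantee. */
    double entropy_per_byte(const unsigned char *buf, size_t len)
    {
        unsigned long count[256] = {0};
        double h = 0.0;
        size_t i;

        for (i = 0; i < len; i++)
            count[buf[i]]++;
        for (i = 0; i < 256; i++)
            if (count[i]) {
                double f = (double)count[i] / (double)len;
                h -= f * log(f) / log(2.0);
            }
        return h;                       /* bits per byte, at most 8 */
    }

    int main(void)
    {
        unsigned char buf[65536];
        size_t n = fread(buf, 1, sizeof buf, stdin);
        if (n == 0)
            return 1;
        printf("%.3f bits/byte over %lu bytes\n",
               entropy_per_byte(buf, n), (unsigned long)n);
        return 0;
    }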

The only philosophy you need is this: the Adversary doesn't
know the internal (e.g., atomic) state of your hardware.
Therefore, the measured state is unpredictable; but it probably isn't
uniformly distributed.  So you distill it until you've got fully
independent bits.  And you hash and stir it when you read it, for
'security in depth', i.e., extra layers of protection.


Again, the question is, what is the alternative?

I'm willing to discuss, e.g., a function of raw vs. gzip-compressed file
size as a measure of entropy.  My major point is: measure it as best you can.
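
A rough sketch of that raw-vs-compressed-size gauge, using zlib's
compress() (link with -lz; this is my own toy, not anything from the
kernel).  The compressed length is only an upper bound on the entropy
as seen by this particular compressor:

    #include <stdio.h>
    #include <zlib.h>

    /* Compare raw size to zlib-compressed size.  A sample that shrinks a
     * lot is certainly not full-entropy; one that fails to shrink proves
     * nothing. */
    int main(void)
    {
        static unsigned char in[65536];
        static unsigned char out[65536 + 1024];   /* slack for incompressible input */
        uLongf outlen = sizeof out;
        size_t n = fread(in, 1, sizeof in, stdin);

        if (n == 0 || compress(out, &outlen, in, (uLong)n) != Z_OK)
            return 1;
        printf("raw %lu bytes, compressed %lu bytes, ratio %.3f\n",
               (unsigned long)n, (unsigned long)outlen,
               (double)outlen / (double)n);
        return 0;
    }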

I stumbled upon MUST; it's a measure, so easier to handle
than a multidimensional spectrum like Diehard; more informative
than FIPS-140 binary tests.  I am open
to suggestions as to how to quantitatively evaluate
RNGs, for /dev/r or otherwise.

Cheers,

David Honig




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-17 Thread John Denker

Hi Ted --

At 11:41 PM 8/14/99 -0400, you wrote: 
 
standard Mathematician's style --- encrypted by formulae 
guaranteed to make it opaque to all but those who are trained in the
peculiar style of Mathematics' papers. 
 ...
someone tried to pursuade me to use Maurer's test
...
too memory intensive and too CPU intensive

You are very wise to be skeptical of mathematical mumbo-jumbo.

You mentioned questions about efficiency, but I would like to call into
question whether the entropy estimate provided by Maurer's Universal
Statistical Test (MUST) would be suitable for our purposes, even if it
could be computed for free.

Don't be fooled by the Universal name.  If you looked it up in a real-world
dictionary, you might conclude that Universal means all-purpose or
general-purpose.  But if you look it up in a mathematical dictionary, you
will find that a Universal probability distribution has the property that
if we compare it to some other distribution, it is not lower by more than
some constant factor.  Alas, the "constant" depends on what two
distributions are being compared, and there is no uniform bound on it!  Oooops!

In the language of entropy, a Universal entropy-estimator overestimates the
entropy by no more than a constant -- but beware, there is no uniform upper
bound on the constant.

To illustrate this point, I have invented Denker's Universal Statistical
Test (DUST) which I hereby disclose and place in the public domain:
According to DUST, the entropy of a string is equal to its length.  That's
it!  Now you may not *like* this test, and you may quite rightly decide
that it is not suitable for your purposes, but my point is that according
to the mathematical definitions, DUST is just as Universal as MUST.

There are profound theoretical reasons to believe it is impossible to
calculate a useful lower bound on the entropy of a string without knowing
how it was prepared.  There simply *cannot* be an all-purpose statistical test.

If you were to make the mistake of treating a Universal estimator as an
all-purpose estimator, and then applying it in a situation where the input
might (in whole or in part) be coming from an adversary, you would lay
yourself open to a chosen-seed attack (analogous to a chosen-plaintext attack).

On the other side of the same coin, if you *do* know something about how
the input was prepared, there obviously are things you can do to improve
your estimate of its entropy.  For example, in the early stages of a
hardware RNG, you could use two input channels, sending the
differential-mode signal to the next stage, and using the common-mode
signal only for error checking.  This is a good way to get rid of a certain
type of interference, and could be quite useful in the appropriate
circumstances.  Returning to the ugly side of the coin, you can see that a
small change in the way the inputs were prepared would make this
differencing scheme worthless, possibly leading to wild overestimates of
the entropy.
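
To make the two-channel idea concrete, here is a toy numerical model
(mine, not any real front end; the interference ramp and the rand()
"thermal noise" are stand-ins for whatever the hardware actually sees):

    #include <stdio.h>
    #include <stdlib.h>

    /* Both channels see the same environmental interference plus
     * independent noise.  The difference of the channels cancels the
     * shared interference and goes toward the pool; the common-mode
     * average is used only for error checking. */

    static int interference;

    static int read_channel(void)
    {
        return 512 + interference + (rand() % 7 - 3);  /* bias + pickup + noise */
    }

    int main(void)
    {
        int i;
        for (i = 0; i < 10; i++) {
            int a, b;
            interference = (i % 6) * 20;    /* shared environmental signal */
            a = read_channel();
            b = read_channel();
            printf("common=%4d  diff=%3d\n", (a + b) / 2, a - b);
        }
        return 0;
    }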

BOTTOM LINE:  
 *) Incorporating an all-purpose entropy-estimator into /dev/random is
impossible.
 *) Incorporating something that *pretends* to be an all-purpose estimator
is a Really Bad Idea.
 *) The present design appears to be the only sound design:  whoever
provides the inputs is responsible for providing the estimate of the
entropy thereof.  If no estimate is provided, zero entropy is attributed.

Cheers --- jsd




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-17 Thread Thomas Roessler

On 1999-08-14 12:27:30 -0700, Bill Frantz wrote:

 It bothers me when people who are in favor of strong crypto
 automatically assume that anything which makes strong crypto easier
 will automatically be export controlled.  This assertion is clearly
 wrong.  The thing which most makes strong crypto easier is the
 (slow) general purpose CPU.  These have never been export
 controlled.

In DuD 2/1998 (as I recall, one of the Roth articles on export
control), a case is quoted in which re-exporting a US-fabricated
i386 PC to Poland in 1990 is said to have led to a conviction.






Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-15 Thread Enzo Michelangeli

- Original Message -
From: Henry Spencer [EMAIL PROTECTED]
To: Derek Atkins [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, August 15, 1999 12:33 AM
Subject: Re: linux-ipsec: Re: Summary re: /dev/random


[...]
  and also they are not running in Intel hardware with a Linux OS...

 Speak for yourself.  Mine are.  I think you'd be surprised at how common
 this has become, in applications which are not severely pressed for
 performance.  The sort of PC that people are discarding, in favor of
 faster and more modern ones, can route/firewall/security-gateway a T1
 quite well.  Anything much faster than that probably does need custom
 hardware... this year.

There are also commercial products implementing encrypting firewalls on top
of a Linux kernel. The vendor of one of them, WatchGuard Technologies,
went public a month ago (http://quote.yahoo.com/q?s=WGRD&d=3m).

Enzo





RE: linux-ipsec: Re: Summary re: /dev/random

1999-08-14 Thread Anonymous

   Except that if you are paranoid enough to be worried about some
   unknown entity flooding your machine with network packets to
   manipulate the output of /dev/urandom, you are likely to not
   trust Intel to do RNG in such a way that it can't be fooled with.
  
  And if you're that paranoid, you'll soon understand that there is a 60hz
  (in the US, 50hz many other places) signal present in anything powered
  from the wall.

 But if you hang an antenna and a 60hz notch filter off of the RNG 
 circuit, you can increase the gain of the other noise (power supply
 fan, network cable, printer running, telephone ringer, air 
 conditioner, neighbor's Frigidair, etc.) to the point that the 
 60 cycle element is less significant.

If you read the report on the Intel RNG co-authored by crypto expert
Paul Kocher at http://www.cryptography.com/intelRNG.pdf, you will see
that Intel has anticipated and designed against this type of noise.

The Intel RNG uses the same basic principle as many of the entropy sources
which have been discussed here: a relatively low-frequency event occurs
and is sampled by a high-frequency timer.  The low bits (low bit, in this
case) of the high-frequency timer are then effectively random as long
as there is enough variation in the timing of the low frequency event.

In this case, the high-frequency "timer" is simply an oscillator, so
that we sample it at either a 0 or a 1 state.  The low-frequency event
is caused by an oscillator itself, one which runs at approximately 1/100
the rate of the high frequency timer.  The low frequency oscillator
is frequency-modulated by the resistance measured across an undriven
resistor.  This resistance will vary due to thermal noise, which is the
ultimate source of the entropy produced by the chip.
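
A toy numerical model of that sampling scheme (my own paraphrase of the
report, not Intel's circuit; the jitter is faked with rand() purely for
illustration):

    #include <stdio.h>
    #include <stdlib.h>

    /* A fast oscillator is sampled (0 or 1) each time a slow oscillator
     * ticks, and the slow period is jittered by "thermal noise".  With
     * enough jitter relative to the fast period, the sampled bit becomes
     * unpredictable. */
    int main(void)
    {
        const double fast_half_period = 0.5;   /* arbitrary time units */
        const double slow_period = 100.0;      /* ~1/100 the fast rate */
        double t = 0.0;
        int i;

        for (i = 0; i < 64; i++) {
            double jitter = (rand() % 1000) / 200.0;  /* 0..5 units of jitter */
            t += slow_period + jitter;                /* next slow tick */
            putchar('0' + (((long)(t / fast_half_period)) & 1));
        }
        putchar('\n');
        return 0;
    }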

In order to reduce environmental interference, the thermal noise is
actually measured across a pair of resistors, laid out side by side on
the chip, and the difference is used.  Environmental signals will affect
both resistors (almost) identically, and by taking the difference any
effects from the environment are almost eliminated.

Even where environmental noise remains, it is being added to the thermal
noise of the resistors, and can only add further variation to the period
of the low frequency oscillator.  As a general principle of information
theory, adding a known signal to a random signal will still produce
a fully random signal.  Known sources of environmental noise will not
reduce the randomness output by the chip.  And to the extent that the
environmental noise is unknown, it actually increases the entropy.
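
The same principle is why the report quoted below can recommend XORing
the Intel RNG with any *independent* source.  A minimal illustration
(toy values, assuming independence of the two inputs): if X is uniformly
random and K is anything independent of X, then X ^ K is still uniformly
random.

    #include <stdio.h>

    /* Combining two sources: the result is at least as good as the
     * better of the two, provided they are independent. */
    static unsigned char combine(unsigned char hw_byte, unsigned char other_byte)
    {
        return hw_byte ^ other_byte;
    }

    int main(void)
    {
        printf("0x%02x\n", combine(0xA5, 0x0F));   /* toy values */
        return 0;
    }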

For more details, see the report cited above.  The chip contains
additional stages and design precautions to further improve the quality
of the random values produced.  The output of the chip has been analyzed
with a battery of randomness tests and looks very good.

Here is the conclusion from the cryptography.com report:

   In producing the RNG, Intel applied conservative design,
   implementation, and testing approaches. Design assumptions about the
   random source, sampling method, system consistency, and algorithm
   appear appropriate.  Careful attention was paid to analyze and avoid
   likely failure modes.

   We believe that the Intel RNG is well-suited for use in cryptographic
   applications. Direct use of Intel's software libraries should simplify
   the design and evaluation process for security products. Alternatively,
   developers can combine data from the Intel RNG with data from
   other sources. For example, data from the Intel RNG can be safely
   exclusive-ORed with output from any independent RNG. The Intel RNG
   will help designers avoid relying on proprietary entropy gathering
   techniques in critical security routines. We believe the Intel RNG
   will prevent many RNG failures and improve the integrity and security
   of cryptographic applications.

   Cryptographically, we believe that the Intel RNG is strong and that
   it is unlikely that any computationally feasible test will be found
   to distinguish data produced by Intel's RNG library from output from
   a perfect RNG. As a result, we believe that the RNG is by far the
   most reliable source of secure random data available in the PC.



Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-14 Thread David Honig

At 02:39 PM 8/11/99 -0400, Henry Spencer wrote:

And will those hardware RNGs be subject to export control?  Betcha they
will, assuming export control survives legal challenges.  If this isn't
"enabling technology", I don't know what is...

Hey, there are *legitimate* civilian uses for RNGs.  For testing various
kinds of communications gear.  For true-random dithering.  For Monte Carlo
verification.  For soothing-sound generators to help you sleep...







Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-14 Thread Gary E. Miller

Yo Derek!

I know a lot of people that use diskless, keyboardless computers
as routers and terminal servers.  I think a few small companies like 
Cisco, Ascend, Bay Networks, etc. make these things. :-)

They have even been known to sell them as VPN gateways to encrypt
local LAN traffic as they route it on to the internet.  A few
smaller companies like Shiva have been known to dabble in them.

RGDS
GARY


On 13 Aug 1999, Derek Atkins wrote:

 Date: 13 Aug 1999 18:18:03 -0400
 From: Derek Atkins [EMAIL PROTECTED]
 To: Arnold G. Reinhold [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED], [EMAIL PROTECTED],
 Bill Stewart [EMAIL PROTECTED]
 Subject: linux-ipsec: Re: Summary re: /dev/random
 
 Um, pardon my ignorance, but what is the point of a diskless,
 keyboardless computer that requires such high security?  If the only
 interface is the network, what good is it?  I can see being diskless
 (although why anyone would build a diskless machine in today's world,
 I have no idea -- it certainly doesn't significantly affect the cost
 of the machine).  I used to have a diskless sun as my workstation.
 But it still had a keyboard.
 
 Did you have a specific usage in mind, here?  I certainly cannot
 imagine a use for such a beast today.  Even my palmpilot has user
 input.
 
 -derek
 
 "Arnold G. Reinhold" [EMAIL PROTECTED] writes:
 
  
  At 12:25 PM -0400 8/11/99, Theodore Y. Ts'o wrote:
 Date: Tue, 10 Aug 1999 11:05:44 -0400
 From: "Arnold G. Reinhold" [EMAIL PROTECTED]
  
 A hardware RNG can also be added at the board level. This takes
 careful engineering, but is not that expensive. The review of the
 Pentium III RNG on www.cryptography.com seems to imply that Intel is
 only claiming patent protection on its whitening circuit, which is
 superfluous, if not harmful. If so, their RNG design could be copied.
  
  I've always thought there was a major opportunity for someone to come up
  with an ISA (or perhaps even a PCI) board which had one or more circuits
  (you want more than one for redundancy) that contained a noise diode
  hooked up to a digitizing circuit.  As long as the hardware interface
  was open, all of the hard parts of a hardware RNG, could be done in
  software.
  
  This thread started over concerns about diskless nodes that want to 
  run IPsec.  Worst case, these boxes would not have any slots or other 
  expansion capability. The only source of entropy would be network 
  transactions, which makes me nervous. That is why I feel we should 
  pressure manufacturers of such boards to include hardware RNG 
  capability in one form or another.
  
  Generic PC's these days come with audio input or can have a sound 
  card added easily. Open software that would characterize, monitor and 
  whiten the output of an analog noise source connected to the audio-in 
  port would meet a lot of needs.
  
  Arnold Reinhold
  
  
 
 -- 
Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
Member, MIT Student Information Processing Board  (SIPB)
URL: http://web.mit.edu/warlord/  PP-ASEL  N1NWH
[EMAIL PROTECTED]       PGP key available
 

RGDS
GARY
---
Gary E. Miller Rellim 20340 Empire Ave, Suite E-3, Bend, OR 97701
[EMAIL PROTECTED]  Tel:+1(541)382-8588 Fax: +1(541)382-8676




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-14 Thread Henry Spencer

On 14 Aug 1999, Derek Atkins wrote:
 Routers and Firewalls are not IPSec endpoints...

Firewalls can easily be IPSEC endpoints, if they double as security
gateways, which is likely to be common.  (Making your firewall speak
IPSEC is considerably easier than making all the equipment behind it
do likewise.)

It is admittedly unlikely for a router to be an IPSEC endpoint except
for an administrative channel... unless it is doubling as a security
gateway, which is possible.

 and also they are not running in Intel hardware with a Linux OS...

Speak for yourself.  Mine are.  I think you'd be surprised at how common
this has become, in applications which are not severely pressed for
performance.  The sort of PC that people are discarding, in favor of
faster and more modern ones, can route/firewall/security-gateway a T1
quite well.  Anything much faster than that probably does need custom
hardware... this year.

Some of the people who've talked to us about various aspects of Linux
FreeS/WAN have had very interesting large-volume applications in mind.
Open source, strong security, and cheap Intel hardware are a pretty
versatile combination.

 ...However, there are always
 multiple network interfaces in a SG (at least one 'inside' and one
 'outside' the secure network), so you have the timings of network
 packets on each network, as well as the timing of packets between the
 networks.

There is considerable debate about whether packet timings are a good
source for entropy, since they are at least potentially observable by
outsiders.

And, again, this is probably not running Linux.

Again, speak for yourself.  Linux use in that area is growing quickly.

 Seriously, how many 'inexpensive specialized devices' are going to
 need strong security?

Almost all of them, before too very long.  Try making a list of network
devices which definitely *do not* need strong security; it's short. 

 Also, a router is certainly not 'inexpensive'...

If you're trying to route multiple T3s, true.  Otherwise, again, you're
behind the times -- routing no longer requires massive horsepower.  (Of
course, you can still pay a bundle for it if you really insist.)

 ...Besides, why not just add a
 hardware RNG?  The pieces aren't very expensive, the parts don't
 really wear out, and then you always have a strong source of random
 numbers.

It's an option, but not always the most attractive one.  Being able to
do without would be useful.

  Henry Spencer
   [EMAIL PROTECTED]
 ([EMAIL PROTECTED])





Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-14 Thread Henry Spencer

On 13 Aug 1999, Derek Atkins wrote:
 Um, pardon my ignorance, but what is the point of a diskless,
 keyboardless computer that requires such high security?  If the only
 interface is the network, what good is it?

There are gadgets called "routers" and "firewalls" whose whole reason to
exist is their network interfaces.  They usually lack keyboards and often
lack disks, and the people who install them tend to think about security
quite a bit.

 (although why anyone would build a diskless machine in today's world,
 I have no idea -- it certainly doesn't significantly affect the cost
 of the machine).

It does affect its noise level, power consumption, and reliability.  And
you are underestimating how much it can affect the cost of inexpensive
specialized devices.

  Henry Spencer
   [EMAIL PROTECTED]
 ([EMAIL PROTECTED])




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-13 Thread Anonymous

Paul Koning writes:

 The most straightforward way to do what's proposed seems to be like
 this:

 1. Make two pools, one for /dev/random, one for /dev/urandom.  The
 former needs an entropy counter, the latter doesn't need it.

 2. Create a third pool, which doesn't need to be big.  That's the
 entropy staging area.  It too has an entropy counter.

 3. Have the add entropy function stir into that third pool, and credit 
 its entropy counter.

 4. Whenever the entropy counter of the staging pool exceeds N bits (a
 good value for N is probably the hash length), draw N bits from it,
 and debit its entropy counter by N.

 If the entropy counter of the /dev/random pool is below K% of its
 upper bound (K = 75 has been suggested) stir these N bits into the
 /dev/random pool.  Otherwise, alternate between the two pools.  Credit 
 the pool's entropy counter by N.

Some suggested modifications:

The third pool, the entropy staging area, doesn't have to be big.
In fact, it doesn't have to be any bigger than the amount of entropy it
buffers, perhaps 100 bits or so.  This size need only be large enough
to prevent exhaustive search by an attacker.  80 or even 60 bits should
be enough in practice, but a multiple of 32 like 96 or 128 would be
more convenient for some algorithms.  Probably it would want to use a
different mechanism than that used in the main random pool since it is
so much smaller.  A SHA hash context could be used as in Yarrow, but
that may be somewhat slow.  A 96 bit CRC would be another good choice.
Cryptographic strength is not an issue here, just mixing.
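
A minimal sketch of such a staging pool (mine, not proposed code for the
kernel): 96 bits, mixed with a cheap rotate/XOR rule; the CRC suggested
above would do just as well, since this stage only has to mix.  The
caller credits its own conservative entropy estimate.

    #include <stdio.h>

    #define POOL_WORDS 3                     /* 3 x 32 = 96 bits */
    #define POOL_BITS  (POOL_WORDS * 32)

    struct staging_pool {
        unsigned int word[POOL_WORDS];       /* 32-bit words assumed */
        int entropy_bits;
    };

    /* Fold a sample into the pool and credit the caller's estimate. */
    static void staging_mix(struct staging_pool *p, unsigned int sample,
                            int credit_bits)
    {
        int i;
        for (i = 0; i < POOL_WORDS; i++) {
            p->word[i] = ((p->word[i] << 7) | (p->word[i] >> 25)) ^ sample;
            sample = (sample << 3) | (sample >> 29);
        }
        p->entropy_bits += credit_bits;
        if (p->entropy_bits > POOL_BITS)
            p->entropy_bits = POOL_BITS;     /* can't hold more than its size */
    }

    int main(void)
    {
        struct staging_pool pool = { {0, 0, 0}, 0 };
        unsigned int fake_samples[] = { 0x12345678u, 0x9abcdef0u, 0x0badf00du };
        int i;

        for (i = 0; i < 3; i++)
            staging_mix(&pool, fake_samples[i], 2);  /* 2 credited bits each */
        printf("pool holds %d credited bits (drain at %d)\n",
               pool.entropy_bits, POOL_BITS);
        return 0;
    }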

Having the two pools for /dev/random and /dev/urandom sounds like the
right thing to do.  However the proposal to favor /dev/random over
/dev/urandom ignores the fact that /dev/random is seldom used.

The description above calls for entropy to be given preferentially
to /dev/random until it is 75% full.  But this will starve urandom
during the crucial startup phase.  As was proposed earlier, it would
be better to get initial randomness into /dev/urandom so that it can
be well seeded and start producing good random numbers.  This should
be about one staging-pool size, about 100 bits.  Once you have this,
you can give entropy to both pools as suggested above.

In operation, it is likely that the random pool will be filled and
virtually never drawn upon, and it is unnecessary to keep putting more
entropy into that pool.  The urandom pool will be much more heavily used.
It would make sense to have the algorithm for distributing entropy
between the pools be aware of this.

One possible mechanism would be to keep an entropy counter for both pools.
Put the first buffer of entropy into the urandom pool so that it gets
off to a good start.  Then divide incoming entropy between the pools
proportionally to how far they are from full.  If both pools are full,
divide it equally.  If one is full and the other is not, all incoming
entropy goes to fill the smaller one.  If neither is full, entropy is
divided proportionally, so that if one is 100 bits from full and the
other is 200 bits from full, the second is twice as likely to get the
input.
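
A sketch of that allocation rule (pool sizes and the rand()-based coin
flip are placeholders of my own; only the routing logic is the point):

    #include <stdio.h>
    #include <stdlib.h>

    #define RANDOM_POOL_BITS   4096
    #define URANDOM_POOL_BITS  4096

    /* returns 0 to send the entropy to the /dev/random pool, 1 for urandom */
    static int pick_pool(int random_bits, int urandom_bits)
    {
        int deficit_r = RANDOM_POOL_BITS - random_bits;
        int deficit_u = URANDOM_POOL_BITS - urandom_bits;

        if (deficit_r <= 0 && deficit_u <= 0)
            return rand() % 2;                /* both full: split evenly */
        if (deficit_r <= 0)
            return 1;                         /* random full: all to urandom */
        if (deficit_u <= 0)
            return 0;                         /* urandom full: all to random */
        /* both part-full: proportional to how far each is from full */
        return (rand() % (deficit_r + deficit_u)) < deficit_u;
    }

    int main(void)
    {
        /* example: random pool 100 bits short, urandom pool 200 bits short */
        int to_urandom = 0, trials = 10000, i;
        for (i = 0; i < trials; i++)
            to_urandom += pick_pool(RANDOM_POOL_BITS - 100,
                                    URANDOM_POOL_BITS - 200);
        printf("urandom got %d of %d quanta (expect about two thirds)\n",
               to_urandom, trials);
        return 0;
    }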

This will cause both pools to constantly be refreshed when the machine
is quiescent and not using randomness.  When it is active and using
/dev/urandom, that pool will get all the incoming entropy once /dev/random
is full.  This makes the most efficient use of incoming entropy and does
not waste it by giving it to an already-full /dev/random pool, which
would discard entropy that is already there.  Entropy is a scarce and
valuable resource in many configurations and it should not be thrown away.



Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-13 Thread Henry Spencer

On Wed, 11 Aug 1999, Anonymous wrote:
 Everyone seems to be ignoring the fact that there will be a hardware RNG,
 well designed and carefully analyzed, installed on nearly every Intel
 based system that is manufactured after 1999.  There is no need for a
 third party board, at least not on Intel architectures.

At least not on Intel architectures *with Intel(TM) processors*.  And even
that assumes that this feature will *continue* to exist on the processors,
which is by no means guaranteed.  (I'm told that Intel is already mumbling
about moving it into the support chipset instead, and I can easily see
that it might exist only in some variants.)

 ...Within the next few years,
 any system configured as a crypto server or gateway will have built in
 hardware RNGs provided by the manufacturer.

That would be nice.  It's a little too early to be sure of that yet.  Oh,
and by the way, crypto belongs on all the machines, not just the servers
and gateways.  (Or the machines originally configured as such -- one nice
thing about Linux crypto software is that you can turn cast-off desktop
machines into excellent crypto gateways.)

And will those hardware RNGs be subject to export control?  Betcha they
will, assuming export control survives legal challenges.  If this isn't
"enabling technology", I don't know what is...

  Henry Spencer
   [EMAIL PROTECTED]
 ([EMAIL PROTECTED])




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-13 Thread Henry Spencer

On Wed, 11 Aug 1999, Arnold G. Reinhold wrote:
 This thread started over concerns about diskless nodes that want to 
 run IPsec.  Worst case, these boxes would not have any slots or other 
 expansion capability. The only source of entropy would be network 
 transactions, which makes me nervous...

An interesting alternative, I think, is an add-on RNG which could go on a
serial or parallel port.  The bandwidth achievable without loading down
the machine is limited, but we don't need tremendous speeds, and many PCs
used as routers, firewalls, etc. have such ports sitting idle.  Even
semi-dedicated diskless boxes would *often* have one of those.

The problem with slots is, what flavor do you pick?  PCI is, I gather,
rather complicated to interface to.  Also, since it's the preferred
technology for fast networking boards, and tends to come in limited
numbers, the PCI slots often are fully spoken for.  ISA is a lot simpler,
but its days now seem to be numbered. 

  Henry Spencer
   [EMAIL PROTECTED]
 ([EMAIL PROTECTED])




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-11 Thread Osma Ahvenlampi

Arnold G. Reinhold [EMAIL PROTECTED] writes:
 1. Mr. Kelsey's argument that entropy should only be added in large 
 quanta is compelling, but I wonder if it goes far enough. I would 
 argue that entropy collected from different sources (disk, network, 
 sound card, user input, etc.) should be collected in separate pools, 
 with each pool tapped only when enough entropy has been collected in 
 that pool.

You have to realize that /dev/random entropy collection doesn't get
one bit, add it to the pool, and increment the entropy counter. What
happens is that it gets a notification for an interrupt along with the 
interrupt number, the keyboard scancode, or similar, reads a
high-resolution clock (and gets 32 bits from there), mixes these
two numbers (40 bits, usually, I believe) into the pool, and tries to
estimate how much entropy the timing contained (by calculating first-,
second- and third-order deltas and taking the smallest, as I recall).
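
In user-space terms the estimator works roughly like this (my own
paraphrase for illustration; drivers/char/random.c is the authority and
differs in detail):

    #include <stdio.h>

    /* For each event, take the first-, second- and third-order deltas of
     * its timestamp against the previous events, use the smallest
     * magnitude, and credit roughly log2 of that many bits, capped at a
     * small maximum. */

    struct timer_state {
        long last, last_delta, last_delta2;
    };

    static int credit_for_event(struct timer_state *s, long now)
    {
        long delta  = now - s->last;
        long delta2 = delta - s->last_delta;
        long delta3 = delta2 - s->last_delta2;
        long d;
        int bits = 0;

        s->last = now;
        s->last_delta = delta;
        s->last_delta2 = delta2;

        if (delta  < 0) delta  = -delta;
        if (delta2 < 0) delta2 = -delta2;
        if (delta3 < 0) delta3 = -delta3;

        d = delta;
        if (delta2 < d) d = delta2;
        if (delta3 < d) d = delta3;

        while (d > 1 && bits < 12) {        /* roughly log2, capped */
            d >>= 1;
            bits++;
        }
        return bits;
    }

    int main(void)
    {
        struct timer_state s = { 0, 0, 0 };
        long fake_times[] = { 1000, 1900, 2750, 3801, 4600 };
        int i;

        for (i = 0; i < 5; i++)
            printf("event at %ld: credit %d bits\n",
                   fake_times[i], credit_for_event(&s, fake_times[i]));
        return 0;
    }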

So, for each 40 bits mixed into the pool, a few bits of entropy are
credited. How do you propose quantizing this? Collecting all of the
bits in a staging area and adding them when the entropy count is big
enough? That could mean a kilobit or more of staging area, and per
your suggestion the driver would have to have several of them. Gets
pretty unwieldy, quickly.

Also, this design means that there's always at least 32 bits mixed
into the pool at once, and it might not always increase the entropy
count at all. In a sense, /dev/random already does quantized
collection.

-- 
Osma Ahvenlampi




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-11 Thread Paul Koning

 "Osma" == Osma Ahvenlampi [EMAIL PROTECTED] writes:

 Osma Arnold G. Reinhold [EMAIL PROTECTED] writes:
  1. Mr. Kelsey's argument that entropy should only be added in
  large quanta is compelling, but I wonder if it goes far enough. I
  would argue that entropy collected from different sources (disk,
  network, sound card, user input, etc.) should be collected in
  separate pools, with each pool taped only when enough entropy has
  been collected in that pool.

 Osma You have to realize that /dev/random entropy collection doesn't
 Osma get one bit, add it to the pool, and increment the entropy
 Osma counter

 Osma So, for each 40 bits mixed into the pool, a few bits of entropy
 Osma is credited. How do you propose quantizing this?

I think this is pretty simple.

Right now there's one pool, which is where new stuff is stirred in and 
then a hash is done over it (that's the outline, the details are a bit 
more involved).

The most straightforward way to do what's proposed seems to be like
this:

1. Make two pools, one for /dev/random, one for /dev/urandom.  The
former needs an entropy counter, the latter doesn't need it.

2. Create a third pool, which doesn't need to be big.  That's the
entropy staging area.  It too has an entropy counter.

3. Have the add entropy function stir into that third pool, and credit 
its entropy counter.

4. Whenever the entropy counter of the staging pool exceeds N bits (a
good value for N is probably the hash length), draw N bits from it,
and debit its entropy counter by N.

If the entropy counter of the /dev/random pool is below K% of its
upper bound (K = 75 has been suggested) stir these N bits into the
/dev/random pool.  Otherwise, alternate between the two pools.  Credit 
the pool's entropy counter by N.

The above retains the basic structure, its mixing algorithms, entropy
bookkeeping, etc.  The major delta is the multiple pools and the
carrying of entropy from the staging pool to the others.
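
The carrying step, in sketch form (my own rendering of steps 4 and the
K% rule above; N, K and the pool capacity are just the figures suggested
in the text, and all pool internals are elided):

    #include <stdio.h>

    #define N           160          /* hash length, per the suggestion above */
    #define K           75           /* percent threshold for /dev/random */
    #define RANDOM_MAX  4096         /* /dev/random pool capacity, in bits */

    static int staging_entropy;      /* credited bits in the staging pool */
    static int random_entropy;       /* credited bits in the /dev/random pool */
    static int next_pool;            /* 0 = random, 1 = urandom, for alternation */

    static void maybe_carry(void)
    {
        while (staging_entropy >= N) {
            int target;

            staging_entropy -= N;                   /* draw N bits out */

            if (random_entropy < (RANDOM_MAX * K) / 100)
                target = 0;                         /* top up /dev/random */
            else {
                target = next_pool;                 /* otherwise alternate */
                next_pool = !next_pool;
            }

            if (target == 0) {
                random_entropy += N;
                if (random_entropy > RANDOM_MAX)
                    random_entropy = RANDOM_MAX;
                printf("carried %d bits into the random pool (now %d)\n",
                       N, random_entropy);
            } else {
                printf("carried %d bits into the urandom pool\n", N);
            }
        }
    }

    int main(void)
    {
        int i;
        for (i = 0; i < 30; i++) {   /* pretend each event credits 64 bits */
            staging_entropy += 64;
            maybe_carry();
        }
        return 0;
    }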

paul



Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-10 Thread Paul Koning

 "Arnold" == Arnold G Reinhold [EMAIL PROTECTED] writes:

 Arnold I have found this discussion very stimulating and
 Arnold enlightening. I'd like to make a couple of comments:

 Arnold 1. Mr. Kelsey's argument that entropy should only be added in
 Arnold large quanta is compelling, but I wonder if it goes far
 Arnold enough. I would argue that entropy collected from different
 Arnold sources (disk, network, sound card, user input, etc.) should
 Arnold be collected in separate pools, with each pool tapped only
 Arnold when enough entropy has been collected in that pool.

 Arnold Mixing sources gives an attacker added opportunities. For
 Arnold example, say entropy is being mixed from disk accesses and
 Arnold from network activity. An attacker could flood his target
 Arnold with network packets he controlled, insuring that there would
 Arnold be few disk entropy deposits in any given quanta release. On
 Arnold the other hand, if the entropy were collected separately,
 Arnold disk activity entropy would completely rekey the PRNG
 Arnold whenever enough accumulated, regardless of network
 Arnold manipulation.  Similarly, in a system with a hardware entropy
 Arnold source, adding disk entropy in a mixing mode would serve
 Arnold little purpose, but if the pools were kept separate, disk
 Arnold entropy would be a valuable backup in case the hardware
 Arnold source failed or were compromised.

I think this makes sense only if the "entropy source" under
consideration isn't actually any good.  If it is reasonably sound (and,
in particular, its entropy amount is estimated conservatively) then there
isn't a problem.  For example, if an attacker floods with network
messages, and you use network timing as an entropy source, the design
job was to pick a conservative lower bound of entropy per arrival
given that the arrivals may all be controlled by an attacker.  If
you've done that, then the attack doesn't hurt.

 Arnold 2. It seems clear that the best solution combines strong
 Arnold crypto primitives with entropy collection. I wonder how much
 Arnold of the resistance expressed in this thread by has to do with
 Arnold concerns about performance. For this reason, I think RC4
 Arnold deserves further consideration. It is very fast and has a
 Arnold natural entropy pool built in. With some care, I believe RC4
 Arnold can be used in such a way that attacks on the PRNG can be
 Arnold equated to an attacks on RC4 as a cipher.  The cryproanalytic
 Arnold significance of RC4's imperfect whiteness is questionable and
 Arnold can be addressed in a number of ways, if needed.  I have some
 Arnold thoughts on a fairly simple and efficient multi-pool PRNG
 Arnold design based on RC4, if anyone is interested.

Well, yes, but /dev/{u,}random already does use strong crypto (a
strong cryptographic hash, to be precise).  I expect RC4 could do the
job but is there any reason to replace what's there now (MD5 and
SHA-1) with RC4 or anything else?

 Arnold 3. With regard to diskless nodes, I suggest that the
 Arnold cryptographic community should push back by saying that some
 Arnold entropy source is a requirement and come up with a
 Arnold specification (minimum bit rate, maximum acceptable color,
 Arnold testability, open design, etc.). An entropy source spec would
 Arnold reward Intel for doing the right thing and encourage other
 Arnold processor manufacturers to follow their lead.

Obviously an entropy source is required, but I'm not prepared to
translate that into a requirement for dedicated hardware.  I still
believe (based on experiments -- though not on PC hardware) that
network arrival timing done with low order bits from a CPU cycle
counter supplies non-zero entropy.
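
For reference, the cycle-counter read is a one-instruction affair on a
Pentium.  A GCC/x86-specific sketch (the mixing and the conservative
crediting are deliberately left out):

    #include <stdio.h>

    /* Read the CPU cycle counter (RDTSC) and keep only its low-order
     * bits.  On packet arrival you would mix a few of these bits into
     * the pool and credit a deliberately small amount of entropy, on the
     * theory that an outside observer cannot time the arrival down to
     * the cycle. */
    static unsigned int cycle_counter_low(void)
    {
        unsigned int lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return lo;
    }

    int main(void)
    {
        int i;
        for (i = 0; i < 8; i++)
            printf("low byte of cycle counter: 0x%02x\n",
                   cycle_counter_low() & 0xff);
        return 0;
    }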

 Arnold A hardware RNG can also be added at the board level. This
 Arnold takes careful engineering, but is not that expensive. The
 Arnold review of the Pentium III RNG on www.cryptography.com seems
 Arnold to imply that Intel is only claiming patent protection on its
 Arnold whitening circuit, which is superfluous, if not harmful. If
 Arnold so, their RNG design could be copied.

There are probably plenty of designs; at the block diagram level they
are pretty simple and pretty obvious.  The devil is in the details.

By the way, various crypto accelerator chips now come with an RNG
built-in.  Some may be subject to export control, which would make
them unusable in a Linux context, but perhaps not all of them.

paul



Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-06 Thread Crispin Cowan

"Theodore Y. Ts'o" wrote:

 I'd certainly agree that having a standard user-space library would be a
 Good Thing.  The real question in my mind is should the code live in
 user space or in kernel space.

Definitely kernel space.  Precisely because a good source of entropy is:

   * not computable, you need to get it from a device
   * essential for assorted security applications

it needs to be in kernel space, where it can talk to raw devices, and be
protected from corruption and spoofing.

Crispin
-
 Crispin Cowan, Research Assistant Professor of Computer Science, OGI
NEW:  Protect Your Linux Host with StackGuard'd Programs  :FREE
   http://www.cse.ogi.edu/DISC/projects/immunix/StackGuard/




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-04 Thread Henry Spencer

On Tue, 3 Aug 1999, bram wrote:
 The goal is to make it so that any time someone wants random numbers they
 can go to /dev/random, with no required studying of entropy and threat
 models and all that yadda yadda yadda which most developers will
 rightfully recoil from getting into when all they want is a few random
 bytes.

That, surely, is what /dev/urandom is for.  (Maybe /dev/random ought to
be mode rw-------, so that only root applications can use it?)

  Henry Spencer
   [EMAIL PROTECTED]
 ([EMAIL PROTECTED])




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-04 Thread Paul Koning

 "Osma" == Osma Ahvenlampi [EMAIL PROTECTED] writes:

 Osma Looking at this discussing going round and round, I'm very
 Osma inclined to fetch the latest freeswan-snapshot, grep for
 Osma /dev/random, and replace all reads with a routine that has its
 Osma own internal Yarrow-like SHA mixer that gets reseeded from
 Osma /dev/random at semi-frequent intervals, and in the meantime
 Osma returns random numbers from the current SHA value. That's how I
 Osma believe /dev/random was intended to be used, anyway...

No, that's how /dev/urandom was intended to be used.

What you describe duplicates the functionality of /dev/urandom.  Why
do it?

I agree with Ted that there may well be people that misuse
/dev/random.  If so, the obvious comment is RT*M.  Perhaps the
documentation may want to emphasize the intended use of /dev/random
more strongly.  (Come to think of it, it's not clear to me especially
after reading the Yarrow paper that there really *are* cases where the 
use of /dev/random rather than /dev/urandom is actually warranted.)

Re Henry Spencer's comment:
On Tue, 3 Aug 1999, bram wrote:
 The goal is to make it so that any time someone wants random numbers they
 can go to /dev/random, with no required studying of entropy and threat
 models and all that yadda yadda yadda which most developers will
 rightfully recoil from getting into when all they want is a few random
 bytes.

 That, surely, is what /dev/urandom is for.  (Maybe /dev/random ought to
 be mode rw-------, so that only root applications can use it?)

That may reduce the number of applications that blindly use
/dev/random without knowing why this isn't the right thing to do.  On
the other hand, it won't prevent applications that read /dev/urandom
from causing those that use /dev/random to block (so long as both
continue to use the same pool).

Then again, if the valid uses of /dev/random are somewhere between
rare and non-existent, which seems to be the case, this is a
non-issue.

Finally, from Bram:

 5) a (very small) amount of persistent memory to keep pool state in (or at
 least periodically put some random bytes in to put in the pool at next
 reboot.) It would have to be plugged into a trusted piece of hardware to
 give it real randomness at least once, of course, but that wouldn't be a
 big deal.

That doesn't solve the issue of entropy sources on diskless UI-less
systems.  All it does is let you carry whatever you got across
reboots.  If you have none to carry, you still have an issue.

I do agree that using any available NV memory for keeping pool state
across reboots is a good thing.  

paul



Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-03 Thread Paul Koning

 "Paul" == Paul Koning [EMAIL PROTECTED] writes:

 Paul 2. Pool size.  /dev/random has a fairly small pool normally but
 Paul can be made to use a bigger one.  Yarrow argues that it makes
 Paul no sense to use a pool larger than N bits if an N bit mixing
 Paul function is used, so it uses a 160 bit pool given that it uses
 Paul SHA-1.  I can see that this argument makes sense.  (That
 Paul suggests that the notion of increasing the /dev/random pool
 Paul size is not really useful.)

Correction... I reread the Yarrow paper, and it seems I misquoted it.

Yarrow uses the SHA-1 context (5 word hash accumulator) as its "pool"
so it certainly has a 160 bit entropy limit.  But /dev/random uses a
much larger pool, which is in effect the input to a SHA-1 or MD5 hash,
the output of which is (a) fed back into the pool to change its state,
and (b) after some further munging becomes the output bitstream.

In that case, the possible entropy should be as high as the bit count
of the pool, not the length of the hash, so cancel my comment #2...

paul