Re: The Shining Cryptographers Net

2001-01-21 Thread John Denker

At 10:10 AM 1/20/01 -0800, [EMAIL PROTECTED] wrote:

This analysis will focus on one particular kind of attack.  Eve will make
measurements of the photon polarization angle as it travels through the
network and attempt to deduce information about the signals being sent
by the participants.

This appears to be a correct analysis of this particular attack.  However, 
this is not Eve's strongest attack.  So let's move the focus.

A much better strategy for Eve is to _not_ make so many 
measurements.  Rather, she should preserve the photon in all its analog, 
quantum-mechanical glory and recirculate it back to Bob, bypassing the 
other participants in the ring.

Then Bob, in blissful ignorance, will decrypt his own signal.  We have 
reduced the problem to the trivial case of the one-person ring;  in such a 
ring it is obvious whether Bob sent a message or not.

The contrast with the conventional Dining Cryptographers ring is 
illuminating:  In the DC ring, Bob depends on somebody else (indeed 
everybody else) to undo the transformations that he applies, so that if Eve 
attempts to spoof, short-circuit, or partition the ring, the results will 
be cryptologically random.
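
For reference, here is a minimal sketch (my own toy Python, not anything from 
Chaum's paper) of one round of the classical DC net, showing the property at 
issue: every shared key bit enters exactly two announcements, so the XOR of 
all announcements cancels the keys and leaves only the message, and Eve cannot 
short-circuit the ring without knowing the pairwise keys.

  import secrets

  def dc_net_round(n, sender=None, message_bit=0):
      # Each pair (i, j) shares one secret random bit.
      keys = [[0] * n for _ in range(n)]
      for i in range(n):
          for j in range(i + 1, n):
              keys[i][j] = keys[j][i] = secrets.randbits(1)
      # Each participant announces the XOR of all bits it shares;
      # the sender (if any) also XORs in the message bit.
      announcements = []
      for i in range(n):
          a = message_bit if i == sender else 0
          for j in range(n):
              if j != i:
                  a ^= keys[i][j]
          announcements.append(a)
      # Every key bit appears in exactly two announcements and cancels.
      result = 0
      for a in announcements:
          result ^= a
      return result

  assert dc_net_round(5, sender=2, message_bit=1) == 1
  assert dc_net_round(5) == 0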

The SC net appears to have a problem at the algorithm level (not at the 
physics level), namely it doesn't involve the other participants in the 
right way.  It is too easy for Eve to simulate the other 
participants.  This could be patched up by adding macroscopic (i.e. 
non-quantum) authentication protocols, but the cost of doing this would 
probably be comparable to the cost of implementing the classical DC 
network.  So it's not clear what the advantage of the SC network would be.



One could imagine a hybrid scheme:
   1) The participants exchange keys, as in the conventional DC net, and
   2) The participants process the signal by rotating the polarization, or 
shifting the quantum phase, or other unconventional, non-Boolean 
transformations.
   3) They could recirculate the signal C > 1 times if desired.

Right now this seems like a solution in search of a problem;  that is, I 
don't know any problems for which the solution requires ideas (2) and (3), 
but they seem like interesting ideas that should be good for something.





Re: The Shining Cryptographers Net

2001-01-19 Thread John Denker

At 02:04 PM 1/18/01 -0800, [EMAIL PROTECTED] wrote:

the rotation stations could
somehow count or limit the number of photons going through so that they
would know when there were extra.  I think this is possible in theory;

Right, it is.  Here's a Gedankenexperiment:  temporarily trap the signal in 
a cylindrical waveguide resonator (organ pipe).  The pressure on the 
end-caps is proportional to photon number and independent of polarization 
angle.  From this we conclude we can measure number in a way that commutes 
with polarization.

I went overboard when I previously said "any" attempt at integrity-checking 
would mess up the signal.  Still, integrity-checking of a single photon 
would be hard.

  I don't think she could learn much with a single photon,

I'm not so sure about that.  Remember, photon counters (which measure 
A_dagger A) are not the only measuring devices in the world.  There are 
also voltmeters (which measure A_dagger plus A).  For low-amplitude analog 
signals, the voltmeter is vastly more informative.  I have not yet cobbled 
up a believable apparatus for measuring the polarization angle of a single 
photon, but I don't think it would be terribly hard to do so.
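
To illustrate the distinction (a toy coherent-state model of my own, not a 
design for such an apparatus): a photon counter responds to |amplitude|^2 and 
is blind to the sign of a weak signal, whereas a shot-noise-limited homodyne 
"voltmeter" sees the sign on every shot, if noisily.

  import numpy as np

  rng = np.random.default_rng(0)
  alpha, shots = 0.1, 10000   # weak coherent amplitude; its sign is the "signal"

  # Photon counter: measures A_dagger A, Poisson with mean |alpha|^2.
  # Identical for +alpha and -alpha -- no phase information at all.
  counts = rng.poisson(abs(alpha) ** 2, shots)
  print("fraction of shots with a click:", (counts > 0).mean())

  # Voltmeter (homodyne): measures x = (A + A_dagger)/sqrt(2),
  # Gaussian with mean sqrt(2)*alpha and vacuum variance 1/2.
  for sign in (+1, -1):
      x = rng.normal(np.sqrt(2) * sign * alpha, np.sqrt(0.5), shots)
      print("sign %+d: fraction with x > 0 = %.3f" % (sign, (x > 0).mean()))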





Re: The Shining Cryptographers Net

2001-01-18 Thread John Denker

At 11:20 PM 1/17/01 -0800, [EMAIL PROTECTED] wrote in part:
The probability that Eve's measurement will leave the result unchanged is 
3/4, and therefore the probability that she will perturb the result is 1/4.

OK so far.  Then, for the case of two measurements,

Eve's chances of perturbing the measurement have increased from
1/4 to 3/8 by doing two measurements rather than one. Increasing the 
number of measurements to three reduces the chance of
success to 9/16, with a 7/16 chance of perturbation.

That's not the right way to analyze it.  My previous remarks on this 
subject were partly unclear and partly wrong... and in any case there is a 
better way to look at it.  So let me try again from scratch:

There is one distinguished participant;  call him Arthur because he sits at 
the head of the Round Table.  In broad outline, the procedure is:
   a) Arthur emits a photon
   b) The photon circulates around the ring C times
   c) Arthur catches the photon and publishes the final result.

It simplifies the discussion somewhat if Arthur is not one of the 
participants;  he just reaches in to insert the photon at the beginning, 
and reaches in to extract it at the end.
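
In case a concrete toy helps, here is the procedure reduced to classical 
bookkeeping (angles compose additively mod 180, and Arthur's final measurement 
follows the usual cos^2 rule; the code is my sketch, not a physics simulation):

  import math, random

  def sc_round(rotations_deg, circulations=1):
      # (a) Arthur emits a photon at 0 degrees;
      # (b) it passes every rotator `circulations` times;
      # (c) Arthur measures against a vertical polarizer.
      theta = 0.0
      for _ in range(circulations):
          for r in rotations_deg:
              theta = (theta + r) % 180.0
      p_pass = math.cos(math.radians(theta)) ** 2
      return random.random() < p_pass    # True = photon passed

  print(sc_round([0, 90, 0]))   # one participant flips: ~always False
  print(sc_round([0, 0, 0]))    # nobody flips: ~always True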

Note that each of the participants is supposed to just rotate the 
photon.  They just choose the settings on their rotators (Kerr-effect cells 
or whatever) and wait for the photon to whizz through.  They cannot do any 
additional processing without messing up the algorithm.  In particular, any 
attempt at integrity checking, no matter how well-intentioned, would damage 
the signal the same way eavesdropping would.

We can summarize what we know so far:
   1) The algorithm uses physics to more-or-less exclude passive 
attacks;  that is its strength.
   2) On the other side of the same coin, this introduces a weakness:  it 
limits the ability to detect active attacks.

Therefore, if Eve is smart, she will use an active attack.  So let's 
consider an aggressive, hyper-active attack.

Eve need not limit herself to snooping "the signal".  What she really wants 
to know is the "state of mind" of the participants, i.e. the settings of 
their rotators.  If she knows that, she knows everything.  She can, as a 
final step, synthesize a mockup of the final result and feed it to Arthur.

Eve can mount a known-plaintext attack against each rotator.  That is, she 
can send in a known photon, or if necessary multiple known photons, and see 
what comes out.
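
A sketch of what that attack looks like against a single rotator (my toy 
model, again using the cos^2 rule; Eve resolves the theta versus 180-theta 
ambiguity by probing a second time with a different input polarization):

  import math, random

  def probe_rotator(theta_secret_deg, n_photons=1000):
      # Eve sends photons polarized at 0 degrees through the rotator
      # and measures each output in the 0/90 basis; the fraction that
      # pass estimates cos^2(theta).
      p = math.cos(math.radians(theta_secret_deg)) ** 2
      passed = sum(random.random() < p for _ in range(n_photons))
      return math.degrees(math.acos(math.sqrt(passed / n_photons)))

  print(probe_rotator(37.0))    # close to 37 for large n_photons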

It would not be easy for the participants to detect such an attack 
directly.  They could defend against it to some degree by pre-arranging 
strict timing requirements on their signals... but they would need to keep 
these arrangements secret from Eve.  At this point AFAICT the whole scheme 
is in danger of losing its elegance, and perhaps of losing its raison d'etre.

Or does somebody have a good defense against this hyper-active attack?





Re: The Shining Cryptographers Net

2001-01-17 Thread John Denker

At 08:35 PM 1/16/01 -0800, [EMAIL PROTECTED] wrote:

To recap, a group of cryptographers wants to communicate anonymously,
without the sender of a message being traced.

To recap in more detail, as I understand it:
   1) The desired result is a plain broadcast message, open to the world 
(including Eve).
   2) Another desired property is that nobody can determine who in the 
group originated the message.
   3a) For the original dining cryptographers, there is a first phase where 
participants exchange random keys pairwise in private.
   3b) The point of _shining_ cryptographers is that this phase is absent.
   4) Thereafter there is a second phase wherein open messages are passed 
among the participants.  Eve can tap these messages in any way permitted by 
the laws of physics.

If this is not a correct statement of the problem, please clarify.

In the case of circulation counts greater than 1, each individual rotation
can be chosen in such a way that it is uniformly distributed between 0
and 180 degrees.

Fine.  We are using the physics of photons to do modular arithmetic, mod 
180 degrees.

Now we assume that Eve, the eavesdropper, has corrupted some of the
cryptographers and is able to make them behave improperly.  She wants
to determine who is sending a given message by making extra measurements
on the photon as it passes through the stations she has corrupted.

IMHO that's an odd threat model.  If she has corrupted the actual sender, 
the problem is trivial.  If she has corrupted all stations except the 
actual sender, the problem is trivial.  If she has corrupted M out of the N 
total stations, she can narrow down the sender to one of the N-M 
uncorrupted stations.

Based on Hal's statements below, I assume the threat model also includes 
attempts by Eve to tap the phase-2 communications between the 
participants.  I assume this was just accidentally not mentioned above.

Note that photon polarization is a two-state system.  Once a basis has
been chosen for measuring the polarization, any such measurement collapses
the photon into one of the two pure states of that basis.  Eve has the
power to choose the basis she will use for her measurement, but she cannot
avoid collapsing the photon state.

That is not a fully correct statement of the physics.  We agree that there 
exists a class of measurement operators ("strong" measurements) that behave 
as described above.  However, there also exist "weak" measurements 
which couple only weakly to the signal being measured.  They return less 
information than a strong measurement, and perturb the signal to a lesser 
degree.
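
One standard toy model of a weak measurement (my choice of formalism, not the 
only one): Kraus operators M_pm = sqrt((I +/- eps*Z)/2), which satisfy 
M_+^2 + M_-^2 = I.  At eps=1 this is an ordinary projective measurement; as 
eps goes to 0 it yields less information and less disturbance.

  import numpy as np

  def weak_measure(psi, eps, rng):
      # Weak polarization measurement in the 0/90 basis with
      # strength eps; both Kraus operators are diagonal here.
      Z = np.array([1.0, -1.0])
      M = {+1: np.sqrt((1 + eps * Z) / 2),
           -1: np.sqrt((1 - eps * Z) / 2)}
      p_plus = np.sum(np.abs(M[+1] * psi) ** 2)
      outcome = +1 if rng.random() < p_plus else -1
      post = M[outcome] * psi
      return outcome, post / np.linalg.norm(post)

  rng = np.random.default_rng(1)
  psi45 = np.array([1.0, 1.0]) / np.sqrt(2)    # 45-degree photon
  for eps in (1.0, 0.1):
      _, after = weak_measure(psi45, eps, rng)
      print(eps, "fidelity with original:", abs(np.dot(psi45, after)) ** 2)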

This is important because any real-world quantum computer would have to 
make allowances for imperfections in its own apparatus.  A skillful 
eavesdropper could conceal her actions by making them look like only a 
small increase in the natural noise.

Classical algorithms do not share the same vulnerability, since they can 
make sure that each piece of the apparatus is very reliable.

Eve's effect on the photon does not depend on where
she makes the measurement, and for simplicity we can consider the case
where she measures the photon immediately before it is measured by the
final cryptographer.

This seems to overlook the possibility of multiple weak 
measurements.  Beware, the laws of physics do not exclude this.

The first result I have is that ...

The aforementioned quibbles about the physics, and about the threat model, 
somewhat undermine the conclusions.  It may be possible to re-establish the 
main conclusions, but it appears a more detailed argument is necessary.





Re: The Shining Cryptographers Net

2001-01-16 Thread John Denker

At 10:35 PM 1/15/01 -0800, [EMAIL PROTECTED] wrote:
Here is a rough idea for a quantum-cryptography variant on the DC Net,
the Dining Cryptographers Net invented by David Chaum.

The photon starts off with vertical polarization.  Each cryptographer
manages a station through which the photon passes, which can be configured
to either rotate the photon polarization 90 degrees, or to leave it alone.

At the end, the photon polarization is measured by attempting to pass it
through a vertical polarizer.  If it passes, the photon has not been
rotated, while if it is absorbed, it was rotated.  In this way the
message bit is recovered.

Anonymity derives from the inability of an attacker to measure the photon
without destroying it, unless he can guess its state.


Hmmm.  This seems like a mistake in the physics.  If the attacker, Eve, 
knows that a photon has either vertical (0 degrees) or horizontal (90 
degrees) polarization, she can measure it at any point in the ring without 
destroying any information, and therefore without risk of detection.

In fancy physics language, these two measurements are 
"compatible".  Measurement operators can be compatible
   a) if they are completely unrelated, or
   b) if they are completely correlated.
Case (b) applies here;  they are 100% anti-correlated.  One can write the 
operator equation for projection onto the two polarization states:
 P_0 + P_90 = 1
and one can implement this in practice to high accuracy using e.g. a 
Brewster-angle beam splitter.

Quantum cryptography relies on measurements of _incompatible_ 
variables.  In this case polarization along a 45-degree axis would be an 
example of something incompatible with measurements along the vertical and 
horizontal axes.
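
A small demonstration of the compatible/incompatible distinction (a Malus-law 
toy model of mine):

  import math, random

  def measure_0_90(theta_deg):
      # Project a photon polarized at theta onto the 0/90 basis;
      # the state collapses onto the outcome.
      p0 = math.cos(math.radians(theta_deg)) ** 2
      outcome = 0 if random.random() < p0 else 90
      return outcome, outcome

  # Photons known to be 0 or 90: Eve's measurement changes nothing.
  for theta in (0, 90):
      assert measure_0_90(theta) == (theta, theta)

  # A 45-degree photon: the same measurement is a coin flip and
  # destroys the original polarization.
  print([measure_0_90(45)[0] for _ in range(10)])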

It may or may not be possible to salvage the underlying idea of "shining 
cryptographers" by using 45-degree rotations (not just 90-degree 
rotations).  Alas I don't immediately see how.





Re: audio keyboard snooping

2001-01-13 Thread John Denker

At 01:37 PM 1/12/01 -0800, Ray Dillinger mentioned:
interferometry to get the exact locations
on a keyboard of keystrokes from the sound of someone typing.

Whereupon Perry conjectured:

A quick contemplation of the wavelength of the sounds in question
would put an end to that speculation I suspect.

Also At 04:40 PM 1/12/01 -0800, Perry asked:
Remember your basic science: you can't resolve something smaller than
half a wavelength. (Well, you can, with certain techniques, but things
get seriously hairy at that point, and in general the limit is half a
wavelength.) Given this, it is unlikely that you're going to figure
out whether the g or the h key was struck. If I'm wrong here, I'd like
to hear a detailed counterargument or evidence.

So.

1) Basic assumptions:  What wavelengths should we consider?  Just because a 
Radio Shack microphone is limited to 20 kHz doesn't mean a determined 
adversary can't get a microphone with vastly more bandwidth.  The 
microphone is not a limitation.

The most fundamental limitation is the risetime of the clicks emitted by 
the keyboard.  I'm sure this varies widely from keyboard to keyboard.

2) Basic science:  A time-domain analysis (in terms of risetimes et cetera) 
is probably more illuminating than a frequency-domain analysis.

The acoustic propagation time from one key to another is 50 microsec (17.2 
mm key spacing, 345 m/s speed of sound) assuming the adversary has a 
favorable geometry.  Divide by 2 if you like as an estimate of GDoP 
(geometric dilution of precision).  Having a click with a 25 microsec 
risetime is certainly not implausible.
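
The arithmetic, for the record (numbers as above):

  key_pitch_m = 0.0172    # key spacing
  c_sound = 345.0         # speed of sound, m/s

  dt_us = key_pitch_m / c_sound * 1e6
  print("delay between adjacent keys: %.0f microseconds" % dt_us)        # ~50
  print("after GDoP factor of 2:      %.0f microseconds" % (dt_us / 2))  # ~25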

Conclusion:  A careful contemplation of the acoustics does not, in general, 
rule out this form of eavesdropping.  OTOH a careful spook could buy a 
non-clicky keyboard.

3) MORE IMPORTANTLY, the analysis seems a bit pointless, rather like 
picking the lock on the side door while the front door stands open.  That 
is, if I have a clicky keyboard, it is likely that certain keys emit 
systematically different clicks.  Certainly that is true for the keyboard I 
am using at the moment.  If we consider these clicks to be the codetext 
alphabet, then only a rather simple substitution cipher, with perhaps some 
lossy compression, stands between the adversary and my secrets (plaintexts 
as well as keys).
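
To make the point concrete (a toy of mine, with made-up click labels): once 
the clicks have been classified, the adversary is attacking a monoalphabetic 
substitution, and frequency analysis is step one.

  from collections import Counter

  click_stream = ["k17", "k4", "k9", "k17", "k2", "k17", "k4"]  # hypothetical labels
  by_frequency = [c for c, _ in Counter(click_stream).most_common()]
  english_by_frequency = list("etaoinshrdlu")   # head of the usual ranking
  print(dict(zip(by_frequency, english_by_frequency)))
  # {'k17': 'e', 'k4': 't', 'k9': 'a', 'k2': 'o'} -- a first-cut guess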





Re: recurrence relation (iterated nonlinear map)

2000-03-25 Thread John Denker

At 12:50 PM 3/25/00 -0800, Bram Cohen wrote:

Given that f(x+1) = f(x) * f(x) + c, does anybody know how to express f(x)
in closed form?

Well... That's an example of an iterated nonlinear map.  Such things have 
been extensively studied.  For some values of c, for some initial 
conditions, the iterates quickly converge to fixed points, in which case 
there is a very simple closed form :-).  For other values of c, you get 
chaos, in which case there are some simple things you can say, but probably 
not the sort of closed form you were wishing for.

Even in the chaotic regime, such maps do not make good digital 
pseudo-random number generators.  Basically they act like shift registers, 
reading out successive insignificant digits of c.  When c is a real 
honest-to-goodness real number, the behavior can be quite 
interesting;  when c is a float I suspect it's much less interesting.  But 
I'm not an expert.  If you really want to know, there are _serious_ experts 
on this topic, and dozens of scholarly books.
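
A few lines illustrate both regimes (the values of c are my choices):

  def tail(c, x0=0.0, n=40):
      # iterate x -> x*x + c and return the last iterate
      x = x0
      for _ in range(n):
          x = x * x + c
      return x

  print(tail(-0.5))    # converges to the fixed point (1 - sqrt(3))/2 ~ -0.366
  print(tail(-1.9))    # chaotic regime: stays bounded but never settles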

Cheers --- jsd





Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-17 Thread John Denker

Hi Ted --

At 11:41 PM 8/14/99 -0400, you wrote: 
 
standard Mathematician's style --- encrypted by formulae 
guaranteed to make it opaque to all but those who are trained in the
peculiar style of Mathematics' papers. 
 ...
someone tried to pursuade me to use Maurer's test
...
too memory intensive and too CPU intensive

You are very wise to be skeptical of mathematical mumbo-jumbo.

You mentioned questions about efficiency, but I would like to call into
question whether the entropy estimate provided by Maurer's Universal
Statistical Test (MUST) would be suitable for our purposes, even if it
could be computed for free.

Don't be fooled by the Universal name.  If you looked it up in a real-world
dictionary, you might conclude that Universal means all-purpose or
general-purpose.  But if you look it up in a mathematical dictionary, you
will find that a Universal probability distribution has the property that
if we compare it to some other distribution, it is not lower by more than
some constant factor.  Alas, the "constant" depends on what two
distributions are being compared, and there is no uniform bound on it!  Oooops!

In the language of entropy, a Universal entropy-estimator overestimates the
entropy by no more than a constant -- but beware, there is no uniform upper
bound on the constant.

To illustrate this point, I have invented Denker's Universal Statistical
Test (DUST) which I hereby disclose and place in the public domain:
According to DUST, the entropy of a string is equal to its length.  That's
it!  Now you may not *like* this test, and you may quite rightly decide
that it is not suitable for your purposes, but my point is that according
to the mathematical definitions, DUST is just as Universal as MUST.
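
For the avoidance of doubt, here is DUST in full, together with its failure
mode:

  def dust(bits):
      # Denker's Universal Statistical Test, exactly as disclosed
      # above: the entropy estimate is the length of the string.
      return len(bits)

  # On an all-zeros source the true entropy is ~0, so the
  # overestimate is unbounded -- "Universal" notwithstanding.
  print(dust("0" * 1000000))    # reports 1000000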

There are profound theoretical reasons to believe it is impossible to
calculate a useful lower bound on the entropy of a string without knowing
how it was prepared.  There simply *cannot* be an all-purpose statistical test.

If you were to make the mistake of treating a Universal estimator as an
all-purpose estimator, and then applying it in a situation where the input
might (in whole or in part) be coming from an adversary, you would lay
yourself open to a chosen-seed attack (analogous to a chosen-plaintext attack).

On the other side of the same coin, if you *do* know something about how
the input was prepared, there obviously are things you can do to improve
your estimate of its entropy.  For example, in the early stages of a
hardware RNG, you could use two input channels, sending the
differential-mode signal to the next stage, and using the common-mode
signal only for error checking.  This is a good way to get rid of a certain
type of interference, and could be quite useful in the appropriate
circumstances.  Returning to the ugly side of the coin, you can see that a
small change in the way the inputs were prepared would make this
differencing scheme worthless, possibly leading to wild overestimates of
the entropy.
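
A sketch of the two-channel trick (synthetic numbers, my toy model):

  import numpy as np

  rng = np.random.default_rng(2)
  n = 100000
  noise_a = rng.normal(0, 1.0, n)              # independent noise per channel
  noise_b = rng.normal(0, 1.0, n)
  hum = 5.0 * np.sin(np.arange(n) * 0.01)      # common-mode interference

  ch1, ch2 = noise_a + hum, noise_b + hum
  differential = ch1 - ch2    # interference cancels; send this onward
  common = ch1 + ch2          # dominated by the hum; use for error checking

  print("std(differential):", differential.std())   # ~ sqrt(2): noise only
  print("std(common):      ", common.std())         # ~ hum amplitude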

BOTTOM LINE:  
 *) Incorporating an all-purpose entropy-estimator into /dev/random is
impossible.
 *) Incorporating something that *pretends* to be an all-purpose estimator
is a Really Bad Idea.
 *) The present design appears to be the only sound design:  whoever
provides the inputs is responsible for providing the estimate of the
entropy thereof.  If no estimate is provided, zero entropy is attributed.

Cheers --- jsd




Re: linux-ipsec: /dev/random

1999-08-04 Thread John Denker

At 10:08 AM 8/4/99 -0400, D. Hugh Redelmeier wrote:

I think that this description reflects an inappropriate understanding
of entropy.  Entropy is in some sense spread throughout the whole
output of /dev/urandom.  You don't use entropy up, you spread it over
more and more bytes of output.  This view, of course, depends on
trusting the hashing/mixing to do what it is supposed to do.

What matters here is not your understanding or my understanding of what
entropy is.  What matters to me is /dev/random's opinion of how much
entropy it has on hand.  Reads from /dev/urandom deplete this quantity,
byte for byte, so that heavy demands on /dev/urandom cause blockage of any
processes that make any use of /dev/random.  I renew my assertion that this
constitutes, shall we say, an opportunity for improvement.




Re: linux-ipsec: /dev/random

1999-08-03 Thread John Denker

At 10:09 AM 8/2/99 -0400, Paul Koning wrote:

1. Estimating entropy.  Yes, that's the hard one.  It's orthogonal
from everything else.  /dev/random has a fairly simple approach;
Yarrow is more complex.

It's not clear which is better.  If there's reason to worry about the
one in /dev/random, a good solution would be to include the one from
Yarrow and use the smaller of the two answers.

Hard?  That's much worse than hard.  In general, it's impossible in
principle to look at a bit stream and determine any lower bound on its
entropy.  Consider the bitstream produced by a light encoding of /dev/zero.
 If person "A" knows the encoding, the conditional entropy is zero.  If
person "B" hasn't yet guessed the encoding, the conditional entropy is large.

Similar remarks apply to physical entropy:  I can prepare a physical system
where almost any observer would measure lots of entropy, whereas someone
who knew how the system was prepared could easily return it to a state with
10**23 bits less apparent entropy.  Example: spin echoes.

2. Pool size.  /dev/random has a fairly small pool normally but can be 
made to use a bigger one.  Yarrow argues that it makes no sense to use 
a pool larger than N bits if an N bit mixing function is used, so it
uses a 160 bit pool given that it uses SHA-1.  I can see that this
argument makes sense.  (That suggests that the notion of increasing
the /dev/random pool size is not really useful.)

Constructive suggestion:  given an RNG that we think makes good use of an N
bit pool, just instantiate C copies thereof, and combine their outputs.
ISTM this should produce something with N*C bits of useful state.
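
A sketch of the parallel-instances idea (SHA-1 chosen only to match the Yarrow
discussion; this is an illustration, not a vetted design):

  import hashlib

  class ParallelPools:
      # C independent 160-bit pools; outputs are XORed together.
      # With independent seeds the combined state is roughly N*C bits.
      def __init__(self, seeds):
          self.pools = [hashlib.sha1(s).digest() for s in seeds]

      def output(self):
          self.pools = [hashlib.sha1(p).digest() for p in self.pools]
          out = bytes(20)
          for p in self.pools:
              out = bytes(a ^ b for a, b in zip(out, p))
          return out

  gen = ParallelPools([b"seed-one", b"seed-two", b"seed-three"])
  print(gen.output().hex())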

5. "Catastrophic reseeding" to recover from state compromise.

So while this attack is a bit of a stretch, defending against it is
really easy.  It's worth doing.

I agree.  As you and Sandy pointed out, one could tack this technology onto
/dev/urandom and get rid of one of the two main criticisms.

And could we please call it "quantized reseeding"?  A catastrophe is
usually a bad thing.

6. Inadequate entropy sources for certain classes of box.

If the TRNG has zero output, all you can do is implement a good PRNG and
give it a large, good initial seed "at the factory".

If the TRNG has a very small output, all you can do is use it wisely.
Quantized reseeding appears to be the state of the art.

--

There's one thing that hasn't been mentioned in any of the "summaries", so
I'll repeat it:  the existing /dev/urandom has the property that it uses up
*all* the TRNG bits from /dev/random before it begins to act like a PRNG.
Although this is a non-issue if only one application is consuming random
bits, it is a Bad Thing if one application (that only needs a PRNG) is
trying to coexist with another application (that really needs a TRNG).

This, too, is relatively easy to fix, but it needs fixing.

I see no valid argument that there is anything major wrong with the
current generator, nor that replacing it with Yarrow would be a good
thing at all.  

I agree, depending on what you mean by "major".  I see three "areas for
improvement" 
  a) don't reseed more often than necessary, so it doesn't suck the TRNG dry,
  b) when it does reseed, it should use quantized reseeding, and
  c) get around the limitation of the width of the mixing function, perhaps
using the parallel-instances trick mentioned above, or otherwise.




Re: linux-ipsec: /dev/random

1999-08-03 Thread John Denker

At 01:27 PM 8/2/99 -0400, Paul Koning wrote:

we weren't talking about "in principle" or "in general".
Sure, given an unspecified process of unknown (to me) properties I
cannot make sensible statements about its entropy.  That is true but
it isn't relevant to the discussion.

Instead, we're talking about systems where we have some understanding
of the properties involved.

For example, to pick a physical process, suppose I had a noise
generator (resistor), shielding of known properties or at least
bounded effectiveness, biases ditto, I would say I can then come up
with a reasonable entropy estimate, especially if I'm quite
conservative.  This is what people typically do if they build
"hardware random number generators".  They certainly need to be
treated with care and analyzed cautiously, but it definitely is a
thing that can be done.

I agree with that.  Indeed I actually attached a homebrew TRNG to my
server, pretty much as you described.

Sure, you can do cat /dev/zero | md5sum > /dev/random, but I don't
believe anyone is proposing that as a way of feeding entropy into it.

That's where we might slightly disagree :-) ... I've seen some pretty
questionable proposals ... but that's not the point.

The point is that there are a lot of customers out there who aren't ready
to run out and acquire the well-designed hardware TRNG that you alluded to.
So we need to think carefully about the gray area between the
strong-but-really-expensive solution and the cheap-but-really-lame
proposals.  The gray area is big and important.

Cheers --- jsd



Re: linux-ipsec: /dev/random

1999-08-03 Thread John Denker

At 01:50 PM 8/2/99 -0400, Paul Koning wrote:

I only remember a few proposals (2 or 3?) and they didn't seem to be
[unduly weak].  Or do you feel that what I've proposed is this
weak?  If so, why?  I've seen comments that say "be careful" but I
don't remember any comments suggesting that what I proposed is
completely bogus...

We can waste lots of cycles having cosmic discussions, but that's not helping
matters.  What we need is a minimum of ONE decent quality additional
entropy source, one that works for diskless IPSEC boxes.

OK, I see four proposals on the table.  (If I've missed something, please
accept my apologies and send a reminder.)

1) Hardware TRNG
2) Network timing
3) Deposits from a "randomness server"
4) Just rely on PRNG with no reseeding.

Discussion:

1) Suppose we wanted to deploy a jillion of these things.  Suppose they
have hardware TRNGs at an incremental cost of $10.00 apiece.  That comes to
ten jillion dollars, and I don't want to pay that unless I have to.

2) Network timing may be subject to observation and possibly manipulation
by the attacker.  My real-time clocks are pretty coarse (10ms resolution).
This subthread started with a discussion of software to estimate the
entropy of a bitstream, and I submit that this attack scenario is a perfect
example of a situation where no software on earth can provide a useful
lower bound on the entropy of the offered bit-stream.

3) Deposits from a server are conspicuously ineffective for terminating a
continuation attack.  If we can't do better than that, we might as well go
for option (4) and not even pretend we are defending against continuation
attacks.

4) I don't think my customers would be very happy with a system that could
not recover from a transient read-only compromise.


So... What have I missed?  What's your best proposal?

Thanx --- jsd




Re: linux-ipsec: Re: TRNG, PRNG

1999-07-28 Thread John Denker

At 08:02 PM 7/22/99 +0200, Anonymous wrote:
 That is:
   1a') When there is entropy in the pool, it [/dev/urandom]
 gobbles it all up before
 acting like a PRNG.  Leverage factor=1.  This causes other applications to
 stall if they need to read /dev/random.

This does not seem to be a big problem, and in fact is arguably the right
behavior.

What it means is, /dev/urandom provides the best quality random numbers
possible, given the entropy available.  

Hmmm.  People usually take the "minimax" approach to security analysis;
that is, we design our defenses assuming the opponents make their best
move.  Therefore I don't understand the argument for using "best available"
bits.

ISTM that if a certain quality X+ is required, it should be required
always, unless proven otherwise.  To say it the other way, if a certain
quality X- suffices sometimes it should suffice always, unless proven
otherwise.  

In my case X- is the unreseeded PRNG behavior of /dev/urandom.  The
designers of linux-ipsec have evidently decided this is good enough,
because that's where they get key material.

In my application, many keys will be generated during conditions where the
TRNG has been totally depleted.  I must assume that attackers will know
this, and will be able to focus their attacks on those keys.

If you are telling me that the unreseeded PRNG is not good enough, then I
have deployed an insecure system.  That would be bad, and there would be no
way to fix it short of a hardware TRNG.

OTOH if the unreseeded PRNG *is* good enough, then it is wastefully selfish
for it to gobble up all the TRNG bits.  It is improper to assume that the
application that is gobbling up all the PRNG bits is the only application
running on this machine.  In my case there are other applications for which
one could make a very good argument that they need TRNG bits under
conditions when IPsec does not.

Also...


At 12:40 PM 7/22/99 -0700, bram wrote in part:

 In particular, consider the following reseeding schedule:
   a) Every N minutes...
   b) Every Z bits of PRNG output... 
   c) As soon as a quantum of TRNG material is available ...
 ... whichever comes *LAST*, and where N and Z are chosen to ensure a good
 leverage ratio.

a) and b) don't help much - the true answer is c).

One could argue (a) and (b) *do* add something -- they address the problem
that started this thread, namely the seemingly-needless depletion of the TRNG.

Right?
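
In code, the quoted schedule is just the conjunction of the three conditions
(a sketch; the parameter names are mine):

  import time

  class ReseedPolicy:
      # Reseed only when N minutes have elapsed AND Z bits of PRNG
      # output have been drawn AND a quantum of TRNG material is on
      # hand -- i.e. whichever condition comes LAST.
      def __init__(self, n_minutes, z_bits, quantum_bits):
          self.n_sec = n_minutes * 60
          self.z_bits = z_bits
          self.quantum = quantum_bits
          self.last_reseed = time.monotonic()
          self.bits_out = 0

      def note_output(self, nbits):
          self.bits_out += nbits

      def should_reseed(self, trng_bits_available):
          return (time.monotonic() - self.last_reseed >= self.n_sec
                  and self.bits_out >= self.z_bits
                  and trng_bits_available >= self.quantum)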




Re: House committee ditches SAFE for law enforcement version

1999-07-26 Thread John Denker

At 07:31 AM 7/26/99 -0400, Bill Sommerfeld wrote:

".. for any Speech or Debate in either House, they shall not be
questioned in any other place."

But then again, i'm not a lawyer, and I'm also not sure how this
provision has been interpreted in the past..

IANAL but as you can imagine, members of congress take their privileges very
seriously.  No court or executive agency would dare sanction a member for
something said in speech or debate, and the privilege has even been
extended to members' aides, e.g. in the Pentagon Papers case:
  http://www.law.vill.edu/Fed-Ct/Supreme/Flite/opinions/408US606.htm

Leakage from the floor (or peremptory declassification, as Senator Gravel
did with the Pentagon Papers) has been a sore point in the past.  It makes
agencies very leery of giving a "secret briefing" to members of congress.
But congress wants, and sometimes requires, such briefings.

The result is that each house has its own rules against disclosing secret
information.  I couldn't easily find a copy of the rules, but I assume that a
member who broke the rules could be censured or expelled.  OTOH in a case
where there was a legitimate difference of opinion as to whether something
*should* have been classified, the member would have a very strong defense.




depleting the random number generator

1999-07-17 Thread John Denker

Hi Folks --

I have a question about various scenarios for an attack against IPsec by way 
of the random number generator.  The people on the linux-ipsec mailing list 
suggested I bring it up here.

Specifically:  consider a central machine (call it Whitney) that is 
implementing many IPsec tunnels.  For me, this is a highly non-hypothetical 
situation.

Step 1) Suppose some as-yet unknown person (the "applicant") contacts 
Whitney and applies for an IPsec tunnel to be set up.  The good part is that 
at some point Whitney tries to authenticate the Diffie-Hellman exchange (in 
conformance with RFC2409 section 5) and fails, because this applicant is an 
attacker and is not on our list of people to whom we provide service.  The 
bad part is that Whitney has already gobbled up quite a few bits of entropy 
from /dev/random before the slightest bit of authentication is attempted.

Step 2) The attacker endlessly iterates step 1.  This is easy.  AFAIK there 
is no useful limit on how often new applications can be made.  This quickly 
exhausts the entropy pool on Whitney.

Step 3a) If Whitney is getting key material from /dev/random, the result is 
a denial of service.  All the IPsec tunnels will time out and will be 
replaced slowly or not at all, because of the entropy shortage.

Step 3b) OTOH if Whitney is getting its key material from /dev/urandom 
(that's urandom with a U), then we don't have a DoS attack, but instead we 
have a situation where the attacker can mount a low-entropy attack against 
any or all of the other tunnels.  Yuuuck.
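
Back-of-the-envelope model of steps 1 through 3 (all the rates below are
assumptions of mine, purely for illustration):

  refill = 100     # bits/s of gathered entropy (assumed)
  cost = 512       # bits consumed per bogus application (assumed)
  rate = 10        # applications/s the attacker sustains (assumed)
  pool = pool_max = 4096

  for second in range(60):
      pool = min(pool + refill, pool_max)
      pool = max(pool - rate * cost, 0)

  print("pool after one minute of attack:", pool, "bits")   # pinned at 0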

=

There are variations on this theme.  For instance, note that sshd is a 
prodigious consumer of random bits.  Therefore if your IPsec machine is 
running sshd, bad guys have another way of mounting attacks against your 
RNG.  They don't even need to know a valid ssh key;  failed ssh attempts 
suck up plenty of entropy.

==

I certainly hope these issues have been analyzed and brought under control. 
Can somebody lend me a clue as to the status, and/or where I might read more 
about it?  If this list is not the optimal forum for discussing such
things, could somebody point me to a better one?

Thanx --- jsd