Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-09 Thread Ian G
William Allen Simpson wrote:
> There are already other worthy comments in the thread(s).

This is a great post.  One can't stress enough
that programmers need programming guidance,
not arcane information theoretic concepts.
> We are using
> computational devices, and therefore computational infeasibility is the
> standard that we must meet.  We _NEED_ unpredictability rather than
> pure entropy.

By this, do you mean that /dev/*random should deliver
unpredictability, and /dev/entropy should deliver ...
pure entropy?
> So, here are my handy practical guidelines:
> (1) As Metzger so wisely points out, the implementations of /dev/random,
> /dev/urandom, etc. require careful auditing.  Folks have a tendency to
> improve things over time, without a firm understanding of the
> underlying requirements.

Right, but in the big picture, this is one of those
frequently omitted steps.  Why?  Coders don't have
time to acquire the knowledge or to incorporate
all the theory of RNGs, and as so much of today's
software is based on open source, it is becoming the
baseline that no theoretical foundation is required
in order to do that work.  Whereas before, companies
could or would make a pretence at such a foundation,
today it is acceptable to say that you've read the
Yarrow paper and are therefore qualified.
I don't think this is a bad thing; I'd rather have a
crappy /dev/random than none at all.  But if we
are to improve the auditing, etc., what we would
need is information on just _what that means_.
E.g., a sort of webtrust-CA list of steps to take
in checking that the implementation meets the
desiderata.
> (2) The non-blocking nature of /dev/urandom is misunderstood.  In fact,
> /dev/urandom should block while it doesn't have enough entropy to reach
> its secure state.  Once it reaches that state, there is no future need
> to block.

If that's the definition that we like then we should
create that definition, get it written in stone, and
start clubbing people with it (*).
> (2A) Of course, periodically refreshing the secure state is a good
> thing, to overcome any possible deficiencies or cycles in the PRNG.

As long as this doesn't affect definition (2), it
matters not.  At the level of the definition, that is;
this note belongs in the implementation notes,
as do (2B) and (2C).
> (2B) I like Yarrow.  I was lucky enough to be there when it was first
> presented.  I'm biased, as I'd come to many of the same conclusions,
> and the strong rationale confirmed my own earlier ad hoc designs.
>
> (2C) Unfortunately, Ted Ts'o basically announced to this list and
> others that he didn't like Yarrow (Sun, 15 Aug 1999 23:46:19 -0400).  Of
> course, since Ted was also a proponent of 40-bit DES keying, that depth
> of analysis leads me to distrust anything else he does.  I don't know
> whether the Linux implementation of /dev/{u}random was ever fixed.

( LOL... Being a proponent of 40-bit myself, I wouldn't
be so distrusting.  I'd hope he was just pointing out
that 40 bits is way stronger than the vast majority
of traffic out there; the traffic we talk about here
is buried in the noise level, in terms of real effects
on security, simply because it's so rare. )
> (3) User programs (and virtually all system programs) should use
> /dev/urandom, or its various equivalents.
> (4) Communications programs should NEVER access /dev/random.  Leaking
> known bits from /dev/random might compromise other internal state.
> Indeed, /dev/random should probably have been named /dev/entropy in the
> first place, and never used other than by entropy analysis programs in
> a research context.

I certainly agree that overloading the term 'random'
has caused a lot of confusion.  And I think it's an
excellent idea to abandon hope in that area and
concentrate on terms that are useful.
If we can define an entropy device and present
that definition, then there is a chance that the
implementors of devices in Unixen will follow that
lead.  But entropy needs to be strongly defined in
practical programming terms, along with random
and potentially urandom, with care to eliminate
such crypto-academic notions as information-theoretic
arguments and entropy reduction.

> (4A) Programs must be audited to ensure that they do not use
> /dev/random improperly.
> (4B) Accesses to /dev/random should be logged.
I'm confused by this aggressive containment of the
entropy/random device.  I'm assuming here that
/dev/random is the entropy device (better renamed
as /dev/entropy) and /dev/urandom is the really good PRNG
which doesn't block once it has reached a good state.
If I take out 1000 bits from the *entropy* device, what
difference does it make to the state?  It has no state,
other than a collection of unused entropy bits, which
aren't really state, because there is no relationship
from one bit to any other bit.  By definition.  They get
depleted, and more get collected, which by definition
are unrelated.
Why then restrict it to non-communications usages?
What does it matter if an SSH daemon leaks bits used
in its *own* key generation if those bits can never be
Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-09 Thread Taral
On Sat, Jan 08, 2005 at 10:46:17AM +0800, Enzo Michelangeli wrote:
 But that was precisely my initial position: that the insight on the
 internal state (which I saw, by definition, as the loss of entropy by the
 generator) that we gain from one bit of output is much smaller than one
 full bit. 

I think this last bit is untrue. You will find that the expected number
of states of the PRNG after extracting one bit of randomness is half of
the number of states you had before, thus resulting in one bit of
entropy loss.
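As a rough illustrative sketch of the halving effect: model a PRNG with a
16-bit state and a 1-bit output produced by a made-up mixing function
standing in for a real hash (illustration only, not any real design).
Enumerating the states consistent with a single observed output bit shows
roughly half of them surviving, i.e. about one bit of entropy lost,
assuming an attacker who can enumerate states:

/* Toy model: 16-bit PRNG state, 1-bit output = low bit of a hash of the
 * state.  The mixing function is a stand-in, NOT a real cryptographic hash.
 * Counting the states consistent with one observed output bit shows that
 * roughly half survive, i.e. about one bit of entropy is lost. */
#include <stdio.h>
#include <stdint.h>

static uint32_t mix(uint16_t s)            /* illustrative mixer only */
{
    uint32_t x = 2166136261u;
    x = (x ^ (s & 0xff)) * 16777619u;
    x = (x ^ (s >> 8))   * 16777619u;
    return x ^ (x >> 13);
}

int main(void)
{
    unsigned long consistent = 0;
    int observed = 1;                      /* suppose we saw output bit 1 */
    for (uint32_t s = 0; s < 0x10000; s++)
        if ((int)(mix((uint16_t)s) & 1) == observed)
            consistent++;
    printf("states consistent with the observed bit: %lu of 65536\n",
           consistent);
    return 0;
}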

-- 
Taral [EMAIL PROTECTED]
This message is digitally signed. Please PGP encrypt mail to me.
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?




Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-08 Thread Enzo Michelangeli
- Original Message - 
From: [EMAIL PROTECTED]
To: cryptography@metzdowd.com
Sent: Friday, January 07, 2005 9:30 AM
Subject: Re: entropy depletion (was: SSL/TLS passive sniffing)

  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] On Behalf Of Enzo
  Michelangeli
  Sent: Tuesday, January 04, 2005 7:50 PM
 
  This entropy depletion issue keeps coming up every now and
  then, but I still don't understand how it is supposed to
  happen. If the PRNG uses a really non-invertible algorithm
  (or one invertible only with intractable complexity), its
  output gives no insight whatsoever on its internal state.
 
 I see much misunderstanding of entropy depletion and many misstatements
 because of it.

 It is true you don't know what the internal state is but the number of
 possible internal states tends to reduce with every update of the
 internal state. See Random Mapping Statistics by Philippe Flajolet and
 Andrew M. Odlyzko (Proceedings of the workshop on the theory and
 application of cryptographic techniques on Advances in cryptology,
 Houthalen, Belgium, Pages: 329 - 354, year 1990) for a thorough
 discussion.
[...]
 In the real world, our PRNG state update functions are complex enough
 that we don't know if they are well behaved. Nobody knows how many
 cycles exist in a PRNG state update function using, for example, SHA-1.
 You run your PRNG long enough and you may actually hit a state that,
 when updated, maps onto itself. When this occurs your PRNG will start
 producing the same bits over and over again. It would be worse if you
 hit a cycle of 10,000 or so because you may never realize it.

 I don't know of any work on how not-so-well-behaved PRNG state update
 functions lose entropy.

But that was precisely my initial position: that the insight on the
internal state (which I saw, by definition, as the loss of entropy by the
generator) that we gain from one bit of output is much smaller than one
full bit. However, I've been convinced by the argument brought by John and
others - thanks guys - that we should not mix the concept of entropy
with issues of computational hardness.

That said, however, I wonder if we shouldn't focus more, for practical
purposes, on the replacement concept offered by John of usable
randomness, with a formal definition allowing its calculation in concrete
cases (so that we may assess the risk deriving from using a seeded PRNG
rather than a pure RNG in a more quantitative way). The paper you mention
appears to go in that direction.

Enzo




Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-07 Thread Michael_Heyman
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Enzo 
 Michelangeli
 Sent: Tuesday, January 04, 2005 7:50 PM
 
 This entropy depletion issue keeps coming up every now and 
 then, but I still don't understand how it is supposed to 
 happen. If the PRNG uses a really non-invertible algorithm 
 (or one invertible only with intractable complexity), its 
 output gives no insight whatsoever on its internal state.

I see much misunderstanding of entropy depletion and many misstatements
because of it.

It is true you don't know what the internal state is but the number of
possible internal states tends to reduce with every update of the
internal state. See Random Mapping Statistics by Philippe Flajolet and
Andrew M. Odlyzko (Proceedings of the workshop on the theory and
application of cryptographic techniques on Advances in cryptology,
Houthalen, Belgium, Pages: 329 - 354, year 1990) for a thorough
discussion. 

The gist is that a well-behaved state update function for a PRNG will
have one very long cycle. This cycle will be shorter than the number of
possible values that the state can hold. States not on the cycle are on
branches of states that eventually land on the cycle. Flajolet and
Odlyzko go on to show that the expected cycle length for a 1000 bit
state will be around 2^500 iterations.

So, you start your PRNG by filling the state with 1000 bits of real
entropy. You have 2^1000 possible states. You use your PRNG and update
the state. Now, there are a certain number of states that the PRNG
cannot be in. After one state update, the PRNG cannot be in the states
at the ends of the chains of states branched off from the aforementioned
cycle. This means that, after one state update, you have slightly less
than 1000 bits of entropy. When you update the state again, you now have
more states that the PRNG cannot be in, thus reducing your entropy
again. Every time you use your PRNG, you reduce your entropy in this way
and you keep on doing so in an asymptotic way until, after many many
iterations, you are close enough to 500 bits that you don't care
anymore.
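The effect is easy to see numerically with a toy model.  The sketch below
(C; the update function is a made-up, non-invertible mixer standing in for
a hash like SHA-1, so it is illustrative only) starts with all 2^16
possible 16-bit states and counts how many remain reachable after repeated
updates; the count drops sharply on the first few updates and keeps
shrinking slowly thereafter, mirroring the asymptotic loss described above.

/* Sketch of the state-loss effect for a random mapping: iterate a toy,
 * non-invertible update function over all 2^16 states and count how many
 * distinct states remain reachable after each update.  The update function
 * is a stand-in for a real hash such as SHA-1, chosen only because it
 * behaves like a random mapping on a small state space. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define N 65536u                            /* 2^16 states */

static uint16_t update(uint16_t s)          /* toy random-mapping-like step */
{
    uint32_t x = 2166136261u;
    x = (x ^ (s & 0xff)) * 16777619u;
    x = (x ^ (s >> 8))   * 16777619u;
    x ^= x >> 13;
    return (uint16_t)(x ^ (x >> 16));       /* truncation makes it lossy */
}

int main(void)
{
    static unsigned char cur[N], nxt[N];
    memset(cur, 1, sizeof cur);             /* start: all states possible */

    for (unsigned iter = 1; iter <= 64; iter++) {
        unsigned long count = 0;
        memset(nxt, 0, sizeof nxt);
        for (uint32_t s = 0; s < N; s++)
            if (cur[s])
                nxt[update((uint16_t)s)] = 1;
        for (uint32_t s = 0; s < N; s++)
            count += nxt[s];
        memcpy(cur, nxt, sizeof cur);
        if ((iter & (iter - 1)) == 0)       /* report at powers of two */
            printf("after %2u updates: %5lu reachable states\n", iter, count);
    }
    return 0;
}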

In the real world, our PRNG state update functions are complex enough
that we don't know if they are well behaved. Nobody knows how many
cycles exist in a PRNG state update function using, for example, SHA-1.
You run your PRNG long enough and you may actually hit a state that,
when updated, maps onto itself. When this occurs your PRNG will start
producing the same bits over and over again. It would be worse if you
hit a cycle of 10,000 or so because you may never realize it.

I don't know of any work on how not-so-well-behaved PRNG state update
functions lose entropy. I figure the state update functions we as a
community use in what we consider to be well designed PRNGs probably
have multiple long cycles and maybe a few scary short cycles that are so
unlikely that nobody has hit them. I don't even know what multiple
cycles means for entropy.

Because of the lack of knowledge, cryptographic PRNGs have more state
than they probably need just to assure enough entropy - at least that is
one thing I look for when looking at cryptographic PRNGs.

-Michael Heyman



Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-07 Thread Taral
On Thu, Jan 06, 2005 at 04:35:05PM +0800, Enzo Michelangeli wrote:
 By how much exactly? I'd say, _under the hypothesis that the one-way
 function can't be broken and other attacks fail_, exactly zero; in the
 real world, maybe a little more.

Unfortunately for your analysis, *entropy* assumes that there is
infinite compute capacity. From an information-theoretic point of view,
there is NO SUCH THING as a perfect one-way function.

-- 
Taral [EMAIL PROTECTED]
This message is digitally signed. Please PGP encrypt mail to me.
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?




Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-07 Thread Jerrold Leichter
|  You're letting your intuition about usable randomness run roughshod
|  over the formal definition of entropy.  Taking bits out of the PRNG
|  *does* reduce its entropy.
| 
| By how much exactly? I'd say, _under the hypothesis that the one-way
| function can't be broken and other attacks fail_, exactly zero; in the
| real world, maybe a little more. But in
| /usr/src/linux/drivers/char/random.c I see that the extract_entropy()
| function, directly called by the exported kernel interface
| get_random_bytes(), states:
| 
| if (r->entropy_count / 8 >= nbytes)
| r->entropy_count -= nbytes*8;
| else
| r->entropy_count = 0;
| 
| ...which appears to assume that the pool's entropy (the upper bound of
| which is POOLBITS, defined equal to 4096) drops by a figure equal to the
| number of bits that are extracted (nbytes*8). This would only make sense
| if those bits weren't passed through one-way hashing.
The argument you are making is that because the one-way function isn't
reversible, generating values from the pool using it doesn't decrease its
computational entropy.  (Its mathematical entropy is certainly depleted,
since that doesn't involve computational difficulty.  But we'll grant that
that doesn't matter.)

The problem with this argument is that it gives you no information about the
unpredictability of the random numbers generated.  Here's an algorithm based
on your argument:

Pool: bits[512]
initializePool()
{   Fill Pool with 512 random bits; }

getRandom() : bits[160]
{   return(SHA(Pool));
}

By your argument, seeing the result of a call to getRandom() does not reduce
the effective entropy of the pool at all; it remains random.  We certainly
believe that applying SHA to a random collection of bits produces a random
value.  So, indeed, the result of getRandom() is ... random.  It's also
constant.

Granted, no one would implement a random number generator this way.  But
*why*?  What is it you have to change to make this correct?  Why?  Can you
prove it?  Just saying you have to change the pool after every call
won't work:

getRandom() : bits[160]
{   Rotate Pool left by 1 bit;
return(SHA(Pool));
}

This (seems to) generate 512 random values, then repeats.  Just what *is*
good enough?
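For what it's worth, one common answer -- sketched below as a toy, with a
deliberately weak stand-in for SHA, so it is illustrative only and not
anyone's actual design -- is to hash a counter (or other per-call value)
together with the pool so that successive outputs differ, and to advance
the pool with a separate one-way step after each call so that a later
compromise of the pool does not expose earlier outputs.  Whether that is
provably good enough is exactly the question being asked.

/* Toy sketch of the construction above (NOT a real design, and toy_hash()
 * is NOT a real hash): output = H(pool || counter), and the pool is
 * advanced by a separate one-way step after each call. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define POOL_BYTES 64                       /* 512-bit pool, as in the example */

static uint32_t toy_hash(const unsigned char *p, size_t len)
{
    uint32_t h = 2166136261u;               /* FNV-1a, illustration only */
    while (len--)
        h = (h ^ *p++) * 16777619u;
    return h;
}

struct prng { unsigned char pool[POOL_BYTES]; uint64_t counter; };

static uint32_t get_random(struct prng *g)
{
    unsigned char buf[POOL_BYTES + 9];
    uint32_t out, w;

    /* output = H(pool || counter || tag 0): distinct even for a fixed pool */
    memcpy(buf, g->pool, POOL_BYTES);
    memcpy(buf + POOL_BYTES, &g->counter, 8);
    buf[POOL_BYTES + 8] = 0x00;
    out = toy_hash(buf, sizeof buf);
    g->counter++;

    /* pool = H(pool || counter || tag i), word by word: a one-way step,
     * so learning the new pool does not reveal previous outputs */
    for (unsigned i = 0; i < POOL_BYTES; i += 4) {
        buf[POOL_BYTES + 8] = (unsigned char)(1 + i);
        w = toy_hash(buf, sizeof buf);
        memcpy(g->pool + i, &w, 4);
    }
    return out;
}

int main(void)
{
    struct prng g = { {0}, 0 };             /* seed pool with real entropy in practice */
    for (int i = 0; i < 4; i++)
        printf("%08x\n", (unsigned)get_random(&g));
    return 0;
}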
-- Jerry



Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-07 Thread John Kelsey
From: John Denker [EMAIL PROTECTED]
Sent: Jan 5, 2005 2:06 PM
To: Enzo Michelangeli [EMAIL PROTECTED]
Cc: cryptography@metzdowd.com
Subject: Re: entropy depletion (was: SSL/TLS passive sniffing)

...
You're letting your intuition about usable randomness run roughshod over
the formal definition of entropy.  Taking bits out of the PRNG *does*
reduce its entropy.  This may not (and in many applications does not)
reduce its ability to produce useful randomness.

Right.  The critical question is whether the PRNG part gets to a secure state, 
which basically means a state the attacker can't guess in the amount of work 
he's able to do.   If the PRNG gets to a secure state before generating any 
output, then assuming the PRNG algorithm is secure, the outputs are 
indistinguishable from random.  

The discussion of how much fresh entropy is coming in is sometimes a bit 
misleading.  If you shove 64 bits of entropy in, then generate a 128-bit 
output, then shove another 64 bits of entropy in, you don't end up in a secure 
state, because an attacker can guess your first 64 bits of entropy from your 
first output.  What matters is how much entropy is shoved in between the time 
when the PRNG is in a known state, and the time when it's used to generate an 
output.  
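A sketch of that policy (toy code, not any particular system's design):
quarantine incoming entropy in a side pool with a conservative counter, and
only fold it into the generator state once the counter crosses a threshold,
so an attacker who knows the old state has to guess the whole batch at once
rather than 64 bits at a time.

/* Sketch of "catastrophic" reseeding: entropy is quarantined in a side pool
 * and only folded into the generator state once enough has accumulated.
 * Purely illustrative; the threshold and xor-fold stand in for a real design. */
#include <string.h>

#define STATE_BYTES  32
#define RESEED_BITS 128         /* don't reseed with less than this */

struct gen {
    unsigned char state[STATE_BYTES];
    unsigned char pending[STATE_BYTES];   /* quarantined entropy */
    unsigned      pending_bits;           /* conservative estimate */
};

void add_entropy(struct gen *g, const unsigned char *buf, size_t len,
                 unsigned est_bits)
{
    for (size_t i = 0; i < len; i++)
        g->pending[i % STATE_BYTES] ^= buf[i];     /* toy mixing */
    g->pending_bits += est_bits;

    if (g->pending_bits >= RESEED_BITS) {          /* fold it all in at once */
        for (unsigned i = 0; i < STATE_BYTES; i++)
            g->state[i] ^= g->pending[i];
        memset(g->pending, 0, sizeof g->pending);
        g->pending_bits = 0;
    }
}

int main(void)
{
    struct gen g;
    unsigned char sample[8] = {0};                 /* pretend 64-bit entropy sample */
    memset(&g, 0, sizeof g);
    /* two 64-bit samples: nothing is folded in until the second arrives,
     * which is the behaviour described above */
    add_entropy(&g, sample, sizeof sample, 64);
    add_entropy(&g, sample, sizeof sample, 64);
    return 0;
}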

--John Kelsey



Re: entropy depletion (was: SSL/TLS passive sniffing)

2005-01-06 Thread Enzo Michelangeli
- Original Message - 
From: John Denker [EMAIL PROTECTED]
Sent: Thursday, January 06, 2005 3:06 AM

 Enzo Michelangeli wrote:
[...]
   If the PRNG uses a
   really non-invertible algorithm (or one invertible only
   with intractable complexity), its output gives no insight
   whatsoever on its internal state.

 That is an invalid argument.  The output is not the only source of
 insight as to the internal state.  As discussed at
 http://www.av8n.com/turbid/paper/turbid.htm#sec-prng-attack
 attacks against PRNGs can be classified as follows:
   1. Improper seeding, i.e. internal state never properly initialized.
   2. Leakage of the internal state over time.  This rarely involves
      direct cryptanalytic attack on the one-way function, leading to
      leakage through the PRNG's output channel.  More commonly it
      involves side-channels.
   3. Improper stretching of limited entropy supplies, i.e. improper
      reseeding of the PRNG, and other re-use of things that ought not
      be re-used.
   4. Bad side effects.

 There is a long record of successful attacks against PRNGs (op cit.).

Yes, but those are implementation flaws. Also a true RNG could present
weaknesses and be attacked (e.g., with strong EM fields overcoming the
noise of its sound card; not to mention vulnerabilities induced by the
quirks you discuss at
http://www.av8n.com/turbid/paper/turbid.htm#sec-quirks).

Anyway, I was not saying RNG's are useless because PRNG's are more than
enough: the scope of my question was much narrower, and concerned the
concept of entropy depletion.

 I'm not saying that the problems cannot be overcome,
 but the cost and bother of overcoming them may be such
 that you decide it's easier (and better!) to implement
 an industrial-strength high-entropy symbol generator.

Sure, I don't disagree with that.

   As entropy is a measure of the information we don't have about the
   internal state of a system,

 That is the correct definition of entropy ... but it must be correctly
 interpreted and correctly applied;  see below.

   it seems to me that in a good PRNGD its value
   cannot be reduced just by extracting output bits. If there
   is an entropy estimator based on the number of bits extracted,
   that estimator must be flawed.

 You're letting your intuition about usable randomness run roughshod
 over the formal definition of entropy.  Taking bits out of the PRNG
 *does* reduce its entropy.

By how much exactly? I'd say, _under the hypothesis that the one-way
function can't be broken and other attacks fail_, exactly zero; in the
real world, maybe a little more. But in
/usr/src/linux/drivers/char/random.c I see that the extract_entropy()
function, directly called by the exported kernel interface
get_random_bytes(), states:

if (r->entropy_count / 8 >= nbytes)
        r->entropy_count -= nbytes*8;
else
        r->entropy_count = 0;

...which appears to assume that the pool's entropy (the upper bound of
which is POOLBITS, defined equal to 4096) drops by a figure equal to the
number of bits that are extracted (nbytes*8). This would only make sense
if those bits weren't passed through one-way hashing. Perhaps a great
deal of the blocking problems seen when using /dev/random would go away
with a more realistic estimate.

Enzo




Re: SSL/TLS passive sniffing

2005-01-06 Thread Werner Koch
On Wed, 5 Jan 2005 08:49:36 +0800, Enzo Michelangeli said:

 That's basically what /dev/urandom does, no?  (Except that it has the
 undesirable side-effect of depleting the entropy estimate maintained
 inside the kernel.)

 This entropy depletion issue keeps coming up every now and then, but I
 still don't understand how it is supposed to happen. If the PRNG uses a

It is a practical issue: using /dev/urandom to avoid waiting on a
blocked /dev/random still depletes the kernel's entropy estimate, and
so it can leave other processes waiting indefinitely on a blocked
/dev/random.

The Linux implementation of /dev/urandom is identical to /dev/random,
but instead of blocking (as /dev/random does when the entropy estimate
is low) it continues to give output by falling back to a PRNG mode
of operation.

For services with a high demand for randomness it is probably better to
employ their own PRNG and reseed it from /dev/random from time to time.
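A rough sketch of that arrangement (toy generator, not PRNGD or any vetted
design; the only system interface used is an ordinary read of /dev/random):
the application keeps its own PRNG state, seeds it at startup, and mixes in
a fresh seed from /dev/random only every so many outputs, so the kernel
pool is touched rarely.

/* Sketch of an application-level PRNG reseeded from /dev/random from time
 * to time (toy state update, NOT a vetted design -- use a real library in
 * practice).  The kernel pool is only read once per RESEED_EVERY outputs. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

#define RESEED_EVERY 1024

static unsigned char state[32];
static unsigned long outputs_since_seed = RESEED_EVERY;   /* force initial seed */

static int reseed(void)
{
    int fd = open("/dev/random", O_RDONLY);
    if (fd < 0)
        return -1;
    unsigned char seed[32];
    ssize_t n = read(fd, seed, sizeof seed);   /* may block until entropy arrives;
                                                  a real design would loop for a full seed */
    close(fd);
    if (n <= 0)
        return -1;
    for (ssize_t i = 0; i < n; i++)
        state[i] ^= seed[i];                   /* toy mixing */
    outputs_since_seed = 0;
    return 0;
}

static uint32_t next32(void)
{
    if (outputs_since_seed >= RESEED_EVERY && reseed() < 0)
        return 0;                              /* handle errors properly in real code */
    uint32_t h = 2166136261u;                  /* toy output/update step */
    for (size_t i = 0; i < sizeof state; i++)
        h = (h ^ state[i]) * 16777619u;
    state[outputs_since_seed % sizeof state] ^= (unsigned char)h;
    outputs_since_seed++;
    return h;
}

int main(void)
{
    for (int i = 0; i < 4; i++)
        printf("%08x\n", (unsigned)next32());
    return 0;
}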


Salam-Shalom,

   Werner






Re: SSL/TLS passive sniffing

2005-01-04 Thread John Denker
I wrote:
If the problem is a shortage of random bits, get more random bits!
Florian Weimer responded:
We are talking about a stream of several kilobits per second on a busy
server (with suitable mailing lists, of course).  This is impossible
to obtain without special hardware.
Not very special, as I explained:
Almost every computer sold on the mass market these days has a sound
system built in. That can be used to generate industrial-strength
randomness at rates more than sufficient for the applications we're
talking about.  
> How many bits per second can you produce using an off-the-shelf sound
> card?  Your paper gives a number in excess of 14 kbps, if I read it
> correctly, which is surprisingly high.
1) You read it correctly.
  http://www.av8n.com/turbid/paper/turbid.htm#tab-soundcards
2) The exact number depends on details of your soundcard.  14kbits/sec
was obtained from a plain-vanilla commercial-off-the-shelf desktop
system with AC'97 audio.  You can of course do worse if you try (e.g.
Creative Labs products) but it is easy to do quite a bit better.
I obtained in excess of 70 kbits/sec using an IBM laptop manufactured
in 1998.
3) Why should this be surprising?
> It's an interesting approach, but for a mail server which mainly sends
> to servers with self-signed certificates, it's overkill.
Let's see
 -- Cost = zero.
 -- Quality = more than enough.
 -- Throughput = more than enough.
I see no reason why I should apologize for that.
> Debian also
> supports a few architectures for which sound cards are hard to obtain.
> And we would separate desktop and server implementations because the
> sound card is used on desktops.  I'd rather sacrifice forward secrecy
> than to add such complexity.
As the proverb says, no matter what you're trying to do, you can always
do it wrong.  If you go looking for potholes, you can always find a
pothole to fall into if you want.
But if you're serious about solving the problem, just go solve the
problem.  It is eminently solvable;  no sacrifices required.


SSL/TLS passive sniffing

2005-01-04 Thread David Wagner
Florian Weimer [EMAIL PROTECTED] writes:
I'm slightly troubled by claims such as this one:
  http://lists.debian.org/debian-devel/2004/12/msg01950.html
   [which says: "If you're going to use /dev/urandom then you might
as well just not encrypt the session at all."]

That claim is totally bogus, and I doubt whether that poster has any
clue about this subject.  As far as we know, Linux's /dev/urandom is just
fine, once it has been seeded properly.  Pay no attention to those who
don't know what they are talking about.

(That poster wants you to believe that, since /dev/urandom uses a
cryptographic-strength pseudorandom number generator rather than a
true entropy source, it is useless.  Don't believe it.  The poster is
confused and his claims are wrong.)



Re: SSL/TLS passive sniffing

2005-01-04 Thread Greg Rose
At 22:51 2004-12-22 +0100, Florian Weimer wrote:
* John Denker:
 Florian Weimer wrote:

 Would you recommend to switch to /dev/urandom (which doesn't block if
 the entropy estimate for the in-kernel pool reaches 0), and stick to
 generating new DH parameters for each connection,

 No, I wouldn't.
Not even for the public parameters?
Am I understanding correctly? Does SSL/TLS really generate a new P and G 
for each connection? If so, can someone explain the rationale behind this? 
It seems insane to me. And not doing so would certainly ease the problem on 
the entropy pool, not to mention CPU load for primality testing.

I must be misunderstanding. Surely. Please?
Greg.

Greg Rose                INTERNET: [EMAIL PROTECTED]
Qualcomm Incorporated    VOICE: +1-858-651-5733   FAX: +1-858-651-5766
5775 Morehouse Drive     http://people.qualcomm.com/ggr/
San Diego, CA 92121      232B EC8F 44C6 C853 D68F E107 E6BF CD2F 1081 A37C


Re: SSL/TLS passive sniffing

2004-12-22 Thread Florian Weimer
* Victor Duchovni:

 The third mode is quite common for STARTTLS with SMTP if I am not
 mistaken. A one day sample of inbound TLS email has the following cipher
 frequencies:

 8221(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
 6529(using TLSv1 with cipher EDH-RSA-DES-CBC3-SHA (168/168 bits))

The Debian folks have recently stumbled upon a problem in this area:
Generating the ephemeral DH parameters is expensive, in terms of CPU
cycles, but especially in PRNG entropy.  The PRNG part means that it's
not possible to use /dev/random on Linux, at least on servers.  The
CPU cycles spent on bignum operations aren't a real problem.

Would you recommend to switch to /dev/urandom (which doesn't block if
the entropy estimate for the in-kernel pool reaches 0), and stick to
generating new DH parameters for each connection, or is it better to
generate them once per day and use it for several connections?

(There's a second set of parameters related to the RSA_EXPORT mode in
TLS, but I suppose it isn't used much, and supporting it is not a top
priority.)



Re: SSL/TLS passive sniffing

2004-12-22 Thread John Denker
Florian Weimer wrote:
Would you recommend to switch to /dev/urandom (which doesn't block if
the entropy estimate for the in-kernel pool reaches 0), and stick to
generating new DH parameters for each connection, 
No, I wouldn't.
 or ...
generate them once per day and use it for several connections?
I wouldn't do that, either.

If the problem is a shortage of random bits, get more random bits!
Almost every computer sold on the mass market these days has a sound
system built in. That can be used to generate industrial-strength
randomness at rates more than sufficient for the applications we're
talking about.  (And if you can afford to buy a non-mass-market
machine, you can afford to plug a sound-card into it.)
For a discussion of the principles of how to get arbitrarily close
to 100% entropy density, plus working code, see:
  http://www.av8n.com/turbid/


Re: SSL/TLS passive sniffing

2004-12-22 Thread Victor Duchovni
On Sun, Dec 19, 2004 at 05:24:59PM +0100, Florian Weimer wrote:

 * Victor Duchovni:
 
  The third mode is quite common for STARTTLS with SMTP if I am not
  mistaken. A one day sample of inbound TLS email has the following cipher
  frequencies:
 
  8221(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
  6529(using TLSv1 with cipher EDH-RSA-DES-CBC3-SHA (168/168 bits))
 
 The Debian folks have recently stumbled upon a problem in this area:
 Generating the ephemeral DH parameters is expensive, in terms of CPU
 cycles, but especially in PRNG entropy.  The PRNG part means that it's
 not possible to use /dev/random on Linux, at least on servers.  The
 CPU cycles spent on bignum operations aren't a real problem.
 
 Would you recommend to switch to /dev/urandom (which doesn't block if
 the entropy estimate for the in-kernel pool reaches 0), and stick to
 generating new DH parameters for each connection, or is it better to
 generate them once per day and use it for several connections?
 

Actually, reasoning along these lines is why Lutz Jaenicke implemented
PRNGD; it is strongly recommended (at least by me) that mail servers
use PRNGD or similar.  PRNGD delivers pseudo-random numbers, mixing in
real entropy periodically.

EGD, /dev/random and /dev/urandom don't produce bits fast enough. Also
Postfix internally seeds the built-in OpenSSL PRNG via the tlsmgr process
and this hands out seeds for smtp servers and clients, so the demand for
real entropy is again reduced.

Clearly a PRNG is a compromise (if the algorithm is found to be weak we
could have problems), but real entropy is just too expensive.

I use PRNGD.

-- 

 /\ ASCII RIBBON                        NOTICE: If received in error,
 \ / CAMPAIGN     Victor Duchovni       please destroy and notify
  X AGAINST       IT Security,          sender. Sender does not waive
 / \ HTML MAIL    Morgan Stanley        confidentiality or privilege,
                                        and use is prohibited.



Re: SSL/TLS passive sniffing

2004-12-22 Thread Florian Weimer
* Victor Duchovni:

 The Debian folks have recently stumbled upon a problem in this area:
 Generating the ephemeral DH parameters is expensive, in terms of CPU
 cycles, but especially in PRNG entropy.  The PRNG part means that it's
 not possible to use /dev/random on Linux, at least on servers.  The
 CPU cycles spent on bignum operations aren't a real problem.
 
 Would you recommend to switch to /dev/urandom (which doesn't block if
 the entropy estimate for the in-kernel pool reaches 0), and stick to
 generating new DH parameters for each connection, or is it better to
 generate them once per day and use it for several connections?
 

 Actually reasoning along these lines is why Lutz Jaenicke implemented
 PRNGD, it is strongly recommended (at least by me) that mail servers
 use PRNGD or similar.  PRNGD delivers psuedo-random numbers mixing in
 real entropy periodically.

 EGD, /dev/random and /dev/urandom don't produce bits fast enough.

Is this the only criticism of /dev/urandom (on Linux, at least)?  Even
on ancient hardware (P54C at 200 MHz), I can suck about 150 kbps out
of /dev/urandom, which is more than enough for our purposes.  (It's
not a web server, after all.)

I'm slightly troubled by claims such as this one:

  http://lists.debian.org/debian-devel/2004/12/msg01950.html

I know that Linux' /dev/random implementation has some problems (I
believe that the entropy estimates for mouse movements are a bit
unrealistic, somewhere around 2.4 kbps), but the claim that generating
session keys from /dev/urandom is a complete no-no is rather
surprising.



Re: SSL/TLS passive sniffing

2004-12-05 Thread Dirk-Willem van Gulik


On Wed, 1 Dec 2004, Anne  Lynn Wheeler wrote:

 the other attack is on the certification authorities business process

Note that in a fair number of certificate-issuing processes common in
industry the CA (sysadmin) generates both the private key -and- the
certificate, signs it, and then exports both to the user's PC (usually
as part of a VPN or single sign-on setup). I've seen situations more than
once where the 'CA' keeps a copy of both on file, generally to ensure that
after the termination of an employee or the loss of a laptop things 'can
be set right' again.

Suffice it to say that this makes eavesdropping even easier.

Dw



RE: SSL/TLS passive sniffing

2004-12-05 Thread Anton Stiglic
> This sounds very confused.  Certs are public.  How would knowing a copy
> of the server cert help me to decrypt SSL traffic that I have intercepted?

I've found that a lot of people mistakenly use the term certificate to mean
something like a PKCS#12 file containing a public key certificate and private
key.  Maybe it comes from crypto software sales people who oversimplify or
don't really understand the technology.  I don't know, but it's a rant I
have.

> Now if I had a copy of the server's private key, that would help, but such
> private keys are supposed to be closely held.

> Or are you perhaps talking about some kind of active man-in-the-middle
> attack, perhaps exploiting DNS spoofing?  It doesn't sound like it, since
> you mentioned passive sniffing.

I guess the threat would be something like an adversary getting access to a
web server, getting a hold of the private key (which in most cases is just
stored in a file, allot of servers need to be bootable without intervention
as well so there is a password somewhere in the clear that allows one to
unlock the private key), and then using it from a distance, say on a router
near the server where the adversary can sniff the connections.  A malicious
ISP admin could pull off something like that, law authority that wants to
read your messages, etc.

Is that a threat worth mentioning?  Well, it might be.  In any case,
forward-secrecy is what can protect us here.  Half-certified (or fully
certified) ephemeral Diffie-Hellman provides us with that property.

Of course, if someone could get the private signature key, he could then do
a man-in-the-middle attack and decrypt all messages as well.  It wouldn't
really be that harder to pull off.

--Anton




Re: SSL/TLS passive sniffing

2004-12-05 Thread Anne Lynn Wheeler
Anton Stiglic wrote:
I've found that a lot of people mistakenly use the term certificate to mean
something like a PKCS#12 file containing a public key certificate and private
key.  Maybe it comes from crypto software sales people who oversimplify or
don't really understand the technology.  I don't know, but it's a rant I
have.
 

i just went off on a possibly similar rant in comp.security.ssh where
a question was posed about password
or certificate
http://www.garlic.com/~lynn/2004p.html#60
http://www.garlic.com/~lynn/2004q.html#0



RE: SSL/TLS passive sniffing

2004-12-01 Thread Ben Nagy
OK, Ian and I are, rightly or wrongly, on the same page here. Obviously my
choice of the word certificate has caused confusion.

[David Wagner]
 This sounds very confused.  Certs are public.  How would 
 knowing a copy
 of the server cert help me to decrypt SSL traffic that I have 
 intercepted?

Yes, sorry, what I _meant_ was the whole certificate file, PFX style, also
containing private keys. I assure you, I'm not confused, just perhaps guilty
of verbal shortcuts. I should, perhaps, have not characterised myself as
'bumbling enthusiast', to avoid the confusion with 'idiot'. :/

[...]
 Ian Grigg writes:
 I note that disctinction well!  Certificate based systems
 are totally vulnerable to a passive sniffing attack if the
 attacker can get the key.  Whereas Diffie Hellman is not,
 on the face of it.  Very curious...
 
 No, that is not accurate.  Diffie-Hellman is also insecure if 
 the private
 key is revealed to the adversary.  The private key for 
 Diffie-Hellman
 is the private exponent.

No, I'm not talking about escrowing DH exponents. I'm talking about modes
like in IPSec-IKE where there is a signed DH exchange using ephemeral DH
exponents - this continues to resist passive sniffing if the _signing_ keys
have somehow been compromised, unless I have somehow fallen on my head and
missed something.

 Perhaps the distinction you had in mind is forward secrecy.

Yes and no. Forward secrecy is certainly at the root of my question, with
regards to the RSA modes not providing it and certain of the DH modes doing
so. :)

Thanks!

ben
  




RE: SSL/TLS passive sniffing

2004-12-01 Thread ben
 -Original Message-
 From: Eric Rescorla [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, December 01, 2004 7:01 AM
 To: [EMAIL PROTECTED]
 Cc: Ben Nagy; [EMAIL PROTECTED]
 Subject: Re: SSL/TLS passive sniffing
 
 Ian Grigg [EMAIL PROTECTED] writes:
[...]
  However could one do a Diffie Hellman key exchange and do this
  under the protection of the public key? [...]
 
 Uh, you've just described the ephemeral DH mode that IPsec
 always uses and SSL provides.
 
 Try googling for station to station protocol
 
 -Ekr

Right. And my original question was, why can't we do that one-sided with
SSL, even without a certificate at the client end? In what ways would that
be inferior to the current RSA suites where the client encrypts the PMS
under the server's public key.

Eric's answer seems to make the most sense - I guess generating the DH
exponent and signing it once per connection server-side would be a larger
performance hit than I first thought, and no clients care.

Thanks for all the answers, on and off list. ;)

Cheers,

ben





Re: SSL/TLS passive sniffing

2004-12-01 Thread Eric Rescorla
[EMAIL PROTECTED] writes:

 -Original Message-
 From: Eric Rescorla [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, December 01, 2004 7:01 AM
 To: [EMAIL PROTECTED]
 Cc: Ben Nagy; [EMAIL PROTECTED]
 Subject: Re: SSL/TLS passive sniffing
 
 Ian Grigg [EMAIL PROTECTED] writes:
 [...]
  However could one do a Diffie Hellman key exchange and do this
  under the protection of the public key? [...]
 
 Uh, you've just described the ephemeral DH mode that IPsec
 always uses and SSL provides.
 
 Try googling for station to station protocol
 
 -Ekr

 Right. And my original question was, why can't we do that one-sided with
 SSL, even without a certificate at the client end? In what ways would that
 be inferior to the current RSA suites where the client encrypts the PMS
 under the server's public key.

Just to be completely clear, this is exactly what the
TLS_DHE_RSA_* ciphersuites currently do, so it's purely a matter
of configuration and deployment.

-Ekr



SSL/TLS passive sniffing

2004-11-30 Thread Ben Nagy
Hi all,

I'm a bumbling crypto enthusiast as a sideline to my other, real, areas of
security expertise. Recently a discussion came up on firewall-wizards about
passively sniffing SSL traffic by a third party, using a copy of the server
cert (for, eg, IDS purposes).

There was some question about whether this is possible for connections that
use client-certs, since it looks to me from the spec that those connections
should be using one of the Diffie Hellman cipher suites, which is obviously
not vulnerable to a passive sniffing 'attack'. Active 'attacks' will
obviously still work. Bear in mind that we're talking about deliberate
undermining of the SSL connection by organisations, usually against their
website users (without talking about the goodness, badness or legality of
that), so how do they get the private keys isn't relevant.

However, I was wondering why the implementors chose the construction used
with the RSA suites, where the client PMS is encrypted with the server's
public key and sent along - it seems to make this kind of escrowed passive
sniffing very easy. I can't think why they didn't use something based on DH
- sure you only authenticate one side of the connection, but who cares? Was
it simply to save one setup packet?

Anyone know?

Cheers,

ben




Re: SSL/TLS passive sniffing

2004-11-30 Thread Ian Grigg
Ben raises an interesting thought:

 There was some question about whether this is possible for connections that
 use client-certs, since it looks to me from the spec that those connections
 should be using one of the Diffie Hellman cipher suites, which is obviously
 not vulnerable to a passive sniffing 'attack'. Active 'attacks' will
 obviously still work. Bear in mind that we're talking about deliberate
 undermining of the SSL connection by organisations, usually against their
 website users (without talking about the goodness, badness or legality of
 that), so how do they get the private keys isn't relevant.

We have the dichotomy that DH protects against all passive
attacks, and a signed cert protects against most active attacks,
and most passive attacks, but not passive attacks where the
key is leaked, and not active attacks where the key is
forged (as a cert).

But we do not use both DH and certificates at the same time,
we generally pick one or the other.

Could we however do both?

In the act of a public key protected key exchange, Alice
generally creates a random key and encrypts that to Bob's
public key.  That random then gets used for further traffic.

However could one do a Diffie Hellman key exchange and do this
under the protection of the public key?  In which case we are
now protected from Bob aggressively leaking his private key.
(Or, to put it more precisely, Bob would now have to record
and leak all his traffic as well, which is a substantially
more expensive thing to engage in.)

(This still leaves us with the active attack of a forged
key, but that is dealt with by public key (fingerprint)
caching.)

Does that make sense?  The reason I ask is that I've just
written a new key exchange protocol element, and I thought
I was being clever by having both Bob and Alice provide
half the key each, so as to protect against either party
being non-robust with secret key generation.  (As a programmer
I'm more worried about the RNG clagging than the key leaking,
but let's leave that aside for now...)

Now I'm wondering whether the key exchange should do a DH
within the standard public key protected key exchange?
Hmmm, this sounds like I am trying to do PFS  (perfect
forward secrecy).  Any thoughts?

iang




SSL/TLS passive sniffing

2004-11-30 Thread David Wagner
Ian Grigg writes:
I note that distinction well!  Certificate-based systems
are totally vulnerable to a passive sniffing attack if the
attacker can get the key.  Whereas Diffie Hellman is not,
on the face of it.  Very curious...

No, that is not accurate.  Diffie-Hellman is also insecure if the private
key is revealed to the adversary.  The private key for Diffie-Hellman
is the private exponent.  If you learn the private exponent that one
endpoint used for a given connection, and if you have intercepted that
connection, you can derive the session key and decrypt the intercepted
traffic.

Perhaps the distinction you had in mind is forward secrecy.  If you use
a different private key for every connection, then compromise of one
connection's private key won't affect other connections.  This is
true whether you use RSA or Diffie-Hellman.  The main difference is
that in Diffie-Hellman, key generation is cheap and easy (just an
exponentiation), while in RSA key generation is more expensive.
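To make the forward-secrecy point concrete, here is a toy Diffie-Hellman
sketch (a 31-bit prime and rand() for the exponents, so nowhere near real
parameters -- illustrative only).  Each "connection" draws fresh ephemeral
exponents, so compromising the exponents of one connection reveals nothing
about the key of another; with RSA key transport, by contrast, one
long-lived private key unlocks every recorded session.

/* Toy ephemeral Diffie-Hellman to illustrate forward secrecy (31-bit prime,
 * rand() for exponents -- NOT real-world parameters, illustration only).
 * Each "connection" generates fresh private exponents, so compromising the
 * exponents of one connection tells an attacker nothing about another. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define P 2147483647u        /* 2^31 - 1 */
#define G 16807u             /* a primitive root mod P */

static uint32_t modexp(uint32_t base, uint32_t exp)
{
    uint64_t result = 1, b = base % P;
    while (exp) {
        if (exp & 1)
            result = (result * b) % P;
        b = (b * b) % P;
        exp >>= 1;
    }
    return (uint32_t)result;
}

static void one_connection(const char *label)
{
    uint32_t a = (uint32_t)rand() % (P - 2) + 1;   /* Alice's ephemeral exponent */
    uint32_t b = (uint32_t)rand() % (P - 2) + 1;   /* Bob's ephemeral exponent   */
    uint32_t A = modexp(G, a), B = modexp(G, b);   /* exchanged in the clear     */
    uint32_t ka = modexp(B, a), kb = modexp(A, b); /* both sides derive the key  */
    printf("%s: shared key %u (match: %s)\n", label, ka, ka == kb ? "yes" : "no");
}

int main(void)
{
    srand((unsigned)time(NULL));
    one_connection("connection 1");
    one_connection("connection 2");       /* fresh exponents: independent key */
    return 0;
}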



Re: SSL/TLS passive sniffing

2004-11-30 Thread Ian Grigg
 Ian Grigg writes:
I note that distinction well!  Certificate-based systems
are totally vulnerable to a passive sniffing attack if the
attacker can get the key.  Whereas Diffie Hellman is not,
on the face of it.  Very curious...

 No, that is not accurate.  Diffie-Hellman is also insecure if the private
 key is revealed to the adversary.  The private key for Diffie-Hellman
 is the private exponent.  If you learn the private exponent that one
 endpoint used for a given connection, and if you have intercepted that
 connection, you can derive the session key and decrypt the intercepted
 traffic.

I wasn't aware that one could think in those terms.  Reading
here:  http://www.rsasecurity.com/rsalabs/node.asp?id=2248 it
says:

In recent years, the original Diffie-Hellman protocol
has been understood to be an example of a much more
general cryptographic technique, the common element
being the derivation of a shared secret value (that
is, key) from one party's public key and another
party's private key. The parties' key pairs may be
generated anew at each run of the protocol, as in
the original Diffie-Hellman protocol.

It seems the compromise of *either* exponent would lead to a
solution.

 Perhaps the distinction you had in mind is forward secrecy.  If you use
 a different private key for every connection, then compromise of one
 connection's private key won't affect other connections.  This is
 true whether you use RSA or Diffie-Hellman.  The main difference is
 that in Diffie-Hellman, key generation is cheap and easy (just an
 exponentiation), while in RSA key generation is more expensive.

Yes.  So if a cryptosystem used Diffie-Hellman key
exchange (with unique exponents for each session),
there would be no lazy passive attack, where I
am defining the lazy attack as a once-off compromise of a
private key.  That is, the attacker would still have to
learn the individual exponent for that session, which
(assuming the attacker has to ask for it of one party)
would be equivalent in difficulty to learning the secret
key that resulted and was used for the secret key cipher.

iang
