### Re: combining entropy

```On Tue, Oct 28, 2008 at 7:55 PM, Leichter, Jerry
[EMAIL PROTECTED] wrote:
2.  The Byzantine model.  Failed modules can do anything
including cooperating by exchanging arbitrary
information and doing infinite computation.

So in the Byzantine model I can crack RSA?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

```

### Re: combining entropy

```

This isn't enough.  Somehow, you have to state that the values emitted
on demand in any given round i (where a round consists of exactly one
demand on all N members and produces a single output result) cannot
receive any input from any other members.  Otherwise, if N=2 and member
0 produces true random values that member 1 can see before it responds
to the demand it received, then member 1 can cause the final result to
be anything it likes.

In the case of malicious members who can snoop the inputs,
Mal can get any result he wants if the combining function is XOR
(or, with slightly more work, if it's a non-cryptographic checksum.)
But if your combining function is a cryptographic hash,
it's computationally difficult to do.

However, even a hash isn't always enough - consider the case
where the application of the random numbers only uses k of the N bits,
and the attacker has enough time to try out 2**k (waving hands roughly here)
different cases.  So you may still need to design your protocols carefully.

```
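The two claims above - that a hash combiner resists the snooping attack on the full word, but that an attacker with roughly 2**k tries can still steer k output bits - can be checked directly. A scaled-down sketch (SHA-1 and the 160-bit word come from the thread; k = 12 and the zero target prefix are arbitrary choices for the demo):

```python
import hashlib
import secrets

def hash_combine(words):
    # Concatenate fixed-width member words and hash them; SHA-1 is used
    # only because it is the thread's running example.
    data = b"".join(w.to_bytes(20, "big") for w in words)
    return int.from_bytes(hashlib.sha1(data).digest(), "big")

honest = secrets.randbits(160)  # the honest member's word

# A snooping member can no longer force the full 160-bit group word,
# but if the application only uses the top k bits, roughly 2**k guesses
# suffice to force those bits (k = 12 keeps the demo fast).
k = 12
for guess in range(1 << (k + 4)):   # ~16 * 2**k tries; failure is
    if hash_combine([honest, guess]) >> (160 - k) == 0:  # vanishingly rare
        break

assert hash_combine([honest, guess]) >> (160 - k) == 0
```

The same grind against XOR would be unnecessary: there the attacker forces all 160 bits in one step, which is exactly the gap between the two combiners.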

### Re: combining entropy

```On Sat, 25 Oct 2008, John Denker wrote:

| On 10/25/2008 04:40 AM, IanG gave us some additional information.
|
| Even so, it appears there is still some uncertainty as to
| interpretation, i.e. some uncertainty as to the requirements
| and objectives.
|
| I hereby propose a new scenario.  It is detailed enough to
| be amenable to formal analysis.  The hope is that it will
| satisfy the requirements and objectives ... or at least
| promote a more precise discussion thereof.
|
| We start with a group comprising N members (machines or
| persons).  Each of them, on demand, puts out a 160 bit
| word, called a member word.  We wish to combine these
| to form a single word, the group word, also 160 bits
| in length.
This isn't enough.  Somehow, you have to state that the values emitted
on demand in any given round i (where a round consists of exactly one
demand on all N members and produces a single output result) cannot
receive any input from any other members.  Otherwise, if N=2 and member
0 produces true random values that member 1 can see before it responds
to the demand it received, then member 1 can cause the final result to
be anything it likes.

This is an attack that must be considered because you already want to
consider the case:

|  b) Some of [the members] are malicious.  Their outputs may appear
|   random, but are in fact predictable by our adversary.

Stating this requirement formally seems to be quite difficult.  You can
easily make it very strong - the members are to be modeled as
probabilistic TM's with no input.  Then, certainly, no one can see
anyone else's value, since they can't see *anything*.  But you really
want to say something along the lines of no malicious member can see
the value output by any non-malicious member, which gets you into
requiring an explicit failure model - which doesn't fit comfortably with
the underlying problem.

If the issue is how to make sure you get out at least all the randomness
that was there, where the only failures are that some of your sources
become predictable, the XOR is fine.  But once you allow for more
complicated failure/attack modes, it's really not clear what is going on
and what the model should be.
-- Jerry

```
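The N=2 attack described above takes one line to mount. A sketch (the 160-bit word size is from the thread; the target value is arbitrary):

```python
import secrets

def xor_combine(words):
    # XOR all member words together
    out = 0
    for w in words:
        out ^= w
    return out

# Member 0 is honest: a fresh 160-bit random word each round.
member0 = secrets.randbits(160)

# Member 1, contrary to the intended model, sees member 0's word before
# answering, and wants to force a target of its choosing:
target = 0xDEADBEEF
member1 = member0 ^ target

# The "combined entropy" is whatever the attacker chose.
assert xor_combine([member0, member1]) == target
```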

### Re: combining entropy

```On 10/28/2008 09:43 AM, Leichter, Jerry wrote:

| We start with a group comprising N members (machines or
| persons).  Each of them, on demand, puts out a 160 bit
| word, called a member word.  We wish to combine these
| to form a single word, the group word, also 160 bits
| in length.
This isn't enough.  Somehow, you have to state that the values emitted
on demand in any given round i (where a round consists of exactly one
demand on all N members and produces a single output result) cannot
receive any input from any other members.  Otherwise, if N=2 and member
0 produces true random values that member 1 can see before it responds
to the demand it received, then member 1 can cause the final result to
be anything it likes.

Perhaps an example will make it clear where I am coming
from.  Suppose I start with a deck of cards that has been
randomly shuffled.  It can provide log2(52!) bits of
entropy.  That's a little more than 225 bits.  Now suppose
I have ten decks of cards all arranged alike.  You could
set this up by shuffling one of them and then stacking
the others to match ... or by some more symmetric process.
In any case the result is symmetric w.r.t interchange of
decks.  In this situation, I can choose any one of the
decks and obtain 225 bits of entropy.  The funny thing
is that if I choose N of the decks, I still get only 225
bits of entropy, not N*225.

This can be summarized by saying that entropy is not an
extensive quantity in this situation.  The graph of
entropy versus N goes like this:

225        *   *   *   *   *

  0    *
       0   1   2   3   4   5  (# of decks)

The spooky aspect of this situation is the whack-a-mole
aspect:  You cannot decide in advance which one of the
decks has entropy and which N-1 of them do not.  That's
the wrong question.  The first deck we choose to look
at has 225 bits of entropy, and only then can we say
that the other N-1 decks have zero additional entropy.

The original question spoke of trusted sources of
entropy, and I answered accordingly.  To the extent
that the sources are correlated, they were never eligible
to be considered trusted sources of entropy.  To say
the same thing the other way around, to the extent
that each source can be trusted to provide a certain
amount of entropy, it must be to that extent independent
of the others.

It is possible for a source to be partially dependent
and partially independent.  For example, if you take
each of the ten aforementioned decks and cut the deck
randomly and independently, that means the first deck
we look at will provide 225 bits of entropy, and each
one thereafter will provide 5.7 bits of additional
entropy, since log2(52)=5.7.  So in this situation,
each deck can be /trusted/ to provide 5.7 bits of
entropy.

In this situation, requiring each deck to have no
input from the other decks would be an overly strict
requirement.  We do not need full independence;  we
just need some independence, as quantified by the
provable lower bound on the entropy.

If you wanted, you could do a deeper analysis of this
example, taking into account the fact that 5.7 is not
the whole story.  It is easy to use 5.7 bits as a valid
and trustworthy lower bound, but under some conditions
more entropy is available, and can be quantified by
considering the _joint_ probability distribution and
computing the entropy of that distribution.  Meanwhile
the fact remains that under a wide range of practical
conditions, it makes sense to engineer a randomness
generator based on provable lower bounds, since that
is good enough to get the job done, and a deeper
analysis would not be worth the trouble.
http://www.av8n.com/turbid/paper/turbid.htm

If the issue is how to make sure you get out at least all the randomness
that was there,

I'm going to ignore the "at least".  It is very hard to
get out more than you put in.

On a less trivial note:  The original question did not
require getting out every last bit of available randomness.
In situations where the sources might be partially
independent and partially dependent, that would be a
very hard challenge, and I do not wish to accept that
challenge.

Dealing with provable lower bounds on the entropy is
more tractable, and sufficient for a wide range of
practical purposes.

```
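The deck arithmetic above is easy to verify. A short check:

```python
import math

# One well-shuffled deck: log2(52!) bits of entropy.
deck = math.log2(math.factorial(52))
assert 225 < deck < 226            # "a little more than 225 bits"

# Ten identical decks: the joint state is fixed by any single deck,
# so the total entropy stays at log2(52!), not ten times that.
identical_total = deck

# Ten decks given independent random cuts: the first deck still yields
# log2(52!) bits; each further deck adds only log2(52) = ~5.7 bits.
cut = math.log2(52)
assert 5.69 < cut < 5.71
cut_total = deck + 9 * cut
```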

### Re: combining entropy

```On Tue, 28 Oct 2008, John Denker wrote:

| Date: Tue, 28 Oct 2008 12:09:04 -0700
| From: John Denker [EMAIL PROTECTED]
| To: Leichter, Jerry [EMAIL PROTECTED],
| Cryptography cryptography@metzdowd.com
| Cc: IanG [EMAIL PROTECTED]
| Subject: Re: combining entropy
|
| On 10/28/2008 09:43 AM, Leichter, Jerry wrote:
|
|  | We start with a group comprising N members (machines or
|  | persons).  Each of them, on demand, puts out a 160 bit
|  | word, called a member word.  We wish to combine these
|  | to form a single word, the group word, also 160 bits
|  | in length.
|  This isn't enough.  Somehow, you have to state that the values emitted
|  on demand in any given round i (where a round consists of exactly one
|  demand on all N members and produces a single output result) cannot
|  receive any input from any other members.  Otherwise, if N=2 and member
|  0 produces true random values that member 1 can see before it responds
|  to the demand it received, then member 1 can cause the final result to
|  be anything it likes.
|
|
| Perhaps an example will make it clear where I am coming
| from.  Suppose I start with a deck of cards that has been
| randomly shuffled.  It can provide log2(52!) bits of
| entropy.  That's a little more than 225 bits.  Now suppose
| I have ten decks of cards all arranged alike.  You could
| set this up by shuffling one of them and then stacking
| the others to match ... or by some more symmetric process.
| In any case the result is symmetric w.r.t interchange of
| decks.  In this situation, I can choose any one of the
| decks and obtain 225 bits of entropy.  The funny thing
| is that if I choose N of the decks, I still get only 225
| bits of entropy, not N*225
| The original question spoke of trusted sources of
| entropy, and I answered accordingly.  To the extent
| that the sources are correlated, they were never eligible
| to be considered trusted sources of entropy.  To say
| the same thing the other way around, to the extent
| that each source can be trusted to provide a certain
| amount of entropy, it must be to that extent independent
| of the others.
Rest of example omitted.  I'm not sure of the point.  Yes, there are
plenty of ways for correlation to sneak in.

As far as I can see, only the second piece I quoted is relevant, and it
essentially gets to the point:  The original problem isn't well posed.
It makes no sense *both* to say the sources are trusted *and* to say
that they may not deliver the expected entropy.  If I know the entropy of
all the sources, that inherently includes some notion of trust - call
it source trust:  I can trust them to have at least that much entropy.
I have to have that trust, because there is no way to measure the
(cryptographic) entropy.  (And don't say I can analyze how the source
is constructed, because then I'm left with the need to trust that what
I analyzed is actually still physically there - maybe an attacker has
replaced it!)

Given such sources it's easy to *state* what it would mean for them to
be independent:  Just that if I consider the source produced by
concatenating all the individual sources, its entropy is the sum of the
entropies of the constituents.  Of course, that's an entropy I can again
measure - at least in the limit - in the information theoretical sense,
but not in the cryptographic sense; another aspect of trust - call it
independence trust - has to enter here.

All that's fine, but how then are we supposed to construe a question
about what happens if some of the sources fail to deliver their rated
entropy?  That means that source trust must be discarded.  (Worse, as
the original problem is posed, I must discard source trust for *some
unknown subset of the sources*.)  But given that, why should I assume
that independence trust remains?

If the failures are benign - say, physical failures of sources
implemented as well-isolated modules - it might well be a reasonable
thing to do.  In fact, this is essentially the independent-failure
model we use all the time in building reliable
physical systems.  Of course, as we know well, that model is completely
untenable when the concern is hostile attack, not random failure.  What
do you replace it with?

Consider the analogy with reliable distributed systems.  People have
basically only dealt with two models:

1.  The fail-stop model.  A failed module stops interacting.
2.  The Byzantine model.  Failed modules can do anything
including cooperating by exchanging arbitrary
information and doing infinite computation.

The Byzantine model is bizarre sounding, but it's just a way of expressing
a worst-case situation:  Maybe the failed modules act randomly but just by
bad luck they do the worst possible thing.

We're trying to define something different here.  Twenty-odd years ago,
Mike Fischer at Yale proposed some ideas in this direction (where
modules have access```
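Jerry's independence criterion above - the concatenated source has entropy equal to the sum of its constituents' - can be checked numerically on toy distributions. This is the information-theoretic sense only; his point about "independence trust" in the cryptographic sense stands. A sketch:

```python
import math
from itertools import product

def shannon_entropy(dist):
    # dist maps outcomes to probabilities summing to 1
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

src = {0: 0.5, 1: 0.5}  # a fair one-bit source

# Independent sources: the joint entropy is the sum of the parts.
indep = {(a, b): src[a] * src[b] for a, b in product(src, src)}
assert abs(shannon_entropy(indep) - 2 * shannon_entropy(src)) < 1e-9

# Fully dependent sources (the second copies the first): the joint
# entropy collapses back to that of a single source.
dep = {(a, a): p for a, p in src.items()}
assert abs(shannon_entropy(dep) - shannon_entropy(src)) < 1e-9
```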

### Re: combining entropy

```On Sat, Oct 25, 2008 at 12:40 PM, IanG [EMAIL PROTECTED] wrote:
Jonathan Katz wrote:
I think it depends on what you mean by "N pools of entropy".

I can see that my description was a bit weak, yes.  Here's a better
view, incorporating the feedback:

If I have N people, each with a single pool of entropy,
and I pool each of their contributions together with XOR,
is that as good as it gets?

I think you need to define what you mean by "as good as it gets".
Clearly XOR loses entropy that might be there, so on the measure of
good == most entropy, it is not.

```

### Re: combining entropy

```On 10/25/2008 04:40 AM, IanG gave us some additional information.

Even so, it appears there is still some uncertainty as to
interpretation, i.e. some uncertainty as to the requirements
and objectives.

I hereby propose a new scenario.  It is detailed enough to
be amenable to formal analysis.  The hope is that it will
satisfy the requirements and objectives ... or at least
promote a more precise discussion thereof.

We start with a group comprising N members (machines or
persons).  Each of them, on demand, puts out a 160 bit
word, called a member word.  We wish to combine these
to form a single word, the group word, also 160 bits
in length.

We must find a combining function.  The primary objective
is to maximize the entropy of the group word.  A secondary
objective is computational simplicity.

The members can be categorized as follows:
a) Some of them are abjectly broken.  Their outputs have
zero entropy density.  Constant outputs and/or predictable
outputs fall into this category.
b) Some of them are malicious.  Their outputs may appear
random, but are in fact predictable by our adversary.
c) M of them have an entropy density greater than XX.
As a concrete example, we consider the case where XX=50%,
i.e. 80 bits of entropy in a 160 bit word.
d) Some of them could contain a high entropy density,
very close to 100%.  For our example, we assume there
are none of these;  otherwise the problem would be
too easy.

If we do things properly, case (b) is no worse than case
(a), for reasons that will become apparent shortly, so
we can lump these cases together.

We don't know which generator falls into which category.
All we need to know is that M of the generators are
putting out useful entropy.

I recommend the following combining function:  concatenate
all N of the member words, and then feed that through a
hash function to produce the group word.  Since SHA-1 is
efficient and has a 160 bit output word, it will serve
nicely in our example.

In the sub-case where M=1, the recommended hash-based
procedure produces a group word with 80 bits of entropy,
i.e. a 50% entropy density, which is the best we can
do.  In this sub-case, SHA-1 is no better than XOR.

As M increases, the entropy density of the output word
converges rather rapidly to 100%.  This is subject to
mild assumptions about the hash function actually working
as a hash function, i.e. not being grossly broken.

When M is greater than 1, the hash function approach
is much better than the XOR approach.  Here is an
easy proof:  Consider the case where each member in
category (c) puts out a 160 bit word consisting of 80
totally random bits in the left half and 80 constant
bits in the right half.  XORing these together only
gets you to 80 bits of entropy in the group word,
whereas hashing is better by a factor of 2.  Actually
(2 minus epsilon) if you want to be fussy about it.

In the case where the entropy is evenly distributed
within the member word, i.e. 160 bits each with a 50%
entropy density, the result is more subtle:  The group
word will converge to 100% entropy density, but the
hash version converges _faster_ than the XOR version.
Here faster means you can get by with a smaller M.
Considerably smaller.  Also (!) beware that to get XOR
to converge at all, this paragraph depends on some
properties of the members that may be hard to realize
in practice ... whereas the hash approach has no
such dependence.

=

To summarize:  In the special sub-case where M=1, XOR
is as good as it gets.  In all other cases I can think
of, the hash approach is much better.

My analysis applies to a specific set of requirements.
If somebody wants to discuss other requirements, please
be specific about what the requirements are.  There
are innumerable creeping features that we could discuss.

```
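The left-half-random / right-half-constant argument above can be demonstrated empirically. A scaled-down sketch using 16-bit member words (8 random bits, 8 constant bits) so the output spaces are small enough to enumerate; the constant 0xA5 and M = 4 are arbitrary choices:

```python
import hashlib
import secrets

M = 4           # number of members in category (c)
CONST = 0xA5    # the constant right half (arbitrary)

def member_word():
    # scaled-down member word: 8 random bits left, 8 constant bits right
    return (secrets.randbits(8) << 8) | CONST

def xor_combine(words):
    out = 0
    for w in words:
        out ^= w
    return out

def hash_combine(words):
    # concatenate and hash, keeping 16 output bits as the group word
    data = b"".join(w.to_bytes(2, "big") for w in words)
    return int.from_bytes(hashlib.sha1(data).digest()[:2], "big")

trials = [[member_word() for _ in range(M)] for _ in range(20000)]
xor_outputs = {xor_combine(t) for t in trials}
hash_outputs = {hash_combine(t) for t in trials}

# XOR never touches the constant halves, so at most 2**8 group words.
assert len(xor_outputs) <= 256
# The hash spreads the 32 bits of input entropy across the whole word.
assert len(hash_outputs) > 1000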

### Re: combining entropy

```Alas on 10/25/2008 01:40 PM, I wrote:

To summarize:  In the special sub-case where M=1, XOR
is as good as it gets.  In all other cases I can think
of, the hash approach is much better.

I should have said that in the special sub-case where
the member word has entropy density XX=100% _or_ in
the special sub-case where M=1, XOR is as good as it
gets.  In all other cases I can think of, the hash
approach is much better.

(I excluded the XX=100% case earlier in the note, but
I should have included it in the summary.  Sorry.)

```

### Re: combining entropy

```John Denker [EMAIL PROTECTED] wrote:

To say the same thing in more detail:  Suppose we start
with N generators, each of which puts out a 160 bit word
containing 80 bits of _trusted_ entropy.  That's a 50%
entropy density.

So you need a 2:1 or heavier compression that won't lose
entropy. If you just need one 160 word out per N in, then
hashing them is the obvious way to do that.

We next consider the case where N-1 of the generators have
failed, or can no longer be trusted, ...

XOR is provably correct because it is _reversible_ in the
thermodynamic sense.  That means it cannot increase or
decrease the entropy.

Yes, but the proof holds for any reversible mapping. XOR
makes each output bit depend on exactly two inputs bits.
Sometimes you want a mapping that mixes them better.

If one input is entirely random, XOR is fine; random ^ x is
random for any x. It is also fine in the case above, where
only one generator works.

If > 1 inputs have some entropy but none have enough,
which seems to me the commonest case, XOR is not
the best choice; it does not mix well enough.

Nyberg's perfect s-boxes are in some ways the ideal
mixer. 2n bits in, n out, all columns and all linear
combinations of columns are bent functions. Big
S-boxes are expensive though, and building even
small Nyberg S-boxes is going to take significant
effort. Designing something that uses a bunch of say
8 by 4 S-boxes to do good mixing on 160-bit chunks
is not trivial either.

You could use IDEA multiplication in mixing. Two 16-bit
words in, one out, and every output bit depends on all
input bits.

If every 16-bit input word has 50% entropy density
(not the same as saying every 160-bit word has it, but perhaps
close enough) then the output should have 100%.

For N > 1, you need to combine those and worry about
overall mixing. If entropy density is known to be ~50%,
you can combine pairs with IDEA to get ~100%, then
use cheaper operations for any other mixing needed.
Addition costs about the same as XOR
but gives slightly better mixing because of carries.

For N > 2 and density < 50%, you could use a cascade
of IDEA operations 8-4-2-1 or whatever. Or do
something like: combine two 160-bit chunks with 10
IDEA multiplications, circular shift the result 8 bits,
combine with next 160-bit input, ...

At some point, you may find yourself designing a hash.
If that happens, just give up and use a standard hash.

--
Sandy Harris,
Quanzhou, Fujian, China

```
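IDEA's multiplication step is small enough to sketch in full. Operands are 16-bit words, with 0 standing in for 2**16, multiplied modulo the prime 2**16 + 1. Being multiplication by a unit modulo a prime, it is a bijection for any fixed second operand, so it cannot discard entropy; the test constants below are arbitrary:

```python
def idea_mul(a, b):
    # IDEA-style multiplication: a, b in 0..65535, with 0 standing in
    # for 2**16; multiply modulo the prime 2**16 + 1.
    a = 0x10000 if a == 0 else a
    b = 0x10000 if b == 0 else b
    r = (a * b) % 0x10001
    return 0 if r == 0x10000 else r

# Multiplication by a fixed word is a bijection mod the prime, so the
# operation is reversible and loses no entropy...
assert len({idea_mul(x, 0x2A78) for x in range(0x10000)}) == 0x10000

# ...and, unlike XOR, flipping one input bit changes many output bits.
diff = idea_mul(0x1234, 0x2A78) ^ idea_mul(0x1235, 0x2A78)
assert bin(diff).count("1") > 1
```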

### Re: combining entropy

```
On Sat, 25 Oct 2008, John Denker wrote:

On 10/25/2008 04:40 AM, IanG gave us some additional information.

Even so, it appears there is still some uncertainty as to
interpretation, i.e. some uncertainty as to the requirements
and objectives.

I hereby propose a new scenario.  It is detailed enough to
be amenable to formal analysis.  The hope is that it will
satisfy the requirements and objectives ... or at least
promote a more precise discussion thereof.

We start with a group comprising N members (machines or
persons).  Each of them, on demand, puts out a 160 bit
word, called a member word.  We wish to combine these
to form a single word, the group word, also 160 bits
in length.

snip

If you are interested in something with a formal analysis, you should
check out work on (single-source or multiple-source) extractors.

```

### Re: combining entropy

```On 10/24/2008 03:40 PM, Jack Lloyd wrote:

Perhaps our seeming disagreement is due to a differing interpretation
of 'trusted'. I took it to mean that at least one pool had a
min-entropy above some security bound. You appear to have taken it to
mean that it will be uniform random?

Thanks, that question advances the discussion.

The answer, however, is no, I did not assume 100% entropy
density.  Here is the critical assumption that I did make:

We consider the scenario where we started with N randomness
generators, but N-1 of them have failed.  One of them is
still working, but we don't know which one.

To say the same thing in more detail:  Suppose we start
with N generators, each of which puts out a 160 bit word
containing 80 bits of _trusted_ entropy.  That's a 50%
entropy density.

Here _trusted_ means we have a provable lower bound on the
entropy.  I assume this is the same as the aforementioned
min-entropy above some security bound.

We next consider the case where N-1 of the generators have
failed, or can no longer be trusted, which is essentially the
same thing for present purposes.  Now we have N-1 generators
putting out zero bits of trusted entropy, plus one generator
putting out 80 bits of trusted entropy.  I emphasize that
these 80 bits of trusted entropy are necessarily uncorrelated
with anything happening on the other N-1 machines, for the
simple reason that they are uncorrelated with anything
happening anywhere else in the universe ... otherwise they
would not qualify as trusted entropy.

XORing together all N of the 160 bit output words produces
a single 160 bit word containing 80 bits of trusted entropy.
Therefore, unless there is some requirement or objective
that I don't know about, the previously-stated conclusion
holds:

XOR is a good-enough combining function,
and nothing else would be any better.

XOR is provably correct because it is _reversible_ in the
thermodynamic sense.  That means it cannot increase or
decrease the entropy.

=

Obviously this numerical example generalizes to any entropy
density from zero to 100% inclusive.

To summarize:  The key assumptions are that we have N-1
broken generators and one working generator.  We don't
know which one is working, but we know that it is working
correctly.

For more about the theory and practice of high-entropy
randomness generators, see
http://www.av8n.com/turbid/

```
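The reversibility argument above can be made exhaustive at a reduced word size. A sketch with 16-bit words:

```python
import secrets

WIDTH = 16  # scaled-down word size so the check can be exhaustive

# XOR with any fixed word k is a bijection and its own inverse: it is
# reversible, so it can neither create nor destroy entropy.
k = secrets.randbits(WIDTH)
assert len({x ^ k for x in range(1 << WIDTH)}) == 1 << WIDTH
assert all((x ^ k) ^ k == x for x in range(1 << WIDTH))

# Hence if one member word is uniform and independent of the other
# N-1 words, XORing the broken members' words in merely relabels the
# uniform distribution: the group word stays uniform.
```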

### Re: combining entropy

```Jonathan Katz wrote:
I think it depends on what you mean by "N pools of entropy".

I can see that my description was a bit weak, yes.  Here's a better
view, incorporating the feedback:

If I have N people, each with a single pool of entropy,
and I pool each of their contributions together with XOR,
is that as good as it gets?

My assumptions are:

* I trust no single person and their source of entropy.

* I trust at least one person + pool.

* Entropy by its definition is independent and is private
(but it is worth stating these, as any leaks will kill us!)

* Efficiency is not a concern, we just expand the pool size
(each pool is size X, and the result is size X).

* The people have ordinary skill.

now to respond to the questions:

1.  I am assuming that at least one pool is good entropy.  This is
partly an assumption of desperation or simplicity.

In practice, no individual (source or person) is trusted at an
isolated level.  But this leads to a sort of circular argument that
says, nobody is trusted.  We can solve this two ways:

I join the circle.  I trust myself, *but* I don't trust
my source of entropy.  So this is still hopeful.

We ensure that there are at least two cartels in the
circle that don't trust each other!  Then, add a dash
of game theory, and the two cartel pools should at
least be independent of each other, and therefore the
result should be good entropy.

I suspect others could more logically arrive at a better assumption,
but for now, the assumption of one trusted person/pool seems to
cover it.

2.  Having thought about Stephan's comment a bit more (because it
was precise), it is clear that I need to stress the
privacy / independence criteria, even if strictly covered by the
definition of entropy.  Too much of the practical aspects will
depend on ensuring independence of the pools to just lean blithely
on the definitions.  I had missed that dependency.

3.  The proposals on concatenation and cleanup are tempting.  In
Jon's words, it can solve obvious problems.  However, they introduce
a complexity of understanding the cleanup function, and a potential
conflict with the assumption that:

The people have ordinary skill.

Which means they are unable to determine whether a cryptographically
complex cleanup function is indeed cleaning, or not.

Here, then, we reach an obvious limit, in that the people have to be
able to determine that the XOR is doing its job, and they need to be
able to do a bit of research to decide what is their best guess at
their private entropy source.

Thanks to all.

iang

```

### combining entropy

```If I have N pools of entropy (all same size X) and I pool them
together with XOR, is that as good as it gets?

My assumptions are:

* I trust no single source of Random Numbers.
* I trust at least one source of all the sources.
* no particular difficulty with lossy combination.

iang

```

### Re: combining entropy

```On 09/29/2008 05:13 AM, IanG wrote:
My assumptions are:

* I trust no single source of Random Numbers.
* I trust at least one source of all the sources.
* no particular difficulty with lossy combination.

If I have N pools of entropy (all same size X) and I pool them
together with XOR, is that as good as it gets?

Yes.

The second assumption suffices to prove the result,
since (random bit) XOR (anything) is random.

```

### Re: combining entropy

```
[Moderator's note: top posting is not tasteful. --Perry]

I think it depends on what you mean by "N pools of entropy".

Are you assuming that one of these sources is (pseudo)random, but you
don't know which one? Are you assuming independence of these different
sources? If both these assumptions hold, then XOR will do the trick.

If your only assumption is that one of the sources has high min-entropy
(but may not necessarily be uniform), or if the independence assumption
does not hold, then you may need to use some form of randomness
extraction.

On Mon, 29 Sep 2008, IanG wrote:

If I have N pools of entropy (all same size X) and I pool them
together with XOR, is that as good as it gets?

My assumptions are:

* I trust no single source of Random Numbers.
* I trust at least one source of all the sources.
* no particular difficulty with lossy combination.

iang

```
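As a concrete single-source instance of such extraction: the classic von Neumann extractor turns independent but biased bits into unbiased ones. This only covers the biased-but-independent case; the multi-source and general min-entropy settings Katz alludes to need the extractor literature proper. The 0.9 bias and the seed below are arbitrary:

```python
import random

def von_neumann_extract(bits):
    # von Neumann extractor: for independent, identically biased bits,
    # map the pair 01 -> 0, 10 -> 1, and discard 00 / 11 pairs.
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# A heavily biased but independent source: P(1) = 0.9.
rng = random.Random(1234)
raw = [1 if rng.random() < 0.9 else 0 for _ in range(200000)]
out = von_neumann_extract(raw)

# The extracted bits are unbiased to within sampling error, at the
# cost of throwing most of the raw bits away.
ones = sum(out) / len(out)
assert abs(ones - 0.5) < 0.02
```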

### Re: combining entropy

```On Mon, Sep 29, 2008 at 1:13 PM, IanG [EMAIL PROTECTED] wrote:
If I have N pools of entropy (all same size X) and I pool them
together with XOR, is that as good as it gets?

Surely not. Consider N pools each of size 1 bit. Clearly you can do
better than the 1 bit your suggestion would yield.

More concretely, concatenation would seem better than XOR.

```

### Re: combining entropy

```

On Oct 24, 2008, at 14:29, John Denker wrote:

On 09/29/2008 05:13 AM, IanG wrote:

My assumptions are:

* I trust no single source of Random Numbers.
* I trust at least one source of all the sources.
* no particular difficulty with lossy combination.

If I have N pools of entropy (all same size X) and I pool them
together with XOR, is that as good as it gets?

Yes.

The second assumption suffices to prove the result,
since (random bit) XOR (anything) is random.

Ah, but for this to hold, you will also have to assume that the N
pools are all independent.  If they are not, you cannot even guarantee
one single bit of entropy (whatever that is).  For example, if N =
2, your trusted source is pool 1, and I can read pool 1 and control
pool 2, I set pool 2 = pool 1, and all you get is zeros. And that
surely does not contain X bits of entropy for any reasonable
definition of entropy.

Fun,

Stephan

```
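Stephan's two-pool attack is worth seeing in code. A sketch (160-bit pools as in the thread; the chosen constant is arbitrary):

```python
import secrets

X = 160  # pool size in bits

pool1 = secrets.randbits(X)    # the trusted pool
pool2 = pool1                  # adversary reads pool 1 and echoes it

# All entropy cancels: the combined output is identically zero.
assert pool1 ^ pool2 == 0

# Worse, echoing pool1 ^ c forces the output to any chosen constant c.
c = 0x123456789
assert pool1 ^ (pool1 ^ c) == c
```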

### Re: combining entropy

```L.S.,

If I have N pools of entropy (all same size X) and I pool them
together with XOR, is that as good as it gets?

My assumptions are:

* I trust no single source of Random Numbers.
* I trust at least one source of all the sources.
* no particular difficulty with lossy combination.
I take the last item to mean that you do not mind wasting entropy but
want to be sure the resulting random number is unpredictable.

If you add the assumption:

* The sources are independent of each other

then the XOR of the random sources will be at least as unpredictable as
the most unpredictable individual random source (to keep away from the
entropy discussion).  As far as I can see, this "if at least one
source is unpredictable for a workload of x, the resulting random is
also at least that unpredictable" property seems to be what you are
looking for.

If the sources are not independent, in the most extreme case: the sources
are the same, the result is not so good. XORing in the same RNG stream
twice, however good the RNG, is not so useful ;-)

Without the threatmodel, I am not sure if this is a problem for you, but
if the attacker has control or knowledge of some of the sources, he
also knows the XOR of the remaining ones. In the case he knows all but
one sources, and the remaining source is not so unpredictable (LFSR,
poorly biased noise source), the result can be quite predictable (and in
weak RNG designs, the remaining source might be compromised).
Note that this could also be used to force the combined RNG to more
likely generate a chosen output.

Using hash functions to combine the randoms makes it computationally
harder for such chosen results to be generated; it quickly becomes
effectively a search for hash collisions in which you have only limited
choice of the input. Temporary lulls in the quality of the random
sources are also handled much better. Peter Gutmann's dissertation has
a very good description of what he did to harden the random generation
in his cryptolib against many such attacks/mistakes.

With kind regards,
Wouter Slegers


```
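Wouter's chosen-output attack on the XOR combiner, and why a hash combiner resists it, can be sketched as follows (illustrative code, not from the thread; SHA-256 stands in for the hash function):

```python
import hashlib
import os

target = bytes(16)                        # the output Mal wants: all zeros
s1, s2 = os.urandom(16), os.urandom(16)   # honest sources Mal can observe

# XOR combiner: Mal picks his contribution to force the final result.
mal = bytes(a ^ b ^ t for a, b, t in zip(s1, s2, target))
xor_out = bytes(a ^ b ^ c for a, b, c in zip(s1, s2, mal))
assert xor_out == target                  # chosen output achieved for free

# Hash combiner: forcing a chosen output is now a preimage search over
# Mal's limited input; he cannot steer the digest the same way.
hash_out = hashlib.sha256(s1 + s2 + mal).digest()[:16]
```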

### Re: combining entropy

```
On Oct 24, 2008, at 15:37, Stephan Neuhaus wrote:

Ah, but for this to hold, you will also have to assume that the N
pools are all independent.

Slight correction: You will have to assume that one of the trusted
pools is independent from the others.

Best,

Stephan


```

### Re: combining entropy

```

IanG wrote:

If I have N pools of entropy (all same size X) and I pool them
together with XOR, is that as good as it gets?

My assumptions are:

* I trust no single source of Random Numbers.
* I trust at least one of the sources.
* no particular difficulty with lossy combination.

Do you really trust that no single source of entropy can have knowledge
of the other sources' output, so that it can surreptitiously correlate
its own?

I.e., you are also assuming that these sources are *independent*.

--

- Thierry Moreau


```

### Re: combining entropy

```

On Sep 29, 2008, at 5:13 AM, IanG wrote:

If I have N pools of entropy (all same size X) and I pool them
together with XOR, is that as good as it gets?

My assumptions are:

* I trust no single source of Random Numbers.
* I trust at least one of the sources.
* no particular difficulty with lossy combination.

It's within epsilon for a good many epsilon.

I'm presuming you want the resultant size to be X as well. Otherwise,
Ben's suggestion of concatenation is obviously better and solves the
obvious problems.

Another solution is to hash the N pools together with a suitably
secure function. (Most of the available algorithms are suitably secure
for this purpose.) The downside is that this caps your entropy at the
output size of the hash function. It's better than XOR because it's
not linear, blah, blah, blah.

However, if you had three pools, each relatively large, it doesn't
hurt anything to XOR them together. It's pretty easy to prove that the
result does not decrease entropy, but I think it's impossible to prove
that it increases it. XORing is really taking the max of the N pools.

You have to realize, though, that if there's a chance of leaking an
entropy pool, XOR is a bad function. If whoever produced pool X sees
X^Y, then they know Y. But you know that, too.

Jon


```
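Jon's two options, XOR (which keeps the full width X) versus hashing (which caps the output at the digest size), look like this in outline (a sketch with assumed pool sizes):

```python
import hashlib
import os

pools = [os.urandom(64) for _ in range(3)]    # three relatively large pools

# XOR keeps the full pool width X (64 bytes here) but is linear.
xor_mix = bytes(a ^ b ^ c for a, b, c in zip(*pools))

# Hashing is non-linear but caps the entropy at the digest size.
hash_mix = hashlib.sha256(b"".join(pools)).digest()   # 32 bytes at most
assert len(xor_mix) == 64 and len(hash_mix) == 32
```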

### Re: combining entropy

```On Fri, Oct 24, 2008 at 10:23:07AM -0500, Thierry Moreau wrote:

Do you really trust that no single source of entropy can have knowledge of
the other sources' output, so that it can surreptitiously correlate its own?

I.e., you are also assuming that these sources are *independent*.

I do not think one means the other here.

An omniscient malicious RNG source seems quite unlikely in most threat
models. However that is a very different statement from saying that
lacking such an attacker, you can safely assume your 'pools of
entropy' (to quote the original question) are independent in the
information-theoretic sense.

Say you execute (on a Linux machine) two commands, like ifconfig -a
and netstat -s (which print ASCII text with statistics about network
interfaces and network protocols, respectively), capturing the two
outputs as two entropy pools.

Both have some amount of entropy (perhaps zero if an attacker is on
the machine and runs his commands at the same time as yours - and
perhaps quite a bit more if the local machine happens to be safe). But
they are certainly not statistically independent!  Information in one
will be somewhat reflected in the other (packet counters), and of
course at the macro level all your input bytes have the high bit unset,
so if you combine via XOR your output will have at best 0.875 bits of
entropy per bit.

To address IanG's question more directly, my first thought would be to
use something like the design Hugo Krawczyk describes in On
Extract-then-Expand Key Derivation Functions and an HMAC-based KDF
(http://www.ee.technion.ac.il/~hugo/kdf/kdf.pdf) or one of the related
PRNG designs he references. Then use the output of the HMAC PRF to
feed the DT vector of an X9.31 PRNG (using block cipher du jour), a
trick AFAIK invented by Peter Gutmann which has always seemed like a
good worst-case-scenario trick to me (for instance, if the code for
the hash's compression function is miscompiled), though at the cost of
extra code/design complexity (and thus points of failure) - as always,
there are tradeoffs.

-Jack (IANAC)


```
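Jack's point about ASCII command output is easy to check: every ASCII byte has its high bit clear, so the XOR of two such outputs does too, capping the entropy at 7 of every 8 bit positions (the strings below are illustrative stand-ins, not real command output):

```python
# Stand-ins for the ASCII outputs of `ifconfig -a` and `netstat -s`.
a = b"eth0: RX packets 102345 errors 0"
b = b"Tcp: 4821 active connection opens"

mixed = bytes(x ^ y for x, y in zip(a, b))
# ASCII bytes have the high bit unset, so every XORed byte does too:
assert all(byte < 0x80 for byte in mixed)
# At most 7 of 8 bit positions can vary: <= 0.875 bits of entropy per bit.
```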

### Re: combining entropy

```On 10/24/2008 01:12 PM, Jack Lloyd wrote:

is a very different statement from saying that
lacking such an attacker, you can safely assume your 'pools of
entropy' (to quote the original question) are independent in the
information-theoretic sense.

The question, according to the original poster, is not
whether it is safe to assume that one of the entropy
sources can be trusted.  Safe or not, the question explicitly
assumed that one of the sources was trusted ... and asked
what the consequences of that assumption would be.

In particular, evidently the scenario was that we started
with N high-entropy randomness generators, but N-1 of
them have failed.  One of them is still working, but we
don't know which one.

In that scenario, XOR is a good-enough combining function,
and nothing else would be any better.

If somebody wants to discuss a different scenario, please
clarify what the new scenario is.

Suggesting that the trusted source is correlated with one
of the other sources is quite contrary to the requirements
expressed in the original question.

That is to say, if the source is not independent, it was
never eligible to be a trusted entropy source.

If you want to quantify this, write down the _joint_ probability
distribution for all the sources, and calculate the entropy
of that distribution in the usual way.

1) There is _one_ very precise meaning for entropy that is
well-established and conventional across a wide range of
fields ... everything from kitchen appliances to cosmology.
http://www.av8n.com/physics/thermo-laws.htm#sec-relevance

2) Authors are allowed to define and redefine terms however
they please ... _provided_ they define any nonstandard terms
that they use.  Anybody who takes a well-established standard
term and uses it in a nonstandard way has a double-extra-special
duty to explain what he's doing.

I assume the original poster was using the term entropy
in the conventional, precise sense ... and until I hear
otherwise I will continue to do so.


```
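John's prescription, write down the joint distribution and compute its entropy the usual way, can be illustrated with a toy example (a hypothetical two-source setup, not from the thread): when one source mirrors the other, the joint entropy falls well below the sum of the marginals.

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy in bits of an empirical distribution."""
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Two "sources": A is a fair coin flip, and B simply mirrors A.
samples = [(a, a) for a in (0, 1) for _ in range(50)]   # 100 joint outcomes

h_joint = entropy(Counter(samples))                     # H(A,B) = 1 bit
h_sum = (entropy(Counter(a for a, _ in samples))
         + entropy(Counter(b for _, b in samples)))     # H(A)+H(B) = 2 bits
assert h_joint < h_sum   # dependence: joint entropy below sum of marginals
```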

### Re: combining entropy

```On Fri, Oct 24, 2008 at 03:20:24PM -0700, John Denker wrote:
On 10/24/2008 01:12 PM, Jack Lloyd wrote:

is a very different statement from saying that
lacking such an attacker, you can safely assume your 'pools of
entropy' (to quote the original question) are independent in the
information-theoretic sense.

The question, according to the original poster, is not
whether it is safe to assume that one of the entropy
sources can be trusted.  Safe or not, the question explicitly
assumed that one of the sources was trusted ... and asked
what the consequences of that assumption would be.

Perhaps our seeming disagreement is due to a differing interpretation
of 'trusted'. I took it to mean that at least one pool had a
min-entropy above some security bound. You appear to have taken it to
mean that it will be uniform random?

-Jack


```