Re: [Cryptography] Thoughts on hardware randomness sources

2013-09-14 Thread Marcus D. Leech

On 09/13/2013 11:32 PM, Jerry Leichter wrote:

On Sep 12, 2013, at 11:06 PM, Marcus D. Leech wrote:

There is a class of hyper-cheap USB audio dongles with very uncomplicated 
mixer models.  A small flotilla of those might get you some fault-tolerance.
  My main thought on such things relates to servers, where power consumption 
isn't really much of an issue

I'm not sure what servers you're talking about here.

If by server you mean one of those things in a rack at Amazon or Google or 
Rackspace - power consumption, and its consequence, cooling - is *the* major 
issue these days.  Also, the servers used in such data centers don't have 
multiple free USB inputs - they may not have any.

If by server you mean some quite low-power box in someone's home ... power is 
again an issue.  People want these things small, fan-free, and dead reliable.  
And they are increasingly aware of the electric bills always-on devices produce.

About the only server for which power is not an issue is one of those 
extra-large desktops that small businesses use.

 -- Jerry

I was mostly contrasting with mobile systems, where power consumption 
is at an absolute premium.


The USB sound systems I'm thinking of consume 350 mW while operating, and 
about 300 uW when idle.  A couple or three of those, even on a stripped-down 
server, would contribute only in the smallest way to extra power consumption.  
And the extra computational load?  When these servers are running flat out 
serving up secured connections?  I would guess the phrase "an inconsiderable 
trifle" would apply.
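
As a rough back-of-the-envelope, using the figures above (the $0.10/kWh
electricity price is an assumed round number for illustration, not
something from the thread):

    # Back-of-the-envelope: three USB audio dongles at 350 mW each, running 24/7.
    dongles = 3
    watts_each = 0.350
    hours_per_year = 24 * 365

    kwh_per_year = dongles * watts_each * hours_per_year / 1000.0
    cost_per_year = kwh_per_year * 0.10   # assumed $0.10/kWh

    print(f"{kwh_per_year:.1f} kWh/year, about ${cost_per_year:.2f}/year")
    # -> roughly 9.2 kWh/year, i.e. on the order of a dollar a year per server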


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Thoughts on hardware randomness sources

2013-09-14 Thread Jerry Leichter
On Sep 12, 2013, at 11:06 PM, Marcus D. Leech wrote:
 There is a class of hyper-cheap USB audio dongles with very uncomplicated 
 mixer models.  A small flotilla of those might get you some fault-tolerance.
  My main thought on such things relates to servers, where power consumption 
 isn't really much of an issue
I'm not sure what servers you're talking about here.

If by server you mean one of those things in a rack at Amazon or Google or 
Rackspace - power consumption, and its consequence, cooling - is *the* major 
issue these days.  Also, the servers used in such data centers don't have 
multiple free USB inputs - they may not have any.

If by server you mean some quite low-power box in someone's home ... power is 
again an issue.  People want these things small, fan-free, and dead reliable.  
And they are increasingly aware of the electric bills always-on devices produce.

About the only server for which power is not an issue is one of those 
extra-large desktops that small businesses use.

-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Peter Fairbrother
Recommendations are given herein as: symmetric_key_length - 
recommended_equivalent_RSA_key_length, in bits.


Looking at Wikipedia,  I see:

As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in 
strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit 
symmetric keys and 3072-bit RSA keys to 128-bit symmetric keys. RSA 
claims that 1024-bit keys are likely to become crackable some time 
between 2006 and 2010 and that 2048-bit keys are sufficient until 2030. 
An RSA key length of 3072 bits should be used if security is required 
beyond 2030.[6]


http://www.emc.com/emc-plus/rsa-labs/standards-initiatives/key-size.htm

That page doesn't give any actual recommendations or long-term dates 
from RSA now. It gives the traditional recommendations 80 - 1024 and 
112 - 2048, and a 2000 Lenstra/Verheul minimum commercial 
recommendation for 2010 of 78 - 1369.



NIST key management guidelines further suggest that 15360-bit RSA keys 
are equivalent in strength to 256-bit symmetric keys.[7]


http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf

NIST also give the traditional recommendations, 80 - 1024 and 112 - 
2048, plus 128 - 3072, 192 - 7680, 256 - 15360.




I get that 1024 bits is about on the edge, about equivalent to 80 bits 
or a little less, and may be crackable either now or sometime soon.


But, I wonder, where do these longer equivalent figures come from?

I don't know, I'm just asking - and I chose Wikipedia because that's the 
general wisdom.


Is this an area where NSA have shaped the worldwide cryptography 
marketplace to make it more tractable to advanced cryptanalytic 
capabilities being developed by NSA/CSS, by perhaps greatly 
exaggerating the equivalent lengths?


And by emphasising the difficulty of using longer keys?

As I said, I do not know. I merely raise the possibility.


[ Personally, I recommend 1,536 bit RSA keys and DH primes for security 
to 2030, 2,048 if 1,536 is unavailable, 4,096 bits if paranoid/high 
value; and not using RSA at all for longer term security. I don't know 
whether someone will build that sort of quantum computer one day, but 
they might. ]



-- Peter Fairbrother
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Paul Hoffman
Also see RFC 3766 from almost a decade ago; it has stood up fairly well.

--Paul Hoffman
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism proof email, namespaces, and anonymity

2013-09-14 Thread Max Kington
On Fri, Sep 13, 2013 at 10:12 PM, Perry E. Metzger pe...@piermont.com wrote:

 On Fri, 13 Sep 2013 16:55:05 -0400 John Kelsey crypto@gmail.com
 wrote:
  Everyone,
 
  The more I think about it, the more important it seems that any
  anonymous email-like communications system *not* include people who
  don't want to be part of it, and have lots of defenses to prevent
  its anonymous communications from becoming a nightmare for its
  participants.  If the goal is to make PRISM stop working and make
  the email part of the internet go dark for spies (which definitely
  includes a lot more than just US spies!), then this system has to
  be something that lots of people will want to use.
 
  There should be multiple defenses against spam and phishing and
  other nasty things being sent in this system, with enough
  designed-in flexibility to deal with changes in attacker behavior
  over time.

 Indeed. As I said in the message I just pointed Nico at:
 http://www.metzdowd.com/pipermail/cryptography/2013-August/016874.html

 Quoting myself:

Spam might be a terrible, terrible problem in such a network since
it could not easily be traced to a sender and thus not easily
blocked, but there's an obvious solution to that. I've been using
Jabber, Facebook and other services where all or essentially all
communications require a bi-directional decision to enable messages
for years now, and there is virtually no spam in such systems
because of it. So, require such bi-directional friending within
our postulated new messaging network -- authentication is handled
by the public keys of course.


The keys. This, to me, is the critical point for widespread adoption: key
management. How do you do this in a way that doesn't put people off
immediately?

Two new efforts I'm aware of that are trying to solve this in a
user-friendly way are https://parley.co/#how-it-works and http://mailpile.is.

Parley's approach does at least deal with the longevity of the private key,
although it does boil security down to a password: if I can obtain their
packet and the salt, I can probably brute-force the password.
I've exchanged mails with the mailpile.is guys and I think they're still
looking at the options.
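
To make the Parley-style concern concrete, here is a minimal sketch of an
offline guessing attack against a password-derived key, assuming a generic
scrypt-style KDF with illustrative parameters (this is not Parley's actual
scheme or its real parameters):

    import hashlib

    def derive_key(password, salt):
        # Generic password-based KDF; a real deployment would tune these
        # parameters and add rate limits, but offline attacks ignore the latter.
        return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

    def offline_guess(salt, target_key, candidates):
        # With the salt and something to test derived keys against, the
        # attacker simply iterates candidate passwords at whatever rate
        # the KDF allows.
        for pw in candidates:
            if derive_key(pw, salt) == target_key:
                return pw
        return None

    # Toy demonstration: a weak password falls to a tiny dictionary.
    salt = b"per-user-salt"
    victim_key = derive_key(b"hunter2", salt)
    print(offline_guess(salt, victim_key, [b"123456", b"password", b"hunter2"]))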

Max
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] Key management, key storage. (was Re: prism proof email, namespaces, and anonymity)

2013-09-14 Thread Perry E. Metzger
On Sat, 14 Sep 2013 17:23:40 +0100 Max Kington
mking...@webhanger.com wrote:
 The keys. This to me is the critical point for widespread adoption,
 key management. How do you do this in a way that doesn't put people
 off immediately.

You don't seem to be entirely talking about key management, given
that you talk about mailpile and parley. Parley seems to be simply
talking about *key storage* for example, which is a different kettle
of fish.

However, on the topic of key management itself, my own proposal was
described here:

http://www.metzdowd.com/pipermail/cryptography/2013-August/016870.html

In summary, I proposed a way you can map IDs to keys through pure
long term observation/widely witnessed events. The idea is not
original given that to some extent things like Certificate
Transparency already do this in other domains.
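
As a toy illustration of the general idea (a sketch only, not the specific
proposal in the linked message), a verifier might accept an ID-to-key
binding only once the same key has been reported by several independent
vantage points over a long enough window:

    from collections import defaultdict

    # observations[identity] is a list of (vantage_point, key, unix_timestamp)
    # tuples accumulated over months of passive observation.
    observations = defaultdict(list)

    def record(identity, vantage, key, ts):
        observations[identity].append((vantage, key, ts))

    def accepted_key(identity, min_vantages=3, min_days=90):
        """Return a key only if all observers saw the same key, enough
        vantage points agree, and the observations span long enough."""
        obs = observations[identity]
        if not obs:
            return None
        keys = {key for _, key, _ in obs}
        if len(keys) != 1:
            return None                       # conflicting keys seen: refuse
        vantages = {v for v, _, _ in obs}
        span_days = (max(t for _, _, t in obs) - min(t for _, _, t in obs)) / 86400
        if len(vantages) >= min_vantages and span_days >= min_days:
            return keys.pop()
        return None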


Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Perry E. Metzger
On Sat, 14 Sep 2013 09:31:22 -0700 Paul Hoffman
paul.hoff...@vpnc.org wrote:
 Also see RFC 3766 from almost a decade ago; it has stood up fairly
 well.

For those not aware, the document, by Paul and Hilarie Orman,
discusses equivalent key strengths and practical brute force methods,
giving extensive detail on how all calculations were done.

A URL for the lazy:

http://tools.ietf.org/html/rfc3766

It is very well done. I'd like to see an update done but it does
feel like the methodology was well laid out and is difficult to
argue with in general. The detailed numbers are slightly different
from others out there, but not so much as to change the general
recommendations that have been floating around.

Their table, from April 2004, looked like this:

   +-------------+-----------+--------------+--------------+
   | System      |           |              |              |
   | requirement | Symmetric | RSA or DH    | DSA subgroup |
   | for attack  | key size  | modulus size | size         |
   | resistance  | (bits)    | (bits)       | (bits)       |
   | (bits)      |           |              |              |
   +-------------+-----------+--------------+--------------+
   |  70         |  70       |   947        | 129          |
   |  80         |  80       |  1228        | 148          |
   |  90         |  90       |  1553        | 167          |
   | 100         | 100       |  1926        | 186          |
   | 150         | 150       |  4575        | 284          |
   | 200         | 200       |  8719        | 383          |
   | 250         | 250       | 14596        | 482          |
   +-------------+-----------+--------------+--------------+

They had some caveats, such as the statement that if TWIRL like
machines appear, we could presume an 11 bit reduction in strength --
see the RFC itself for details.
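
For a rough sense of where such equivalences come from, they broadly track
the asymptotic cost of the general number field sieve.  A minimal sketch of
that calculation (the standard L(1/3) formula with the o(1) term dropped,
so it will not reproduce the RFC's figures exactly; the RFC's own
methodology also folds in memory and hardware cost):

    import math

    def gnfs_equivalent_bits(modulus_bits):
        # log2 of exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)), N ~ 2^modulus_bits
        ln_n = modulus_bits * math.log(2)
        work = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
        return work / math.log(2)

    for bits in (1024, 2048, 3072, 7680, 15360):
        print(bits, round(gnfs_equivalent_bits(bits)))
    # Prints roughly 87, 117, 139, 203, 269 -- the same ballpark as the
    # 80/112/128/192/256 symmetric equivalences, differing by the constants
    # and memory assumptions each estimate chooses.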

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Key management, key storage. (was Re: prism proof email, namespaces, and anonymity)

2013-09-14 Thread Trevor Perrin
On Sat, Sep 14, 2013 at 9:46 AM, Perry E. Metzger pe...@piermont.com wrote:

 However, on the topic of key management itself, my own proposal was
 described here:

 http://www.metzdowd.com/pipermail/cryptography/2013-August/016870.html

 In summary, I proposed a way you can map IDs to keys through pure
 long term observation/widely witnessed events. The idea is not
 original given that to some extent things like Certificate
 Transparency already do this in other domains.


Hi Perry,

What you're proposing is multipath probing of email users' public
keys.  Certificate Transparency isn't the right comparison, but this
has certainly been discussed in other domains:

Public Spaces Key Infrastructure / SecSpider (Osterweil et al, 2006, 2007) [1][2]
Perspectives (for HTTPS - Wendlandt et al, 2008) [3]
Convergence (for HTTPS - Marlinspike, 2011) [4]
Vantages (for DNSSEC - Osterweil et al, 2013) [5]

Probing servers is easier than probing email users, and publishing a
servername-to-key directory is also easier, as server names don't have
the same privacy concerns as email names.  Still, it's an interesting
idea.

Key changes are a challenge to this approach, which people tend to overlook.

One approach is to have the probed party declare a commitment to
maintaining its public key constant for some period of time, and have
this commitment be detected by the probing parties.  This provides
some timing guarantees so that the rest of the system can probe and
download new results at regular intervals, without having sudden key
changes cause glitches.  Things like HPKP [6] and TACK [7] explore
this option.
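
A toy sketch of the commitment idea, deliberately not HPKP's or TACK's
actual formats: the probed party declares "this key, at least until time
T", and a prober treats an earlier key change as an anomaly rather than
silently accepting it:

    import time
    from dataclasses import dataclass

    @dataclass
    class KeyCommitment:
        key_fingerprint: str   # hash of the public key being committed to
        not_before: float      # Unix time when the commitment was first observed
        min_lifetime: float    # seconds the party promised to keep the key

    def check_probe(commitment, observed_fingerprint, now=None):
        """Return 'ok', 'rotated', or 'VIOLATION' for a fresh probe result."""
        now = now or time.time()
        if observed_fingerprint == commitment.key_fingerprint:
            return "ok"
        if now < commitment.not_before + commitment.min_lifetime:
            # Key changed before the promised period expired: suspicious.
            return "VIOLATION"
        return "rotated"       # change after the commitment window: allowed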


Trevor


[1] http://irl.cs.ucla.edu/papers/pski.pdf
[2] http://secspider.cs.ucla.edu/docs.html
[3] http://perspectives-project.org/
[4] http://convergence.io/
[5] http://irl.cs.ucla.edu/~eoster/doc/pubdata-tpds13.pdf
[6] http://tools.ietf.org/html/draft-ietf-websec-key-pinning-08
[7] http://tack.io
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Peter Fairbrother

On 14/09/13 17:14, Perry E. Metzger wrote:

On Sat, 14 Sep 2013 16:53:38 +0100 Peter Fairbrother
zenadsl6...@zen.co.uk wrote:

NIST also give the traditional recommendations, 80 - 1024 and 112
- 2048, plus 128 - 3072, 192 - 7680, 256 - 15360.

[...]

But, I wonder, where do these longer equivalent figures come from?

I don't know, I'm just asking - and I chose Wikipedia because that's
the general wisdom.

[...]

[ Personally, I recommend 1,536 bit RSA keys and DH primes for
security to 2030, 2,048 if 1,536 is unavailable, 4,096 bits if
paranoid/high value; and not using RSA at all for longer term
security. I don't know whether someone will build that sort of
quantum computer one day, but they might. ]


On what basis do you select your numbers? Have you done
calculations on the time it takes to factor numbers using modern
algorithms to produce them?


Yes, some - but I don't believe that's enough. Historically, it would 
not have been (and wasn't) - it doesn't take account of algorithm 
development.


I actually based the 1,536-bit figure on the old RSA factoring 
challenges, and how long it took to break them.


We are publicly at 768 bits now, and that's very expensive 
http://eprint.iacr.org/2010/006.pdf - and, over the last twenty years 
the rate of public advance has been about 256 bits per decade.


So at that rate 1,536 bits would become possible but very expensive in 
2043, and would still be impossible in 2030.
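
Spelling that extrapolation out, using the figures above (768 bits public
as of now, and roughly 256 bits of public progress per decade):

    # Linear extrapolation of the public factoring record, per the figures above.
    public_record_bits = 768     # RSA-768, the current public record
    baseline_year = 2013
    bits_per_decade = 256

    def year_reachable(target_bits):
        return baseline_year + (target_bits - public_record_bits) / bits_per_decade * 10

    print(year_reachable(1024))  # ~2023
    print(year_reachable(1536))  # ~2043, as estimated in the text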



If 1,024 is possible but very expensive for NSA now, and 256 bits per 
decade is right, then 1,536 may just be on the verge of edging into 
possibility in 2030 - but I think progress is going to slow (unless they 
develop quantum computers).


We have already found many of the easy-to-find advances in theory.



-- Peter Fairbrother
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread ianG

On 14/09/13 18:53 PM, Peter Fairbrother wrote:


But, I wonder, where do these longer equivalent figures come from?



http://keylength.com/ is a better repository to answer your question.



iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Adam Back

On Sat, Sep 14, 2013 at 12:56:02PM -0400, Perry E. Metzger wrote:

http://tools.ietf.org/html/rfc3766

  | requirement | Symmetric | RSA or DH    | DSA subgroup |
  | for attack  | key size  | modulus size | size         |
  +-------------+-----------+--------------+--------------+
  | 100         | 100       | 1926         | 186          |

if TWIRL like machines appear, we could presume an 11 bit reduction in
strength


100 - 11 = 89 bits.  Bitcoin is pushing 75 bits/year right
now with GPUs and 65nm ASICs (not sure what the balance is).  Does that place
a ~2000-bit modulus around the safety margin of 56-bit DES when that was being
argued about (the previous generation of NSA key-strength sabotage)?
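
The rough arithmetic behind that comparison, using the figures quoted above
(very loose, since NFS sieving work is not directly comparable to SHA-256
hashing):

    # ~89 bits of work (RFC 3766's ~2000-bit row, minus the presumed TWIRL-style
    # 11-bit reduction) versus ~2^75 SHA-256 evaluations/year from Bitcoin mining.
    attack_bits = 100 - 11
    bitcoin_bits_per_year = 75

    network_years = 2 ** (attack_bits - bitcoin_bits_per_year)
    print(network_years)   # 16384: ~16k years of the whole 2013 Bitcoin network,
                           # if its effort could be repurposed one-for-one (it can't)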

Anyone have some projections for the cost of a TWIRL to crack 2048-bit RSA?
Projecting 2048 out to 2030 doesn't seem like a hugely conservative
estimate.  Bear in mind NSA would probably be willing to drop $1b one-off to
be able to crack public-key crypto for the next decade.  There have been
cost, performance, power, and density improvements since TWIRL was proposed.
Maybe the single largest employer of mathematicians can squeeze a few
incremental optimizations out of the TWIRL algorithm or implementation strategy.

Tin foil or not: maybe it's time for 3072-bit RSA/DH and 384/512-bit ECC?

Adam
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Perfection versus Forward Secrecy

2013-09-14 Thread Tony Arcieri
On Thu, Sep 12, 2013 at 11:08 PM, Eugen Leitl eu...@leitl.org wrote:

 I do not think that the spooks are too far away from open research in
  QC hardware. It does not seem likely that we'll be getting real QC
 any time soon, if ever.


I don't think the spooks are ahead of the public either, and I really doubt
the NSA has a large quantum computer.

We still haven't seen quantum computers built yet which can truly rival
their conventional electronic brethren, especially if you look at it from a
cost perspective. DWave computers are interesting from a novelty
perspective, but not really ready to replace existing computers, even for
highly specialized tasks like running Shor's algorithm.

Nevertheless, if you've been following the trends in quantum computers over
the last few years, they are getting larger, and DWave is an example of
them moving out of the labs and turning into something you can buy.

I wouldn't be surprised to see a large quantum computer built in the next
two decades.

-- 
Tony Arcieri
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] Quantum Computers for Shor's Algorithm (was Re: Perfection versus Forward Secrecy)

2013-09-14 Thread Perry E. Metzger
On Sat, 14 Sep 2013 11:49:50 -0700 Tony Arcieri basc...@gmail.com
wrote:
 We still haven't seen quantum computers built yet which can truly
 rival their conventional electronic brethren, especially if you
 look at it from a cost perspective. DWave computers are interesting
 from a novelty perspective, but not really ready to replace
 existing computers, even for highly specialized tasks like running
 Shor's algorithm.
 
 Nevertheless, if you've been following the trends in quantum
 computers over the last few years, they are getting larger, and
 DWave is an example of them moving out of the labs and turning into
 something you can buy.
 
 I wouldn't be surprised to see a large quantum computer built in
 the next two decades.

DWave has never unambiguously shown their machine actually is a
quantum computer, and even if it is, given its design it very
specifically cannot run Shor's algorithm or anything like it.

I'm unaware of a quantum computer of more than five qbits that has
been demonstrated that can run Shor's algorithm, and that specific
method, using a molecule with five distinct NMR peaks, cannot really
be extended further.

If you can find a reference to a quantum computer with more qbits that
can run Shor's algorithm and that has been demonstrated in public, I
would be very interested.

(And yes, I'm aware of the two photon device that factored the number
21, though I believe the team used tricks to make that work --
opinions on whether that work could scale would be welcome of course.)

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Quantum Computers for Shor's Algorithm (was Re: Perfection versus Forward Secrecy)

2013-09-14 Thread Tony Arcieri
On Sat, Sep 14, 2013 at 12:12 PM, Perry E. Metzger pe...@piermont.com wrote:

 DWave has never unambiguously shown their machine actually is a
 quantum computer


There was some controversy about that a few months ago. In the end, my
understanding is it netted out that it *is* a real (albeit limited) quantum
computer:

http://www.wired.com/wiredenterprise/2013/06/d-wave-quantum-computer-usc/


 and even if it is, given its design it very specifically cannot run Shor's
 algorithm or anything like it.


Sure, I never said it could ;) I also said that conventional computers can
still outpace it. I'm certainly NOT saying that, in their present capacity,
DWave computers are any sort of threat to modern cryptography.

But still, it goes to show that quantum computers are happening. Now it's
just a question of whether a large computer capable of running Shor's
algorithm is actually on the horizon, or if it falls into a category like
nuclear fusion where work on it drags on indefinitely.

-- 
Tony Arcieri
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Quantum Computers for Shor's Algorithm (was Re: Perfection versus Forward Secrecy)

2013-09-14 Thread Perry E. Metzger
On Sat, 14 Sep 2013 12:42:22 -0700 Tony Arcieri basc...@gmail.com
wrote:
 Sure, I never said it could ;) I also said that conventional
 computers can still outpace it. I'm certainly NOT saying, that in
 their present capacity, that DWave computers are any sort of threat
 to modern cryptography.
 
 But still, it goes to show that quantum computers are happening.

Given that the DWave design is totally unsuitable for Shor's
algorithm, it seems to have no real bearing on the situation in
either direction.

To break 1024 bit keys (a minimum capability for a useful Shor
machine, I'd say), you need several thousand qbits. I've not heard of
a demonstration of more than a half dozen, and I've seen no
progress on the topic in a while. It isn't like last year we could do
six and the year before five and this year someone announced fifteen
-- there have been no incremental improvements.
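
For what it's worth, "several thousand qbits" is roughly what the standard
circuit constructions require; for example, the commonly cited 2n+3
logical-qubit construction (Beauregard), before any error-correction
overhead:

    def shor_logical_qubits(modulus_bits):
        # Commonly cited 2n+3 logical-qubit circuit for factoring an n-bit
        # modulus (Beauregard, 2003); physical qubit counts with error
        # correction are far higher.
        return 2 * modulus_bits + 3

    print(shor_logical_qubits(1024))   # 2051 logical qubits
    print(shor_logical_qubits(2048))   # 4099 logical qubits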

It is of course possible that there's been secret research on this at
NSA which has gotten far further, but I would expect that the
manufacturing technology needed to do that would require a huge
number of people to pull off, too many to keep quiet indefinitely.

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Thoughts on hardware randomness sources

2013-09-14 Thread Bill Stewart

At 08:32 PM 9/13/2013, Jerry Leichter wrote:
If by server you mean one of those things in a rack at Amazon or 
Google or Rackspace - power consumption, and its consequence, 
cooling - is *the* major issue these days.  Also, the servers used 
in such data centers don't have multiple free USB inputs - they may 
not have any.


More to the point, the servers in the data centers aren't going to 
let you plug things into them, especially if you're just renting a 
virtual machine or cloud minutes and don't get to connect to the real 
hardware at all (which also means you're not going to be able to use 
disk drive timing).
A tablet computer has lots of sensors in it; even turning the cameras 
on at boot time and hashing the raw pixels should give you a 
reasonable chunk of entropy; you're not going to turn your virtual 
machine upside down and shake it like an Etch-A-Sketch.


I realize it's possible for somebody to try to manipulate this, but 
I've always assumed that Ethernet packet timing ought to give you 
some entropy even so, even though with virtual machines you may 
only get quantized versions of interrupt times.  Startup processes 
are probably going to include pinging a router and a name server, or 
at least they could if you wanted.
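
A toy illustration of harvesting that kind of timing jitter (illustrative
only; the host address is a placeholder, and on a real system such samples
should be fed into the OS entropy pool rather than used directly):

    import hashlib, socket, time

    def send_timing_samples(host="192.0.2.1", port=53, count=64):
        # host is a placeholder (TEST-NET) address; this only times the
        # sendto() calls themselves, so scheduling and network-stack jitter
        # end up in the low-order bits.  A real collector would time actual
        # packet arrivals or interrupts instead.
        samples = []
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            for _ in range(count):
                t0 = time.monotonic_ns()
                try:
                    s.sendto(b"x", (host, port))
                except OSError:
                    pass
                samples.append(time.monotonic_ns() - t0)
        finally:
            s.close()
        return samples

    # Hash the raw samples together, and credit the result with far fewer
    # bits of entropy than its 256-bit length.
    pool = hashlib.sha256(b"".join(t.to_bytes(8, "little") for t in send_timing_samples()))
    print(pool.hexdigest())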



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] real random numbers

2013-09-14 Thread Kent Borg

On 09/14/2013 03:29 PM, John Denker wrote:
Things like clock skew are usually nothing but squish ... not reliably 
predictable, but also not reliably unpredictable. I'm not interested 
in squish, and I'm not interested in speculation about things that 
might be random. 


I see the theoretical being the enemy of the good here.

The term "squish" is entertaining, but be careful that, once you paint 
away with your broad brush, you don't dismiss engineering realities 
that matter.


I can see there is an appeal to entropy sources that you can work back 
to some quantum origin, but even they will fail horribly if you don't 
build a larger system that is secure, and secure at some non-trivial 
radius.  (How much Tempest-hardening are you going to do?)


And once we have built such vaguely secure systems, why reject entropy 
sources within those systems, merely because you think they look 
like squish?  If there is a random component, why toss it out?  You 
seem to respect using hashing to condition and stretch entropy--though 
why any existing hash shouldn't also fall to your squish 
generalization, I don't know.  It seems that you would reject using a 
coin toss as a source of entropy because coins are not perfectly fair 
and there are biases in their results.  So?  You respect hashing, why 
not clean the output with a good hash?
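
A minimal sketch of that "clean it with a good hash" step: collect biased
flips, credit their entropy conservatively, and condition with SHA-256 (the
90% bias, the flip count, and the use of a PRNG to simulate the coin are
all illustrative assumptions):

    import hashlib, random

    def biased_flips(n, p_heads=0.9):
        # Simulates a badly biased physical coin; random.random() stands in
        # for the physical source here purely for illustration.
        return bytes(1 if random.random() < p_heads else 0 for _ in range(n))

    # At 90% bias each flip carries about -log2(0.9) ~= 0.15 bits of
    # min-entropy, so 2000 flips carry roughly 300 bits -- enough to justify
    # crediting one 256-bit conditioned output.
    raw = biased_flips(2000)
    conditioned = hashlib.sha256(raw).digest()
    print(conditioned.hex())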


You dismiss things like clock skew, but when I start to imagine ways 
to defeat interrupt timing as an entropy source, your Johnson noise 
source also fails: by the time the adversary has enough information 
about what is going on inside the GHz-plus box to infer precise clock 
phase, precise interrupt timing, and how fast the CPU responds...they 
have also tapped into the code that is counting your Johnson.


There are a lot of installed machines that can get useful entropy from 
existing sources, and it seems you would have the man who is dying of 
thirst die, because the water isn't pure enough.


Certainly, if hardware manufacturers want to put dedicated entropy 
sources in machines, I approve, and I am even going to use rdrand as 
*part* of my random numbers, but in the meantime, give the poor servers 
a sip of entropy.  (And bravo to the Linux distributions that overruled the 
purist Linux maintainer who thought no entropy was better than poorly 
audited entropy; we are a lot more secure because of them.)



-kb

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] real random numbers

2013-09-14 Thread John Kelsey
Your first two categories are talking about the distribution of entropy--we 
assume some unpredictability exists, and we want to quantify it in terms of 
bits of entropy per bit of output.  That's a useful distinction to make, and as 
you said, if you can get even a little entropy per bit and know how much you're 
getting, you can get something very close to ideal random bits out.

Your second two categories are talking about different kinds of 
sources--completely deterministic, or things that can have randomness but don't 
always.  That leaves out sources that always have a particular amount of 
entropy (or at least are always expected to!).  

I'd say even the squish category can be useful in two ways:

a.  If you have sensible mechanisms for collecting entropy, they can't hurt and 
sometimes help.  For example, if you sample an external clock, most of the 
time the answer may be deterministic, but once in a while you may get some 
actual entropy, in the sense that the clock drift is sufficient that the 
sampled value could have one of two values, and an attacker can't know which.  

b.  If you sample enough squishes, you may accumulate a lot of entropy.  Some 
ring oscillator designs are built like this, hoping to occasionally sample the 
transition in value on one of the oscillators.  The idea is that the rest of 
the behavior of the oscillators might possibly be predicted by an attacker, but 
what value gets read when you sample a value that's transitioning between a 0 
and a 1 is really random, changed by thermal noise.  

I think the big problem with (b) is in quantifying the entropy you get.  I also 
think that (b) describes a lot of what commonly gets collected by the OS and 
put into the entropy pool.  
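
Quantifying it is indeed the hard part.  As one concrete, deliberately
conservative illustration, here is a most-common-value style min-entropy
estimate in the spirit of the draft NIST SP 800-90B estimator (a sketch,
not the full test suite):

    import math
    from collections import Counter

    def mcv_min_entropy(samples):
        """Most-common-value min-entropy estimate (bits per sample).
        Conservative for i.i.d. sources, but easily fooled if the samples
        are correlated rather than independent."""
        n = len(samples)
        p_hat = Counter(samples).most_common(1)[0][1] / n
        p_upper = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (n - 1)))
        return -math.log2(p_upper)

    # e.g. feed it the low byte of successive interrupt timestamps, then
    # multiply the per-sample estimate by the number of samples collected.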

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography