Re: [Cryptography] Thoughts on hardware randomness sources

2013-09-14 Thread Marcus D. Leech

On 09/13/2013 11:32 PM, Jerry Leichter wrote:

On Sep 12, 2013, at 11:06 PM, Marcus D. Leech wrote:

There is a class of hyper-cheap USB audio dongles with very uncomplicated 
mixer models.  A small flotilla of those might get you some fault-tolerance.
  My main thought on such things relates to servers, where power consumption 
isn't really much of an issue

I'm not sure what servers you're talking about here.

If by server you mean one of those things in a rack at Amazon or Google or 
Rackspace - power consumption, and its consequence, cooling - is *the* major 
issue these days.  Also, the servers used in such data centers don't have 
multiple free USB inputs - they may not have any.

If by server you mean some quite low-power box in someone's home ... power is 
again an issue.  People want these things small, fan-free, and dead reliable.  
And they are increasingly aware of the electric bills always-on devices produce.

About the only server for which power is not an issue is one of those 
extra-large desktops that small businesses use.

 -- Jerry

I was mostly contrasting with mobile systems, where power consumption 
is at an absolute premium.


The USB sound systems I'm thinking of consume 350mW while operating, and 
about 300uW when idle.  A couple or three of those on even a stripped-down 
server would contribute only in the smallest way to extra power consumption.  
And the extra computational load, when these servers are running flat-out 
serving up secured connections?  I would guess the phrase "an inconsiderable 
trifle" would apply.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Thoughts on hardware randomness sources

2013-09-14 Thread Jerry Leichter
On Sep 12, 2013, at 11:06 PM, Marcus D. Leech wrote:
 There is a class of hyper-cheap USB audio dongles with very uncomplicated 
 mixer models.  A small flotilla of those might get you some fault-tolerance.
  My main thought on such things relates to servers, where power consumption 
 isn't really much of an issue
I'm not sure what servers you're talking about here.

If by server you mean one of those things in a rack at Amazon or Google or 
Rackspace - power consumption, and its consequence, cooling - is *the* major 
issue these days.  Also, the servers used in such data centers don't have 
multiple free USB inputs - they may not have any.

If by server you mean some quite low-power box in someone's home ... power is 
again an issue.  People want these things small, fan-free, and dead reliable.  
And they are increasingly aware of the electric bills always-on devices produce.

About the only server for which power is not an issue is one of those 
extra-large desktops that small businesses use.

-- Jerry



Re: [Cryptography] Summary of the discussion so far

2013-09-14 Thread Max Kington

On 13 Sep 2013, at 21:46, Nico Williams wrote:

 On Fri, Sep 13, 2013 at 03:17:35PM -0400, Perry E. Metzger wrote:
 On Thu, 12 Sep 2013 14:53:28 -0500 Nico Williams
 n...@cryptonector.com wrote:
 Traffic analysis can't really be defeated, not in detail.
 
 What's wrong with mix networks?
 
 First: you can probably be observed using them.  Unless too many people
 use mix networks you might just end up attracting unwanted attention:
 more passive surveillance, maybe even active attacks (at the limit very
 physical attacks).

I do wonder, though, what the practical problem with being observed using
one is.  I understand the desire not to have traffic analysis done on your
communications, but what's the practical effect on your communications if
it happens?

If I think about what actually bothers me: I do work part-time for an arm of
government.  I don't like the idea that someone is out there with a big
ear-trumpet recording all my communications.  I like to be able to discuss
the rights and wrongs of mass surveillance, but at the same time I don't
want to be labelled a dangerous subversive.  I like to have a moan about
things I dislike, but I don't want those moans to reappear at some meeting
where I've been called in, hat on, without coffee.  At least not where I've
not been compelled to produce them (then at least I know what's coming!).
So privacy of the messages is important to me, but it is not necessarily of
*equal* importance that my communications partners are hidden.  I might swap
emails with Ben; Ben likes a good moan too; we both work for the same branch.
The fact that I work with Ben and talk to him is neither here nor there.

For example, Hemlis is taking on the problem of obscuring not just the
'what' of your traffic but the 'who' you're talking to.  I wonder how
important that really is, especially when they're talking about centralised
control of user information to ensure security but haven't addressed what
happens when they're compelled to help people game their own system.  (The
it's-OK-we'll-go-to-prison-before-we-help-the-spooks line I always find a
bit weak: what if they turn up with a car battery and a pair of pliers?)
It's not clear how they're going to do any of this yet.  All in all they
seem to have good intentions, but I fear they're falling into the trap of
solving the 'interesting' problems as a priority without having a
consistent plan.

I'm sure they'll come up with some sort of mix network.


 
 Second: I suspect that to be most effective the mix network also has to
 be most inconvenient (high latency, for example).  That probably means
 mix networks won't be popular enough to help with the first problem.

As Perry points out in his August posts, latency is less important,
although for instant-messaging traffic people do kind of want 'instant' for
a low enough value of latency.  Latency is only of massive importance,
though, if it's critical that who you talk to be obscured as well.  If an
observer has *some* idea of which people in a network are communicating
with each other, there also needs to be enough bandwidth to hide your
messages in, since the observer is probably already watching the traffic
close (or fairly close) to the endpoint it's being delivered to.

I took an approach in the system that I built of batching messages together
inside an encrypted bundle and padding them with junk, so that you got a
message every x minutes or seconds and it was always at least y in size,
regardless of whether there was anything in it of interest to you.  If the
messages exceeded y in size, they were split and queued for the next
interval.
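A minimal sketch of that batch-and-pad scheme (the function name and the
bundle size are illustrative choices of mine, not details of the original
system): one fixed-size bundle is emitted per tick, padded with junk when
the queue is short and carried over to the next tick when it overflows.

```python
import os

BUNDLE_SIZE = 1024  # the fixed bundle size "y" (illustrative value)

def make_bundles(queue, bundle_size=BUNDLE_SIZE):
    """Drain pending message bytes into one fixed-size bundle per tick.

    Returns (bundle_to_send_now, leftover_queue).  Anything that doesn't
    fit waits for the next interval, and an empty queue still produces a
    junk-padded bundle, so the observable traffic pattern is constant.
    """
    pending = b"".join(queue)
    bundle, leftover = pending[:bundle_size], pending[bundle_size:]
    # Pad with random junk so every bundle is exactly bundle_size long.
    bundle += os.urandom(bundle_size - len(bundle))
    return bundle, [leftover] if leftover else []
```

A real implementation would also need a length prefix inside the encrypted
bundle so the receiver can separate payload from padding.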

 
 Third: the mix network had better cross multiple jurisdictions that are
 not accustomed to cooperating with each other.  This seems very
 difficult to arrange.

Specifically on the jurisdictional point:

I've looked into this; I did some research into cloud providers in
different jurisdictions.  After all, if it's going to scale, you're
unlikely to be building data centres on the way to the system becoming
successful.  It is possible that you don't actually need to go to the
extremes of routing stuff via Russia, China, Egypt and Pakistan.  I've got
another discussion going on another list about what entities that are
allowed to co-operate can actually do on behalf of each other.  It turns
out there is an interesting disconnect between Irish law and UK law (I
picked Irish law because Amazon's European operations are in Ireland).

You have to decide whether you are worried about co-operation as allowed by
law for the jurisdiction you're in, i.e. whether you are going to go to
prison or not.

The main instrument of cooperation here is a thing called an MLAT, a
mutual legal assistance treaty, and they're signed with an awesome number
of countries.

They only enable cooperation to the extent that local law allows, and they
have different rules about support that yields evidence admissible in court
versus other kinds of support.

So it comes back to 

[Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Peter Fairbrother
Recommendations are given herein as: symmetric_key_length - 
recommended_equivalent_RSA_key_length, in bits.


Looking at Wikipedia,  I see:

As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in 
strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit 
symmetric keys and 3072-bit RSA keys to 128-bit symmetric keys. RSA 
claims that 1024-bit keys are likely to become crackable some time 
between 2006 and 2010 and that 2048-bit keys are sufficient until 2030. 
An RSA key length of 3072 bits should be used if security is required 
beyond 2030.[6]


http://www.emc.com/emc-plus/rsa-labs/standards-initiatives/key-size.htm

That page doesn't give any actual recommendations or long-term dates 
from RSA now. It gives the traditional recommendations 80 - 1024 and 
112 - 2048, and a 2000 Lenstra/Verheul minimum commercial 
recommendation for 2010 of 78 - 1369.



NIST key management guidelines further suggest that 15360-bit RSA keys 
are equivalent in strength to 256-bit symmetric keys.[7]


http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf

NIST also give the traditional recommendations, 80 - 1024 and 112 - 
2048, plus 128 - 3072, 192 - 7680, 256 - 15360.




I get that 1024 bits is about on the edge, about equivalent to 80 bits 
or a little less, and may be crackable either now or sometime soon.


But, I wonder, where do these longer equivalent figures come from?

I don't know, I'm just asking - and I chose Wikipedia because that's the 
general wisdom.
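One plausible origin for the longer equivalences is simple extrapolation of
the general number field sieve's asymptotic cost.  A rough sketch (the
calibration point is my own choice, not taken from any of the documents
above): take log2 of the GNFS L-function and pin it to the traditional
1024-bit RSA ~ 80-bit symmetric data point.

```python
import math

def gnfs_log2_cost(modulus_bits):
    """log2 of L_n[1/3, (64/9)^(1/3)], the GNFS heuristic running time,
    ignoring the o(1) term."""
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

def symmetric_equivalent(modulus_bits, ref_bits=1024, ref_strength=80):
    """Symmetric-strength estimate for an RSA modulus, calibrated so a
    ref_bits modulus is worth ref_strength bits (the traditional 1024~80)."""
    return ref_strength + gnfs_log2_cost(modulus_bits) - gnfs_log2_cost(ref_bits)
```

With this calibration, 2048 comes out near 110 bits, 3072 near 132 and
15360 near 263: within a handful of bits of NIST's 112/128/256, which
suggests the published figures are GNFS extrapolations plus rounding rather
than anything exotic.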


Is this an area where NSA have shaped the worldwide cryptography 
marketplace to make it more tractable to advanced cryptanalytic 
capabilities being developed by NSA/CSS, by perhaps greatly 
exaggerating the equivalent lengths?


And by emphasising the difficulty of using longer keys?

As I said, I do not know. I merely raise the possibility.


[ Personally, I recommend 1,536 bit RSA keys and DH primes for security 
to 2030, 2,048 if 1,536 is unavailable, 4,096 bits if paranoid/high 
value; and not using RSA at all for longer term security. I don't know 
whether someone will build that sort of quantum computer one day, but 
they might. ]



-- Peter Fairbrother


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Paul Hoffman
Also see RFC 3766 from almost a decade ago; it has stood up fairly well.

--Paul Hoffman


Re: [Cryptography] prism proof email, namespaces, and anonymity

2013-09-14 Thread Max Kington
On Fri, Sep 13, 2013 at 10:12 PM, Perry E. Metzger pe...@piermont.com wrote:

 On Fri, 13 Sep 2013 16:55:05 -0400 John Kelsey crypto@gmail.com
 wrote:
  Everyone,
 
  The more I think about it, the more important it seems that any
  anonymous email like communications system *not* include people who
  don't want to be part of it, and have lots of defenses to prevent
  its anonymous communications from becoming a nightmare for its
  participants.  If the goal is to make PRISM stop working and make
  the email part of the internet go dark for spies (which definitely
  includes a lot more than just US spies!), then this system has to
  be something that lots of people will want to use.
 
  There should be multiple defenses against spam and phishing and
  other nasty things being sent in this system, with enough
  designed-in flexibility to deal with changes in attacker behavior
   over time.

 Indeed. As I said in the message I just pointed Nico at:
 http://www.metzdowd.com/pipermail/cryptography/2013-August/016874.html

 Quoting myself:

Spam might be a terrible, terrible problem in such a network since
it could not easily be traced to a sender and thus not easily
blocked, but there's an obvious solution to that. I've been using
Jabber, Facebook and other services where all or essentially all
communications require a bi-directional decision to enable messages
for years now, and there is virtually no spam in such systems
because of it. So, require such bi-directional friending within
our postulated new messaging network -- authentication is handled
by the public keys of course.
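The bi-directional gate described above reduces to one rule: deliver a
message only when each party has independently opted in to the other.  A
toy sketch of that rule (all names here are mine; identities stand in for
the public keys the quoted text mentions):

```python
class FriendGate:
    """Deliver messages only between mutually-friended identities."""

    def __init__(self):
        self.approved = set()   # directed consent edges: (who, whom)
        self.mailbox = {}       # recipient -> list of delivered messages

    def friend(self, who, whom):
        """Record that `who` consents to receive from `whom`... and vice
        versa only once `whom` makes the symmetric call."""
        self.approved.add((who, whom))

    def send(self, sender, recipient, msg):
        # Require consent in BOTH directions before anything is delivered;
        # unsolicited mail (spam) therefore never reaches a mailbox.
        if (sender, recipient) in self.approved and \
           (recipient, sender) in self.approved:
            self.mailbox.setdefault(recipient, []).append(msg)
            return True
        return False
```

The anti-spam property falls out directly: a sender who has not been
friended back gets a hard delivery failure, not a filtered inbox.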


The keys. This to me is the critical point for widespread adoption: key
management. How do you do it in a way that doesn't put people off
immediately?

There are two new efforts I'm aware of trying to solve this in a
user-friendly way: https://parley.co/#how-it-works and http://mailpile.is.

Parley's approach does at least deal with the longevity of the private key,
although it does boil security down to a password: if I can obtain their
packet and the salt, I can probably brute-force the password.
I've exchanged mails with the mailpile.is guys and I think they're still
looking at the options.

Max

[Cryptography] Key management, key storage. (was Re: prism proof email, namespaces, and anonymity)

2013-09-14 Thread Perry E. Metzger
On Sat, 14 Sep 2013 17:23:40 +0100 Max Kington
mking...@webhanger.com wrote:
 The keys. This to me is the critical point for widespread adoption,
 key management. How do you do this in a way that doesn't put people
 off immediately.

You don't seem to be entirely talking about key management, given
that you talk about mailpile and parley. Parley seems to be simply
talking about *key storage* for example, which is a different kettle
of fish.

However, on the topic of key management itself, my own proposal was
described here:

http://www.metzdowd.com/pipermail/cryptography/2013-August/016870.html

In summary, I proposed a way you can map IDs to keys through pure
long term observation/widely witnessed events. The idea is not
original given that to some extent things like Certificate
Transparency already do this in other domains.


Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Perry E. Metzger
On Sat, 14 Sep 2013 09:31:22 -0700 Paul Hoffman
paul.hoff...@vpnc.org wrote:
 Also see RFC 3766 from almost a decade ago; it has stood up fairly
 well.

For those not aware, the document, by Paul and Hilarie Orman,
discusses equivalent key strengths and practical brute force methods,
giving extensive detail on how all calculations were done.

A URL for the lazy:

http://tools.ietf.org/html/rfc3766

It is very well done. I'd like to see an update done but it does
feel like the methodology was well laid out and is difficult to
argue with in general. The detailed numbers are slightly different
from others out there, but not so much as to change the general
recommendations that have been floating around.

Their table, from April 2004, looked like this:

   +-+---+--+--+
   | System  |   |  |  |
   | requirement | Symmetric | RSA or DH| DSA subgroup |
   | for attack  | key size  | modulus size | size |
   | resistance  | (bits)| (bits)   | (bits)   |
   | (bits)  |   |  |  |
   +-+---+--+--+
   | 70  | 70|  947 | 129  |
   | 80  | 80| 1228 | 148  |
   | 90  | 90| 1553 | 167  |
   |100  |100| 1926 | 186  |
   |150  |150| 4575 | 284  |
   |200  |200| 8719 | 383  |
   |250  |250|14596 | 482  |
   +-+---+--+--+

They had some caveats, such as the statement that if TWIRL like
machines appear, we could presume an 11 bit reduction in strength --
see the RFC itself for details.

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] Key management, key storage. (was Re: prism proof email, namespaces, and anonymity)

2013-09-14 Thread Trevor Perrin
On Sat, Sep 14, 2013 at 9:46 AM, Perry E. Metzger pe...@piermont.com wrote:

 However, on the topic of key management itself, my own proposal was
 described here:

 http://www.metzdowd.com/pipermail/cryptography/2013-August/016870.html

 In summary, I proposed a way you can map IDs to keys through pure
 long term observation/widely witnessed events. The idea is not
 original given that to some extent things like Certificate
 Transparency already do this in other domains.


Hi Perry,

What you're proposing is multipath probing of email users' public
keys.  Certificate Transparency isn't the right comparison, but this
has certainly been discussed in other domains:

Public Spaces Key Infrastructure / SecSpider (Osterweil et al., 2006, 2007) [1][2]
Perspectives (for HTTPS; Wendlandt et al., 2008) [3]
Convergence (for HTTPS; Marlinspike, 2011) [4]
Vantages (for DNSSEC; Osterweil et al., 2013) [5]

Probing servers is easier than probing email users, and publishing a
servername-to-key directory is also easier, as server names don't have the
same privacy concerns as email addresses.  Still, it's an interesting
idea.

Key changes are a challenge to this approach, which people tend to overlook.

One approach is to have the probed party declare a commitment to
maintaining its public key constant for some period of time, and have
this commitment be detected by the probing parties.  This provides
some timing guarantees so that the rest of the system can probe and
download new results at regular intervals, without having sudden key
changes cause glitches.  Things like HPKP [6] and TACK [7] explore
this option.


Trevor


[1] http://irl.cs.ucla.edu/papers/pski.pdf
[2] http://secspider.cs.ucla.edu/docs.html
[3] http://perspectives-project.org/
[4] http://convergence.io/
[5] http://irl.cs.ucla.edu/~eoster/doc/pubdata-tpds13.pdf
[6] http://tools.ietf.org/html/draft-ietf-websec-key-pinning-08
[7] http://tack.io


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Peter Fairbrother

On 14/09/13 17:14, Perry E. Metzger wrote:

On Sat, 14 Sep 2013 16:53:38 +0100 Peter Fairbrother
zenadsl6...@zen.co.uk wrote:

NIST also give the traditional recommendations, 80 - 1024 and 112
- 2048, plus 128 - 3072, 192 - 7680, 256 - 15360.

[...]

But, I wonder, where do these longer equivalent figures come from?

I don't know, I'm just asking - and I chose Wikipedia because that's
the general wisdom.

[...]

[ Personally, I recommend 1,536 bit RSA keys and DH primes for
security to 2030, 2,048 if 1,536 is unavailable, 4,096 bits if
paranoid/high value; and not using RSA at all for longer term
security. I don't know whether someone will build that sort of
quantum computer one day, but they might. ]


On what basis do you select your numbers? Have you done
calculations on the time it takes to factor numbers using modern
algorithms to produce them?


Yes, some - but I don't believe that's enough.  Historically, it would not 
have been enough (and wasn't): such calculations don't take account of 
algorithm development.


I actually based the 1,536-bit figure on the old RSA factoring 
challenges, and how long it took to break them.


We are publicly at 768 bits now, and that's very expensive 
(http://eprint.iacr.org/2010/006.pdf) - and, over the last twenty years, 
the rate of public advance has been about 256 bits per decade.


So at that rate 1,536 bits would become possible but very expensive in 
2043, and would still be impossible in 2030.



If 1,024 is possible but very expensive for NSA now, and 256 bits per 
decade is right, then 1,536 may just be on the verge of edging into 
possibility in 2030 - but I think progress is going to slow (unless they 
develop quantum computers).
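The arithmetic behind those dates can be made explicit.  This is only a
sketch of the extrapolation as I read it, using the stated assumptions
(public record around 768 bits as of 2013, roughly 256 bits of progress per
decade):

```python
def year_feasible(modulus_bits, base_bits=768, base_year=2013,
                  bits_per_decade=256):
    """Year when factoring modulus_bits becomes possible-but-expensive,
    by linear extrapolation of the (assumed) public factoring record."""
    return base_year + 10 * (modulus_bits - base_bits) / bits_per_decade

# 1536-bit RSA from the public baseline: 2013 + 10*(1536-768)/256 = 2043.
# If NSA is already at 1024 bits, shift the baseline:
#   2013 + 10*(1536-1024)/256 = 2033, i.e. just past the 2030 horizon.
```

Both figures match the text: 1536 bits stays out of public reach well past
2030 on the first assumption, and only edges into possibility around then
on the more pessimistic second one.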


We have already found many of the easy-to-find advances in theory.



-- Peter Fairbrother


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread ianG

On 14/09/13 18:53, Peter Fairbrother wrote:


But, I wonder, where do these longer equivalent figures come from?



http://keylength.com/ is a better repository to answer your question.



iang


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Adam Back

On Sat, Sep 14, 2013 at 12:56:02PM -0400, Perry E. Metzger wrote:

http://tools.ietf.org/html/rfc3766

  | requirement | Symmetric | RSA or DH| DSA subgroup |
  | for attack  | key size  | modulus size | size |
  +-+---+--+--+
  |100  |100| 1926 | 186  |

if TWIRL like machines appear, we could presume an 11 bit reduction in
strength


100 - 11 = 89 bits.  Bitcoin is pushing 75 bits/year right now with GPUs
and 65nm ASICs (not sure of the balance).  Does that place a ~2000-bit
modulus around the safety margin 56-bit DES had when it was being argued
about (the previous generation of NSA key-strength sabotage)?

Anyone have projections for the cost of a TWIRL to crack 2048-bit RSA?
Projecting 2048 out to 2030 doesn't seem like a hugely conservative
estimate.  Bear in mind NSA would probably be willing to drop $1B one-off
to be able to crack public-key crypto for the next decade.  There have been
cost, performance, power and density improvements since TWIRL was proposed.
Maybe the single largest employer of mathematicians can squeeze a few
incremental optimizations out of the TWIRL algorithm or implementation
strategy.

Tin foil or not: maybe it's time for 3072-bit RSA/DH and 384/512-bit ECC?

Adam


Re: [Cryptography] Perfection versus Forward Secrecy

2013-09-14 Thread Tony Arcieri
On Thu, Sep 12, 2013 at 11:08 PM, Eugen Leitl eu...@leitl.org wrote:

 I do not think that the spooks are too far away from open research in
  QC hardware. It does not seem likely that we'll be getting real QC
 any time soon, if ever.


I don't think the spooks are ahead of the public either, and I really doubt
the NSA has a large quantum computer.

We still haven't seen quantum computers built yet which can truly rival
their conventional electronic brethren, especially if you look at it from a
cost perspective. DWave computers are interesting from a novelty
perspective, but not really ready to replace existing computers, even for
highly specialized tasks like running Shor's algorithm.

Nevertheless, if you've been following the trends in quantum computers over
the last few years, they are getting larger, and DWave is an example of
them moving out of the labs and turning into something you can buy.

I wouldn't be surprised to see a large quantum computer built in the next
two decades.

-- 
Tony Arcieri

[Cryptography] Quantum Computers for Shor's Algorithm (was Re: Perfection versus Forward Secrecy)

2013-09-14 Thread Perry E. Metzger
On Sat, 14 Sep 2013 11:49:50 -0700 Tony Arcieri basc...@gmail.com
wrote:
 We still haven't seen quantum computers built yet which can truly
 rival their conventional electronic brethren, especially if you
 look at it from a cost perspective. DWave computers are interesting
 from a novelty perspective, but not really ready to replace
 existing computers, even for highly specialized tasks like running
 Shor's algorithm.
 
 Nevertheless, if you've been following the trends in quantum
 computers over the last few years, they are getting larger, and
 DWave is an example of them moving out of the labs and turning into
 something you can buy.
 
 I wouldn't be surprised to see a large quantum computer built in
 the next two decades.

DWave has never unambiguously shown their machine actually is a
quantum computer, and even if it is, given its design it very
specifically cannot run Shor's algorithm or anything like it.

I'm unaware of a quantum computer of more than five qbits that has
been demonstrated that can run Shor's algorithm, and that specific
method, using a molecule with five distinct NMR peaks, cannot really
be extended further.

If you can find a reference to quantum computer with more qbits that
can run Shor's algorithm that has been demonstrated in public, I
would be very interested.

(And yes, I'm aware of the two photon device that factored the number
21, though I believe the team used tricks to make that work --
opinions on whether that work could scale would be welcome of course.)

Perry
-- 
Perry E. Metzger  pe...@piermont.com


Re: [Cryptography] Quantum Computers for Shor's Algorithm (was Re: Perfection versus Forward Secrecy)

2013-09-14 Thread Tony Arcieri
On Sat, Sep 14, 2013 at 12:12 PM, Perry E. Metzger pe...@piermont.com wrote:

 DWave has never unambiguously shown their machine actually is a
 quantum computer


There was some controversy about that a few months ago. In the end, my
understanding is it netted out that it *is* a real (albeit limited) quantum
computer:

http://www.wired.com/wiredenterprise/2013/06/d-wave-quantum-computer-usc/


 and even if it is, given its design it very specifically cannot run Shor's
 algorithm or anything like it.


Sure, I never said it could ;) I also said that conventional computers can
still outpace it. I'm certainly NOT saying that, in their present capacity,
DWave computers are any sort of threat to modern cryptography.

But still, it goes to show that quantum computers are happening. Now it's
just a question of whether a large computer capable of running Shor's
algorithm is actually on the horizon, or if it falls into a category like
nuclear fusion where work on it drags on indefinitely.

-- 
Tony Arcieri

Re: [Cryptography] real random numbers

2013-09-14 Thread John Denker
This discussion will progress more smoothly and more rapidly
if we clarify some of the concepts and terminology.

There are at least four concepts on the table:
  1) At one extreme, there is 100% entropy density, for example
   32 bits of entropy in a 32-bit word.  I'm talking about real
   physics entropy, the kind of hard-core industrial-strength
   randomness that will never get cracked by anybody, ever.
  2) It is also possible to have lesser entropy density, such
   as 3 bits of entropy in a 32-bit word.  Such a word is
   partly predicable and partly not.  Remark: I emphatically 
   recommend that we pay attention to cases where we have a 
   provable lower bound on the entropy, even if the entropy 
   density starts out less than 100%, because the density can
   be increased to virtually 100% by hashing.  This is how
   turbid works.
  3) At the other extreme, there is complete determinism, such
   as the decimal digits of π, or even the decimal digits of 
   1/3rd.
  4) Last but not least, there is the squish category.  This
   case is covered by Murphy's law.  That means:
   -- If you wanted the symbol to be predictable, it would not
be reliably predictable.
   -- If you wanted the symbol to be unpredictable, it would
not be reliably unpredictable.
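The density-by-hashing remark in case 2 can be made concrete.  A sketch
(the parameter values and function name are mine): if each 32-bit sample
carries a provable lower bound of 3 bits of entropy, collect at least
ceil(256/3) = 86 samples and hash them; conditioned on the bound being
honest, the 256-bit digest is essentially full-density.  This is the shape
of what turbid does, not its actual implementation.

```python
import hashlib
import math

def condition(samples, bits_per_sample, out_bits=256):
    """Hash raw low-density 32-bit samples into a near-full-density block.

    Refuses to run if the claimed entropy lower bound doesn't cover
    out_bits; the output guarantee is only as good as that bound.
    """
    needed = math.ceil(out_bits / bits_per_sample)
    if len(samples) < needed:
        raise ValueError(f"need >= {needed} samples, got {len(samples)}")
    h = hashlib.sha256()
    for word in samples:
        h.update(word.to_bytes(4, "little"))  # each sample is a 32-bit word
    return h.digest()
```

The crucial input is the *provable lower bound* per sample; the hash only
concentrates entropy that was really there.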

Illustration:  Back when I was 17 years old, I needed a random
number generator.  The pointy-haired boss directed me to use
the value of the instruction pointer at each interrupt.  I had
to explain to him that it was an event-driven system, so with
high likelihood, the IP pointed to the idle loop.  It was clearly
a squish.  You couldn't rely on it being predictable, but you
also couldn't rely on it being unpredictable.

===

To say the same thing yet again:  For non-adversarial situations,
you can do pretty much whatever you like, and you might reasonably
care about the typical case ... but a great many applications 
of randomness, including gaming and cryptography, are highly 
adversarial.  That demands a minimax approach.  The best case 
doesn't much matter and the typical case doesn't much matter;
we need to focus attention on the worst case.

Absence of proof is not proof of absence.  For the important
applications, we need a provably-good RNG.  The second law of
thermodynamics provides the required guarantees.



Here's another way of looking at almost the same issues.  People
on this list have been talking about combining every possible 
source of randomness ... the more the merrier.  Again there 
is a crucial distinction to be made:

 a) In the linux random device, /any/ user can mix stuff into
  the driver's pool.  This is a non-privileged operation.  The
  idea is that it can't hurt and it might help.  So far so good.
 b) Contributions of the type just mentioned do *not* increase
  the driver's estimate of the entropy in the pool.  If you want
  to increase the entropy-estimate, you need to issue a privileged
  ioctl.

If we want to have a meaningful discussion, we must distinguish
between 
 a) mixing in all sorts of squish, and
 b) mixing in some real entropy /and taking credit for it/.

Again, step (a) cannot get anybody into trouble.  Step (b) gets
you into trouble if you claim credit for more entropy than was
actually contributed.
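The (a)/(b) distinction is visible in the Linux interface: any user may
write() bytes to /dev/random, which mixes them in without credit, while
crediting entropy requires the privileged RNDADDENTROPY ioctl, which takes
a struct rand_pool_info.  A sketch of building that structure (the struct
layout is the kernel's, from linux/random.h; the helper name is mine):

```python
import struct

def pack_rand_pool_info(entropy_bits, data):
    """Pack a struct rand_pool_info for the RNDADDENTROPY ioctl:

        int   entropy_count;  /* entropy being credited, in bits */
        int   buf_size;       /* number of bytes that follow     */
        __u32 buf[];

    Only root may issue the ioctl; an unprivileged write() mixes the same
    bytes in but credits nothing -- exactly the (a)/(b) split above.
    """
    if entropy_bits > 8 * len(data):
        raise ValueError("claiming more entropy than bytes supplied")
    return struct.pack("ii", entropy_bits, len(data)) + data
```

fcntl.ioctl(fd, RNDADDENTROPY, pack_rand_pool_info(...)) on a root-opened
/dev/random would then both mix and credit; over-claiming in entropy_count
is precisely the step (b) trouble the text warns about.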

==

Let's be clear:  There is a huge difference between contributing 
worthless squish and contributing real entropy.  IMHO the key 
distinction is whether or not you have a hard /lower bound/ on 
the amount of entropy, so you can claim credit for it, and be
confident of the claim.  I strongly recommend not bothering with
things that /might/ be unpredictable, and focusing instead on
things that are absolutely guaranteed to be unpredictable, as
guaranteed by the laws of physics.

On 09/12/2013 07:38 PM, Thor Lancelot Simon wrote:

 The audio subsystem actually posed *two* obvious opportunities: amplifier
 noise from channels with high final stage gain but connected by a mixer
 to muted inputs, and clock skew between system timers and audio sample
 clocks. 

I wouldn't have said that.  In a multi-stage amplifier, the noise 
figure is set by the /first/ stage, not the final stage.  More 
importantly, it hardly matters which stage contributes which part 
of the gain, so long as there is enough gain before the signal is 
digitized.  Most computer audio systems have enough gain to make 
the Johnson noise observable.  This gives us a source of hard-core, 
industrial-strength, guaranteed physics entropy.  In typical cases,
increasing the gain helps a tiny bit, but doubling the gain does 
/not/ double the amount of entropy per sample;  it only increases 
it by one bit.
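That one-bit-per-doubling claim follows from the entropy of a quantized
Gaussian: once sigma is well above one quantization step, H is about
(1/2)*log2(2*pi*e*sigma^2) bits per sample, so doubling sigma (6 dB more
gain) adds exactly one bit.  A quick numerical check (my own construction,
pure stdlib):

```python
import math

def quantized_gaussian_entropy(sigma):
    """Shannon entropy (bits) of a zero-mean Gaussian rounded to integers."""
    cdf = lambda x: 0.5 * (1 + math.erf(x / (sigma * math.sqrt(2))))
    H = 0.0
    # Sum over integer bins; +/- 20 sigma covers all non-negligible mass.
    for k in range(-20 * int(sigma), 20 * int(sigma) + 1):
        p = cdf(k + 0.5) - cdf(k - 0.5)
        if p > 0:
            H -= p * math.log2(p)
    return H
```

quantized_gaussian_entropy(40) - quantized_gaussian_entropy(20) comes out
at almost exactly 1.0 bit, and both values sit within rounding of the
(1/2)*log2(2*pi*e*sigma^2) formula.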

Muting the inputs cannot help and might hurt.  It might well drop
the entropy production to zero.

Things like clock skew are usually nothing but squish ... not
reliably predictable, but also not reliably unpredictable.

I'm not interested in squish, and I'm not interested in speculation
about things that might be 

Re: [Cryptography] Quantum Computers for Shor's Algorithm (was Re: Perfection versus Forward Secrecy)

2013-09-14 Thread Perry E. Metzger
On Sat, 14 Sep 2013 12:42:22 -0700 Tony Arcieri basc...@gmail.com
wrote:
 Sure, I never said it could ;) I also said that conventional
 computers can still outpace it. I'm certainly NOT saying that, in
 their present capacity, DWave computers are any sort of threat
 to modern cryptography.
 
 But still, it goes to show that quantum computers are happening.

Given that the DWave design is totally unsuitable for Shor's
algorithm, it seems to have no real bearing on the situation in
either direction.

To break 1024 bit keys (a minimum capability for a useful Shor
machine, I'd say), you need several thousand qbits. I've not heard of
a demonstration of more than a half dozen, and I've seen no
progress on the topic in a while. It isn't like last year we could do
six and the year before five and this year someone announced fifteen
-- there have been no incremental improvements.

It is of course possible that there's been secret research on this at
NSA which has gotten far further, but I would expect that the
manufacturing technology needed to do that would require a huge
number of people to pull off, too many to keep quiet indefinitely.

Perry
-- 
Perry E. Metzger  pe...@piermont.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Thoughts on hardware randomness sources

2013-09-14 Thread Bill Stewart

At 08:32 PM 9/13/2013, Jerry Leichter wrote:
If by server you mean one of those things in a rack at Amazon or 
Google or Rackspace - power consumption, and its consequence, 
cooling - is *the* major issue these days.  Also, the servers used 
in such data centers don't have multiple free USB inputs - they may 
not have any.


More to the point, the servers in the data centers aren't going to 
let you plug things in to them, especially if you're just renting a 
virtual machine or cloud minutes and don't get to connect to the real 
hardware at all (which also means you're not going to be able to use 
disk drive timing.)
A tablet computer has lots of sensors in it; even turning the cameras 
on at boot time and hashing the raw pixels should give you a 
reasonable chunk of entropy; you're not going to turn your virtual 
machine upside down and shake it like an Etch-A-Sketch.


I realize it's possible for somebody to try to manipulate this, but 
I've always assumed that ethernet packet timing ought to give you 
some entropy even so, and even though with virtual machines you may 
only get quantized versions of interrupt times.  Startup processes 
are probably going to include pinging a router and a name server, or 
at least they could if you wanted.
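[A minimal sketch of mixing event timings into a pool, in the spirit of the
above; this only demonstrates the hash-based mixing, not the conservative
per-event entropy accounting a real kernel would do:]

```python
import hashlib
import time

class TimingPool:
    """Toy entropy pool: hash event timestamps into a rolling state."""

    def __init__(self):
        self._state = b"\x00" * 32

    def mix_event(self):
        # High-resolution timestamp of a packet- or interrupt-like event.
        stamp = time.perf_counter_ns().to_bytes(8, "big")
        self._state = hashlib.sha256(self._state + stamp).digest()

    def read(self, nbytes=32):
        # Output a hash of the state; never expose the state itself.
        out = hashlib.sha256(b"out" + self._state).digest()
        return out[:nbytes]

pool = TimingPool()
for _ in range(1000):   # e.g. one call per packet arrival or ping reply
    pool.mix_event()
seed = pool.read()
```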





Re: [Cryptography] real random numbers

2013-09-14 Thread Kent Borg

On 09/14/2013 03:29 PM, John Denker wrote:
Things like clock skew are usually nothing but squish ... not reliably 
predictable, but also not reliably unpredictable. I'm not interested 
in squish, and I'm not interested in speculation about things that 
might be random. 


I see the theoretical being made the enemy of the good here.

The term squish is entertaining, but be careful that, as you paint 
away with your broad brush, you don't dismiss engineering realities 
that matter.


I can see there is an appeal to entropy sources that you can work back 
to some quantum origin, but even they will fail horribly if you don't 
build a larger system that is secure, and secure at some non-trivial 
radius.  (How much Tempest-hardening are you going to do?)


And once we have built such vaguely secure systems, why reject entropy 
sources within those systems, merely because you think they look 
like squish?  If there is a random component, why toss it out?  You 
seem to respect using hashing to condition and stretch entropy--though 
why any existing hash shouldn't also fall to your squish 
generalization, I don't know.  It seems that you would reject using a 
coin toss as a source of entropy because coins are not perfectly fair 
and there are biases in their results.  So?  You respect hashing, why 
not clean the output with a good hash?
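[A sketch of exactly this: a simulated unfair coin with known bias, its
min-entropy per toss, and a hash used to condition many weak tosses into a few
strong bytes. The bias value and 2x safety margin are illustrative choices:]

```python
import hashlib
import math
import random

def biased_bits(n, p_one=0.7, seed=1234):
    """Simulated 'coin toss' source with a known bias (probability of a 1)."""
    rng = random.Random(seed)
    return [1 if rng.random() < p_one else 0 for _ in range(n)]

def min_entropy_per_bit(p_one):
    """Min-entropy of one biased toss: -log2(max(p, 1-p))."""
    return -math.log2(max(p_one, 1.0 - p_one))

def condition(bits, out_bytes=32):
    """Hash many weak bits down to a few strong bytes.

    Safe only if the total collected min-entropy comfortably exceeds
    8 * out_bytes; the hash concentrates entropy, it does not create it.
    """
    raw = bytes(bits)  # one byte per toss; wasteful but clear
    return hashlib.sha256(raw).digest()[:out_bytes]

p = 0.7
h = min_entropy_per_bit(p)      # ~0.51 bits of min-entropy per toss
need = int(256 / h) * 2         # 2x safety margin over 256 bits
out = condition(biased_bits(need, p))
```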


You dismiss things like clock skew, but when I start to imagine ways 
to defeat interrupt timing as an entropy source, your Johnson noise 
source also fails: by the time the adversary has enough information 
about what is going on inside the GHz-plus box to infer precise clock 
phase, precise interrupt timing, and how fast the CPU responds...they 
have also tapped into the code that is counting your Johnson.


There are a lot of installed machines that can get useful entropy from 
existing sources, and it seems you would have the man who is dying of 
thirst die, because the water isn't pure enough.


Certainly, if hardware manufacturers want to put dedicated entropy 
sources in machines, I approve, and I am even going to use rdrand as 
*part* of my random numbers, but in the meantime, give the poor servers 
a sip of entropy.  (And bravo to Linux distributions that overruled the 
purist Linux maintainer who thought no entropy was better than poorly 
audited entropy, we are a lot more secure because of them.)



-kb



Re: [Cryptography] Security is a total system problem (was Re: Perfection versus Forward Secrecy)

2013-09-14 Thread John Kelsey
On Sep 13, 2013, at 3:23 PM, Perry E. Metzger pe...@piermont.com wrote:

 The problem these days is not that something like AES is not good
 enough for our purposes. The problem is that we too often build a
 reinforced steel door in a paper wall.

Also, if AES being insufficiently strong is our problem, we have a whole bunch 
of solutions easily at hand.  Superencrypt successively with, say, Serpent, 
Twofish, CAST, Salsa, and Keccak in duplex mode.  This has a performance cost, 
but it is orders of magnitude less overhead than switching to manual key 
distribution of one-time pads.  
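[The cascade structure being described can be sketched as follows. Serpent,
Twofish, etc. are not in any standard library, so a toy SHA-256 counter-mode
keystream stands in for each independent cipher layer here; the point is the
structure (independent keys, layers applied in sequence), not the stand-in
cipher, which should not be used as-is:]

```python
import hashlib

def keystream_xor(key: bytes, label: bytes, data: bytes) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode, XORed with the data."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + label + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ k for x, k in zip(data, out))

def cascade_encrypt(keys, plaintext):
    """Superencrypt: apply each layer in turn with an independent key."""
    ct = plaintext
    for i, key in enumerate(keys):
        ct = keystream_xor(key, b"layer%d" % i, ct)
    return ct

def cascade_decrypt(keys, ciphertext):
    # XOR layers are involutions, but peel them in reverse for clarity.
    pt = ciphertext
    for i, key in reversed(list(enumerate(keys))):
        pt = keystream_xor(key, b"layer%d" % i, pt)
    return pt

keys = [b"K1" * 16, b"K2" * 16, b"K3" * 16]  # independent keys per layer
msg = b"attack at dawn"
ct = cascade_encrypt(keys, msg)
assert cascade_decrypt(keys, ct) == msg
```

An attacker must break every layer (or steal the keys), which is the sense in
which the cascade is "stronger than AES" at far less cost than pad logistics.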

It's hard for me to think of a real world threat that is addressed better by a 
one-time pad than by something cheaper and less likely to get broken via human 
error or attacks on the key management mechanism.  

 Perry E. Metzger  pe...@piermont.com



Re: [Cryptography] real random numbers

2013-09-14 Thread John Kelsey
Your first two categories are talking about the distribution of entropy--we 
assume some unpredictability exists, and we want to quantify it in terms of 
bits of entropy per bit of output.  That's a useful distinction to make, and as 
you said, if you can get even a little entropy per bit and know how much you're 
getting, you can get something very close to ideal random bits out.

Your second two categories are talking about different kinds of 
sources--completely deterministic, or things that can have randomness but don't 
always.  That leaves out sources that always have a particular amount of 
entropy (or at least are always expected to!).  

I'd say even the squish category can be useful in two ways:

a.  If you have sensible mechanisms for collecting entropy, they can't hurt and 
sometimes help.  For example, if you sample an external clock, most of the 
time, the answer may be deterministic, but once in a while, you may get some 
actual entropy, in the sense that the clock drift is sufficient that the 
sampled value could have one of two values, and an attacker can't know which.  

b.  If you sample enough squishes, you may accumulate a lot of entropy.  Some 
ring oscillator designs are built like this, hoping to occasionally sample the 
transition in value on one of the oscillators.  The idea is that the rest of 
the behavior of the oscillators might possibly be predicted by an attacker, but 
what value gets read when you sample a value that's transitioning between a 0 
and a 1 is really random, changed by thermal noise.  

I think the big problem with (b) is in quantifying the entropy you get.  I also 
think that (b) describes a lot of what commonly gets collected by the OS and 
put into the entropy pool.  

--John


Re: [Cryptography] real random numbers

2013-09-14 Thread Bill Stewart

At 10:04 AM 9/12/2013, John Denker wrote:

Quantum noise is the low-temperature asymptote, and thermal noise is
the high-temperature asymptote of the /same/ physical process.

So ... could we please stop talking about radioactive random number
generators and quantum random number generators?  It's embarrassing.


No. You're misunderstanding what people mean by radioactive RNGs.
(Quantum, fine; I don't use the term when talking about RNGs, because
I don't know what it means in that context, and I took enough quantum physics
in college many decades ago to know that most people who use the term
use it to mean handwavy stuff I don't understand, so I won't argue with
you about whether it's different from thermal noise.)

Radioactive RNGs consist of a radiation source with a large number of
unstable nuclei and a particle detector that detects their decay events.
The events occur at some average rate, based on the decay rate per nucleus
and the number of nuclei, but the timing for when the next atom
feels like decaying is entirely unpredictable, so you get a Poisson process.
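[A small sketch of extracting bits from such a Poisson process, here simulated
with exponential inter-arrival times. Comparing successive interval pairs
gives unbiased bits regardless of the (unknown) decay rate, since either
ordering is equally likely; the 100 Hz rate is an arbitrary example value:]

```python
import random

def decay_intervals(n, rate_hz=100.0, seed=42):
    """Simulated inter-arrival times of decay events (exponential)."""
    rng = random.Random(seed)
    return [rng.expovariate(rate_hz) for _ in range(n)]

def bits_from_intervals(intervals):
    """Compare successive interval pairs: t1 < t2 -> 0, t1 > t2 -> 1.

    For a Poisson process the two orderings are equally likely, so the
    bits are unbiased even if the decay rate drifts or is unknown.
    """
    bits = []
    for t1, t2 in zip(intervals[0::2], intervals[1::2]):
        if t1 == t2:
            continue  # discard ties (probability ~0 with real timing)
        bits.append(1 if t1 > t2 else 0)
    return bits

bits = bits_from_intervals(decay_intervals(2000))
frac_ones = sum(bits) / len(bits)   # should sit near 1/2 for a Poisson source
```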




Re: [Cryptography] Security is a total system problem (was Re: Perfection versus Forward Secrecy)

2013-09-14 Thread Tony Arcieri
On Fri, Sep 13, 2013 at 12:23 PM, Perry E. Metzger pe...@piermont.com wrote:

 I strongly suspect that delivering them securely to the vast number
 of endpoints involved and then securing the endpoints as well would
 radically limit the usefulness. Note that it appears that even the
 NSA generally prefers to compromise endpoints rather than attack
 crypto.


Yes, even airgapping keys within an organization scales poorly (I say this
as an employee of a company that has built a high availability encryption
service around HSMs). While USB drives are certainly large enough to store
huge pads, the fact remains that OTP is only better than other systems if
we can keep the keys off the wire. This means that we need a sneakernet to
move keys around.

The payments industry in the US has done this somewhat successfully. They
do things like shipping fragments of the keys through different shipping
companies, having the recipient reassemble them at their end. Even then
it's difficult to know if they've been intercepted: you can encrypt them,
and put the drives in tamper evident bags, but at least the latter can be
thwarted.

Obviously the Public Key Infrastructure scales a lot better than this
approach.

-- 
Tony Arcieri

Re: [Cryptography] real random numbers

2013-09-14 Thread Sandy Harris
Let me try a different way of stating (what I think is) Denker's point.

From docs for my RNG, at:
ftp://ftp.cs.sjtu.edu.cn:990/sandy/maxwell/

Discussing Denker's Turbid, found at:
http://www.av8n.com/turbid/paper/turbid.htm

(Quoting)
The unique advantage of Turbid is that it provably delivers almost
perfectly random numbers. Most other generators – including mine,
random(4), and the others discussed in this section – estimate the
randomness of their inputs. Sensible ones attempt to measure the
entropy, and are very careful that their estimates are sufficiently
conservative. They then demonstrate that, provided that the estimate
is good, the output will be adequately random. This is a reasonable
approach, but hardly optimal.

Turbid does something quite different. It measures properties of the
sound device and uses arguments from physics to derive a lower bound
on the Johnson-Nyquist noise [3] which must exist in the circuit. From
that, and some mild assumptions about properties of the hash used, it
gets a provable lower bound on the output entropy. Parameters are
chosen to make that bound 159.something bits per 160-bit SHA context.
The documentation talks of “smashing it up against the asymptote”.
(End quote)

The difference is real and it seems quite clear that an RNG with a
provable bound is preferable to one where analysis must rely on
assumptions about or estimates of input entropy. For a rather large
subset of servers -- basically any that have an unused sound card
equivalent or can easily add one -- Turbid should be the first thing
considered for use as an RNG. It is open source, so auditable, and
uses hardware that appears unlikely to be subject to fiddling by
three-letter agencies of any government.

The basic design of RDRAND looks like it could be proven secure in
much the same way, but with the Snowden revelations plus this paper
http://hardware.slashdot.org/story/13/09/13/1228216/stealthy-dopant-level-hardware-trojans
it becomes harder to trust.

All that said, I still want to use something like Linux random(4) with
a large pool and multiple entropy sources.