Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-03-05 Thread Adam Back

Further, given that the entropy seeding is so bad that some implementations
are generating literally the same p value (but seemingly different q values),
I would view the fact that this can be detected and efficiently exploited via
batch GCD as an indication of an even bigger problem.

Namely, if the seeding is that bad, you could outright compute all possible
values of p even for cases where p was not shared, by running through the
(evidently small by cryptographic standards) number of possible PRNG states...

Then you might be looking at more than 1% (or whatever the number is that
literally collide on a specific p value).  Assuming p is more vulnerable than
q, you could then use the same batch GCD to test.
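(For concreteness, a rough sketch of both steps in Python -- the quadratic
loop stands in for a proper product-tree batch GCD, e=65537 is an assumption,
and the same gcd test would work against candidate p values regenerated by
walking a small PRNG seed space:)

    from math import gcd

    def find_shared_primes(moduli):
        """Pairwise GCD over collected RSA moduli: any gcd strictly between
        1 and n is a shared prime, and both moduli factor immediately."""
        hits = []
        for i in range(len(moduli)):
            for j in range(i + 1, len(moduli)):
                p = gcd(moduli[i], moduli[j])
                if 1 < p < moduli[i]:
                    hits.append((moduli[i], moduli[j], p))
        return hits

    def private_exponent(n, p, e=65537):
        """Once one prime leaks, the private key follows directly
        (Python 3.8+ for the modular inverse via pow)."""
        q = n // p
        return pow(e, -1, (p - 1) * (q - 1))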

Adam

On Thu, Feb 16, 2012 at 02:47:21PM -0600, Nico Williams wrote:

On Thu, Feb 16, 2012 at 12:28 PM, Jeffrey Schiller j...@qyv.net wrote:

Are you thinking this is because it causes the entropy estimate in the RNG to 
be higher than it really is? Last time I checked OpenSSL it didn't block 
requests for numbers in cases of low entropy estimates anyway, so line 3 
wouldn't reduce security for that reason.


I  am thinking this because in low entropy cases where multiple boxes generate 
the same first prime adding that additional entropy before the second prime is 
generated means they are likely to generate a different second prime leading to 
the GCD attack.


I'd thought that you were going to say that so many devices sharing
the same key instead of one prime would be better on account of the
problem being more noticeable.  Otherwise I don't see the difference
between one low-entropy case and another -- both are catastrophic
failures.

Nico


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-24 Thread ianG

On 22/02/12 13:31 PM, Kevin W. Wall wrote:


So, let's bring this back to cryptography. I'm going to assume that
virtually all of you are somewhat altruistic and are not in this game just
to make a boatload of money by keeping all the crypto knowledge
within the secret priesthood, thereby driving your own salaries up.



! idk, sounds like a challengeable assumption.


For starters, I would urge those of you who are not involved in
the open source movement to step up and help out with things
like OpenSSL, OpenSSH, cryptographic libraries (in languages
*other* than C/C++), etc. Personally, I would *more* than welcome
someone here stepping forward and volunteering to head up
the crypto effort in OWASP ESAPI. Even though some
people from the NSA have reviewed it, I'm paranoid enough to
think that it's what they are NOT telling me is wrong that is
worrying me.

I know many of you have already contributed (I won't attempt to name
names because I'd probably unintentionally leave a few of you out and
offend them), but not nearly enough. Most of you who regularly post to
this mailing list have commented on how you've seen some of the same
beginner crypto failures over and over, so how about starting with just
a simple crypto HowTo FAQ, maybe an OWASP crypto cheat sheet.


I suspect most of the people here would prefer to be paid for this.  I 
know I would.


(One of the reasons I never coded for Mozilla was that my company would 
have had a conflict in time.  Helping them with their policies however 
was not seen as a conflict.)


Just personal observations.



1) They think that key size is the paramount thing; the bigger the better.


NIST are the current baddies here.


2) They have no clue as to what cipher modes are. It's ECB by default.
3) More importantly, they don't know how to choose a cipher mode (not
 surprising, given #2). They need to understand the trade-offs.
4) They have no idea about how to generate keys, derived keys, IVs,
5) They don't know what padding is, or when/why to use it.
6) They have a very naive concept of entropy...where/when to use it and
 from where and how to obtain it.


Yes, crypto seems to be in layers.  Block algorithms.  Modes, and 
implications.  The rest.  The game is to push more of it back down to 
algorithms.




iang


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-23 Thread ianG

Well, that was a long post, Marsh!

I think it is a good perspective.  And it occurs to me that if this is a 
real problem there might be a real solution.


I suggest going to NIST and asking them to run a design competition for 
a hardware cell that produces good entropy.  Hardware designs aka cells 
aka asics aka idk what they call them are often standardised products 
these days.  You pull one from a library, lay it in a corner, connect up 
the lines on your CAD tool and you're done.


Our problem is what to design, what to layout, and how to make it good?

NIST have done well with the competition technique.  AES was a good 
thing, it brought in 30 designs and the world's cryptographers in one 
goal to find the best of the best.


Either way ... where the expertise is unclear and the problem is real 
and definable and also of widespread interest, a competition for a 
design might get the grey matter churning.  EEs get to play this time!


NIST recently produced a new standard for PRNGs that kicked out the 
entire entropy question.  The goal is a deterministic PRNG, testable and 
repeatable.  It took me a while to figure it out, but this separation 
from the old all-in-one thinking over to entropy source plus 
deterministic mixer is quite inspired.  Point being, they solved half 
the problem; they'll be open to the other half?
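(A toy illustration of that split, making no claim to match the actual NIST
constructions: everything below is deterministic and testable against fixed
vectors, and all the entropy questions are pushed out to whoever supplies the
seed.)

    import hashlib

    class ToyHashDRBG:
        """Deterministic mixer in the spirit of the entropy-source /
        deterministic-generator separation; a sketch, not SP 800-90A."""
        def __init__(self, seed: bytes):
            self.state = hashlib.sha256(b"init" + seed).digest()

        def generate(self, n: int) -> bytes:
            out = b""
            counter = 0
            while len(out) < n:
                out += hashlib.sha256(self.state + counter.to_bytes(8, "big")).digest()
                counter += 1
            self.state = hashlib.sha256(b"next" + self.state).digest()
            return out[:n]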


iang


On 23/02/12 08:55 AM, Marsh Ray wrote:

On 02/22/2012 09:32 AM, Thierry Moreau wrote:

While commenting about...



Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-23 Thread Ondrej Mikle
On 02/22/2012 10:55 PM, Marsh Ray wrote:
 
 I'm putting myself in the position of an engineer who's designing the
 logic and writing some low-level firmware for the next consumer grade
 $50 blue box home router/wifi/firewall appliance:
 
 === [cue dream sequence wavy blur effect]
 
 I'm an EE with many years' experience, going back almost, but not quite, as far
 as the days of fully discrete logic designs. I've been part of the
 current design team for 5 years now. I have a gray beard and drink 4
 mugs of strong coffee a day. I like to read science fiction and hand-make
 acoustic guitars in my free time.

That is a great writeup. Can I get your permission to translate and
publish it locally (with attribution to the author, of course)?

Continuing with the duplicate moduli case, what is worse than key sharing or
sharing primes? Sharing keys _and_ sharing primes.

I took the first 80 results from crunching the moduli and mapped them back to
certificates. In EFF's SSL Observatory there were 3912 unique certs sharing
those factorized moduli (all embedded devices), plus a couple extra in other DBs.

That likely means 3912 separate devices sharing keys and primes. My
interpretation is that in many cases the second prime was generated identically
on other devices as well (had the cert/private key been part of the firmware, the
certs would have been identical). Not that it'd be very surprising.

As a side note, none of the moduli belonged to a DNSSEC key.

Ondrej


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-23 Thread Marsh Ray

On 02/23/2012 02:27 PM, Ondrej Mikle wrote:

On 02/22/2012 10:55 PM, Marsh Ray wrote:


I'm putting myself in the position of an engineer who's designing the
logic and writing some low-level firmware for the next consumer grade
$50 blue box home router/wifi/firewall appliance:

=== [cue dream sequence wavy blur effect]

I'm an EE with many years' experience, going back almost, but not quite, as far
as the days of fully discrete logic designs. I've been part of the
current design team for 5 years now. I have a gray beard and drink 4
mugs of strong coffee a day. I like to read science fiction and hand-make
acoustic guitars in my free time.


That is a great writeup. Can I get your permission to translate and
publish it locally (with attribution to the author, of course)?


Thanks.

That is fine with me. If you think it might receive wider readership, I 
could clean it up a bit. Feel free to email me off list if anything 
comes up. I'd like to see a link to it if it makes it onto the web.


- Marsh


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-23 Thread Ondrej Mikle
On 02/24/2012 12:00 AM, Michael Nelson wrote:
 Ondrej Mikle wrote:
 
 I took some first 80 results from crunching the moduli and mapped them
 back to certificates. In EFF's SSL Observatory there were 3912 unique
 certs sharing those factorized moduli (all embedded devices), couple
 extra in other DBs.
 Could you tell us a couple of things about those certs?  I have created 
 plenty of test CAs on my desktop and issued all sorts of test certs and used 
 them on test servers.  None of them would have shared primes presumably, 
 because my code (much of it OpenSSL) has very fussy seeding and checks, but 
 it would not matter at all if they did -- it's just for testing.  I would be 
 interested to know: 
 
 1. Were the CAs serious CAs, or just test CAs?  Can you tell?

All the certs found so far were self-signed. Presumably the ones autogenerated
after first boot.

 
 2. Were the certs in front of things that really needed protecting?

Possibly (judging by a few reverse IP records). Majority of those 3912 certs
point to one specific product with VPN/IPSec capabilities targeted at SOHO users
(a glorified router).

Ondrej


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-22 Thread Thierry Moreau

While commenting about

http://www.cs.bris.ac.uk/Research/CryptographySecurity/knowledge.html

, Marsh Ray wrote:


It talks about entropy exclusively in terms of 'unpredictability', which
I think misses the essential point necessary for thinking about actual
systems: Entropy is a measure of uncertainty experienced by a specific
attacker.


I am curious that you seem to prefer the risk analysis definition of 
entropy over the more general definition. I am rather confident that a 
proper application of the more general definition is more effective in 
providing security assurance: the future attack vectors are deemed to be 
unexpected ones.


You are not alone in using this perspective. NIST documents on secret 
random data generation are very confusing about the definition they use. 
(I dropped out of their feedback requests on the last revision/round, 
where they split the contents into two documents and released only one.) 
NIST seems to refer to three definitions: one from 
information theory (min-entropy), one where every bit is unpredictable 
("full entropy" -- you know how NIST loves cryptographic parameters of 
just the proper size), and the risk analysis definition.


Anyway, this whole business with the RSA modulus GCD findings makes us question 
entropy from a renewed perspective (a reminder that future attack 
vectors are deemed to be unexpected ones).


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-22 Thread Marsh Ray

On 02/22/2012 09:32 AM, Thierry Moreau wrote:

While commenting about

http://www.cs.bris.ac.uk/Research/CryptographySecurity/knowledge.html

 , Marsh Ray wrote:


It talks about entropy exclusively in terms of 'unpredictability',
which I think misses the essential point necessary for thinking
about actual systems: Entropy is a measure of uncertainty
experienced by a specific attacker.


(Actually this was a comment on
http://blog.cryptographyengineering.com/2012/02/random-number-generation-illustrated.html
)


I am curious that you seem to prefer the risk analysis definition of
 entropy over the more general definition. I am rather confident that
a proper application of the more general definition is more effective
in providing security assurance: the future attack vectors are deemed
to be unexpected ones.


Simply pragmatic utilitarianism rather than risk analysis per se.

I'm putting myself in the position of an engineer who's designing the
logic and writing some low-level firmware for the next consumer grade
$50 blue box home router/wifi/firewall appliance:

=== [cue dream sequence wavy blur effect]

I'm an EE with many years' experience, going back almost, but not quite, as far
as the days of fully discrete logic designs. I've been part of the
current design team for 5 years now. I have a gray beard and drink 4
mugs of strong coffee a day. I like to read science fiction and hand-make
acoustic guitars in my free time.

So far in my career, I haven't been involved in implementing any crypto
hardware. But show me the spec, and I can implement it in programmable
logic with minimal power consumption. No one can match my productivity
when I have my favorite toolchain. In fact, my last ASIC design passed
validation on the first run, saving the company at least $100,000.

My manager used to be a hands-on EE too, but that was way back before
LSI programmable logic. These days his favorite expressions are
"Sooner, cheaper, rechargeable: choose any two" and
"Don't make a Rembrandt."

This project is still in its early stages, and we haven't settled on a
supplier for the processor yet. Depending on the business people's sales
volume projections, we'll probably end up with either a small embedded
SoC and a small glue ASIC, or licensing an IP soft core for a full ASIC
design.

We have settled on the embedded OS though, and we even have a simple web
server up and running on a development board left over from a previous
project. This will probably be the basis of the end-user management
interface, now being designed by our product people and software developers.

[fast forward to later in the project]

So the UI is prototyped and now up and running. The firmware guy has
been helping the web UI people add support for HTTPS, using an SSL
library we licensed. It was chosen primarily due to its support for our
embedded OS.

The software people are telling me the library requirements state it
needs a good source of random numbers, and it would be best if I supply
them via hardware. They even made it a design requirement; there's a
first time for everything, I guess.

My first thought is an LFSR-based pseudorandom bit stream generator
which I could easily fit into the available area on our ASIC. We used to
use these in some RF work, and also in toys for noise generators.
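Something like this -- sketched in Python here for illustration, though in
silicon it is just a 16-bit shift register and a couple of XOR taps:

    def lfsr16(state=0xACE1):
        """Toy Galois LFSR, taps x^16 + x^14 + x^13 + x^11 + 1.  Fine as a
        noise/test-pattern source; completely determined by its 16-bit seed."""
        while True:
            lsb = state & 1
            state >>= 1
            if lsb:
                state ^= 0xB400
            yield lsb

    gen = lfsr16()
    bits = [next(gen) for _ in range(16)]   # 16 output bits reveal the whole state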

But looking at the requirements more closely, they say that for cryptographic
purposes this needs to be something more than your ordinary pseudorandom
generator. It needs to produce entropy. I haven't really heard that
word used much since school; I vaguely remember it being somewhat
mysterious. The term was introduced both in my advanced physics and
coding theory classes ...with completely different meanings!

Googling around for a definition of entropy I find the Wikipedia article:
https://en.wikipedia.org/wiki/Entropy
But that's not so helpful. Certainly nothing there I can implement.

So I search for random entropy generator and I find
Turbid: A High-Entropy Randomness Generator
http://www.av8n.com/turbid/paper/turbid.htm

This article is great and I read the whole thing. It really appeals to
my inner engineer. Of course, our product will not have an A/D
converter, much less a microphone, so it's just a curiosity.

I notice the TI CC2530 uses an LFSR which can be seeded with
random data from noise in the radio ADC.
http://www.ti.com/lit/ds/symlink/cc2530.pdf
But, again, we're not using that chip or an RF ADC.

Searching more on the web for entropy sources
https://en.wikipedia.org/wiki/Hardware_random_number_generator
I learn about quantum radioactive decay sources, cosmic RF noise
detectors, avalanche diodes, resistor noise, webcams on lava lamps and
all sorts of other fascinating ways to generate entropy. But, again,
none of these fit within our BoM (bill of materials).

I can really only find one entropy source design that I might be able to
do in the available hardware. This one involves using some free-running
oscillators to drive some counters or clock bits 

Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-22 Thread Jeffrey Walton
On Wed, Feb 22, 2012 at 2:53 AM, James A. Donald jam...@echeque.com wrote:
 On 2012-02-22 12:31 PM, Kevin W. Wall wrote:
 1) They think that key size is the paramount thing; the bigger the
 better.
 2) They have no clue as to what cipher modes are. It's ECB by default.
 3) More importantly, they don't know how to choose a cipher mode (not
      surprising, given #2). They need to understand the trade-offs.
 4) They have no idea about how to generate keys, derived keys, IVs,
 5) They don't know what padding is, or when/why to use it.
 6) They have a very naive concept of entropy...where/when to use it
  and from where and how to obtain it.

 The debian debacle was none of the above - the patch was simply obviously
 stupid even if one had no idea about what the software was supposed to be
 doing.
Remember, OpenSSL gave tacit approval: If it helps with debugging,
I'm in favor of removing them,
http://www.mail-archive.com/openssl-dev@openssl.org/msg21156.html.

OpenSSL Team Members: http://www.openssl.org/about/.

Jeff


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-22 Thread Marsh Ray

On 02/22/2012 05:49 PM, Jeffrey Walton wrote:

Remember, OpenSSL gave tacit approval: If it helps with debugging,
I'm in favor of removing them,
http://www.mail-archive.com/openssl-dev@openssl.org/msg21156.html.


The full quote from Ulf Möller is:


Kurt Roeckx schrieb:

What I currently see as best option is to actually comment out
those 2 lines of code.  But I have no idea what effect this really
has on the RNG.  The only effect I see is that the pool might
receive less entropy.  But on the other hand, I'm not even sure
how much entropy some uninitialised data has.


Not much. If it helps with debugging, I'm in favor of removing them.
(However the last time I checked, valgrind reported thousands of
bogus error messages. Has that situation gotten better?)


What Ulf gave was his own weak conditional support based on the way Kurt 
posed the question, which implied that it was only entropy from 
uninitialized memory being added.


But did OpenSSL go ahead and remove them or express interest in a patch? No.

Now *that* would certainly have counted as approval.

Personally, I think it's a brilliant example of engineering 
miscommunication. One of open source crypto's great teaching moments, 
akin to civil engineering's Kansas City Hyatt walkway collapse.

https://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse

Just look at this engineering diagram:
https://en.wikipedia.org/wiki/File:HRWalkway.svg

Could easily be a crypto system.

- Marsh

P.S. Sadly, in case anyone hadn't heard, Ulf Möller died last month.

http://ulf-m.blogspot.com/2012/02/help-us-find-people-who-killed-ulf.html



Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-22 Thread Jeffrey Walton
On Wed, Feb 22, 2012 at 7:37 PM, Marsh Ray ma...@extendedsubset.com wrote:
 On 02/22/2012 05:49 PM, Jeffrey Walton wrote:

 Remember, OpenSSL gave tacit approval: If it helps with debugging,
 I'm in favor of removing them,
 http://www.mail-archive.com/openssl-dev@openssl.org/msg21156.html.

 The full quote from Ulf Möller is:

 Kurt Roeckx schrieb:

 What I currently see as best option is to actually comment out
 those 2 lines of code.  But I have no idea what effect this really
 has on the RNG.  The only effect I see is that the pool might
 receive less entropy.  But on the other hand, I'm not even sure
 how much entropy some uninitialised data has.

 Not much. If it helps with debugging, I'm in favor of removing them.
 (However the last time I checked, valgrind reported thousands of
 bogus error messages. Has that situation gotten better?)

 What Ulf gave was his own weak conditional support based on the way Kurt
 posed the question, which implied that it was only entropy from
 uninitialized memory being added.
I seem to recall Debian stating they interpreted the statement as an
OK (but I can't find a citation at the moment).

For what it's worth, I could not tell if Möller was OK with removing
the statements for Debug only, or for all versions (loosely, Debug and
Release). What was not very clear at all (to me): how removing the
statements was even helpful in debugging.

 But did OpenSSL go ahead and remove them or express interest a patch? No.
In this instance, I believe Debian made the changes then pushed the
patch upstream. Debian did not wait for OpenSSL action. Isn't that
fairly typical? I don't recall what happened afterwards (did OpenSSL
kick the patch?).

 Personally, I think it's a brilliant example of engineering
 miscommunication. One of open source crypto's great teaching moments, akin
 to the civil engineer's KC Hyatt walkway collapse.
 https://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse
Agreed.

 P.S. Sadly, in case anyone hadn't heard, Ulf Möller died last month.
 http://ulf-m.blogspot.com/2012/02/help-us-find-people-who-killed-ulf.html
Very unfortunate. I hate to hear things like that (cryptographer or not).

Jeff


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-22 Thread James A. Donald

On 2012-02-23 9:49 AM, Jeffrey Walton wrote:

On Wed, Feb 22, 2012 at 2:53 AM, James A. Donald jam...@echeque.com wrote:

On 2012-02-22 12:31 PM, Kevin W. Wall wrote:

1) They think that key size is the paramount thing; the bigger the
better.
2) They have no clue as to what cipher modes are. It's ECB by default.
3) More importantly, they don't know how to choose a cipher mode (not
  surprising, given #2). They need to understand the trade-offs.
4) They have no idea about how to generate keys, derived keys, IVs,
5) They don't know what padding is, or when/why to use it.
6) They have a very naive concept of entropy...where/when to use it
  and from where and how to obtain it.


The debian debacle was none of the above - the patch was simply obviously
stupid even if one had no idea about what the software was supposed to be
doing.

Remember, OpenSSL gave tacit approval: If it helps with debugging,
I'm in favor of removing them,
http://www.mail-archive.com/openssl-dev@openssl.org/msg21156.html.


OpenSSL approved removing uninitialized data as *one* of many sources of 
randomness.  They did not give approval to remove *all* sources of 
randomness.


The routine for stirring randomness into the entropy pool had all use of 
its input argument commented out, so that the routine did nothing - did 
nothing regardless of whether it was called with uninitialized data, or 
called with any other source of randomness.
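A toy model of that failure -- not the actual OpenSSL code (the routines in
question live in crypto/rand/md_rand.c), just an illustration of why
commenting out the one mixing line turns every caller into a no-op:

    import hashlib

    class ToyPool:
        """Caricature of an entropy pool: rand_add() is the single funnel
        through which every source (timings, /dev/urandom reads, and the
        uninitialized buffer valgrind complained about) reaches the state."""
        def __init__(self):
            self.state = b"\x00" * 32

        def rand_add(self, buf: bytes):
            # Commenting out the next line (the analogue of what the Debian
            # patch did) silently discards ALL seeding, not just the
            # uninitialized-memory contribution.
            self.state = hashlib.sha256(self.state + buf).digest()

        def rand_bytes(self, n: int) -> bytes:
            # Toy output function, n <= 32.
            return hashlib.sha256(b"out" + self.state).digest()[:n]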


Which was simply moronic.  You don't need to know anything about 
cryptography to figure out that disabling a widely used routine because 
valgrind complains about *two* uses of that routine is stupid.


The fact that this was done and passed code review discredits the Debian 
organization.



Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-21 Thread ianG

On 21/02/12 04:22 AM, Thierry Moreau wrote:

Ben Laurie wrote:


On Sun, Feb 19, 2012 at 05:57:37PM +, Ben Laurie wrote:

In any case, I think the design of urandom in Linux is flawed and
should be fixed.


In FreeBSD random (and hence urandom) blocks at startup, but never again.

...

The mental model for the authentication key generation operation should
reflect the fact that it requires the computer to roll dice very
secretly for your protection, but the computer is very poor at this type
of dice rolling -- it may thus take time and/or require you to provide
input on the keyboard/mouse/touchscreen until adequate dice-shaking
simulation has been achieved.

If security experts are not prepared to face this fact -- true random
data collection and associated entropy assessment cannot be made
intrinsic to a computer system -- we are not justified in expecting OS
suppliers to provide a magic fix, or software developers to take the
liberty of solving an issue which is seldom stated.



I think I agree.  I'd characterise it as like this:  if you don't care 
that much, it's good enough.  If you care an awful lot, you have to do 
it yourself anyway.  The solutions out there seem aligned with that 
needs curve.




In this perspective, the root cause of the RSA modulus GCD findings is
the security experts' inability to recognize and follow up on the
ever-present challenges of secret random data generation. As such, the
Linux design is hardly at stake.



Yeah.  There is an inability on the part of some security people and all 
the media to accept that some designers have accepted a risk rather than 
stomp it dead.




iang


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-21 Thread James A. Donald

On 2012-02-21 10:57 PM, ianG wrote:
 if you don't care that much, it's good enough. If you care
 an awful lot, you have to do it yourself anyway.

My now outdated Crypto Kong maintained its own non-volatile file of 
randomness, which it stored to disk on program shutdown.  On each program 
startup, it collected more randomness from all available sources and 
stirred it into that file.  On first use of the program, before the 
user could do anything that required randomness he had to click through 
some UI and type some information, and it collected more randomness with 
every click and keystroke.
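(Roughly the shape of that scheme as a Python sketch -- not Crypto Kong's
actual code, and the file name is made up:)

    import hashlib, os, time

    STATE_FILE = "randpool.dat"    # hypothetical location

    def stir(extra: bytes = b"") -> bytes:
        """Fold fresh material (OS randomness, the clock, and any UI/keystroke
        data passed in as 'extra') into a persistent pool file and return a
        fresh 32-byte value."""
        old = open(STATE_FILE, "rb").read() if os.path.exists(STATE_FILE) else b""
        fresh = os.urandom(32) + time.time_ns().to_bytes(8, "big") + extra
        state = hashlib.sha256(old + fresh).digest()
        with open(STATE_FILE, "wb") as f:
            f.write(state)
        return state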




Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-21 Thread Marsh Ray

On 02/21/2012 08:31 PM, Kevin W. Wall wrote:

Apologies for this being a bit OT as far as the charter of this list
goes, and perhaps a bit self-serving as well. I hope you will bear
with me.


Meh. I think I've seen worse. :-)


To a degree, I think it is more ignorance than it is outright
incompetence. Overall, developers generally are much better than the
general public when it comes to analytic and reasoning abilities. And
I think that this Dunning-Kruger effect that you mention is a good
explanation. But this phenomenon goes *way* beyond developers'
ignorance of cryptography. It even goes way beyond a general
ignorance of information security.


Most developers are under a great deal of pressure to complete tasks.
They get lots of tasks done - they get a raise.
They get so many tasks done that they inspire (or fear) others to
complete tasks - they get a promotion.

Doubly so for new and inexperienced developers.

Now, think of the mindset required to code securely. It involves digging
deeper, asking a lot of dumb-sounding questions, generally being more
cautious, and constantly brainstorming ways to break things and reasons
*not* to ship the functionality.

In short, it is an absolutely toxic mindset for a new developer to have
at the vast majority of entry-level developer jobs.


A great example of this is that time and time again I encounter _web_
application developers who have absolutely no clue as to how HTTP
works as a protocol. That just seems so counterintuitive to me. Yet
at least with the younger web developers, it seems to be the rule
rather than the exception.


For many of them, not only is it their first exposure to a protocol,
it's their first programming project.

Protocols are *hard*, even simple ones.

Tim Berners-Lee. Brilliant guy. Working at CERN.

HTTP 0.9.

Need I say more? ;-)


Some of this can be blamed on the fact that web developers deal
with higher and higher levels of abstraction, until eventually, they
really don't need to understand what a Set-Cookie response header
looks like. All of us do this to some extent, but I think it is
becoming more common and therefore more noticeable because 1)
technology moves at an ever increasing pace and 2) IT management
still hasn't figured out that developers can't wear all hats and
that there is no substitute for expertise. IT management still thinks
that all members of technical staff are completely interchangeable.

What does this have to do with the Dunning-Kruger effect? Well, I
think that it encourages developers, especially younger ones, to
fake it. Back when I started (now over 30 yrs ago!), it was OK to
admit your ignorance, at least at Bell Labs.


From everything I've heard, that was a very special place.


And you could always find someone to mentor you if you wanted to
learn something new. Not so today. Most people are too busy and I
haven't seen any _formal_ mentorship programs in any company for at
least the past 25 years.


Are companies still complaining colleges aren't producing
enough graduates proficient in language X on platform Y?


So, let's bring this back to cryptography. I'm going to assume that
virtually all of you are somewhat altruistic and are not in this
game just to make a boatload of money by keeping all the crypto
knowledge within the secret priesthood, thereby driving your own
salaries up.


Hmmm... is there anything I need to sign? :-)


For starters, I would urge those of you who are not involved in the
open source movement to step up and help out with things like
OpenSSL, OpenSSH, cryptographic libraries (in languages *other* than
C/C++), etc. Personally, I would *more* than welcome someone here
stepping forward and volunteering to head up the crypto effort in
OWASP ESAPI.


I think I looked at it briefly a year or two ago and, frankly, where I
got hung up was that it was written in Java.

I hate to be a purist, but I just feel uncomfortable with crypto code
written in a language that doesn't have guaranteed constant-time
operations (e.g. string comparisons) or secure memory overwrite functions.
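(The sort of primitive meant here, in Python terms: a naive == on secret
byte strings returns as soon as a byte differs, so the comparison time leaks
how much of a MAC or token the attacker has right. The standard library's
answer is a comparison whose running time does not depend on the data:)

    import hmac

    def ct_equal(a: bytes, b: bytes) -> bool:
        # hmac.compare_digest takes time independent of where the inputs
        # differ (for equal-length inputs), unlike a == b.
        return hmac.compare_digest(a, b)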

Could be worse I suppose. Some days it seems that Javascript crypto is
inevitable.


Even though some people from the NSA have reviewed it, I'm paranoid
enough to think that it's what they are NOT telling me is wrong
that worries me.


I look at it this way. The US government has more information systems
than anybody and ostensibly part of the NSA's job is to secure them, or
at least put some parameters on the level of exposure.

Are they eating the dog food? Take it as the highest endorsement.


I know many of you have already contributed (I won't attempt to name
 names because I'd probably unintentionally leave a few of you out
and offend them), but not nearly enough.



Most of you who regularly post to this mailing list have commented on how
you've seen some of the same beginner crypto failures over and over,
so how about starting with just a simple crypto HowTo FAQ, maybe an
OWASP crypto cheat sheet.

Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-20 Thread Solar Designer
On Sun, Feb 19, 2012 at 05:57:37PM +, Ben Laurie wrote:
 In any case, I think the design of urandom in Linux is flawed and
 should be fixed.

Do you have specific suggestions?

Short of making it block, I can think of the following:

1. More distros may follow the suggestion in the "Ensuring
unpredictability at system startup" comment in drivers/char/random.c
(save previously accumulated entropy in a file on shutdown, restore it
from the file on bootup).

2. The kernel may mix in hardware serial numbers, MAC addresses, etc.
into the initial entropy pool.  Drawback: if this turns out to be
insufficient entropy anyway (such as if some of it is correctly guessed
by an attacker), these numbers may then be inferred back from the
random numbers.  BTW, this same risk currently applies to system time
at bootup and even to further stuff added to the pool (even keystroke
timings and keystrokes themselves), but perhaps we're assuming that
either there's sufficient entropy that those won't be inferred or if the
system time is the only entropy, then having it inferred is not the
biggest of our worries.

These tradeoffs are not really specific to Linux.  Sure, you can make
urandom block, but that's also a tradeoff.
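A minimal sketch of option 1 above, roughly what distro init scripts do with
a random-seed file (the path and size below are illustrative; note that
writing to /dev/urandom mixes the bytes into the pool but deliberately does
not credit any entropy for them):

    SEED_FILE = "/var/lib/misc/random-seed"   # location varies by distro
    SEED_SIZE = 512                            # bytes

    def save_seed():      # run at shutdown
        with open("/dev/urandom", "rb") as src, open(SEED_FILE, "wb") as dst:
            dst.write(src.read(SEED_SIZE))

    def restore_seed():   # run as early as possible at boot
        with open(SEED_FILE, "rb") as src, open("/dev/urandom", "wb") as dst:
            dst.write(src.read())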

Alexander


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-20 Thread Ben Laurie
On Mon, Feb 20, 2012 at 12:42 PM, Solar Designer so...@openwall.com wrote:
 On Sun, Feb 19, 2012 at 05:57:37PM +, Ben Laurie wrote:
 In any case, I think the design of urandom in Linux is flawed and
 should be fixed.

 Do you have specific suggestions?

 Short of making it block, I can think of the following:

 1. More distros may follow the suggestion in the Ensuring
 unpredictability at system startup comment in drivers/char/random.c
 (save previously accumulated entropy in a file on shutdown, restore it
 from the file on bootup).

 2. The kernel may mix in hardware serial numbers, MAC addresses, etc.
 into the initial entropy pool.  Drawback: if this turns out to be
 insufficient entropy anyway (such as if some of it is correctly guessed
 by an attacker), these numbers may then be inferred back from the
 random numbers.  BTW, this same risk currently applies to system time
 at bootup and even to further stuff added to the pool (even keystroke
 timings and keystrokes themselves), but perhaps we're assuming that
 either there's sufficient entropy that those won't be inferred or if the
 system time is the only entropy, then having it inferred is not the
 biggest of our worries.

 These tradeoffs are not really specific to Linux.  Sure, you can make
 urandom block, but that's also a tradeoff.

In FreeBSD random (and hence urandom) blocks at startup, but never again.

One thing I'd really like to know is whether it would have ever
unblocked on these devices - and if it does, whether it ends up with
good entropy...


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-20 Thread Thierry Moreau

Ben Laurie wrote:


On Sun, Feb 19, 2012 at 05:57:37PM +, Ben Laurie wrote:

In any case, I think the design of urandom in Linux is flawed and
should be fixed.


In FreeBSD random (and hence urandom) blocks at startup, but never again.



Thanks for bringing up this freebsd random source design in such a neat summary.

I take this opportunity to review the Linux design decisions.

First, let me put aside the initial entropy assessment issue -- it's not 
solvable without delving into the details, and let me assume the freebsd 
entropy collection is good, at the possible cost of slowing down the 
boot process.


Then, basically the freebsd design is initial seeding of a deterministic 
PRNG. If a) the PRNG design is cryptographically strong (a qualification 
 which can be fairly reliable if done with academic scrutiny), and b) 
the PRNG state remains secret, THEN the secret random source is good 
through the system operating life cycle. (I make a restriction of the 
design as a simple PRNG because periodic true random data merging into 
the PRNG state is something not studied in the algorithmic theory 
publications.)


The secrecy of the PRNG state is a requirement NO GREATER THAN the 
secrecy of any long-term secret (e.g. a MAC symmetric key or a digital 
signature private key) needed during the system operating life cycle. 
Even if there were a few cases where a security system requires a random 
source, but not a single long-term secret, an anecdotal case may not be 
the best model for a general-purpose OS design. By logical inference 
then, requiring continuous (or periodic) true random data collection is 
an over-design (i.e. engineering resources better put into greater 
assurance about secrecy protections), or a plain design flaw (remaining 
vulnerabilities in the secrecy attack vectors overlooked due to 
attention paid to true random data collection).


So, the freebsd design appears reasonable to me. Can it be brought into 
Linux? Is it a Linux design flaw to omit boot-time entropy assessment?


My answers are "only as an option" and "no".

The design glitch is the blocking at boot time for entropy assessment 
(wait until the entropy pool is filled at an adequate level).


In essence, true random data collection is antagonistic to a complex 
computer system. Generally, you want a computer system to behave 
predictably. Specifically, it would be sad if your next aircraft 
boarding ended in a crash because a bad pointer in the fly-by-wire 
software referred to a memory location holding a precise interrupt 
timing measurement instead of a fixed data value (RTCA DO-178B in a 
nutshell). In practice, almost every strategy for collecting true random 
data based on unpredictable facets of computer technology turns void 
with technological advances. Dedicated devices or audio ports cost 
money and/or create a provisioning hindrance.


Thus, blocking at boot time for entropy assessment may not be 
acceptable as a default for Linux: it is hard to provide an upper limit 
on the blocking time, and it is certainly not perceived as useful by a 
large portion of system users/operators. The freebsd design for 
/dev/{,u}random appears fit for a more understanding user/operator base.


The mental model for the authentication key generation operation should 
reflect the fact that it requires the computer to roll dice very 
secretly for your protection, but the computer is very poor at this type 
of dice rolling -- it may thus take time and/or require you to provide 
input on the keyboard/mouse/touchscreen until adequate dice-shaking 
simulation has been achieved.


If security experts are not prepared to face this fact -- true random 
data collection and associated entropy assessment cannot be made 
intrinsic to a computer system -- we are not justified in expecting OS 
suppliers to provide a magic fix, or software developers to take the 
liberty of solving an issue which is seldom stated.


In this perspective, the root cause of the RSA modulus GCD findings is 
the security experts' inability to recognize and follow up on the 
ever-present challenges of secret random data generation. As such, the 
Linux design is hardly at stake.


Just my view, enjoy!



--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-20 Thread Paul Hoffman
On Feb 20, 2012, at 9:22 AM, Thierry Moreau wrote:

 First, let me put aside the initial entropy assessment issue -- it's not 
 solvable without delving into the details, and let me assume the freebsd 
 entropy collection is good, at the possible cost of slowing down the boot 
 process.

But that is central to this thread.

FreeBSD doesn't block on first boot because it doesn't create SSH keys on first 
boot. The security team decided long ago that good security that system 
administrators could not screw up was of high priority. Thus, FreeBSD doesn't 
come with SSH installed; it has to be installed after installation. This has 
two big security wins:

- There is no chance that the OpenSSH version that is part of the distro has a 
bug that was later fixed because there is no OpenSSH version in the distro

- The act of first booting and then pulling down things like OpenSSH gives the 
entropy pool a chance to grow to a sufficient size to create good keys

 So, the freebsd design appears reasonable to me. Can it be brought into 
 Linux? Is it a Linux design flaw to omit boot-time entropy assessment?

Different Linux distros make different choices. Linux is an operating system, 
not a distribution. FreeBSD is a distribution.

 My answers are only as an option and no.

Given the above, I suggest that both of your answers are wrong. Even if a 
distro creator wants to include an ssh server as part of the distro, it is 
obviously dangerous to generate keys immediately on the first boot. The only 
possible reason to do so is so that the installer can immediately log in over 
SSH, but without knowing the actual keys being created. That is possibly 
tolerable on a network where there is no possible MITM, but is otherwise 
piss-poor security.

A trivial way around the problem, even if you want to include an ssh server as 
part of the distro, is to not start the ssh server on first boot but to include 
a program that will install it later. The program that creates the ssh keys 
could check for /dev/random being blocked and, if it is, let the operator type 
a bunch of stuff that would unblock it.
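(On Linux, at least, such a program could poll the kernel's own estimate;
the 256-bit threshold here is an arbitrary choice for the sketch:)

    import time

    def wait_for_entropy(min_bits=256,
                         path="/proc/sys/kernel/random/entropy_avail"):
        """Hold off ssh key generation until the kernel reports a reasonably
        full pool, nagging the operator for input in the meantime."""
        while True:
            with open(path) as f:
                if int(f.read()) >= min_bits:
                    return
            print("Not enough entropy yet -- please type at the keyboard "
                  "or wiggle the mouse...")
            time.sleep(2)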

--Paul Hoffman



Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-20 Thread Ben Laurie
On Mon, Feb 20, 2012 at 5:22 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
 Then, basically the freebsd design is initial seeding of a deterministic
 PRNG. If a) the PRNG design is cryptographically strong (a qualification
  which can be fairly reliable if done with academic scrutiny), and b) the
 PRNG state remains secret, THEN the secret random source is good through the
 system operating life cycle. (I make a restriction of the design as a simple
 PRNG because periodic true random data merging into the PRNG state is
 something not studied in the algorithmic theory publications.)

 The secrecy of the PRNG state is a requirement NO GREATER THAN the secrecy
 of any long-term secret (e.g. a MAC symmetric key or a digital signature
 private key) needed during the system operating life cycle. Even if there
 were a few cases where a security system requires a random source, but not a
 single long-term secret, an anecdotal case may not be the best model for a
 general-purpose OS design. By logical inference then, requiring continuous
 (or periodic) true random data collection is an over-design (i.e.
 engineering resources better put into greater assurance about secrecy
 protections), or a plain design flaw (remaining vulnerabilities in the
 secrecy attack vectors overlooked due to attention paid to true random data
 collection).

 So, the freebsd design appears reasonable to me.

FreeBSD does actually introduce extra randomness over time.


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-20 Thread Nico Williams
On Mon, Feb 20, 2012 at 7:07 AM, Ben Laurie b...@links.org wrote:
 In FreeBSD random (and hence urandom) blocks at startup, but never again.

So, not exactly a terribly wrong thing to do, eh?  ;)

What OSes have parallelized rc scripts/whatever nowadays?  Quite a few,
it seems (several Linux distros, MacOS X, Solaris, maybe some BSDs?).

It seems to me that it should be quite safe to arrange for either a)
services that depend on /dev/urandom to not start until after [that
is, to depend on a service that does] proper seeding of it, or b)
/dev/urandom to block, but only early in boot, until properly seeded.
This is precisely why looking after the whole system is important; a
holistic view of the system will lead the developers to ensure that
there is enough entropy before any services (or user programs) run
that might need it.  And since user programs are outside the control
of the init process, it seems that (b) is the safer approach.

 One thing I'd really like to know is whether it would have ever
 unblocked on these devices - and if it does, whether it ends up with
 good entropy...

But devices like that really should have a) a factory seed (different
on each device, and obtained from a CSRNG), b) a clock and/or stable
storage for a counter so that it is possible to ensure distinct PRNG
state after each boot.  There are other cases where we may not be able
to rely on a factory seed, such as VMs and laptops.  (Well, at least
for pre-built VM images one could treat them like embedded devices and
embed a per-image seed...)
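(A sketch of what (a) plus (b) buys you: even if the device gathers nothing
at runtime, the PRNG starts from a state that is unique per device and
distinct on every boot. The field layout is made up for illustration:)

    import hashlib, struct, time

    def per_boot_seed(factory_seed: bytes, boot_counter: int) -> bytes:
        """Derive a per-boot PRNG seed from a per-device factory seed (written
        once at manufacture from a real CSRNG) plus a counter kept in stable
        storage and incremented on every boot; mix in the clock if there is one."""
        h = hashlib.sha256()
        h.update(factory_seed)
        h.update(struct.pack(">Q", boot_counter))
        h.update(struct.pack(">d", time.time()))
        return h.digest()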

Nico


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-19 Thread Thierry Moreau

Ben Laurie wrote:

On Fri, Feb 17, 2012 at 8:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:

Ben Laurie wrote:

On Fri, Feb 17, 2012 at 7:32 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:

Isn't /dev/urandom BY DEFINITION of limited true entropy?


$ ls -l /dev/urandom
lrwxr-xr-x  1 root  wheel  6 Nov 20 18:49 /dev/urandom -> random


The above is the specific instance on your environment. Mine is different:
different kernel major/minor device numbers for /dev/urandom and
/dev/random.


So? Your claim was Isn't /dev/urandom BY DEFINITION of limited true
entropy? My response is: no.


I got the definition from

man 4 random

If your /dev/urandom never blocks the requesting task irrespective of the
random bytes usage, then maybe your /dev/random is not as secure as it might
be (unless you have a high-speed entropy source, but what is "high speed"
in this context?)


Oh, please. Once you have 256 bits of good entropy, that's all you need.



First, about the definition, from man 4 random:

<quote>
A  read  from  the  /dev/urandom device will not block waiting for more 
entropy.  As a result, if  there  is  not  sufficient  entropy  in  the 
entropy  pool,  the  returned  values are theoretically vulnerable to a 
cryptographic attack on the algorithms used by the  driver.   Knowledge 
of how to do this is not available in the current non-classified 
literature, but it is theoretically possible that such an attack may 
exist.  If this is a concern in your application, use /dev/random instead.

</quote>

If the RSA modulus GCD findings are not a cryptographic attack, I don't 
know what is. (OK, it's not published as an attack on the *algorithm*, 
but please note that a /dev/urandom cryptographic weakness may be 
at stake according to other comments in the current discussion.)


Second, about the sufficiency of 256 bits of good entropy: the problem 
lies with "good entropy". It is not testable by software, because entropy 
quality depends on the process by which truly random data is collected, 
and the software cannot assess its own environment (at least for the 
Linux kernel, which is meant to be adapted/customized/built for highly 
diversified environments).


Third, since good entropy turns out to become someone's confidence in 
the true random data collection process, you may well have your own 
confidence.


In conclusion, I am personally concerned that some operational mishaps 
led to some RSA keys being generated with /dev/urandom in environments where I 
depend on RSA security.


And yes, my concern is rooted in the /dev/urandom definition as quoted 
above.


If I am wrong in this logical inference (i.e. the RSA modulus GCD 
findings could be traced to other root cause than limited entropy of 
/dev/urandom), then I admittedly have to revise my understanding.


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-19 Thread Ben Laurie
On Sun, Feb 19, 2012 at 5:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
 Ben Laurie wrote:

 On Fri, Feb 17, 2012 at 8:39 PM, Thierry Moreau
 thierry.mor...@connotech.com wrote:

 Ben Laurie wrote:

 On Fri, Feb 17, 2012 at 7:32 PM, Thierry Moreau
 thierry.mor...@connotech.com wrote:

 Isn't /dev/urandom BY DEFINITION of limited true entropy?


 $ ls -l /dev/urandom
 lrwxr-xr-x  1 root  wheel  6 Nov 20 18:49 /dev/urandom -> random

 The above is the specific instance on your environment. Mine is
 different:
 different kernel major/minor device numbers for /dev/urandom and
 /dev/random.


 So? Your claim was Isn't /dev/urandom BY DEFINITION of limited true
 entropy? My response is: no.

 I got the definition from

 man 4 random

 If your /dev/urandom never blocks the requesting task irrespective of the
 random bytes usage, then maybe your /dev/random is not as secure as it
 might
 be (unless you have an high speed entropy source, but what is high
 speed
 in this context?)


 Oh, please. Once you have 256 bits of good entropy, that's all you need.


 First, about the definition, from man 4 random:

 <quote>
 A  read  from  the  /dev/urandom device will not block waiting for more
 entropy.  As a result, if  there  is  not  sufficient  entropy  in  the
 entropy  pool,  the  returned  values are theoretically vulnerable to a
 cryptographic attack on the algorithms used by the  driver.   Knowledge of
 how to do this is not available in the current non-classified literature,
 but it is theoretically possible that such an attack may exist.  If this is
 a concern in your application, use /dev/random instead.
 </quote>

That's what your man 4 random says, it's not what mine says.

 If the RSA modulus GCD findings is not a cryptographic attack, I don't know
 what is. (OK, it's not published as an attack on the *algorithm*, but please
 note the fact that /dev/urandom cryptographic weakness may be at stake
 according to other comments in the current discussion.)

I am not suggesting that the problems found are not caused by some
implementation of /dev/urandom. My point is simply that urandom is not
_defined_ to be weak.

 Second, about sufficiency of 256 bits of good entropy, the problem lies
 with good entropy: it is not testable by software because entropy quality
 depends on the process by which truly random data is collected and the
 software can not assess its own environment (at least for the Linux kernel
 which is meant to be adapted/customized/built for highly diversified
 environment).

 Third, since good entropy turns out to become someone's confidence in the
 true random data collection process, you may well have your own confidence.

 In conclusion, I am personally concerned that some operational mishaps made
 some RSA keys generated with /dev/urandom in environments where I depend on
 RSA security.

 And yes, my concern is rooted in the /dev/urandom definition as quoted
 above.

 If I am wrong in this logical inference (i.e. the RSA modulus GCD findings
 could be traced to other root cause than limited entropy of /dev/urandom),
 then I admittedly have to revise my understanding.

As I have pointed out, some systems choose to make urandom the same as
random. Would they suffer from the same problem, given the same
environment? I think it would be useful to know.

In any case, I think the design of urandom in Linux is flawed and
should be fixed.


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Adam Back
I was also pondering how the implementers could have arrived at
this situation, towards evaluating Stephen Farrell's draft idea to have
a service that double-checks at key-gen time (that none of the p, q
values are reused).  http://www.ietf.org/id/draft-farrell-kc-00.txt

(Which I don't think is nearly robust enough to rely on, but it doesn't
hurt as a sanity check that will catch a small percentage of entropy
offenders.)
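(A toy version of what such a service might do -- keep a running product of
every modulus it has seen and GCD new submissions against it. The real draft
differs in its details, and as said above this only catches the keys that
happen to collide:)

    from math import gcd

    class KeyCheckService:
        def __init__(self):
            self.product = 1              # product of all moduli seen so far

        def check_and_register(self, n: int) -> bool:
            """True if n shares no prime with any previously seen modulus."""
            ok = gcd(n, self.product) == 1
            self.product *= n             # remember it (sketch only; this grows fast)
            return ok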

So then what *could* have gone wrong?

1. (most generous) they had an RNG that accounted for entropy
estimation, waited until it had the target amount (eg 128 bits
minimum) and then generated p, q, BUT there was an overestimate of the
entropy -- it was MUCH worse than estimated

2. they didn't bother estimating entropy, the system clock was not set
or they didn't have one and/or they didn't use it, or they used memory state
after boot from ROM, or something predictable enough to show the
collision rate  (aka incompetence)

3. your idea -- maybe -- more incompetent things have happened :)

Occam's razor suggests cryptographic incompetence... the number one reason
deployed systems have crypto fails.  Who needs to hire crypto people,
the developer can hack it together, how hard can it be, etc.  There's a
psychological theory of why this kind of thing happens in general -
the Dunning-Kruger effect [1].  But maybe 1 happened.

Adam

[1] http://en.wikipedia.org/wiki/Dunning–Kruger_effect

On 18 February 2012 07:57, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
 Adam Back a...@cypherspace.org writes:

Further the fact that the entropy seeding is so bad that some implementations
are generating literally the same p value (but seemingly different q values)
I would think you could view the fact that this can be detected and
efficiently exploited via batch GCD as an indication of an even bigger
problem.

 Do we know that this is accidental rather than deliberate?  A cute
 optimisation for keygen would be to only randomly generate one half of the
 {p,q} pair.  It's plenty of randomness after all, surely you don't really need
 both to be generated randomly, only one will do, and it'll halve the keygen
 time...

 Peter.


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread James A. Donald

On 2012-02-18 7:40 PM, Adam Back wrote:
 Occam's razor suggests cryptographic incompetence.. number one reason
 deployed systems have crypto fails.  Who needs to hire crypto people,
 the developer can hack it together, how hard can it be etc.  There's a
 psychological theory of why this kind of thing happens in general -
 the Dunning-Kruger effect.

Rather, the problem is, who is to say who are the real crypto people? 
The Wifi consortium doubtless believed they were hiring officially 
accredited officially expert officially crypto people, blessed by the 
holy consensus of academia, but they got it wrong, then they got the fix 
wrong, then they got the fix of the fix wrong.




Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread ianG

On 18/02/12 23:05 PM, Peter Gutmann wrote:

Morlock Elloi morlockel...@yahoo.com writes:


Properly designed rngs should refuse to supply bits that have less than
specified (nominal) entropy. The requestor can go away or wait.


So you're going to sacrifice availability for some nebulous (to the user)
level of security.  What do you think the survivability of this feature will
be in the real world?



To some extent this is an argument over designs & definitions.  It seems 
that we've reached a sort of consensus on definitions:


an RNG should deliver a quality of entropy; on any given demand it may 
insist it has none at the moment


a PRNG should deliver a quantity with some hopeful quality, and 
should therefore simply deliver a steady stream regardless of its state. 
 It is happy to deliver with a seed of 0.


The latter probably implies that any PRNG is a perfect PRNG as per 
the NIST concept -- fully deterministic, fully testable -- and it is up to 
the User to provide the entire seed.


If the User chooses to hook her RNG output up to her PRNG input, then 
that works too, but she's then in charge of both variables.




iang
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Adam Back
I missed a favorite of mine that I've personally found multiple times
in deployed systems, from small (10k users) to large (million-plus users),
without naming the guilty:

4. The RNG used was rand(3), and while there were multiple entropy
additions, they were fed using multiple invocations of srand(3).

That's at least a double fail, and commonly a triple or quadruple fail:

a. rand(3)'s internal state is tiny (effectively 32 bits, the size of the seed);

b. they even misunderstood the srand(3) API: calling srand multiple
times fully resets the state each time, i.e. sometimes with less entropy
on subsequent calls (see the sketch just after this list);

c. commonly the entropy was weak and not measured anyway;

d. the entropy was added at random points during the security-critical
phase, so that even if there were 128 bits, they were released in
externally observable or testable ways in sub-32-bit lumps.
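
A minimal illustration of failure (b), using Python's random module purely
as a stand-in for rand(3)/srand(3) -- the seed values below are made up:

    import random

    def broken_keygen_bits(entropy_inputs, nbits=64):
        # Anti-pattern: each "entropy addition" is a fresh seed() call which,
        # like srand(3), REPLACES the generator state instead of mixing into it.
        for value in entropy_inputs:
            random.seed(value)        # state now depends only on this value
        return random.getrandbits(nbits)

    # Two devices with different early inputs but the same low-entropy final
    # value (say, a coarse clock reading) produce identical "random" output.
    a = broken_keygen_bits([0xDEADBEEF, 1234, 42])
    b = broken_keygen_bits([0xCAFEBABE, 9999, 42])
    assert a == b   # everything before the last seed() call was thrown away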

I guess the developers were uber-confident in their competence while
hacking that together :)  Dunning-Kruger effect at work!

Sometimes they may even have competent mathematical cryptographers, but
ones who can't program and don't look at code, and the developers never
asked them how to generate randomness.

I'm wondering if that is plausible in this case... the effect
would be rather like what was observed.

Adam

On 18 February 2012 10:40, Adam Back a...@cypherspace.org wrote:
 I also was pondering as to how the implementers could have arrived at
 this situation towards evaluating Stephen Farrell's draft idea to have
 a service that double checks at key gen time (that none of the p, q
 values are reused).  http://www.ietf.org/id/draft-farrell-kc-00.txt

 (Which I dont think is nearly robust enough for relying on, but doesnt
 hurt as a sanity check that will catch a small percentage of entropy
 offenders).

 So then what *could* have gone wrong?

 1. (most generous) they had a rng that accounted for entropy
 estimation, waited until it had the target amount (eg 128-bits
 minimum) and then generated p, q BUT there was an overestimate of the
 entropy, it was MUCH worse than estimated

 2. they didnt bother estimating entropy, the system clock was not set
 or it didnt have one and/or they didnt use it or used memory state
 after boot from rom or something predictable enough to show the
 collision rate.  (aka incompetence)

 3. your idea -- maybe -- more incompetent things have happened :)

 Occam's razor suggests cryptographic incompetence.. number one reason
 deployed systems have crypto fails.  Who needs to hire crypto people,
 the developer can hack it together, how hard can it be etc.  There's a
 psychological theory of why this kind of thing happens in general -
 the Dunning-Kruger effect.  But maybe 1 happened.

 Adam

 [1] http://en.wikipedia.org/wiki/Dunning–Kruger_effect

 On 18 February 2012 07:57, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
 Adam Back a...@cypherspace.org writes:

Further the fact that the entropy seeding is so bad that some implementations
are generating literally the same p value (but seemingly different q values)
I would think you could view the fact that this can be detected and
efficiently exploited via batch GCD as an indication of an even bigger
problem.

 Do we know that this is accidental rather than deliberate?  A cute
 optimisation for keygen would be to only randomly generate one half of the
 {p,q} pair.  It's plenty of randomness after all, surely you don't really 
 need
 both to be generated randomly, only one will do, and it'll halve the keygen
 time...

 Peter.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Jon Callas
It was (2), they didn't wait.

Come on -- every one of these devices is some distribution of Linux that comes 
with a stripped-down kernel and Busybox. It's got stripped-down startup, and no 
one thought that it couldn't have enough entropy. These are *network* people, 
not crypto people, and the distribution didn't have a module to handle 
initial-boot entropy generation.

Period, that's it. It's not malice, it's not even stupidity, it's just 
ignorance.

The answer to "what were they thinking?" is almost always "they weren't".

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Jeffrey I. Schiller

On 02/16/2012 03:47 PM, Nico Williams wrote:
 I'd thought that you were going to say that so many devices sharing
 the same key instead of one prime would be better on account of the
 problem being more noticeable.  Otherwise I don't see the difference
 between one low-entropy case and another -- both are catastrophic
 failures.

Yes, both are catastrophic, but to different degrees. If they share the
same key, then you have a large set of folks who share a common private
key. However the rest of the world doesn't know that key.

In the case where only one prime is shared, the whole world (or at least
everyone who has a copy of both public keys) has the private key.

-Jeff

- -- 
___
Jeffrey I. Schiller
MIT Technologist, Consultant, and Cavy Breeder
Cambridge, MA 02139-4307
617.910.0259 - Voice
j...@qyv.net
http://jis.qyv.name
___
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Ben Laurie
On Fri, Feb 17, 2012 at 8:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
 Ben Laurie wrote:

 On Fri, Feb 17, 2012 at 7:32 PM, Thierry Moreau
 thierry.mor...@connotech.com wrote:

 Isn't /dev/urandom BY DEFINITION of limited true entropy?


 $ ls -l /dev/urandom
lrwxr-xr-x  1 root  wheel  6 Nov 20 18:49 /dev/urandom -> random


 The above is the specific instance on your environment. Mine is different:
 different kernel major/minor device numbers for /dev/urandom and
 /dev/random.

So? Your claim was "Isn't /dev/urandom BY DEFINITION of limited true
entropy?" My response is: no.

 I got the definition from

 man 4 random

 If your /dev/urandom never blocks the requesting task irrespective of the
 random bytes usage, then maybe your /dev/random is not as secure as it might
 be (unless you have an high speed entropy source, but what is high speed
 in this context?)

Oh, please. Once you have 256 bits of good entropy, that's all you need.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Paul Hoffman
On Feb 16, 2012, at 9:52 AM, Marsh Ray wrote:

 How often have we seen Linux distros generate SSH keys 2 seconds after the 
 first boot?

The paper that started this thread was talking about keys used for TLS, not 
SSH. TLS certs are not usually generated during first boot. The devices had 
plenty of time to get I/O.

--Paul Hoffman

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Thor Lancelot Simon
On Sat, Feb 18, 2012 at 12:57:30PM -0500, Jeffrey I. Schiller wrote:
 
 The problem is that ssh-keygen uses /dev/urandom and it should really
 use /dev/random. I suspect that once upon a time it may have (I don't
 have the history off hand) and someone got annoyed when it blocked and
 solved the problem.

Um, why would it ever _unblock_, on such a device under typical
first-boot conditions?

Thor
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Jeffrey I. Schiller

On 02/18/2012 01:50 PM, Thor Lancelot Simon wrote:
 Um, why would it ever _unblock_, on such a device under typical
 first-boot conditions?

The idea would be that bootstrap would continue without the key being
generated. The key generation could then be retried periodically.
Eventually the device should gather some entropy from network packet
arrival time and similar environmental input (whether or not that input,
particularly in the VM environment, is providing really good entropy is
a different question).

-Jeff

- -- 
___
Jeffrey I. Schiller
MIT Technologist, Consultant, and Cavy Breeder
Cambridge, MA 02139-4307
617.910.0259 - Voice
j...@qyv.net
http://jis.qyv.name
___
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Paul Hoffman
On Feb 18, 2012, at 11:37 AM, Jeffrey I. Schiller wrote:

 
 On 02/18/2012 01:50 PM, Thor Lancelot Simon wrote:
 Um, why would it ever _unblock_, on such a device under typical
 first-boot conditions?
 
 The idea would be that bootstrap would continue without the key being
 generated. The key generation could then be retried periodically.
 Eventually the device should gather some entropy from network packet
 arrival time and similar environmental input (whether or not that input,
 particularly in the VM environment, is providing really good entropy is
 a different question).


Really? Many cryptographers would say that number of unpredictable bits is very 
much a part of the question. For example, you cannot prove that the duplicate 
keys found were generated when the PRNG of the system was uninitialized: it's 
quite possible that they were generated when the PRNG was initialized with the 
same inputs.

--Paul Hoffman

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Jeffrey I. Schiller
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/18/2012 03:02 PM, Paul Hoffman wrote:

 Really? Many cryptographers would say that number of unpredictable
 bits is very much a part of the question. ...

Of course it is. What I meant was that if /dev/random returns data,
its contract is to return good random data based on good entropy. It
isn't an application's job to second guess what /dev/random is doing.

In theory /dev/random gathers random data from the timing of various
interrupts -- Ethernet packet inter-arrival times, for example. Even on a
system without a keyboard (aka a human to bang on it) there should be some
sources of real entropy available.

My concern about virtual machines is that the hypervisor layer may
reduce the entropy in these inter-arrival times by quantizing them
into discrete time intervals.

-Jeff

- --
___
Jeffrey I. Schiller
MIT Technologist, Consultant, and Cavy Breeder
Cambridge, MA 02139-4307
617.910.0259 - Voice
j...@qyv.net
http://jis.qyv.name
___


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Thierry Moreau

D. J. Bernstein wrote:
[...]


There are of course more defenses that one can add to provide resilience
against more severe randomness deficiencies: one can start with more
random bits and hash them down to 256 bits; use repeated RDTSC calls as
auxiliary randomness input; etc. These details have essentially nothing
to do with the choice of cryptographic primitive, and the whole point of
/dev/urandom is to centralize these details and get them right rather
than having everybody reimplement them badly. It would be interesting to
understand how /dev/urandom failed for the repeated RSA primes---I'm
presuming here that /dev/urandom was in fact the main culprit.



Isn't /dev/urandom BY DEFINITION of limited true entropy? True entropy 
collection may take time (and is inescapably based on environmental 
assumptions), while /dev/urandom is defined as non-blocking. No matter 
the theoretical properties of the (deterministic) PRNG component of 
/dev/urandom, it cannot expand *true* entropy.


And this is so, no matter how many of the details you delegate to 
reputable security software developers.


Regards,

--
- Thierry Moreau

CONNOTECH Experts-conseils inc.
9130 Place de Montgolfier
Montreal, QC, Canada H2M 2A1

Tel. +1-514-385-5691
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Ben Laurie
On Fri, Feb 17, 2012 at 7:32 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
 Isn't /dev/urandom BY DEFINITION of limited true entropy?

$ ls -l /dev/urandom
lrwxr-xr-x  1 root  wheel  6 Nov 20 18:49 /dev/urandom -> random
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Marsh Ray

On 02/17/2012 01:32 PM, Thierry Moreau wrote:


Isn't /dev/urandom BY DEFINITION of limited true entropy?


It depends on the model you use.

In the model that makes sense to me -- one in which the attacker has 
finite computational resources (i.e., insufficient to brute-force the 
search space of your pool) and all output is produced through a one-way 
function -- the entropy perceived by the attacker is no different for 
/dev/urandom than for /dev/random.



True entropy
collection may take time (and is inescapably based on environmental
assumptions) while /dev/urandom is defined as non-blocking. No matter
the theoretical properties of the (deterministic) PRNG component of
/dev/urandom, they can not expand *true* entropy.


OK, but to what extent is this distinction between true and pseudo 
entropy equally theoretical when the system as a whole is considered?


These authors attempt to formalize this intuitive notion of 
"unpredictability entropy", which (somewhat surprisingly to me) they say 
has not been sufficiently considered as distinct from its traditional 
statistical and incompressibility definitions.

http://www.cs.bu.edu/~reyzin/papers/entropy.ps


And this is so, no matter the amount of details you delegate to reputed
security software developers.


It's still an interesting enough topic that honest people can disagree.

Personally, I'd like to see it get sorted out well enough that kernels 
can save the tens of KiB of nonpageable RAM they use for their entropy 
pools and instead size the pool at some small multiple of an attacker's 
brute-force capability. For example, we could estimate an upper bound on 
an attacker's resources at 2^80 and size the entropy pool at a generous 
256 bits to match a modern hash function. Make that a few separate 
accumulation pools for catastrophic reseeding and we're still well under 
100 bytes.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Nico Williams
On Fri, Feb 17, 2012 at 2:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
 If your /dev/urandom never blocks the requesting task irrespective of the
 random bytes usage, then maybe your /dev/random is not as secure as it might
 be (unless you have an high speed entropy source, but what is high speed
 in this context?)

I'd like for /dev/urandom to block, but only early in boot.  Once
enough entropy has been gathered for it to start, it should never
block.  One way to achieve this is to block boot progress early on
by reading from /dev/random, so there would be no need for
/dev/urandom itself to ever block.
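
A minimal sketch of that ordering, assuming a Linux-style /dev/random that
blocks until the pool is seeded (the 32-byte read size is illustrative):

    # Run once, early in boot, before anything generates keys:
    with open("/dev/random", "rb") as f:
        f.read(32)      # blocks until the kernel pool has gathered entropy

    # Every later consumer can then read the non-blocking device safely:
    with open("/dev/urandom", "rb") as f:
        seed = f.read(32)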

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Nico Williams
On Fri, Feb 17, 2012 at 2:51 PM, Jon Callas j...@callas.org wrote:
 On Feb 17, 2012, at 12:41 PM, Nico Williams wrote:
 On Fri, Feb 17, 2012 at 2:39 PM, Thierry Moreau
 thierry.mor...@connotech.com wrote:
 If your /dev/urandom never blocks the requesting task irrespective of the
 random bytes usage, then maybe your /dev/random is not as secure as it might
 be (unless you have an high speed entropy source, but what is high speed
 in this context?)

 I'd like for /dev/urandom to block, but only early in boot.  Once
 enough entropy has been gathered for it to start it should never
 block.  One way to achieve this is to block boot progress early enough
 in booting by reading from /dev/random, thus there'd be no need for
 /dev/urandom to ever block.

 I can understand why you might want that, but that would be wrong with a 
 capital W. The whole *point* of /dev/urandom is that it doesn't block. If you 
 want blocking behavior, you should be calling /dev/random. The correct 
 solution is to have early-stage boot code call /dev/random if it wants 
 blocking behavior.

I was hoping you'd read the second sentence, where I basically say
that /dev/urandom shouldn't block, that the system should not progress
past where /dev/urandom is needed until /dev/urandom has enough
entropy.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Marsh Ray

On 02/17/2012 02:51 PM, Jon Callas wrote:


On Feb 17, 2012, at 12:41 PM, Nico Williams wrote:


I'd like for /dev/urandom to block, but only early in boot.  Once
enough entropy has been gathered for it to start it should never
block.  One way to achieve this is to block boot progress early
enough in booting by reading from /dev/random, thus there'd be no
need for /dev/urandom to ever block.


I can understand why you might want that, but that would be wrong
with a capital W. The whole *point* of /dev/urandom is that it
doesn't block. If you want blocking behavior, you should be calling
/dev/random.


Alternatively, we could specify a /dev/nrandom which has the behavior 
Nico desires.



The correct solution is to have early-stage boot code
call /dev/random if it wants blocking behavior.


Except when /dev/random is equivalent to /dev/urandom, as in OpenBSD 
and whatever Ben just posted from (FreeBSD perhaps?).



(Note that I have completely ignored an argument of why blocking is
rarely a good idea, which is the reason people call /dev/urandom. No
one said software engineering was easy.)


"Don't block unless it's truly so soon after startup that the kernel's 
(nondecreasing) accumulated entropy estimate is pathologically low" 
ought to be a satisfiable requirement.


The guy who writes the ssh-keygen program shouldn't have to try to 
figure out whether he's being called from /etc/rc*; he should be able to 
get what he needs from a standard device.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Adam Back
Further the fact that the entropy seeding is so bad that some
implementations are generating literally the same p value (but seemingly
different q values) I would think you could view the fact that this can be
detected and efficiently exploited via batch GCD as an indication of an even
bigger problem.

Namely if the seeding is that bad you could outright compute all possible
values of p even for cases where p was not shared, by running through the
evidently small by cryptographic standards number of possible PRNG states...

Then you might be looking at more than 1% or whatever the number is that
literally collide in specific p value.  Assuming p is more vulnerable than
q, you could then use the same batch GCD to test.

Adam


On 16 February 2012 21:47, Nico Williams n...@cryptonector.com wrote:
 On Thu, Feb 16, 2012 at 12:28 PM, Jeffrey Schiller j...@qyv.net wrote:
 Are you thinking this is because it causes the entropy estimate in the RNG 
 to be higher than it really is? Last time I checked OpenSSL it didn't block 
 requests for numbers in cases of low entropy estimates anyway, so line 3 
 wouldn't reduce security for that reason.

 I  am thinking this because in low entropy cases where multiple boxes 
 generate the same first prime adding that additional entropy before the 
 second prime is generated means they are likely to generate a different 
 second prime leading to the GCD attack.

 I'd thought that you were going to say that so many devices sharing
 the same key instead of one prime would be better on account of the
 problem being more noticeable.  Otherwise I don't see the difference
 between one low-entropy case and another -- both are catastrophic
 failures.

 Nico
 --
 ___
 cryptography mailing list
 cryptography@randombit.net
 http://lists.randombit.net/mailman/listinfo/cryptography
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Peter Gutmann
Adam Back a...@cypherspace.org writes:

Further the fact that the entropy seeding is so bad that some implementations
are generating literally the same p value (but seemingly different q values)
I would think you could view the fact that this can be detected and
efficiently exploited via batch GCD as an indication of an even bigger
problem.

Do we know that this is accidental rather than deliberate?  A cute
optimisation for keygen would be to only randomly generate one half of the
{p,q} pair.  It's plenty of randomness after all, surely you don't really need
both to be generated randomly, only one will do, and it'll halve the keygen
time...

Peter.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Florian Weimer
* Werner Koch:

 On Wed, 15 Feb 2012 23:22, f...@deneb.enyo.de said:

 implementations seem to interpret it as a hard limit.  The V4 key
 format has something which the OpenPGP specification calls an
 expiration date, but its not really enforceable because it can be
 stripped by an attacker and extended by someone who has access to the
 private key, by creating a new self-signature.  In this sense, the

 The first part of your claim is wrong.  The expiration date can't be
 stripped by an attacker because it is bound by a self-signature to the
 key.  The self-signature is mandatory for OpenPGP keys.  In that sense
 it is the same as with the NotAfter date in X.509.

In X.509, certification signatures cover the value of the notAfter
attribute.  If I'm not mistaken, this is true for V3 keys as well.
However, when a V4 key is signed, the certification signature does not
cover the expiration date.  The key holder (legitimate or not) can
therefore arbitrarily extend the key life time, while keeping the key
in the web of trust.

This has advantages and disadvantages, of course.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Werner Koch
On Thu, 16 Feb 2012 11:00, f...@deneb.enyo.de said:

 In X.509, certification signatures cover the value of the notAfter
 attribute.  If I'm not mistaken, this is true for V3 keys as well.

Right.  They are also covered by the fingerprint (The fingerprint used
for X.509 is only a de-facto standard).

 However, when a V4 key is signed, the certification signature does not
 cover the expiration date.  The key holder (legitimate or not) can

Wrong.  Look at my key:

  :public key packet:
  version 4, algo 17, created 1199118275, expires 0
  pkey[0]: [2048 bits]
  pkey[1]: [224 bits]
  pkey[2]: [2046 bits]
  pkey[3]: [2048 bits]
  :user ID packet: Werner Koch w...@g10code.com
  :signature packet: algo 17, keyid F2AD85AC1E42B367
  version 4, created 1199118881, md5len 0, sigclass 0x13
  digest algo 11, begin of digest 2a 29
  hashed subpkt 27 len 1 (key flags: 03)
  hashed subpkt 9 len 4 (key expires after 11y2d12h35m)
 [...]
  subpkt 16 len 8 (issuer key ID F2AD85AC1E42B367)
  
The signature packet is the certification for the key and user ID.  A
signature packet consists of subpackets which may be either hashed or
unhashed.  Hashed subpackets are part of the signed material and thus
can't be removed.

You are right that RFC4880 does not demand that the key expiration date
is put into a hashed subpacket.  But not doing so would be stupid.


Salam-Shalom,

   Werner

-- 
Die Gedanken sind frei.  Ausnahmen regelt ein Bundesgesetz.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Florian Weimer
* Werner Koch:

 However, when a V4 key is signed, the certification signature does not
 cover the expiration date.  The key holder (legitimate or not) can

 Wrong.  Look at my key:

   :public key packet:
   version 4, algo 17, created 1199118275, expires 0
   pkey[0]: [2048 bits]
   pkey[1]: [224 bits]
   pkey[2]: [2046 bits]
   pkey[3]: [2048 bits]
   :user ID packet: Werner Koch w...@g10code.com
   :signature packet: algo 17, keyid F2AD85AC1E42B367
   version 4, created 1199118881, md5len 0, sigclass 0x13
   digest algo 11, begin of digest 2a 29
   hashed subpkt 27 len 1 (key flags: 03)
   hashed subpkt 9 len 4 (key expires after 11y2d12h35m)
  [...]
   subpkt 16 len 8 (issuer key ID F2AD85AC1E42B367)
   
 The signature packet is the certification for the key and user id.  A
 signature packet consist of subpackets which may either be hashed or
 unhashed.  Hashed subpackets are part of the signed material and thus
 can't be removed.

Isn't this a self-signature?  I was talking about third-party
signatures on the key.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Bodo Moeller

 Isn't this a self-signature?


Oh, in this case it's a self-signature. Werner, the problem (aka feature)
is that expiry according to self-signatures isn't carried forward into
third-party certification signatures -- so if an attacker gets hold of the
(not-so-)private key, the attacker can just extend the key lifetime as
needed. (This is unlike with the original V3 format where certifications
necessarily cover the expiry date, and unlike X.509 where certifications
always come with *some* notAfter date.)

Bodo
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Werner Koch
On Thu, 16 Feb 2012 13:03, bmoel...@acm.org said:

 Oh, in this case it's a self-signature. Werner, the problem (aka feature)
 is that expiry according to self-signatures isn't carried forward into
 third-party certification signatures -- so if an attacker gets hold of the

That depends on how the third party does the key-signing.  OpenPGP
allows providing an expiration date for the third-party certification
(aka key signing).  This solves the problem for OpenPGP CAs -- it does
not solve the general problem of CAs at all.

The commonly used WoT semantics don't require you to check the
expiration date of a passport or driver's license either.  The signature
expiration dates, as used by some folks, try to add some extra value
to their key signatures for no good reason: either the identity has
been verified or it has not -- the identity will not change after the
expiration date.  Even if you change your name later, back at the key
signing time you were known under the certified name.

 necessarily cover the expiry date, and unlike X.509 where certifications
 always come with *some* notAfter date.)

A better name for notAfter would be payableBefore.


Shalom-Salam,

   Werner

-- 
Die Gedanken sind frei.  Ausnahmen regelt ein Bundesgesetz.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Werner Koch
On Thu, 16 Feb 2012 12:30, bmoel...@acm.org said:

 I call it a protocol failure, you call it stupid, but Jon calls it a
 feature (http://article.gmane.org/gmane.ietf.openpgp/4557/).

It is a feature in the same sense as putting your thumb between the nail
head and the hammer to silently peg up a picture frame.


Salam-Shalom,

   Werner

-- 
Die Gedanken sind frei.  Ausnahmen regelt ein Bundesgesetz.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Jon Callas

On 16 Feb, 2012, at 3:30 AM, Bodo Moeller wrote:

 On Thu, Feb 16, 2012 at 12:05 PM, Werner Koch w...@gnupg.org wrote:
  
 You are right that RFC4880 does not demand that the key expiration date
 is put into a hashed subpacket.  But not doing so would be stupid.
 
 I call it a protocol failure, you call it stupid, but Jon calls it a 
 feature (http://article.gmane.org/gmane.ietf.openpgp/4557/).

That's not what I said. Or perhaps not what I meant.

I think it is indeed a feature that the expiry is part of the certification, 
not an intrinsic property of the key material. That permits you to do very 
cool things like rolling certification lifetimes.

Putting that into an unhashed packet is stupid, as Werner said.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Ben Laurie
On Thu, Feb 16, 2012 at 5:05 PM, Jeffrey I. Schiller j...@qyv.net wrote:
 What I found most interesting in Nadia's blog entry is this snippet of
 (pseudo) code from OpenSSL:

 1       prng.seed(seed)
 2       p = prng.generate_random_prime()
 3       prng.add_randomness(bits)
 4       q = prng.generate_random_prime()
 5       N = p*q

 In theory line 3 helps improve security by adding more entropy prior to
 generating the second prime Q. However, and this is
 counter-intuitive (like many things in security), it in fact reduces
 security in low-entropy situations. As she explains, a lot of the poor
 RSA keys found *may* be the results of key generations performed by
 embedded devices and things like home routers NOT LONG AFTER THEIR
 FIRST POWER ON. This would be a very low entropy time.[1]

 If line 3 was omitted, many devices would have the same key. This
 isn't great, but it is a far better situation than we have now with a
 lot of devices having the same first prime!

However, if line 3 was omitted, then something much worse would
happen: if a process forked, then the new process would produce the
same PRNG stream as the old one.

The problem with the pseudo-code is it doesn't reflect what's really
going on (the text explains it, but not why it does it). The purpose
of the extra entropy is to make the two PRNG streams different, not
to increase entropy. You're supposed to have sufficient entropy in the
first place.

So, the underlying issue is not a poor design choice in OpenSSL, but
poor seeding in some applications.
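
To make the failure mode concrete, here is a self-contained toy
reconstruction (not OpenSSL's actual code; the 64-bit primes and the
seeding scheme are purely illustrative): two devices start from the same
poor seed, mix in different clock values between the two primes, and end
up sharing p but not q, so a single gcd recovers p.

    import random
    from math import gcd

    def is_prime(n):
        # Miller-Rabin with the first 12 prime bases; deterministic for n < 3.3e24.
        if n < 2:
            return False
        small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
        for p in small:
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d, s = d // 2, s + 1
        for a in small:
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def gen_prime(rng, bits=64):
        while True:
            cand = rng.getrandbits(bits) | (1 << (bits - 1)) | 1
            if is_prime(cand):
                return cand

    def toy_keygen(pool_seed, clock):
        rng = random.Random(pool_seed)   # same starved pool on every device
        p = gen_prime(rng)
        rng.seed(pool_seed + clock)      # "add_randomness": only the clock differs
        q = gen_prime(rng)
        return p * q

    n1 = toy_keygen(pool_seed=42, clock=1000)   # two devices, same poor pool,
    n2 = toy_keygen(pool_seed=42, clock=1001)   # slightly different clocks
    p = gcd(n1, n2)                             # the shared prime falls right out
    print("recovered p =", p, "q1 =", n1 // p, "q2 =", n2 // p)
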
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread dj

 So, the underlying issue is not a poor design choice in OpenSSL, but
 poor seeding in some applications.

That's why we're putting it on-chip and in the instruction set from Ivy
Bridge onwards. http://en.wikipedia.org/wiki/RdRand


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Nico Williams
On Thu, Feb 16, 2012 at 12:28 PM, Jeffrey Schiller j...@qyv.net wrote:
 Are you thinking this is because it causes the entropy estimate in the RNG 
 to be higher than it really is? Last time I checked OpenSSL it didn't block 
 requests for numbers in cases of low entropy estimates anyway, so line 3 
 wouldn't reduce security for that reason.

 I  am thinking this because in low entropy cases where multiple boxes 
 generate the same first prime adding that additional entropy before the 
 second prime is generated means they are likely to generate a different 
 second prime leading to the GCD attack.

I'd thought that you were going to say that so many devices sharing
the same key instead of one prime would be better on account of the
problem being more noticeable.  Otherwise I don't see the difference
between one low-entropy case and another -- both are catastrophic
failures.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread D. J. Bernstein
  My thoughts exactly, I've always stayed away from DLP-based PKCs
  (except DH) because they're extraordinarily brittle, with RSA you
  have to get entropy use right just once, with DLP PKCs you have to
  get it right every single time you use them. For embedded systems
  in particular that's just too risky.
 Just to make it clear, if you re-use the random input value to a DSA
 signature, you not only compromise the signature, you compromise the
 private key. In my opinion this makes DSA much more brittle then RSA
 (I wrote a paper about this for one of the early NDSS papers).

ObPlug: DSA certainly has this problem (as illustrated by the Sony PS3
disaster), but there are other discrete-log systems such as Ed25519---

   http://ed25519.cr.yp.to
   
---that completely eliminate this class of problems. There's no need for
any randomness after the initial key generation; the same message always
generates the same signature.
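
A quick way to see that determinism, assuming the third-party PyNaCl
binding to Ed25519 is installed (illustrative only, not code from this
thread):

    from nacl.signing import SigningKey

    sk = SigningKey.generate()          # the only step that consumes randomness
    sig1 = sk.sign(b"same message").signature
    sig2 = sk.sign(b"same message").signature
    assert sig1 == sig2                 # Ed25519 signing is deterministic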

Ed25519 in particular uses just 32 bytes (256 bits) of randomness to
generate its key, and is deterministic after that. It's also reasonably
robust against deficiencies in these 256 bits: nothing goes wrong if,
e.g., the first 30% of the random bits are all replaced by 0, and the
system still isn't trivial to break if the first 70% of the random
bits are all replaced by 0.

There are of course more defenses that one can add to provide resilience
against more severe randomness deficiencies: one can start with more
random bits and hash them down to 256 bits; use repeated RDTSC calls as
auxiliary randomness input; etc. These details have essentially nothing
to do with the choice of cryptographic primitive, and the whole point of
/dev/urandom is to centralize these details and get them right rather
than having everybody reimplement them badly. It would be interesting to
understand how /dev/urandom failed for the repeated RSA primes---I'm
presuming here that /dev/urandom was in fact the main culprit.
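
A hedged sketch of that kind of belt-and-braces seeding in Python (the
particular auxiliary sources below are illustrative, not a recommendation):

    import hashlib, os, time

    pool = b"".join([
        os.urandom(64),                              # the OS RNG, whatever it has
        time.time_ns().to_bytes(8, "big"),           # coarse wall-clock time
        time.perf_counter_ns().to_bytes(8, "big"),   # high-resolution counter
        os.getpid().to_bytes(4, "big"),              # guessable, but costs nothing
    ])
    seed = hashlib.sha256(pool).digest()             # hash it all down to 256 bits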

---D. J. Bernstein
   Research Professor, Computer Science, University of Illinois at Chicago
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Jeffrey I. Schiller

On 02/16/2012 03:59 PM, Ben Laurie wrote:
 I will quote the text you have obviously not bothered to read:

 OpenSSL's RSA key generation functions this way: each time random
 bits are produced from the entropy pool to generate the primes p and
 q, the current time in seconds is added to the entropy pool.

 Or you could read the code.

I've read the code, I know how it works... That's my point. By adding
additional entropy (in this case the time) between the generation of P
and Q, you set up a situation where it is more likely that two hosts
will share a P but not a Q. Without that additional entropy, P and Q
would likely be the same.

I'm not placing blame or finding fault. As I said, it is
counter-intuitive that adding entropy would ever be a bad thing. But then
no one anticipated this particular attack vector. In fact I am quite
surprised (though probably I shouldn't be) that there are devices
where the entropy is/was so low that identical values would be chosen!

 As for ensuring that bits are not reused, the random state should
 be tossed and regenerated after the key is generated. As I recall,
 the source for PGP 5.0 did something like this.

 I challenge you to give a good justification for this strategy.

I was probably less precise in my wording than I should have
been. What I mean is that once P and Q are generated, the state of the
pseudo-random number generator should be destroyed and never used
again. Although in theory it shouldn't be possible to determine
previous state from current state (i.e., run it backwards to determine
P and Q based on the state when it was done doing so), it is probably
safer to assume that the previous state can be derived.

-Jeff

- --
___
Jeffrey I. Schiller
MIT Technologist, Consultant, and Cavy Breeder
Cambridge, MA 02139-4307
617.910.0259 - Voice
j...@qyv.net
http://jis.qyv.name
___

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-16 Thread Marsh Ray

On 02/16/2012 08:42 PM, Jeffrey I. Schiller wrote:


I've read the code, I know how it works... That's my point. By adding
additional entropy (in this case the time) between the generation of P
and Q you setup a situation where it is more likely that two hosts
will share a P but not a Q.


It is entirely possible to engineer a CSPRNG such that (from the 
perspective of the attacker) all valid values of P are equally likely.


Yes, this is a bit more difficult to do in the first two seconds after 
cold boot on platforms that weren't designed for it. But still, this is a 
basic test of competence for any crypto system.


If P is generated so badly that two identical Ps are ever likely to be 
produced in the history of the Earth, then the discussion stops there.



Without that additional entropy P and Q
would likely be the same.


There is no point in trying to reason about the security of Q. In my 
experience, it is likely to be counterproductive.


If there was some mechanism available by which Q could be generated 
securely after P was not, then you should have used that method for P.


The idea that mixing in the current Unix time between P and Q is going 
to significantly improve matters is particularly absurd. The date of 
generation is in the freaking certificate or PGP key!



I was probably less precise in my wording then I should have
been. What I mean is that once P and Q are generated, the state of the
pseudo random number generator should be destroyed and never used
again.


Your entropy pool has hopefully been accumulating entropy since system 
boot (or even longer if some was persisted to disk). By throwing away 
what you have, all you are doing is *guaranteeing* worst-case operation.



 Although in theory it shouldn't be possible to determine
previous state from current state (i.e., run it backwards to determine
P and Q based on the state when it was done doing so) it is probably
safer to assume that the previous state can be derived.


No, one-way functions do exist! If they do not, the system has far 
bigger problems.


If your CSPRNG is not using them, then it is broken and should be thrown 
out.


It's good to be conservative and not rely on more properties than are 
actually needed. But don't compromise simplicity and real robustness in 
the design in order to eliminate dependence on a fundamental and 
well-tested primitive like an OWF.
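
A minimal sketch of what a one-way state update buys, with SHA-256 standing
in for whatever primitive a real CSPRNG would use (the construction below
is illustrative only, not a recommended design):

    import hashlib, os

    class TinyHashDRBG:
        def __init__(self, seed: bytes):
            self.state = hashlib.sha256(b"init" + seed).digest()

        def reseed(self, extra: bytes):
            # Mix, never replace: old state is hashed together with new input.
            self.state = hashlib.sha256(b"reseed" + self.state + extra).digest()

        def random_bytes(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                out += hashlib.sha256(b"out" + self.state).digest()
                # One-way update: compromising the state later does not reveal
                # the states that produced earlier outputs.
                self.state = hashlib.sha256(b"next" + self.state).digest()
            return out[:n]

    drbg = TinyHashDRBG(os.urandom(32))
    p_candidate_bytes = drbg.random_bytes(64)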


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Peter Gutmann
Michael Nelson nelson_mi...@yahoo.com writes:

Paper by Lenstra, Hughes, Augier, Bos, Kleinjung, and Wachter finds that two
of every one thousand RSA moduli that they collected from the web offer no
security. An astonishing number of generated pairs of primes have a prime in
common.

The title of the paper, "Ron was wrong, Whit is right", I think is rather
misleading, since virtually all the DSA keys were PGP-generated and there was
only one ECDSA key, while there were vast numbers of RSA keys from all manner
of software.  So what it should really say is "PGP got DSA keygen right, the
sample size for ECDSA is too small to make any meaningful comment, and some RSA
implementations aren't so good".

Peter.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Ralph Holz
Hi,

 Paper by Lenstra, Hughes, Augier, Bos, Kleinjung, and Wachter finds that two
 of every one thousand RSA moduli that they collected from the web offer no
 security. An astonishing number of generated pairs of primes have a prime in
 common.
 
 The title of the paper Ron was wrong, Whit is right I think is rather
 misleading, since virtually all the DSA keys were PGP-generated and there was
 only one ECDSA key, while there were vast numbers of RSA keys from all manner
 of software.  So what it should really say is PGP got DSA keygen right, the
 sample size for ECDSA is too small to make any meaingful comment, and some RSA
 implementations aren't so good.

Their survey seems to be very nice work. But they reach the conclusion
in the abstract that RSA is significantly riskier than ElGamal/DSA. In
the body of the paper, they indicate (although already much more
defensively) that this is because you need two factors and more
randomness to go into the key creation. The larger number of weak RSA
keys is then taken as an indication that this is inherently a problem of
RSA. It's a leap. If you need more input, more can go wrong; but that
does not seem conclusive evidence here. It would be conclusive if they
compared keys created with the help of the same source of randomness and
the same primality testers.

Interestingly, in their own conclusions section they do not reiterate
the above statement.

Ralph

-- 
Ralph Holz
Network Architectures and Services
Technische Universität München
http://www.net.in.tum.de/de/mitarbeiter/holz/
PGP: A805 D19C E23E 6BBB E0C4  86DC 520E 0C83 69B0 03EF



___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Steven Bellovin

On Feb 14, 2012, at 10:02 PM, Jon Callas wrote:

 
 On 14 Feb, 2012, at 5:58 PM, Steven Bellovin wrote:
 
 The practical import is unclear, since there's (as far as is known) no
 way to predict or control who has a bad key.
 
 To me, the interesting question is how to distribute the results.  That
 is, how can you safely tell people you have a bad key, without letting
 bad guys probe your oracle.  I suspect that the right way to do it is to
 require someone to sign a hash of a random challenge, thereby proving
 ownership of the private key, before you'll tell them if the
 corresponding public key is in your database.
 
 Yeah, but if you're a bad guy, you can download the EFF's SSL Observatory and 
 just construct your own oracle. It's a lot like rainbow tables in that once 
 you learn the utility of the trick, you just replicate the results. If you 
 implement something like the Certificate Transparency, you have an 
 authenticated database of authoritative data to replicate the oracle with.
 
 Waving my hand and making software magically appear, I'd combine Certificate 
 Transparency and such an oracle be combined, and compute the status of the 
 key as part of the certificate logs and proofs.


Note that they very carefully didn't say how they did it.  I have my own ideas 
-- but they're just that, ideas; I haven't analyzed them carefully, let alone 
coded them.

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Alexander Klimov
On Wed, 15 Feb 2012, Ralph Holz wrote:

 But they reach this conclusion in the abstract that RSA is
 significantly riskier than ElGamal/DSA. In the body of the paper,
 they indicate (although they are much more defensive already) that
 this is due to the fact that you need two factors and more
 randomness to go into the key creation.

While the RSA may be easier to break if the entropy during the key
*generation* is low, the DSA is easier to break if the entropy during
the key *use* is low. Obviously, if you have access only to the public
keys, the first issue is more spectacular, but usually a key is used
more often than generated.

-- 
Regards,
ASK
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Ben Laurie
On Wed, Feb 15, 2012 at 4:56 PM, Ben Laurie b...@links.org wrote:
 On Wed, Feb 15, 2012 at 4:13 PM, Steven Bellovin s...@cs.columbia.edu wrote:

 On Feb 14, 2012, at 10:02 PM, Jon Callas wrote:


 On 14 Feb, 2012, at 5:58 PM, Steven Bellovin wrote:

 The practical import is unclear, since there's (as far as is known) no
 way to predict or control who has a bad key.

 To me, the interesting question is how to distribute the results.  That
 is, how can you safely tell people you have a bad key, without letting
 bad guys probe your oracle.  I suspect that the right way to do it is to
 require someone to sign a hash of a random challenge, thereby proving
 ownership of the private key, before you'll tell them if the
 corresponding public key is in your database.

 Yeah, but if you're a bad guy, you can download the EFF's SSL Observatory 
 and just construct your own oracle. It's a lot like rainbow tables in that 
 once you learn the utility of the trick, you just replicate the results. If 
 you implement something like the Certificate Transparency, you have an 
 authenticated database of authoritative data to replicate the oracle with.

 Waving my hand and making software magically appear, I'd combine 
 Certificate Transparency and such an oracle be combined, and compute the 
 status of the key as part of the certificate logs and proofs.


 Note that they very carefully didn't say how they did it.  I have my own 
 ideas -- but they're just that, ideas; I haven't analyzed them carefully, 
 let alone coded them.

 I did this years ago for PGP keys. Easy: take all the keys, do
 pairwise GCD. Took 24 hours on my laptop for all the PGP keys on
 keyservers at the time. I'm trying to remember when this was, but I
 did it during PETS at Toronto, so that should narrow it down. With
 Matthias XXX (I've forgotten his surname!).

 Now wish I'd repeated the experiment for SSL :-)

BTW, we wrote the code and ran the keys during PETS, so not exactly
rocket science.
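
For reference, the per-pair check really is a single gcd (toy numbers
standing in for 512-bit primes):

    from math import gcd

    n1 = 101 * 211        # device A: p = 101, q = 211
    n2 = 101 * 223        # device B: same p = 101, different q = 223
    p = gcd(n1, n2)       # 101 -- the shared prime falls out immediately
    q1, q2 = n1 // p, n2 // p
    assert (p, q1, q2) == (101, 211, 223)   # both private keys recovered
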
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Alexander Klimov
On Wed, 15 Feb 2012, Steven Bellovin wrote:
 Note that they very carefully didn't say how they did it.  I have my
 own ideas -- but they're just that, ideas; I haven't analyzed them
 carefully, let alone coded them.

If one limits the same-factor search to the keys of the same model of
device, one can even do the trivial pairwise search among a few
thousand keys, but the general technique is also not a secret:

https://freedom-to-tinker.com/blog/nadiah/new-research-theres-no-need-panic-over-factorable-keys-just-mind-your-ps-and-qs

-- 
Regards,
ASK
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Tom Ritter
On 15 February 2012 11:56, Ben Laurie b...@links.org wrote:
 I did this years ago for PGP keys. Easy: take all the keys, do
 pairwise GCD. Took 24 hours on my laptop for all the PGP keys on
 keyservers at the time. I'm trying to remember when this was, but I
 did it during PETS at Toronto, so that should narrow it down. With
 Matthias XXX (I've forgotten his surname!).

I mentioned this a few months ago, you had said you did it at PETS 2004. [0,1]

Something I found strange in their paper was this quote:

"PGP keys have no expiration dates or hashes. All public keys were
further analysed as described below." (bottom of page 4)

PGP keys *may* have no expiration date, but they may have one, and
anecdotally most I've seen do.  Likewise, nearly all keys have a
self-signed UID associated with them, and that signature uses a hash
algorithm.

-tom

[0] Original Thread:
http://lists.randombit.net/pipermail/cryptography/2011-September/001301.html
[1] Your Reply:
http://lists.randombit.net/pipermail/cryptography/2011-September/001305.html
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Peter Gutmann
Alexander Klimov alser...@inbox.ru writes:

While the RSA may be easier to break if the entropy during the key 
*generation* is low, the DSA is easier to break if the entropy during the key 
*use* is low. Obviously, if you have access only to the public keys, the first 
issue is more spectacular, but usually a key is used more often than generated.

My thoughts exactly, I've always stayed away from DLP-based PKCs (except DH) 
because they're extraordinarily brittle, with RSA you have to get entropy use 
right just once, with DLP PKCs you have to get it right every single time you 
use them.  For embedded systems in particular that's just too risky.

Peter.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Nico Williams
On Wed, Feb 15, 2012 at 5:57 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
 Alexander Klimov alser...@inbox.ru writes:

While the RSA may be easier to break if the entropy during the key
*generation* is low, the DSA is easier to break if the entropy during the key
*use* is low. Obviously, if you have access only to the public keys, the first
issue is more spectacular, but usually a key is used more often than 
generated.

 My thoughts exactly, I've always stayed away from DLP-based PKCs (except DH)
 because they're extraordinarily brittle, with RSA you have to get entropy use
 right just once, with DLP PKCs you have to get it right every single time you
 use them.  For embedded systems in particular that's just too risky.

Of course, if you're doing RSA key transport and the client selects
the key and has little or no entropy then the client still has a
problem (and the server may not know).

Most cryptographic protocols call for random keys, nonces,
confounders, IVs, and so on somewhere.  Typically the security of the
system depends to a large degree, if not entirely, on those random
items.

What can you do with RSA keys if you can't generate good entropy?  You
can sign.  What else?  You can encrypt messages small enough that
there's no need to generate a symmetric key for encrypting the message
(or you can chunk the message and encrypt each chunk).  Oh, there is
one thing one can do with RSA keys but without good enough entropy:
one can *ask* a remote system for entropy (the remote system encrypts
some entropy to the client's RSA public key, then signs this with the
server's private key) -- much better than having no good entropy at
all.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Jonathan Katz

On Wed, 15 Feb 2012, Steven Bellovin wrote:



On Feb 15, 2012, at 11:56:45 AM, Ben Laurie wrote:



I did this years ago for PGP keys. Easy: take all the keys, do
pairwise GCD. Took 24 hours on my laptop for all the PGP keys on
keyservers at the time. I'm trying to remember when this was, but I
did it during PETS at Toronto, so that should narrow it down. With
Matthias XXX (I've forgotten his surname!).



How many keys?  They had 11M keys, meaning there are 121e12 pairs.  That's
a lot of GCD operations...


The blog post here:

https://freedom-to-tinker.com/blog/nadiah/new-research-theres-no-need-panic-over-factorable-keys-just-mind-your-ps-and-qs

Explains how it can be done in essentially linear time.
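
A compact sketch of that product-tree/remainder-tree idea (Bernstein-style
batch GCD); the three toy moduli stand in for millions of real ones:

    from math import gcd

    def batch_gcd(moduli):
        # For each N_i, compute gcd(N_i, product of all the other moduli)
        # in quasilinear time via a product tree and a remainder tree.
        tree = [moduli]
        while len(tree[-1]) > 1:
            level = tree[-1]
            tree.append([level[i] * level[i + 1] if i + 1 < len(level)
                         else level[i]
                         for i in range(0, len(level), 2)])
        rems = tree.pop()            # the root is the product of everything
        while tree:
            level = tree.pop()
            rems = [rems[i // 2] % (x * x) for i, x in enumerate(level)]
        return [gcd(r // n, n) for r, n in zip(rems, moduli)]

    # 15 = 3*5 and 35 = 5*7 share the factor 5; 143 = 11*13 shares nothing.
    print(batch_gcd([15, 35, 143]))   # -> [5, 5, 1]
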
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Duplicate primes in lots of RSA moduli

2012-02-14 Thread Michael Nelson
Paper by Lenstra, Hughes, Augier, Bos, Kleinjung, and Wachter finds that two 
out of every one thousand RSA moduli that they collected from the web offer no 
security.  An astonishing number of generated pairs of primes have a prime in 
common.  Once again, it shows the importance of proper randomness (my remark).

http://www.nytimes.com/2012/02/15/technology/researchers-find-flaw-in-an-online-encryption-method.html?_r=1hp


The paper:

http://eprint.iacr.org/2012/064.pdf


Mike
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-14 Thread Steven Bellovin

On Feb 14, 2012, at 7:50:14 PM, Michael Nelson wrote:

 Paper by Lenstra, Hughes, Augier, Bos, Kleinjung, and Wachter finds that two 
 out of every one thousand RSA moduli that they collected from the web offer 
 no security.  An astonishing number of generated pairs of primes have a prime 
 in common.  Once again, it shows the importance of proper randomness (my 
 remark).
 
 http://www.nytimes.com/2012/02/15/technology/researchers-find-flaw-in-an-online-encryption-method.html?_r=1hp
 
 
 The paper:
 
 http://eprint.iacr.org/2012/064.pdf


The practical import is unclear, since there's (as far as is known) no
way to predict or control who has a bad key.

To me, the interesting question is how to distribute the results.  That
is, how can you safely tell people you have a bad key, without letting
bad guys probe your oracle.  I suspect that the right way to do it is to
require someone to sign a hash of a random challenge, thereby proving
ownership of the private key, before you'll tell them if the
corresponding public key is in your database.


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-14 Thread Jon Callas

On 14 Feb, 2012, at 5:58 PM, Steven Bellovin wrote:

 The practical import is unclear, since there's (as far as is known) no
 way to predict or control who has a bad key.
 
 To me, the interesting question is how to distribute the results.  That
 is, how can you safely tell people you have a bad key, without letting
 bad guys probe your oracle.  I suspect that the right way to do it is to
 require someone to sign a hash of a random challenge, thereby proving
 ownership of the private key, before you'll tell them if the
 corresponding public key is in your database.

Yeah, but if you're a bad guy, you can download the EFF's SSL Observatory and 
just construct your own oracle. It's a lot like rainbow tables in that once you 
learn the utility of the trick, you just replicate the results. If you 
implement something like the Certificate Transparency, you have an 
authenticated database of authoritative data to replicate the oracle with.

Waving my hand and making software magically appear, I'd have Certificate 
Transparency and such an oracle combined, and compute the status of the key 
as part of the certificate logs and proofs.

Jon

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-14 Thread Marsh Ray

On 02/14/2012 09:02 PM, Jon Callas wrote:


If you implement something like the
Certificate Transparency, you have an authenticated database of
authoritative data to replicate the oracle with.


How important is it that the data be authenticated/authoritative in this 
case?



Waving my hand and making software magically appear, I'd combine
Certificate Transparency and such an oracle be combined, and compute
the status of the key as part of the certificate logs and proofs.


CAs are sort of taking a beating in the public view these days. Such a 
service could be the kind of thing they either use as a QoS 
differentiator, or something they collaborate on as an industry to help 
build some public trust.


I bet there are some graduate students looking for nice, limited-scope 
summer internship projects...but it may be bigger scope than that.


- Marsh
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography