Further, the entropy seeding is so bad that some
implementations are generating literally the same p value (but seemingly
different q values); I would think you could view the fact that this can be
detected and efficiently exploited via batch GCD as an indication of an even
bigger
On 22/02/12 13:31 PM, Kevin W. Wall wrote:
So, let's bring this back to cryptography. I'm going to assume that
virtually all of you are somewhat altruistic and are not in this game just
to make a boatload of money by keeping all the crypto knowledge
within the secret priesthood thereby
Well, that was a long post, Marsh!
I think it is a good perspective. And it occurs to me that if this is a
real problem there might be a real solution.
I suggest going to NIST and asking them to run a design competition for
a hardware cell that produces good entropy. Hardware designs aka
On 02/22/2012 10:55 PM, Marsh Ray wrote:
I'm putting myself in the position of an engineer who's designing the
logic and writing some low-level firmware for the next consumer grade
$50 blue box home router/wifi/firewall appliance:
=== [cue dream sequence wavy blur effect]
I'm an EE
On 02/23/2012 02:27 PM, Ondrej Mikle wrote:
On 02/22/2012 10:55 PM, Marsh Ray wrote:
I'm putting myself in the position of an engineer who's designing the
logic and writing some low-level firmware for the next consumer grade
$50 blue box home router/wifi/firewall appliance:
=== [cue dream
On 02/24/2012 12:00 AM, Michael Nelson wrote:
Ondrej Mikle wrote:
I took the first 80 results from crunching the moduli and mapped them
back to certificates. In EFF's SSL Observatory there were 3912 unique
certs sharing those factorized moduli (all embedded devices), a couple
extra in
While commenting about
http://www.cs.bris.ac.uk/Research/CryptographySecurity/knowledge.html,
Marsh Ray wrote:
It talks about entropy exclusively in terms of 'unpredictability', which
I think misses the essential point necessary for thinking about actual
systems: Entropy is a measure of
On 02/22/2012 09:32 AM, Thierry Moreau wrote:
While commenting about
http://www.cs.bris.ac.uk/Research/CryptographySecurity/knowledge.html,
Marsh Ray wrote:
It talks about entropy exclusively in terms of 'unpredictability',
which I think misses the essential point necessary for thinking
On Wed, Feb 22, 2012 at 2:53 AM, James A. Donald jam...@echeque.com wrote:
On 2012-02-22 12:31 PM, Kevin W. Wall wrote:
1) They think that key size is the paramount thing; the bigger the
better.
2) They have no clue as to what cipher modes are. It's ECB by default.
3) More importantly, they
On 02/22/2012 05:49 PM, Jeffrey Walton wrote:
Remember, OpenSSL gave tacit approval: If it helps with debugging,
I'm in favor of removing them,
http://www.mail-archive.com/openssl-dev@openssl.org/msg21156.html.
The full quote from Ulf Möller is:
Kurt Roeckx schrieb:
What I currently see as
On Wed, Feb 22, 2012 at 7:37 PM, Marsh Ray ma...@extendedsubset.com wrote:
On 02/22/2012 05:49 PM, Jeffrey Walton wrote:
Remember, OpenSSL gave tacit approval: If it helps with debugging,
I'm in favor of removing them,
http://www.mail-archive.com/openssl-dev@openssl.org/msg21156.html.
The
On 2012-02-23 9:49 AM, Jeffrey Walton wrote:
On Wed, Feb 22, 2012 at 2:53 AM, James A. Donald jam...@echeque.com wrote:
On 2012-02-22 12:31 PM, Kevin W. Wall wrote:
1) They think that key size is the paramount thing; the bigger the
better.
2) They have no clue as to what cipher modes are. It's
On 21/02/12 04:22 AM, Thierry Moreau wrote:
Ben Laurie wrote:
On Sun, Feb 19, 2012 at 05:57:37PM +, Ben Laurie wrote:
In any case, I think the design of urandom in Linux is flawed and
should be fixed.
In FreeBSD random (and hence urandom) blocks at startup, but never again.
...
The
On 2012-02-21 10:57 PM, ianG wrote:
if you don't care that much, it's good enough. If you care
an awful lot, you have to do it yourself anyway.
My now outdated Crypto Kong maintained its own non volatile file of
randomness, stored it to disk on program shutdown. On each program
startup, it
On 02/21/2012 08:31 PM, Kevin W. Wall wrote:
Apologies for this being a bit OT as far as the charter of this list
goes, and perhaps a bit self-serving as well. I hope you will bear
with me.
Meh. I think I've seen worse. :-)
To a degree, I think it is more ignorance than it is outright
On Sun, Feb 19, 2012 at 05:57:37PM +, Ben Laurie wrote:
In any case, I think the design of urandom in Linux is flawed and
should be fixed.
Do you have specific suggestions?
Short of making it block, I can think of the following:
1. More distros may follow the suggestion in the Ensuring
On Mon, Feb 20, 2012 at 12:42 PM, Solar Designer so...@openwall.com wrote:
On Sun, Feb 19, 2012 at 05:57:37PM +, Ben Laurie wrote:
In any case, I think the design of urandom in Linux is flawed and
should be fixed.
Do you have specific suggestions?
Short of making it block, I can think
Ben Laurie wrote:
On Sun, Feb 19, 2012 at 05:57:37PM +, Ben Laurie wrote:
In any case, I think the design of urandom in Linux is flawed and
should be fixed.
In FreeBSD random (and hence urandom) blocks at startup, but never again.
Thanks for bringing this freebsd random source design
On Feb 20, 2012, at 9:22 AM, Thierry Moreau wrote:
First, let me put aside the initial entropy assessment issue -- it's not
solvable without delving into the details, and let me assume the freebsd
entropy collection is good, at the possible cost of slowing down the boot
process.
But that
On Mon, Feb 20, 2012 at 5:22 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
Then, basically the freebsd design is initial seeding of a deterministic
PRNG. If a) the PRNG design is cryptographically strong (a qualification
which can be fairly reliable if done with academic scrutiny),
On Mon, Feb 20, 2012 at 7:07 AM, Ben Laurie b...@links.org wrote:
In FreeBSD random (and hence urandom) blocks at startup, but never again.
So, not exactly a terribly wrong thing to do, eh? ;)
What OSes have parallelized rc script/whatever nowadays? Quite a few,
it seems (several Linux
Ben Laurie wrote:
On Fri, Feb 17, 2012 at 8:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
Ben Laurie wrote:
On Fri, Feb 17, 2012 at 7:32 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
Isn't /dev/urandom BY DEFINITION of limited true entropy?
$ ls -l /dev/urandom
On Sun, Feb 19, 2012 at 5:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
Ben Laurie wrote:
On Fri, Feb 17, 2012 at 8:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
Ben Laurie wrote:
On Fri, Feb 17, 2012 at 7:32 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
I was also pondering how the implementers could have arrived at
this situation while evaluating Stephen Farrell's draft idea to have
a service that double-checks at key-gen time (that none of the p, q
values are reused). http://www.ietf.org/id/draft-farrell-kc-00.txt
(Which I don't think
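Farrell's draft idea — a service consulted at key-generation time to verify that neither prime has been seen before — could be sketched roughly as below. The class and method names are hypothetical illustrations, not anything from the draft itself, which specifies the actual protocol:

```python
import hashlib

class PrimeReuseChecker:
    """Toy registry of primes reported at key-generation time (illustrative only)."""

    def __init__(self):
        self._seen = set()

    def check_and_register(self, prime: int) -> bool:
        # Store only a hash, so the service never retains the secret prime itself.
        digest = hashlib.sha256(str(prime).encode()).hexdigest()
        fresh = digest not in self._seen
        self._seen.add(digest)
        return fresh

checker = PrimeReuseChecker()
assert checker.check_and_register(10007)        # first sighting: key is OK
assert not checker.check_and_register(10007)    # reuse detected: reject the key
assert checker.check_and_register(10009)        # a different prime is fine
```

Even in hashed form, reporting primes to a third party raises its own trust questions, which is presumably part of the skepticism voiced above.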
On 2012-02-18 7:40 PM, Adam Back wrote:
Occam's razor suggests cryptographic incompetence.. number one reason
deployed systems have crypto fails. Who needs to hire crypto people,
the developer can hack it together, how hard can it be etc. There's a
psychological theory of why this kind of
On 18/02/12 23:05 PM, Peter Gutmann wrote:
Morlock Elloi morlockel...@yahoo.com writes:
Properly designed rngs should refuse to supply bits that have less than
specified (nominal) entropy. The requestor can go away or wait.
So you're going to sacrifice availability for some nebulous (to the
I missed a favorite of mine that I've personally found multiple times
in deployed systems from small (10k users) to large (mil plus users),
without naming the guilty:
4. The RNG used was rand(3), and while there were multiple entropy
additions, they were fed using multiple invocations of
It was (2), they didn't wait.
Come on -- every one of these devices is some distribution of Linux that comes
with a stripped-down kernel and Busybox. It's got stripped-down startup, and no
one thought that it couldn't have enough entropy. These are *network* people,
not crypto people, and the
On 02/16/2012 03:47 PM, Nico Williams wrote:
I'd thought that you were going to say that so many devices sharing
the same key instead of one prime would be better on account of the
problem being more noticeable. Otherwise I don't see the
On Fri, Feb 17, 2012 at 8:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
Ben Laurie wrote:
On Fri, Feb 17, 2012 at 7:32 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
Isn't /dev/urandom BY DEFINITION of limited true entropy?
$ ls -l /dev/urandom
lrwxr-xr-x 1 root
On Feb 16, 2012, at 9:52 AM, Marsh Ray wrote:
How often have we seen Linux distros generate SSH keys 2 seconds after the
first boot?
The paper that started this thread was talking about keys used for TLS, not
SSH. TLS certs are not usually generated during first boot. The devices had
plenty
On Sat, Feb 18, 2012 at 12:57:30PM -0500, Jeffrey I. Schiller wrote:
The problem is that ssh-keygen uses /dev/urandom and it should really
use /dev/random. I suspect that once upon a time it may have (I don't
have the history off hand) and someone got annoyed when it blocked and
solved the
On 02/18/2012 01:50 PM, Thor Lancelot Simon wrote:
Um, why would it ever _unblock_, on such a device under typical
first-boot conditions?
The idea would be that bootstrap would continue without the key being
generated. The key generation could then
On Feb 18, 2012, at 11:37 AM, Jeffrey I. Schiller wrote:
On 02/18/2012 01:50 PM, Thor Lancelot Simon wrote:
Um, why would it ever _unblock_, on such a device under typical
first-boot conditions?
The idea would be that bootstrap would
On 02/18/2012 03:02 PM, Paul Hoffman wrote:
Really? Many cryptographers would say that number of unpredictable
bits is very much a part of the question? ...
Of course it is. What I meant was that if /dev/random returns data,
its contract is to
D. J. Bernstein wrote:
[...]
There are of course more defenses that one can add to provide resilience
against more severe randomness deficiencies: one can start with more
random bits and hash them down to 256 bits; use repeated RDTSC calls as
auxiliary randomness input; etc. These details have
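The hash-down defense Bernstein mentions is easy to sketch: gather more raw input than you need, then compress it through a hash. This is a simplified illustration of the idea (the timing read stands in for his repeated-RDTSC suggestion), not a substitute for the OS CSPRNG:

```python
import hashlib
import os
import time

def seed_256(raw_inputs):
    # Compress arbitrarily many raw inputs down to a 256-bit seed; the
    # output carries (up to) 256 bits of the combined input entropy.
    h = hashlib.sha256()
    for chunk in raw_inputs:
        h.update(chunk)
    return h.digest()

seed = seed_256([
    os.urandom(64),                             # start with more bits than we keep
    time.monotonic_ns().to_bytes(8, "little"),  # auxiliary timing bits as belt and braces
])
assert len(seed) == 32  # 256-bit seed
```

The point is resilience: if any one input is weaker than hoped, the hash output is still as strong as the best of them combined.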
On Fri, Feb 17, 2012 at 7:32 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
Isn't /dev/urandom BY DEFINITION of limited true entropy?
$ ls -l /dev/urandom
lrwxr-xr-x 1 root wheel 6 Nov 20 18:49 /dev/urandom -> random
On 02/17/2012 01:32 PM, Thierry Moreau wrote:
Isn't /dev/urandom BY DEFINITION of limited true entropy?
It depends on the model you use.
In the model that makes sense to me, one in which the attacker has
finite computational resources (i.e., insufficient to brute-force the
search space of
On Fri, Feb 17, 2012 at 2:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
If your /dev/urandom never blocks the requesting task irrespective of the
random bytes usage, then maybe your /dev/random is not as secure as it might
be (unless you have a high-speed entropy source, but what
On Fri, Feb 17, 2012 at 2:51 PM, Jon Callas j...@callas.org wrote:
On Feb 17, 2012, at 12:41 PM, Nico Williams wrote:
On Fri, Feb 17, 2012 at 2:39 PM, Thierry Moreau
thierry.mor...@connotech.com wrote:
If your /dev/urandom never blocks the requesting task irrespective of the
random bytes
On 02/17/2012 02:51 PM, Jon Callas wrote:
On Feb 17, 2012, at 12:41 PM, Nico Williams wrote:
I'd like for /dev/urandom to block, but only early in boot. Once
enough entropy has been gathered for it to start it should never
block. One way to achieve this is to block boot progress early
Further, the entropy seeding is so bad that some
implementations are generating literally the same p value (but seemingly
different q values); I would think you could view the fact that this can be
detected and efficiently exploited via batch GCD as an indication of an even
bigger
Adam Back a...@cypherspace.org writes:
Further, the entropy seeding is so bad that some implementations
are generating literally the same p value (but seemingly different q values);
I would think you could view the fact that this can be detected and
efficiently exploited via batch GCD
* Werner Koch:
On Wed, 15 Feb 2012 23:22, f...@deneb.enyo.de said:
implementations seem to interpret it as a hard limit. The V4 key
format has something which the OpenPGP specification calls an
expiration date, but it's not really enforceable because it can be
stripped by an attacker and
On Thu, 16 Feb 2012 11:00, f...@deneb.enyo.de said:
In X.509, certification signatures cover the value of the notAfter
attribute. If I'm not mistaken, this is true for V3 keys as well.
Right. They are also covered by the fingerprint (The fingerprint used
for X.509 is only a de-facto
* Werner Koch:
However, when a V4 key is signed, the certification signature does not
cover the expiration date. The key holder (legitimate or not) can
Wrong. Look at my key:
:public key packet:
version 4, algo 17, created 1199118275, expires 0
pkey[0]: [2048 bits]
Isn't this a self-signature?
Oh, in this case it's a self-signature. Werner, the problem (aka feature)
is that expiry according to self-signatures isn't carried forward into
third-party certification signatures -- so if an attacker gets hold of the
(not-so-)private key, the attacker can just
On Thu, 16 Feb 2012 13:03, bmoel...@acm.org said:
Oh, in this case it's a self-signature. Werner, the problem (aka feature)
is that expiry according to self-signatures isn't carried forward into
third-party certification signatures -- so if an attacker gets hold of the
That depends on how the
On Thu, 16 Feb 2012 12:30, bmoel...@acm.org said:
I call it a protocol failure, you call it stupid, but Jon calls it a
feature (http://article.gmane.org/gmane.ietf.openpgp/4557/).
It is a feature in the same sense as putting your thumb between the nail
head and the hammer to silently peg up a
On 16 Feb, 2012, at 3:30 AM, Bodo Moeller wrote:
On Thu, Feb 16, 2012 at 12:05 PM, Werner Koch w...@gnupg.org wrote:
You are right that RFC4880 does not demand that the key expiration date
is put into a hashed subpacket. But not doing so would be stupid.
I call it a protocol failure,
On Thu, Feb 16, 2012 at 5:05 PM, Jeffrey I. Schiller j...@qyv.net wrote:
What I found most interesting in Nadia's blog entry is this snippet of
(pseudo) code from OpenSSL:
prng.seed(seed)
p = prng.generate_random_prime()
prng.add_randomness(bits)
q = prng.generate_random_prime()
So, the underlying issue is not a poor design choice in OpenSSL, but
poor seeding in some applications.
That's why we're putting it on-chip and in the instruction set from Ivy
Bridge onwards. http://en.wikipedia.org/wiki/RdRand
On Thu, Feb 16, 2012 at 12:28 PM, Jeffrey Schiller j...@qyv.net wrote:
Are you thinking this is because it causes the entropy estimate in the RNG
to be higher than it really is? Last time I checked OpenSSL it didn't block
requests for numbers in cases of low entropy estimates anyway, so line
My thoughts exactly, I've always stayed away from DLP-based PKCs
(except DH) because they're extraordinarily brittle, with RSA you
have to get entropy use right just once, with DLP PKCs you have to
get it right every single time you use them. For embedded systems
in particular that's
On 02/16/2012 03:59 PM, Ben Laurie wrote:
I will quote the text you have obviously not bothered to read:
OpenSSL's RSA key generation functions this way: each time random
bits are produced from the entropy pool to generate the primes p and
q, the
On 02/16/2012 08:42 PM, Jeffrey I. Schiller wrote:
I've read the code, I know how it works... That's my point. By adding
additional entropy (in this case the time) between the generation of P
and Q you setup a situation where it is more likely that two hosts
will share a P but not a Q.
It is
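The failure mode described above can be simulated in a few lines. This is a toy model, not OpenSSL's actual code: Python's `random` stands in for the PRNG, tiny primes stand in for 512-bit ones, and re-seeding stands in for `add_randomness`:

```python
import math
import random

def next_prime(n):
    # Naive primality test; fine for the small toy range used here.
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    while not is_prime(n):
        n += 1
    return n

def gen_key(seed, fresh_bits):
    prng = random.Random(seed)                        # prng.seed(seed)
    p = next_prime(prng.randrange(10**6, 2 * 10**6))  # p = generate_random_prime()
    prng.seed(seed ^ fresh_bits)                      # add_randomness(bits), e.g. the time
    q = next_prime(prng.randrange(10**6, 2 * 10**6))  # q = generate_random_prime()
    return p, q

# Two devices boot with the same starved seed but mix in different "time" bits:
p1, q1 = gen_key(seed=42, fresh_bits=1001)
p2, q2 = gen_key(seed=42, fresh_bits=2002)

assert p1 == p2  # identical initial seeds give identical p
# p divides both moduli, so a pairwise GCD across the two keys exposes it:
assert math.gcd(p1 * q1, p2 * q2) % p1 == 0
```

With no added randomness between p and q, both devices would produce the *whole* key identically (noticeable, but not factorable by an outsider); mixing in fresh bits mid-generation is exactly what creates the shared-p, distinct-q pairs that batch GCD breaks.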
Michael Nelson nelson_mi...@yahoo.com writes:
Paper by Lenstra, Hughes, Augier, Bos, Kleinjung, and Wachter finds that two
of every one thousand RSA moduli that they collected from the web offer no
security. An astonishing number of generated pairs of primes have a prime in
common.
The title of
Hi,
Paper by Lenstra, Hughes, Augier, Bos, Kleinjung, and Wachter finds that two
of every one thousand RSA moduli that they collected from the web offer no
security. An astonishing number of generated pairs of primes have a prime in
common.
The title of the paper Ron was wrong, Whit is
On Feb 14, 2012, at 10:02 PM, Jon Callas wrote:
On 14 Feb, 2012, at 5:58 PM, Steven Bellovin wrote:
The practical import is unclear, since there's (as far as is known) no
way to predict or control who has a bad key.
To me, the interesting question is how to distribute the results.
On Wed, 15 Feb 2012, Ralph Holz wrote:
But they reach this conclusion in the abstract that RSA is
significantly riskier than ElGamal/DSA. In the body of the paper,
they indicate (although they are much more defensive already) that
this is due to the fact that you need two factors and more
On Wed, Feb 15, 2012 at 4:56 PM, Ben Laurie b...@links.org wrote:
On Wed, Feb 15, 2012 at 4:13 PM, Steven Bellovin s...@cs.columbia.edu wrote:
On Feb 14, 2012, at 10:02 PM, Jon Callas wrote:
On 14 Feb, 2012, at 5:58 PM, Steven Bellovin wrote:
The practical import is unclear, since there's
On Wed, 15 Feb 2012, Steven Bellovin wrote:
Note that they very carefully didn't say how they did it. I have my
own ideas -- but they're just that, ideas; I haven't analyzed them
carefully, let alone coded them.
If one limits the same-factor search to the keys of the same model of
each
On 15 February 2012 11:56, Ben Laurie b...@links.org wrote:
I did this years ago for PGP keys. Easy: take all the keys, do
pairwise GCD. Took 24 hours on my laptop for all the PGP keys on
keyservers at the time. I'm trying to remember when this was, but I
did it during PETS at Toronto, so that
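The pairwise-GCD scan described here fits in a few lines of Python (toy 5-digit primes for speed; the real scan uses full-size moduli, and at the millions-of-keys scale a batch GCD via product trees replaces the quadratic pairwise loop):

```python
import math
from itertools import combinations

# Toy moduli: n1 and n2 accidentally share the prime factor 10007.
n1 = 10007 * 10009
n2 = 10007 * 10037
n3 = 10039 * 10061  # healthy key, shares no factor with the others

moduli = [n1, n2, n3]

# Any pairwise GCD greater than 1 is a common prime factor,
# which immediately factors both moduli involved.
for a, b in combinations(moduli, 2):
    g = math.gcd(a, b)
    if g > 1:
        print(f"shared factor {g}: {a} = {g} * {a // g}, {b} = {g} * {b // g}")
```

Note the asymmetry that makes the attack cheap: computing a GCD is fast, while factoring either modulus alone is assumed hard, so one unlucky shared prime breaks two keys at once.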
Alexander Klimov alser...@inbox.ru writes:
While the RSA may be easier to break if the entropy during the key
*generation* is low, the DSA is easier to break if the entropy during the key
*use* is low. Obviously, if you have access only to the public keys, the first
issue is more spectacular,
On Wed, Feb 15, 2012 at 5:57 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:
Alexander Klimov alser...@inbox.ru writes:
While the RSA may be easier to break if the entropy during the key
*generation* is low, the DSA is easier to break if the entropy during the key
*use* is low. Obviously, if
On Wed, 15 Feb 2012, Steven Bellovin wrote:
On Feb 15, 2012, at 11:56:45 AM, Ben Laurie wrote:
I did this years ago for PGP keys. Easy: take all the keys, do
pairwise GCD. Took 24 hours on my laptop for all the PGP keys on
keyservers at the time. I'm trying to remember when this was, but I
Paper by Lenstra, Hughes, Augier, Bos, Kleinjung, and Wachter finds that two
out of every one thousand RSA moduli that they collected from the web offer no
security. An astonishing number of generated pairs of primes have a prime in
common. Once again, it shows the importance of proper
On Feb 14, 2012, at 7:50:14 PM, Michael Nelson wrote:
Paper by Lenstra, Hughes, Augier, Bos, Kleinjung, and Wachter finds that two
out of every one thousand RSA moduli that they collected from the web offer
no security. An astonishing number of generated pairs of primes have a prime
in
On 14 Feb, 2012, at 5:58 PM, Steven Bellovin wrote:
The practical import is unclear, since there's (as far as is known) no
way to predict or control who has a bad key.
To me, the interesting question is how to distribute the results. That
is, how can you safely tell people you have a bad
On 02/14/2012 09:02 PM, Jon Callas wrote:
If you implement something like the
Certificate Transparency, you have an authenticated database of
authoritative data to replicate the oracle with.
How important is it that the data be authenticated/authoritative in this
case?
Waving my hand and