Re: [cryptography] How are expired code-signing certs revoked?

2011-12-09 Thread Thor Lancelot Simon
On Fri, Dec 09, 2011 at 01:01:05PM -0800, Jon Callas wrote:
> If you have a certificate issue a revocation for itself, there is an
> obvious, correct interpretation. That interpretation is what Michael
> Heyman said, and what OpenPGP does. That certificate is revoked and any
> subordinate certificates are also implicitly revoked. It's also like
> making a CRL for everything you issued.

Indeed.  Non-temporal logic is a very poor substitute for temporal logic
in any real-world situation.  But some simple definitions should make the
matter clear in any event:

Q: When is a certificate valid?
A: Until it is revoked, provided certain other conditions are also met.

Q: When is a certificate revoked?
A: At any time AFTER an authorized party revokes the certificate.

Q: Who is an authorized party for the purpose of revoking a certificate?
A: The signer of the certificate.*

* One can envision systems in which the rule is "...or the party identified
  by the certificate, too", but when talking about PKI generally, that is
  not the rule that is used.  Fortunately, self-signed certs let us reason
  about this issue in a vacuum.

Now the problem degenerates to the basic quarrel over retroactive
revocation.  But, depending on what your norms are there, with an
appropriate choice of temporal frame of reference it's no harder to solve.

Thor


Re: [cryptography] CAPTCHA as a Security System?

2012-01-02 Thread Thor Lancelot Simon
On Tue, Jan 03, 2012 at 01:57:10AM -0500, Randall Webmail wrote:
> There is one girl (and it is always a girl) who is at the control center.
> She comes to the checkout station to override the system when the shopper
> scans beer.  No one watches to see if you scan every item in your cart.
> Most people don't steal, and it's cheaper for Wal*Mart to allow the
> thieves to ply their trade than it is to put $8.00/hour girls in place to
> (mostly) stop those who do.

You have more faith in human nature (or perhaps a considerably less
sophisticated understanding of the costs of inventory shrinkage) than 
Walmart does.

Look up.



[cryptography] folded SHA1 vs HMAC for entropy extraction

2012-01-04 Thread Thor Lancelot Simon
I'm working on the entropy-pool code in NetBSD, which began its life
many years ago as a simplified implementation of the same ideas behind
the Linux /dev/random implementation.

The NetBSD implementation now keys a stream generator from the pool
rather than directly outputting pool bits, but the underlying entropy
gathering and mixing model is basically the same -- a very large LFSR
using SHA1 to distill entropy on output.

Eventually I will replace it with a multi-pool implementation like
Fortuna.  However, I'm trying to make incremental improvements while
waiting for that mythical great extent of free time to appear.

One thing that's always bothered me has been the use of an odd
folded SHA1 construct to generate output bits.  What is done is
this:
The pool is 4096 bits long.  It is hashed with SHA1, producing S1.

S1 is split in half, the first 80 bits and last 80 bits,
producing H1 and H2.

H1 xor H2 is computed to produce R, which is returned to the caller.

S1 is mixed back into the entropy pool as input.
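As a sketch (illustrative Python; my reading of the steps above, not the
actual NetBSD kernel code):

    import hashlib

    def folded_sha1_output(pool: bytes):
        s1 = hashlib.sha1(pool).digest()          # hash the whole pool
        h1, h2 = s1[:10], s1[10:]                 # two 80-bit halves
        r = bytes(a ^ b for a, b in zip(h1, h2))  # R = H1 xor H2
        return r, s1    # R goes to the caller; S1 is mixed back in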

The Linux code had a weakness in this area which is described by
Gutterman, Pinkas, and Reinman's paper from 2006.  I don't believe the
NetBSD code has this problem.  However, while looking at it I have
been wondering why something simpler and better analyzed than the folded
SHA should not be used.

In particular, I do not see why HMAC with different, known keys should not
be used:

The pool is 4096 bits long.  HMAC(K1, pool) is computed,
producing R, which is returned to the caller.

HMAC(K2, pool) is computed and mixed back into the entropy
pool as input.
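That is, a minimal sketch (illustrative Python; K1 and K2 are arbitrary
placeholders, the real values would only need to be fixed, distinct, and
public):

    import hashlib, hmac

    K1 = b"output"    # known, non-secret
    K2 = b"feedback"  # known, non-secret, different from K1

    def hmac_extract(pool: bytes):
        r  = hmac.new(K1, pool, hashlib.sha1).digest()  # to the caller
        fb = hmac.new(K2, pool, hashlib.sha1).digest()  # mixed back in
        return r, fb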

I would appreciate comments on this general idea.

-- 
 Thor Lancelot Simon                                    t...@panix.com
  "All of my opinions are consistent, but I cannot present them all
   at once."  -Jean-Jacques Rousseau, On The Social Contract


Re: [cryptography] folded SHA1 vs HMAC for entropy extraction

2012-01-05 Thread Thor Lancelot Simon
On Thu, Jan 05, 2012 at 12:45:14PM +1300, Peter Gutmann wrote:
> Thor Lancelot Simon <t...@panix.com> writes:
>
> > However, while looking at it I have been wondering why something
> > simpler and better analyzed than the folded SHA should not be used.
>
> Folding the output is belt-and-suspenders security: it denies an attacker
> direct access to the raw output of whatever the last stage of processing
> (3DES/AES/SHA1/HMAC-xxx/whatever) is.  For example, my generator is
> designed on the basis that any part of it should be able to fail
> completely (replacing a crypto step with memcpy() or using all-zero keys)
> without it affecting the security of the overall design, and to do that
> you need a lot of redundant security.  Sure, using HMAC is
> cryptographically sound, but what happens if your HMAC key is
> compromised, or an attacker can glitch the hashing operation, or
> something else goes wrong?

I'm proposing to use HMAC with two different, non-secret keys: one to
generate the data supplied to the output stage, one to generate the
data mixed back in.  It seems to me this uses the same number of
invocations of the hash function per output byte, and, unless I'm missing
something, the folding surely isn't _more_ secure.

Am I missing something?

Thor


Re: [cryptography] folded SHA1 vs HMAC for entropy extraction

2012-01-05 Thread Thor Lancelot Simon
On Fri, Jan 06, 2012 at 07:59:30AM +1100, ianG wrote:
> The way I treat this problem is that it is analogous to inventing one's
> own algorithm.  From that perspective, one can ask:

What is?  The folded SHA, or the use of HMAC?

You do understand why it's important to obscure what's mixed back in,
I assume.  If not, read the paper I referenced on the Linux RNG;
by insufficiently obscuring what went back into the pool, the
implementors made an attack with only 2^64 complexity possible.

With the constraint that you can't just output exactly what you
mix back in, a plain hash function without some further transformation
won't suffice, whether it's MD4 or SHA512.  I am asking whether the
use of HMAC with two different, well known keys, one for each purpose,
is better or worse than using the folded output of a single SHA
invocation for one purpose and the unfolded output of that same
invocation for the other.

Thor


Re: [cryptography] how many MITM-enabling sub-roots chain up to public-facing CAs ?

2012-02-14 Thread Thor Lancelot Simon
On Tue, Feb 14, 2012 at 03:51:16PM +0100, Ralph Holz wrote:
> Hi,
>
> > Well I am not sure how they can hope to go very far underground.  Any
> > and all users on their internal network could easily detect and
> > anonymously report the MITM cert for some public web site without any
> > significant risk of it being tracked back to them.  Game over.  So
> > removal of one CA from a major browser like Mozilla would pretty much
> > end this practice, if it is true that any CAs other than Trustwave
> > actually did this...
>
> If all users used a tool like Crossbear that does automatic reporting,
> yes.

Not really -- and this I think goes to the root of why what was done here
is so evil.

It is common practice on many networks in certain industries to deploy
SSL MITM devices which terminate, decrypt, examine, and reencrypt all
traffic.  However, the usual way to do this is to generate a new CA
certificate for the MITM device and load it into all the systems expected
to be connected to the network in question as a trusted root.

In this case, the owner of the network has chosen, by policy, to not
allow devices to perform SSL unless they trust the network's own CA,
and that CA has an effective policy which expressly allows it to
facilitate MITM of SSL traffic.  I do not find this unreasonable for
certain environments, and if users choose to bring their private devices
onto those networks, they have to take a positive step to facilitate
this examination of their traffic -- they have to install the MITM CA's
certificate as a trusted root.

But what Trustwave did is very, very different.  They sold a sub-root
that seems almost tailor-made to deceive users into thinking that MITM
was *not* taking place.  After all, if the intent were not to deceive
the network's users, the usual solution (where the client node's
administrator must accept the MITM device's CA) would have sufficed.

If the intent was not (primarily) to deceive but rather to allow MITM
device deployment with less administrative hassle, I can say only
these things:

A) It might be easier for me to get petty cash for my legitimate
   business purposes by mugging people in the street than by
   filling out corporate paperwork but that does not make it OK
   to mug people in the street.

B) If we are to believe Trustwave's claims about how they
   secured and audited the device on which this CA's keys were
   stored, is it really plausible that this was done for ease
   of administration, compared to the standard solution?

It is not so hard really to see the conceptual difference between the two
cases.  But to tools like Crossbear, they basically look the same.

Bad, bad, bad.

Thor

P.S. If one really wanted to know what CAs were in the business of selling
 these, one might try using any leverage one had handy to press the
 manufacturers of the MITM devices, who very likely know because their
 support or engineering personnel will have seen it in the field.  I
 can think of some pretty simple ways Mozilla could seek to obtain
 this information from the device manufacturers, if Mozilla wanted to
 play hardball.


Re: [cryptography] how many MITM-enabling sub-roots chain up to public-facing CAs ?

2012-02-14 Thread Thor Lancelot Simon
On Tue, Feb 14, 2012 at 09:13:11PM +0100, Ralph Holz wrote:
> > It is not so hard really to see the conceptual difference between the
> > two cases.  But to tools like Crossbear, they basically look the same.
>
> Why? Crossbear sends the full certificate chain it sees to the CB
> server, where it is compared with the full chain that the CB server
> sees (plus a few more servers, too, actually, that it can ask).
> Convergence, AFAICT, does the same. If you're inside the corporate
> network, the certificate chain in the SSL handshake cannot be the same,
> and both systems will detect it.

In both cases, Crossbear will detect a MITM device, yes?  But in one
case, the device is authorized to sign for the entities it's signing
certificates for, and in the other, it's not.

This does not in any way diminish the usefulness of Crossbear as a tool
for detecting MITM devices.  But what's interesting about what happens
in these two cases is that it's _whether the user is being deceived_
that differs.  Crossbear can't know that -- the user has to supply the
knowledge of whether there is, in fact, an authorized MITM in place.

And that is precisely what is wrong with what Trustwave did: they tried
to make it look like there was no MITM in place at all, when in fact there
was an unauthorized one, where "authorized" means the administrator of the
client node positively agreed to have that node's traffic MITMed.
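To make that concrete, the comparison described above amounts to roughly
the following (purely illustrative; this is not Crossbear's actual wire
protocol):

    import hashlib

    def chain_fingerprint(chain_der):
        # hash the DER certificates of the presented chain, in order
        h = hashlib.sha256()
        for cert in chain_der:
            h.update(cert)
        return h.hexdigest()

    # Any MITM on the client's path yields a different chain, so
    # chain_fingerprint(client_chain) != chain_fingerprint(notary_chain),
    # whether or not the MITM was authorized.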

Thor


Re: [cryptography] how many MITM-enabling sub-roots chain up to public-facing CAs ?

2012-02-14 Thread Thor Lancelot Simon
On Tue, Feb 14, 2012 at 09:35:45PM +0100, Ralph Holz wrote:
> As Crossbear's assessment is not something everyday users will
> understand, we ourselves view Crossbear as the tool that, e.g., a
> travelling security aficionado/hacker/interested person might want to
> use, but not your average guy. Our goal is to find out how many MITM
> attacks actually happen, and how, and where. That's why Crossbear has
> this second component, the hunting tasks.

Interesting -- will this work, in the case of authorized MITM of the
network the client's on?  The second SSL connection will always fail,
since the MITM device will MITM it.  Perhaps there should be an option
to retrieve results separately and later?

Thor


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Thor Lancelot Simon
On Sat, Feb 18, 2012 at 12:57:30PM -0500, Jeffrey I. Schiller wrote:
> The problem is that ssh-keygen uses /dev/urandom and it should really
> use /dev/random. I suspect that once upon a time it may have (I don't
> have the history offhand) and someone got annoyed when it blocked and
> solved the problem.

Um, why would it ever _unblock_, on such a device under typical
first-boot conditions?

Thor


Re: [cryptography] Bitcoin in endgame

2012-02-24 Thread Thor Lancelot Simon
On Sat, Feb 25, 2012 at 05:08:44AM +1000, James A. Donald wrote:
> If the users of bitcoin are primarily criminals, that is pretty much
> what the founders intended.  Every middle-class man of affairs and
> business commits three felonies a day.
>
> The paper presupposes that criminals are such horrible people that
> everything they touch turns to shit.
>
> My experience is that ordinary unorganized criminals are anti-social
> and need killing, but that organized crime is pretty good, or at
> least not nearly so bad as police and regulatory authorities.

Is the major purpose of this mailing list really the discussion of
political and social theory?  I thought I had subscribed to
cryptography@randombit.net, not to "I already spent four years doing
political science, thanks".

Thor


Re: [cryptography] Symantec/Verisign DV certs issued with excessive validity period of 6 years

2012-04-24 Thread Thor Lancelot Simon
On Tue, Apr 24, 2012 at 12:07:33PM -0500, Nico Williams wrote:
> On Tue, Apr 24, 2012 at 11:20 AM, Marsh Ray <ma...@extendedsubset.com> wrote:
> > On 04/23/2012 08:47 PM, Peter Maxwell wrote:
> >
> > I look at it this way:
> >
> > * Revocation is junk. It doesn't work. It especially doesn't work when
> >   an attacker wants it not to work.
> >
> > It is so broken that Chrome isn't even going to bother with OCSP
> > checking anymore:
> > http://www.imperialviolet.org/2012/02/05/crlsets.html
>
> But this too is revocation.
>
> Assuming some revocation scheme works at all, then longer-lived certs
> merely increase the size of the revocation database.  This is at least
> obnoxious.
>
> If no revocation scheme works then the only revocation mechanism left
> is certificate expiry.  Until now no revocation mechanism has worked
> well or universally, so shorter certificate lifetimes are better.
>
> That said, short certificate lifetimes do nothing to mitigate
> undetected private key compromises when the new certificates have the
> same subject public key as the ones they replace.

That said, those who follow the relevant NIST recommendations will not
do that (reuse keys when writing new certs).  The recommendations on
cryptoperiod are with regard to keys, with a recommendation per algorithm,
and though there are many ugly holes punched in the standard for commercial
PKI implementations, I do not see one that would allow writing a new cert
with an old key if the old key is past its allowed cryptoperiod.

NIST says 2048-bit RSA keys should have a 3-year lifetime.  Who here really
wants to explain to customers (or investors!) that he willfully ignored
that recommendation and just reused the same old key when making the CSR
for that new certificate?
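Detecting that sort of key reuse is easy enough; a minimal sketch,
assuming the third-party Python "cryptography" package:

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def same_key(old_pem: bytes, new_pem: bytes) -> bool:
        def spki(pem):
            cert = x509.load_pem_x509_certificate(pem)
            return cert.public_key().public_bytes(
                serialization.Encoding.DER,
                serialization.PublicFormat.SubjectPublicKeyInfo)
        # identical SubjectPublicKeyInfo means the key was reused
        return spki(old_pem) == spki(new_pem)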

Thor


Re: [cryptography] Microsoft Sub-CA used in malware signing

2012-06-04 Thread Thor Lancelot Simon
On Mon, Jun 04, 2012 at 10:20:33AM +0200, Erwann Abalea wrote:
> It's also not clear what could have been done with TS certificates.
> Is it only code signing, or TLS server as well?

I'm surprised they can be used for code signing at all.  TS (in its modern
incarnation) is a TLS-encapsulated protocol.

Thor


Re: [cryptography] Microsoft Sub-CA used in malware signing

2012-06-12 Thread Thor Lancelot Simon
On Tue, Jun 12, 2012 at 10:51:59AM -0500, Marsh Ray wrote:
> What is unclear is if there are any effective costs or rate
> limitations on how often one can 'activate' an MSTS license server.
> A compute cluster faster than 200 PS3s could cut down on the number
> of license certs that were burned to make the attack work.

One wonders what Microsoft knows about who requested all those licenses.
Presumably there was some effort put into plausible deniability.

Thor


Re: [cryptography] Intel RNG

2012-06-19 Thread Thor Lancelot Simon
On Mon, Jun 18, 2012 at 09:58:59PM -0700, coderman wrote:
> this is very useful to have in some configurations (not just testing).
> for example: a user-space entropy daemon consuming raw, biased,
> un-whitened, full-throughput bits of lower entropy density which are
> run through sanity checks, entropy estimates, and other vetting before
> mixing/obscuring state, and feeding into host or application entropy
> pools.

Sanity checks, entropy estimates, and other vetting *which the output
of a DRBG keyed in a known way by your adversary will pass without
a hint of trouble*.

It seems to me the only reason you'd benefit from access to the raw
source would be if you believed Intel might have goofed the sanity
checks.  For my part, I am happy to rely on CRI's assurance that Intel's
sanity checks are good.

The only defense against a deliberately compromised hardware RNG is to
mix it with something else.
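Concretely, that defense can be as simple as never exposing the hardware
output except through a hash over it and at least one independent source
(a sketch; the second source here is just a stand-in):

    import hashlib, os

    def mixed_output(hw_bytes: bytes) -> bytes:
        # an adversary must control *both* inputs to control the output
        other = os.urandom(32)   # stand-in for an independent pool
        return hashlib.sha256(hw_bytes + other).digest()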



Re: [cryptography] Intel RNG

2012-06-19 Thread Thor Lancelot Simon
On Tue, Jun 19, 2012 at 07:35:03PM -0700, coderman wrote:
> is there any literature on the typical failure modes of TRNG/entropy
> sources in deployed systems?
>
> my understanding is that they tend to fail catastrophically, in a way
> easily detected by FIPS sanity checks. E.g. clearly broken.

I know of one case in which a design mistake may have caused related bits
to be output.  I think the FIPS statistical tests might have turned it
up, but the continuous-output test might well not have.

This was a design by Hifn where they reused an existing RNG block but
changed the output LFSR and thus had to rework the interface to the
register exposed on the PCI bus in which they reported results.  They left
out a latch, so you could accidentally get the same bits from the LFSR
twice, or get an intermediate state in which some bits were from the
previous state and some were fresh.  The COT would have caught the former,
but given the clocks involved the former case would have been very, very
unlikely.  It would not have caught the latter.

I never got a clear answer from Hifn whether they actually left the latch
out of the silicon or just out of the documentation.  However, I tried
very hard to give them opportunities to tell me it was just the docs that
were wrong, and they didn't.  The workaround was to simply read the register
repeatedly, discarding results, until one knew all the bits had to be fresh
given the other clocks involved; inefficient, but it got the job done.

Thor


[cryptography] Devices and protocols that require PKCS 1.5 padding

2012-07-02 Thread Thor Lancelot Simon
On Mon, Jul 02, 2012 at 01:26:20PM -0400, Matthew Green wrote:
> More generally, padding oracle attacks exist against OAEP as well
> (Manger's attack). In practice you typically have to construct the
> oracle by measuring a timing differential in the decryption process.
> That's hard over a network, but if you're directly attached to the
> device and have a cycle-accurate timer at your disposal, maybe not.
>
> These devices are slow!

Indeed they are.  And they are part of a class of slow devices that,
of course, includes not just so-called OTP tokens that have smartcard
functionality, but pure smartcards and their close cousins TPM chips.

I am in particular worried about protocols like OpenPGP, which *require*
version 1.5 padding and which are often used with these kinds of devices
by the security conscious on the theory that security is thereby improved.

Where these protocols can't be changed I suppose the only real
countermeasure is fairly draconian blinding implemented in software around
all the hardware implementations of RSA operations with v1.5 padding.
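Such blinding would look roughly like this (a toy sketch with bare
integers; device_raw_rsa stands in for the token's raw private-key
operation, and a real implementation needs a fresh r per call and
constant-time arithmetic):

    import math, secrets

    def blinded_decrypt(device_raw_rsa, c: int, e: int, n: int) -> int:
        while True:                      # pick r coprime to n
            r = secrets.randbelow(n - 2) + 2
            if math.gcd(r, n) == 1:
                break
        c_blind = (c * pow(r, e, n)) % n
        m_blind = device_raw_rsa(c_blind)     # device computes c_blind^d mod n
        return (m_blind * pow(r, -1, n)) % n  # unblind: divide by r

The device, and any timing oracle riding on it, sees only c * r^e mod n,
which is independent of c.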

Besides PGP, what other standard, widely-deployed protocols require the
use of padding types other than OAEP?

Thor


[cryptography] So, PKI lets know who we're doing business with?

2013-01-08 Thread Thor Lancelot Simon
What do you do if even they don't know?  Today I tried to help someone
who was mid-transaction on Amex's cardholder web site, associating a
new card with their account, when the next step of their process hopped
us over to https://www203.americanexpress.com.

Which has an EV certificate from VeriSign that expired in October of last
year.  Of course this is more likely due to error than malfeasance, but
nonetheless: it's what it would look like, eventually, if an attacker stole
a private key just once, right?  So this isn't something you want to go
typing your financial secrets into.
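For what it's worth, fetching the offending certificate and reading its
expiry takes only a few lines (no validation is performed here, which is
the point; parsing assumes the third-party "cryptography" package):

    import ssl
    from cryptography import x509

    host = "www203.americanexpress.com"
    pem = ssl.get_server_certificate((host, 443))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print(host, "notAfter =", cert.not_valid_after)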

Approximately an hour on the phone with American Express produced
approximately as much head-scratching among Amex employees as on my
end.  An expired certificate for a back-end server isn't among the
problems their online services help desk knows how to test for, or even
report.  Their fraud protection department refers all complaints
of web site misbehavior, even security-related, to their online services
help desk.  Their high-limit corporate card support team can create
tickets in their web development queue but evidently does not have
contact information for any relevant security department at American
Express.  The technical contacts for their domain don't answer the
phone.

In other words, even *they* don't know if the certificate in question
really vouches for them or not, and don't have any way to find out.

Can we really expect that end users will ever get that decision right?
Sure.  Sure we can.

Thor


[cryptography] Isn't it odd that...

2013-01-29 Thread Thor Lancelot Simon
...despite all the attacks we've seen on compression-before-encryption,
and all the timing attacks we've seen on encryption, and the highly
data-dependent computational effort associated with compression
(including in widespread, hardware-accelerated, and valuable applications
like lossy video compression, where the complexity of the input, even in
applications where hardware escrow of some kind hides a session key, is
exposed by the encoding of the output)...

...we haven't really seen any known-plaintext key-recovery attacks
facilitated by timing analysis of compressors applied prior to encryption?
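The data-dependence is trivially observable; a toy measurement of deflate
on two equal-length inputs:

    import os, time, zlib

    for name, data in [("zeros", bytes(1 << 20)),
                       ("random", os.urandom(1 << 20))]:
        t0 = time.perf_counter()
        zlib.compress(data, 6)
        print(name, "%.4fs" % (time.perf_counter() - t0))

The timings differ markedly, and the difference is a function of the
plaintext.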

-- 
 Thor Lancelot Simon                                    t...@panix.com

  "It's very complicated.  It's very cumbersome.  There's a
   lot of numbers involved with it."


Re: [cryptography] 100 Gbps line rate encryption

2013-07-16 Thread Thor Lancelot Simon
On Tue, Jul 16, 2013 at 03:23:01AM -0400, William Allen Simpson wrote:
> On 6/22/13 8:24 PM, Greg Rose wrote:
> >
> > On Jun 22, 2013, at 15:31, James A. Donald <jam...@echeque.com> wrote:
> >
> > > On 2013-06-23 6:47 AM, Peter Maxwell wrote:
> > > > I think Bernstein's Salsa20 is faster and significantly more secure
> > > > than RC4, whether you'll be able to design hardware to run at
> > > > line-speed is somewhat more questionable though (would be
> > > > interested to know if it's possible right enough).
> > >
> > > I would be surprised if it is faster.
> >
> > Be surprised, then... almost all of the recent word- or block-oriented
> > stream ciphers are faster than RC4. And NOTHING should still be using
> > RC4; by today's standards it is quite insecure.
>
> So I spent some (much too much) time reading old PPP archives on our
> earlier discussions selecting an algorithm.  Sadly, 3DES was chosen,
> but rarely implemented.
>
> I cobbled together a draft based on old discussion for ARC4.  It
> surely needs more work.  Although (as you mention) that's old stuff,
> it has the advantage of having running code in most existing systems,
> and could be rolled out quickly on high speed connections.
>
> http://tools.ietf.org/html/draft-simpson-ppp-arc4-00

If you're really going to publish a new RFC -- even an Experimental
one -- using RC4, you should really use RC4-drop-N.  For even moderately
sized packets and reasonable values of N, if you effectively rekey every
packet, you will end up wasting 25-50% of the throughput of the system.
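For reference, RC4-drop-N just runs the generator N steps past key setup
before emitting anything, which is exactly the per-rekey cost at issue
(a sketch):

    def rc4_drop_n(key: bytes, n: int):
        S = list(range(256))
        j = 0
        for i in range(256):                          # key scheduling
            j = (j + S[i] + key[i % len(key)]) & 0xff
            S[i], S[j] = S[j], S[i]
        i = j = 0
        def next_byte():
            nonlocal i, j
            i = (i + 1) & 0xff
            j = (j + S[i]) & 0xff
            S[i], S[j] = S[j], S[i]
            return S[(S[i] + S[j]) & 0xff]
        for _ in range(n):                            # discard first N bytes
            next_byte()
        return next_byte    # call once per keystream byte

With, say, N=768 and 1500-byte packets, rekeying per packet discards
roughly a third of the generator's output.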

Conclusion: RC4 is particularly poorly suited for this application
in the modern day.

Thor


Re: [cryptography] 100 Gbps line rate encryption

2013-07-17 Thread Thor Lancelot Simon
On Wed, Jul 17, 2013 at 03:50:50AM -0400, William Allen Simpson wrote:
> On 7/16/13 11:15 AM, Matthew Green wrote:
> > http://www.isg.rhul.ac.uk/tls/RC4biases.pdf
>
> Thanks for bringing this pre-print link to my attention!
>
> > In summary, don't use RC4. Don't use it carelessly with IVs. And don't
> > use RC4.
>
> RC4 is available in many libraries and platforms.  For the
> immediate future, it is most easily and likely implemented.

So is single-DES.

Thor


Re: [cryptography] urandom vs random

2013-08-16 Thread Thor Lancelot Simon
On Fri, Aug 16, 2013 at 10:33:11PM -0400, shawn wilson wrote:
> On Fri, Aug 16, 2013 at 10:01 PM, James A. Donald <jam...@echeque.com> wrote:
> > At startup, likely to be short of entropy.
> >
> > If /dev/urandom seeded at startup, and then seeded no further, bad, but
> > not very bad.
> >
> > If /dev/urandom seeded at startup from /dev/random, then should block
> > at startup.
> >
> > If /dev/urandom never blocks, bad.  Should block at startup waiting to
> > receive 160 bits from /dev/random, and never block again.
>
> I don't follow this - I understand why lack of entropy should block
> urandom but, why shouldn't it block on a running system that
> low_bound?

Please explain what it means, exactly, to reduce the amount of
entropy in the system in question.

Emphasis on exactly.

Thor


Re: [cryptography] Random number generation influenced, HW RNG

2013-09-07 Thread Thor Lancelot Simon
On Sun, Sep 08, 2013 at 08:34:53AM +1000, James A. Donald wrote:
> Well, since you personally did this, would you care to explain the
> very strange design decision to whiten the numbers on chip, and not
> provide direct access to the raw unwhitened output.

You know as soon as anyone complained about this, they turned around
and provided access to the unwhitened output in the next major version
of the same product family, right?

> A decision that, even assuming the utmost virtue on the part of the
> designers, leaves open the possibility of malfunctions going
> undetected.

And one that echoes the choice that about 50% of the other people who have
built hardware random number generators also made.

> That is a question a great many people have asked, and we have not
> received any answers.

No answers aside from Intel actually providing exactly what you asked
for, next chance they got.

> Access to the raw output would have made it possible to determine
> that the random numbers were in fact generated by the physical
> process described, since it is hard and would cost a lot of silicon
> to simulate the various subtle off-white characteristics of a
> well-described actual physical process.

I am extremely skeptical of this claim.

Thor


Re: [cryptography] Random number generation influenced, HW RNG

2013-09-08 Thread Thor Lancelot Simon
On Sun, Sep 08, 2013 at 03:00:39PM +1000, James A. Donald wrote:
> On 2013-09-08 1:25 PM, Thor Lancelot Simon wrote:
> > On Sun, Sep 08, 2013 at 08:34:53AM +1000, James A. Donald wrote:
> > > Well, since you personally did this, would you care to explain the
> > > very strange design decision to whiten the numbers on chip, and not
> > > provide direct access to the raw unwhitened output.
> >
> > You know as soon as anyone complained about this, they turned around
> > and provided access to the unwhitened output in the next major version
> > of the same product family, right?
>
> I am not aware of this.  Could you provide further details?

http://software.intel.com/en-us/blogs/2012/11/17/the-difference-between-rdrand-and-rdseed

> And since no one needs high-bandwidth true random numbers, why the
> on-chip whitening?  Surely there was some internal discussion of
> this decision?

A) I don't agree with your premise.

B) Are you under the misapprehension that I work for Intel?



[cryptography] Cryptanalysis of RADIUS MD5 cipher?

2015-02-04 Thread Thor Lancelot Simon
For at least 15 years there's been general grumbling that the MD5-based
stream cipher used for confidentiality in RADIUS looks like snake oil.
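For reference, the construction in question (my paraphrase of RFC 2865's
User-Password hiding; a sketch, not vetted code) is MD5 run in a sort of
ciphertext-feedback mode over the shared secret:

    import hashlib

    def radius_hide(secret: bytes, req_auth: bytes, password: bytes) -> bytes:
        password += b"\x00" * (-len(password) % 16)  # pad to 16-byte blocks
        out, prev = b"", req_auth
        for i in range(0, len(password), 16):
            block = hashlib.md5(secret + prev).digest()  # keystream block
            prev = bytes(p ^ b for p, b in zip(password[i:i+16], block))
            out += prev
        return out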

Given how widely used the protocol is, and the failure of various successor
protocols (cute names and all -- TANGENT anyone?) I have always been surprised
that the cipher seems not to have received any serious cryptanalytic
attention.  On the other hand I am not mathy enough to frequently read the
primary literature.

Does anyone know of any work that's been done on this?

Thor


Re: [cryptography] Fwd: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-27 Thread Thor Lancelot Simon
On Fri, Apr 22, 2016 at 04:49:54PM +0200, Sven M. Hallberg wrote:
> > I developed a different approach, which I call Linux Random Number
> > Generator (LRNG), to collect entropy within the Linux kernel. The main
> > improvements compared to the legacy /dev/random is to provide
> > sufficient entropy during boot time as well as in virtual environments
> > and when using SSDs.
> 
> After reading the paper, it is not clear to me how this goal is
> achieved. As far as I can see, no new sources of entropy are
> introduced; in fact a point is made to use only interrupt timings,
> arguing that these effectively include other events. Why does this
> design make more entropy available during boot and with solid-state
> storage?

Indeed.  I think it would be desirable to clearly separate work to
reorganize and modernize the core entropy-gathering and DRBG
constructions (which closely parallels work Taylor Campbell and I
did in NetBSD) from work to ingest more unpredictable data at
boot time (which I also subsequently did in NetBSD).

On the former score (improvement and modernization of the
mechanism) I think in NetBSD we still have quite a bit of room
for improvement.  On the latter, I think we do about as well as
anyone.  A key insight for me came from Peter Gutmann: it is
important to ingest data you have strong reason to believe are
truly random; but it is also important to ingest data you have
strong reason to believe are merely expensive and difficult for
a prospective adversary to measure or predict.  In the real world,
the latter can often save your butt, or your customers' butts.

So we eat things like the first several seconds of frames from
the network; dmesg output; TOD; IP addresses; hostnames; and other
configuration and nonsecret data that will vary from system to
system and which an adversary would have to measure in the field
rather than predicting _a priori_; as well as data like temperature,
voltage, and fan speed which we know to be related to one another,
but through (to again paraphrase Peter) physical processes which
are very difficult to model; _plus_ the timestamps and other "TRNG"
data that are the traditional food of system entropy pools.  And
we never, ever feed output based solely on input from a single
source.  If you trust your pool and output stream constructions,
this approach can clearly be no worse than relying solely on
timings; and in practice, I believe it is far more expensive for
an adversary to attack.
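In sketch form (illustrative Python, not the NetBSD kernel code; the
source names are examples from the list above):

    import hashlib, socket, time

    def ingest(pool: bytes, source: str, data: bytes) -> bytes:
        # tag each contribution with its source, so the pool's state
        # never reflects input from a single source alone
        return hashlib.sha512(pool + source.encode() + b"\x00"
                              + data).digest()

    pool = bytes(64)
    pool = ingest(pool, "tod", time.time_ns().to_bytes(8, "big"))
    pool = ingest(pool, "hostname", socket.gethostname().encode())
    # ...plus dmesg output, early network frames, sensor readings, and
    # the traditional timestamp/"TRNG" inputs, each tagged the same way.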

Thor