Re: [Cryptography] Suite B after today's news

2013-09-06 Thread Jack Lloyd

 I think that any of OCB, CCM, or EAX are preferable from a security
 standpoint, but none of them parallelize as well. If you want to do
 a lot of encrypted and authenticated high-speed link encryption,
 well, there is likely no other answer. It's GCM or nothing.

OCB parallelizes very well in software and I see no reason it would
not also do so in hardware; each block of both the plaintext and
associated data can be processed independently of the others, and all
of OCB's operations (xor, GF(2^128) doubling, Gray codes) seem like
they would be well suited to a fast hardware implementation. And
actually McGrew and Viega's original 2003 paper on GCM specifically
mentions that OCB scales to high speeds in hardware, though they do
not provide references to specific results.
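
To give a sense of how light the per-block offset update is, here is a
sketch of the GF(2^128) doubling step (just the textbook operation over
the reduction polynomial x^128 + x^7 + x^2 + x + 1, not any particular
implementation):

def gf128_double(s):
    # shift left by one; if a bit falls off the top of the 128-bit value,
    # fold it back in as x^7 + x^2 + x + 1 (0x87)
    s <<= 1
    if s >> 128:
        s = (s & ((1 << 128) - 1)) ^ 0x87
    return s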

Jack
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] NSA and cryptanalysis

2013-09-02 Thread Jack Lloyd
On Mon, Sep 02, 2013 at 03:09:31PM -0400, Jerry Leichter wrote:

 a) The very reference you give says that to be equivalent to 128
 bits symmetric, you'd need a 3072 bit RSA key - but they require a
 2048 bit key.  And the same reference says that to be equivalent to
 256 bits symmetric, you need a 521 bit ECC key - and yet they
 recommend 384 bits.  So, no, even by that page, they are not
 recommending equivalent key sizes - and in fact the page says just
 that.

Suite B is specified for 128 and 192 bit security levels, with the 192
bit level using ECC-384, SHA-384, and AES-256. So it seems like if
there is a hint to be drawn from the Suite B params, it's about
AES-192.

 (b) most of the Internet is way behind recommendations that are now
 out there for everyone.  Google recently switched to 2048 bit keys;
 hardly any other sites have done so, and some older software even
 has trouble talking to Google as a result.

Not to mention that our entire PKI system (as well as TLS < 1.2, ie
the versions actually supported in browsers) relies on the security of
SHA-1, an algorithm which has a public 2**68 (IIRC) collision attack
and which was phased out by NIST years ago.

Fortunately, TLS 1.2 is now finally being forced into most browsers
thanks to BEAST, Lucky13, the RC4 breaks, etc., but we're still bound to
see some major problems on the PKI side when a practical chosen-prefix
SHA-1 collision is found, as I expect at least a few widely used CAs
have still not adopted randomized serial numbers and will have the MD5
experience all over again.

 On the symmetric side, I've already agreed that NSA's approval
 indicated that they considered AES secure 10 years ago, but if
 they've since learned otherwise but think they are and will remain
 the only ones with a viable attack for a while, they would be
 unlikely to admit it by changing their recommendation now.

Worth noting that NIST has announced plans to create AEAD modes based
on Keccak. It will be interesting to see how quickly AES-GCM is phased
out of Suite B once that occurs.

Jack
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: Formal notice given of rearrangement of deck chairs on RMS PKItanic

2010-10-06 Thread Jack Lloyd
On Wed, Oct 06, 2010 at 04:52:46PM +1300, Peter Gutmann wrote:

 Right, because the problem with commercial PKI is all those attackers who are
 factoring 1024-bit moduli, and apart from that every other bit of it works
 perfectly.

_If_ Mozilla and the other browser vendors actually go through with
removing all < 2048 bit CA certs (which I doubt will happen because I
suspect most CAs will completely ignore this), it would have one
tangible benefit:

(Some of, though unfortunately not nearly all) the old CA certificates
that have been floating around since the dawn of time (ie the mid-late
90s), often with poor chains of custody through multiple iterations of
bankruptcies, firesale auctions, mergers, acquisitions, and so on,
will die around 2015 instead of their current expirations of
2020-2038. Sadly this will only kill about 1/3 of the 124 (!!)
trusted roots Mozilla includes by default.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Jack Lloyd
On Wed, Jul 28, 2010 at 08:48:14AM -0400, Steven Bellovin wrote:

 There seem to be at least three different questions here: bad code
 (i.e., that Windows doesn't check the revocation status properly),
 the UI issue, and the conceptual question of what should replace the
 current PKI+{CRL,OCSP} model.  For the last issue, I'd note that
 using pki instead of PKI (i.e., many different per-realm roots,
 authorization certificates rather than identity certificates, etc.)
 doesn't help: Realtek et al. still have no better way or better
 incentive to revoke their own widely-used keys.

With a sufficiently fine-grained authorization model inside the OS,
there is no reason in principle that something like attribute
certificates couldn't work - RealTek would have a code signing cert
only valid for drivers that talked to network cards with specific PCI
vendor IDs, and UI tools that talked to that driver - the signature
on the worm binary in question would be valid, but the worm would not
be given the permissions it wants to actually do its thing. (Eg, when
was the last time you had a network driver that needed to access an
MSSQL db?) Windows is not that OS. (Neither is Linux or BSD, of
course.) It looks like Singularity has some features which could
support this sort of model [1].
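
As a purely hypothetical illustration of the kind of constraint such a
cert might carry (the field names and capability set below are invented
for this example, not any real format):

# A toy authorization check: the signing cert carries explicit
# capabilities, and the OS refuses anything the signed code was not granted.
REALTEK_CERT_CAPS = {
    "load_kernel_driver": {"pci_vendor_ids": {0x10EC}},  # hypothetical grant
    "open_network_socket": False,
    "access_database": False,
}

def allowed(cert_caps, action, **details):
    cap = cert_caps.get(action, False)
    if cap is False:
        return False
    # every detail of the request must match what the cert allows
    return all(details[k] in v for k, v in cap.items() if k in details)

print(allowed(REALTEK_CERT_CAPS, "load_kernel_driver", pci_vendor_ids=0x10EC))  # True
print(allowed(REALTEK_CERT_CAPS, "access_database"))                            # False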

This is not to suggest this is at all an easy course of action to
take; my point is just that it's possible to do much better here
without having to alter anyone's incentives: the CAs still collect
their rent, and RealTek's drivers still work. Fixing the OS is
probably easier than somehow fixing PKI to do what we'd nominally want
it to do here (though actually revoking the cert would have been a
good start) or modifying the obvious incentives.

-Jack

[1] http://research.microsoft.com/apps/pubs/default.aspx?id=59976

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: deliberately crashing ancient computers (was: Re: A mighty fortress is our PKI)

2010-07-28 Thread Jack Lloyd
On Wed, Jul 28, 2010 at 11:04:30AM -0400, Jonathan Thornburg wrote:
 On Tue, 27 Jul 2010, Jack Lloyd suggested:
  http://www.crashie.com/ - if you're feeling malicious, just include
  the one line JavaScript that will make IE6 crash, maybe eventually the
  user will figure it out. (Or maybe not).
 
 Please stop and think about the consequences before using something
 like this!  People who are still using IE6, Windows 95, etc, are
 usually doing so for reasons which make sense in their lives, things
 like
[...]

Personally I'm not planning on doing anything one way or another to
encourage or discourage people using IE6. In the spectrum of social
badness, I'd view using IE6 roughly on par with using heroin - a bad
idea that mostly hurts oneself, with some limited (albeit real)
negative externalities. As with using drug rehabilitation versus
prison sentences to reduce use, the real solution to IE6 is education
and assistance for those who want it, not punishment. Some will, for
whatever reason, choose to ignore said educational/assistance efforts,
and will eventually take the consequences of their actions without any
antics by you or me.

And certainly I have better things to do with my time than crash a
decade-old browser.

Thanks,
  Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: A mighty fortress is our PKI

2010-07-27 Thread Jack Lloyd
On Tue, Jul 27, 2010 at 06:07:02PM -0600, Paul Tiemann wrote:


 IE6-is-dead parties.  Could some intelligent web designers come up
 with a few snippets of code in the various web flavors (PHP, ASP,
 JSP, etc) for people to easily install and include on their sites
 (as part of a movement to discourage old browser usage and encourage
 better security on the web...)  If an old browser is detected, a
 friendly warning message or even an error message appears, along
 with links to the site explaining the movement...

Already exists:

http://code.google.com/p/ie6-upgrade-warning/ - JS, displays a nice
dialog telling the user why they should upgrade and links to download
a new IE, Firefox, Chrome, etc.

http://drupal.org/project/iedestroyer - Drupal (CMS) plugin

http://www.crashie.com/ - if you're feeling malicious, just include
the one line JavaScript that will make IE6 crash, maybe eventually the
user will figure it out. (Or maybe not).

Or a block of pretty much plain old HTML:

http://www.codingforums.com/showthread.php?t=196674

Ultimately though, the only thing that's going to get some people off
IE6 is the machines they are running it on finally dying, either
due to hardware failure or to being so badly owned by worms that the
machine becomes inoperable, at which point it goes into the trash
and they buy a new one.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Intel to also add RNG

2010-07-12 Thread Jack Lloyd
On Mon, Jul 12, 2010 at 12:22:51PM -0400, Perry E. Metzger wrote:

 BTW, let me note that if Intel wanted to gimmick their chips to make
 them untrustworthy, there is very little you could do about it. The
 literature makes it clear at this point that short of carefully
 tearing apart and analyzing the entire chip, you're not going to catch
 subtle behavioral changes designed to allow attackers backdoor
 access. Given that, I see little reason not to trust them on an RNG,
 and I wish they would make it a standard part of the architecture
 already.

I think it's important to make the distinction between trusting Intel
not to have made it actively malicious, and trusting them to have
gotten it perfectly correct in such a way that it cannot fail.
Fortunately, the second problem, that it is a well-intentioned but
perhaps slightly flawed RNG [*], could be easily alleviated by feeding
the output into a software CSPRNG (X9.31, a FIPS 186-3 design, take
your pick I guess). And the first could be solved by also feeding your
CSPRNG with anything that you would have fed it with in the absence of
the hardware RNG - in that case, you're at least no worse off than you
were before. (Unless your PRNG's security can be negatively affected
by non-random or maliciously chosen inputs, in which case you've got
larger problems).
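
A minimal sketch of that layering (read_hardware_rng and csprng.reseed
are placeholder names here, not any real OS or library API):

import hashlib, os, time

def reseed_csprng(csprng, read_hardware_rng):
    # Hash the hardware RNG output together with everything else you would
    # have fed the CSPRNG anyway; a broken or malicious HRNG then cannot
    # make the seed any worse than it would have been without it.
    pool = hashlib.sha256()
    pool.update(read_hardware_rng(32))              # hypothetical HRNG read
    pool.update(os.urandom(32))                     # whatever the OS already provides
    pool.update(time.time_ns().to_bytes(8, "big"))  # plus any other inputs you like
    csprng.reseed(pool.digest())                    # hypothetical CSPRNG interface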

-Jack

[*] Even if it were perfectly designed, it seems plausible to me that
manufacturing defects and/or any number of runtime problems (age,
overheating, bad voltage control, cosmic rays, dirty power, etc, etc)
might cause subtle failures/biases that would be difficult to detect
reliably; I would personally be dubious of using any hardware RNG's
output directly for this reason.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: hedging our bets -- in case SHA-256 turns out to be insecure

2009-11-16 Thread Jack Lloyd
On Wed, Nov 11, 2009 at 10:03:45AM +0800, Sandy Harris wrote:

   C(x) = H1(H1(x) || H2(x))
 
 This requires two hash(x) operations. A naive implementation needs
 two passes through the data and avoiding that does not appear to
 be trivial. This is not ideal since you seem very concerned about
 overheads.

If performance is really an issue, one could code a combined H1/H2
function that interleaves the operations, avoiding the need for two
passes (which _is_ really important, since memory and disk latencies
are usually the biggest factor in performance). Direct interleaving
would also offer better ILP.

But even updating both hashes at the same time would prevent needing
two full passes; something like the code below would offer much better
cache utilization, and would not be at all difficult to implement:

while data_left:
  block = input.read_one_block()
  h1.compress(block)
  h2.compress(block)
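
A concrete version of that loop, extended to the full C(x) construction
quoted above (a sketch only; SHA-512 stands in for H2 purely to have two
different functions to call):

import hashlib

def combined_hash(stream, block_size=65536):
    # C(x) = H1(H1(x) || H2(x)) with H1 = SHA-256, H2 = SHA-512 (illustrative)
    h1, h2 = hashlib.sha256(), hashlib.sha512()
    while True:
        block = stream.read(block_size)
        if not block:
            break
        h1.update(block)
        h2.update(block)
    return hashlib.sha256(h1.digest() + h2.digest()).hexdigest()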

If it was really important, choosing a nonstandard H2 could offer even
better performance; for instance let H1=SHA-256 and H2=SHA-~256, where
SHA-~256 is precisely SHA-256 but with all of its constants bitwise
inverted. One could compute both functions in parallel using SIMD
(SSE2, ARM's NEON, etc) [and they could share the message expansion,
which is quite costly in SHA-2]. It's not clear from a quick read of
the paper Zooko referenced ("On the Strength of the Concatenated Hash
Combiner when All the Hash Functions are Weak") whether this would
actually meet the requirements of "sufficiently different" for the
results there to apply, though.

 What about this construction:
 
   C(x) = H1(H2(x) || H3(x))

One trouble with this construction that Zooko's does not have is that
it can fail even if H1 is collision resistant due to an inner
collision.

 H3 might be some really cheap fast function invented for the situation.
 As I recall, the GOST hash just used a sum of input blocks, and that's
 enough to defeat the multi-block attacks. If it is simple enough, you
 can code it into your implementation of H2 so you only need one
 pass.

The GOST hash does use the sum of input blocks (as the final input to
the compression function) but it has a number of other components; it
is actually quite slow compared to modern hashes.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Jack Lloyd
On Sat, Oct 17, 2009 at 02:23:25AM -0700, John Gilmore wrote:

 DSA was (designed to be) full of covert channels.

True, but TCP and UDP are also full of covert channels. And if you are
worried that your signing software or hardware is compromised and
leaking key bits, you have larger problems, no matter what algorithm
you use; for instance, with RSA, the signer could intentionally
miscalculate 1 in 2^32 signatures, which would immediately leak the
entire private key to someone who knew to watch for it. (I would have
said that using PSS also introduces a covert channel, but it appears
DNSSEC is using the scheme from PKCS1 v1.5.)

And, for that matter, one can make DSA deterministic by choosing the k
values to be HMAC-SHA256(key, H(m)) - this will cause the k values to
be repeated, but only if the message itself repeats (which is fine,
since seeing a repeated message/signature pair is harmless), or if one
can induce collisions on HMAC with an unknown key (which seems a
profoundly more difficult problem than breaking RSA or DSA).
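
A sketch of that k derivation (just the idea as stated above; the final
reduction into [1, q-1] is my own detail, not part of any standard here):

import hashlib, hmac

def deterministic_k(priv_key_bytes, message, q):
    # k = HMAC-SHA256(key, H(m)), forced into the range [1, q-1]
    h_m = hashlib.sha256(message).digest()
    mac = hmac.new(priv_key_bytes, h_m, hashlib.sha256).digest()
    return 1 + int.from_bytes(mac, "big") % (q - 1)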

 RSA was the obvious choice because it was (and is) believed that if
 you can break it, you can factor large numbers (which mathematicians
 have been trying to do for hundreds of years).  No other algorithm
 available at the time came with such a high pedigree.  As far as I
 know, none still does.

As far as I know even now nobody has proven that breaking RSA is
equivalent to factoring; there are results that suggest it, for
instance [http://eprint.iacr.org/2008/260] shows there is no 'generic'
attack that can break RSA without factoring - meaning such an attack
would have to examine the bit representation of the modulus. A
full proof of equivalence still seems to be an open problem.

If for some reason one really wanted to ensure their public key
primitive reduces to a hard problem, it would have made much more
sense to use Rabin-Williams, which does have a provable reduction to
factoring.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-16 Thread Jack Lloyd
On Wed, Oct 14, 2009 at 10:43:48PM -0400, Jerry Leichter wrote:
 If the constraints elsewhere in the system limit the number of bits of  
 signature you can transfer, you're stuck.  Presumably over time you'd  
 want to go to a more bit-efficient signature scheme, perhaps using  
 ECC.

Even plain DSA would be much more space efficient on the signature
side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
1024 bit RSA key, and the signatures would be half the size. And NIST
allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
is really that important.
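
Spelling out the arithmetic (a DSA signature is the pair (r, s), each the
size of q; an RSA signature is a single integer the size of the modulus):

dsa_2048_256_sig = 2 * 256 // 8   # 64 bytes
dsa_2048_224_sig = 2 * 224 // 8   # 56 bytes
rsa_1024_sig     = 1024 // 8      # 128 bytes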

Given that they are attempting to optimize for minimal packet size, the
choice of RSA for signatures actually seems quite bizarre.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: RNG using AES CTR as encryption algorithm

2009-09-08 Thread Jack Lloyd
On Wed, Sep 02, 2009 at 10:58:03AM +0530, priya yelgar wrote:
 Hi all,
 
 I have implemented RNG using AES algorithm in CTR mode.
 
 To test my implementation I needed some test vectors.
 
 How ever I searched on the CSRC site, but found the test vectors for AES_CBC 
 not for AES CTR.
 
 Please can any one tell me where to look for the test vectors to test RNG
 using AES CTR.

NIST SP 800-38A, "Recommendation for Block Cipher Modes of Operation",
contains a set of AES/CTR test vectors in Appendix F.5:

http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf
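
A sketch of how one might check a single known-answer vector from
Appendix F.5 (assuming the pycryptodome package; the hex values are the
CTR-AES128 vectors as I recall them and should be double-checked against
the document):

from Crypto.Cipher import AES
from Crypto.Util import Counter

key       = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")  # F.5.1 key
counter0  = bytes.fromhex("f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff")  # initial counter block
plaintext = bytes.fromhex("6bc1bee22e409f96e93d7e117393172a")  # first plaintext block
expected  = bytes.fromhex("874d6191b620e3261bef6864990db6ce")  # first ciphertext block

ctr = Counter.new(128, initial_value=int.from_bytes(counter0, "big"))
cipher = AES.new(key, AES.MODE_CTR, counter=ctr)
assert cipher.encrypt(plaintext) == expected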

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git

2009-08-19 Thread Jack Lloyd
On Wed, Aug 19, 2009 at 09:28:45AM -0600, Zooko Wilcox-O'Hearn wrote:

 [*] Linus Torvalds got the idea of a Cryptographic Hash Function
 Directed Acyclic Graph structure from an earlier distributed revision
 control tool named Monotone.  He didn't go out of his way to give
 credit to Monotone, and many people mistakenly think that he invented
 the idea.

OT trivia: The idea actually predates either monotone or git; opencm
(http://opencm.org/docs.html) was using a similar technique for VCS
access control a year or two prior to monotone's first release. AFAIK
Graydon Hoare (the original monotone designer) came up with the
technique independently of the opencm design. I'm actually not certain
that opencm originated the technique, either; all I can say for
certain is that it was using it prior to monotone or git.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


512 bit RSA key used for TI 83+ auth cracked

2009-08-18 Thread Jack Lloyd

It seems the TI-83+ operating system is protected using some form of
code signing scheme using a 512 bit RSA key. That key has now been
factored:

http://www.unitedti.org/index.php?showtopic=

Which apparently will allow custom operating systems to run on the
device.

While this certainly is not the first 512 bit RSA modulus to be
factored, it may be the first factorization performed (publicly, at
least) with the goal of breaking an existing system, rather than
simply demonstrating progress in algorithms and hardware.

Interestingly, it was reportedly done with only a single machine, over
the course of 73 days.

Details on the computation are lower down in the thread:


How did I do this? With the best tools I could find for the job. The
best algorithm for factoring really large general numbers (i.e.,
numbers without any special properties) is the general number field
sieve. The best currently-available implementation of the GNFS
consists of a combination of the GGNFS and Msieve projects. It's
really the guys behind these tools who deserve the credit for making
this possible. While it does take a bit of work to get the tools set
up correctly, most of what I did was sitting around waiting for it to
finish, and every once in a while, telling the script to try another
filtering run.

Some fun statistics:

- The factorization took, in total, about 1745 hours, or a bit less
  than 73 days, of computation. (I've actually been working on this
  since early March; I had a couple of false starts and haven't been
  able to run the software continuously.)
- My CPU, for reference, is a dual-core Athlon64 at 1900 MHz.
- The sieving database was 4.9 gigabytes and contained just over 51
  million relations.
- During the filtering phase, Msieve was using about 2.5 gigabytes of RAM.
- The final processing involved finding the null space of a 5.4
  million x 5.4 million matrix.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


NIST announces SHA-3 round 2 candidates

2009-07-25 Thread Jack Lloyd

http://csrc.nist.gov/groups/ST/hash/sha-3/Round2/submissions_rnd2.html


A report summarizing NIST's selection of these candidates will be
forthcoming. A year is allocated for the public review of these
algorithms, and the Second SHA-3 Candidate Conference is being planned
for August 23-24, 2010, after Crypto 2010.


Making the cut:
  BLAKE
  Blue Midnight Wish
  CubeHash
  ECHO
  Fugue
  Grostl
  Hamsi
  JH
  Keccak
  Luffa
  Shabal
  SHAvite-3
  SIMD
  Skein

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-22 Thread Jack Lloyd
On Tue, Jul 21, 2009 at 07:15:02PM -0500, Nicolas Williams wrote:
 I've an application that is performance sensitive, which can re-key very
 often (say, every 15 minutes, or more often still), and where no MAC is
 accepted after 2 key changes.  In one case the entity generating a MAC
 is also the only entity validating the MAC (but the MAC does go on the
 wire).  I'm interested in any MAC algorithms which are fast, and it
 doesn't matter how strong they are, as long as they meet some reasonable
 lower bound on work factor to forge a MAC or recover the key, say 2^64,
 given current cryptanalysis, plus a comfort factor.
[...]
 Which MAC algorithms would you recommend?

I'm getting the impression that key agility is important here, so one
MAC that comes to mind is CMAC with a block cipher with a fast key
schedule, like Serpent. (If for some reason you really wanted to do
something to make security auditors squirm, you could even cut Serpent
down to 16 rounds, which would increase the message processing rate by
about 2x and also speed up the key schedule. This seems like asking
for it to me, though.)
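
A sketch of the CMAC option (using the pycryptodome package, with AES
standing in for Serpent since that is what the package ships with):

from Crypto.Hash import CMAC
from Crypto.Cipher import AES

def mac(key, message):
    c = CMAC.new(key, ciphermod=AES)  # one CMAC object per key; re-keying is cheap
    c.update(message)
    return c.digest()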

Another plausible answer might be Skein - it directly supports keying
and nonces (so you don't have to take the per-message overhead of the
extra hash as with HMAC), and has very good bulk throughput on 64-bit
CPUs.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: What will happen to your crypto keys when you die?

2009-07-03 Thread Jack Lloyd
On Thu, Jul 02, 2009 at 09:29:30AM +1000, silky wrote:

 A potentially amusing/silly solution would be to have one strong key
 that you change monthly, and then, encrypt *that* key, with a method
 that will be brute-forceable in 2 months and make it public. As long
 as you are constantly changing your key, no-one will decrypt it in
 time, but assuming you do die, they can potentially decrypt it while
 arranging your funeral :)

This method would not work terribly well for data at rest. Copy the
ciphertext, start the brute force process, and two months later you
get out everything, regardless of the fact that in the meantime the
data was reencrypted.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Popular explanation of fully homomorphic encryption wanted

2009-06-17 Thread Jack Lloyd
On Tue, Jun 16, 2009 at 09:31:36AM -0700, Hal Finney wrote:
 Udhay Shankar N quotes wikipedia:
  The question was finally resolved in 2009 with the development of the
  first true fully homomorphic cryptosystem. The scheme, constructed by
  Craig Gentry, employs lattice based encryption and allows evaluation
  of both addition and multiplication operations without restriction.[2]
 
 2. ^ Craig Gentry. On homomorphic encryption over circuits of
arbitrary depth. In the 41st ACM Symposium on Theory of Computing
(STOC), 2009. 
 
 A URL for this paper is
 http://portal.acm.org/citation.cfm?id=1536414.1536440 but you will have
 to be an ACM member to download it. I was able to get a copy this morning
 and quickly skimmed it.

Google located for me a set of slides and audio for a talk given by
the author on this paper:
  http://www.fields.utoronto.ca/audio/08-09/crypto/gentry/index.html

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Distinguisher and Related-Key Attack on the Full AES-256

2009-05-22 Thread Jack Lloyd

Alex Biryukov, Dmitry Khovratovich, and Ivica Nikolic gave a talk at
the Eurocrypt rump session, 'Distinguisher and Related-Key Attack on
the Full AES-256', with the full paper accepted to Crypto.

Slides from Eurocrypt are here:

http://eurocrypt2009rump.cr.yp.to/410b0c56029d2fa1d686823e3a059af8.pdf

The q-multicollisions attack they describe may be a practical way of
breaking a hash function based on AES. So this could have some
interesting ramifications for SHA-3 candidates which use the AES round
function; I'm not sufficiently familiar with those designs yet for it
to be clear one way or another whether they would in fact be vulnerable.

(via zooko's blog)

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Property RIghts in Keys

2009-02-12 Thread Jack Lloyd
On Thu, Feb 12, 2009 at 10:49:37AM -0700, s...@acw.com wrote:

 If anybody can alter, revoke or reissue a certificate then I agree it is
 common property to which attaches no meaningful notion of property rights.
 
 If on the other hand only certain people can alter, revoke or reissue a
 certificate then it seems to me they have some sort of property rights in
 the certificate and from their point of view the certificate is their
 property and not everybody's property.

That only certain persons can revoke or reissue a certificate is a
matter of mathematics, not legal restrictions.

Say I have discovered a marvelous method of easily factoring RSA keys,
which unfortunately the margin of this emacs buffer is too small to
contain, and I then go out, factor GeoTrust's CA key and issue a new
certificate.

Questions:

Am I now infringing on GeoTrust's IP rights? Or have I, rather, made
myself a co-owner of said rights on this particular key?

Have I broken any law? If not, should what I have done be illegal?

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: very high speed hardware RNG

2008-12-30 Thread Jack Lloyd
On Sun, Dec 28, 2008 at 08:12:09PM -0500, Perry E. Metzger wrote:
 
 Semiconductor laser based RNG with rates in the gigabits per second.
 
 http://www.physorg.com/news148660964.html
 
 My take: neat, but not as important as simply including a decent
 hardware RNG (even a slow one) in all PC chipsets would be.

I've been thinking that much better than a chipset addition (which is
only accessible by the OS kernel in most environments) would be a
simple ring-3 (or equivalent) accessible instruction that writes 32 or
64 bits of randomness from a per-core hardware RNG, something like

; write 32 bits of entropy from the hardware RNG to eax register
rdrandom %eax

Which would allow user applications to access a good hardware RNG
directly, in addition to allowing the OS to read bits to seed the
system PRNG (/dev/random, CryptGenRandom, or similar).

I think the JVM in particular could benefit from such an extension, as
the abstractions it puts into place otherwise prevent most of the
methods one might use to gather high-quality entropy for a PRNG seed.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: very high speed hardware RNG

2008-12-30 Thread Jack Lloyd
On Tue, Dec 30, 2008 at 11:45:27AM -0500, Steven M. Bellovin wrote:

 Of course, every time a manufacturer has tried it, assorted people
 (including many on this list) complain that it's been sabotaged by the
 NSA or by alien space bats or some such.

Well, maybe it has. Or maybe it was just not competently implemented,
or perhaps it has a failure mode that was not accounted for. The
design might be perfect but the physical implementation that happens
to be in your computer has a manufacturing flaw such that if the CPU
core voltage is slightly low and the ambient temperature is above 95F,
the raw output becomes biased from a uniform distribution in a subtle
way - how do you detect something like that? I personally would not
trust the direct output of any physical hardware source for anything,
precisely because you cannot measure it, or test for failures, in any
meaningful way. That does not mean it is not a useful thing to have.

 It's not obvious to me that you're right.  In particular, we need to
 consider how such an instruction would interact with a virtual machine
 hypervisor.  Is it a bug or a feature that the hypervisor can't
 intercept the request?  Remember that reproducibility is often a virtue.

We already have this problem with rdtsc and equivalent cycle counter
reading instructions. ISTR that some architectures allow the kernel to
forbid access to the cycle counter - if so, similar techniques could
be used for a rdrandom instruction. For those that don't, the
non-reproducibility ship has already sailed (think of rdtsc as a
rdrandom that has a very bad probability distribution).

Reproducibility is sometimes a virtue, but sometimes not. I recall
discussions last year, I believe on this list, about how to design a
PRNG that was able to safely deal with VM state rollbacks. A
user-accessible RNG instruction would easily alleviate this problem.

 The JVM could just as easily open /dev/urandom today.

Except when it doesn't exist - in which case most Java software seems
to default to things like seeding a PRNG with the timestamp, because
the other alternatives that are feasible in Java, like interleaving
counter reads among multiple threads, are slow, difficult to implement
correctly, and even more difficult to test.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Who cares about side-channel attacks?

2008-10-24 Thread Jack Lloyd
On Mon, Oct 06, 2008 at 05:51:50PM +1300, Peter Gutmann wrote:
 For the past several years I've been making a point of asking users of crypto
 on embedded systems (which would be particularly good targets for side-channel
 attacks, particularly ones that provide content-protection capabilities)
 whether they'd consider enabling side-channel attack (SCA - no, not that SCA)
 protection in their use of crypto.  So far I've never found anyone who's made
[...]

 In other words the user has to make a conscious decision that SCA protection
 is important enough that performance/power consumption can be sacrificed for
 it.  Can anyone provide any data on users making this tradeoff?  And since
 negative results are also results, a response of I've never found anyone who
 cares either is also useful.  Since the information may be commercially

I have little experience on the embedded crypto side but I do maintain
a crypto library that has some non-zero number of users on general
desktop and server machines.

Basic protections a la your point 2 are provided and enabled by default
(blinding, and checking private key operations for consistency against
the public key, to prevent the really easy attacks). There used to be a
toggle to disable blinding, which as far as I know was never used - or
at least nobody complained when I removed the toggle.
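
For anyone not familiar with the former term, a minimal sketch of what
RSA blinding does (the textbook countermeasure, not Botan's actual
implementation):

import secrets

def blinded_rsa_private_op(m, d, e, n):
    # Randomize the value the private-key exponentiation actually sees,
    # then strip the mask; timing/power leakage is no longer correlated
    # with the real input m.
    r = 2 + secrets.randbelow(n - 3)       # random blinding factor, coprime to n w.h.p.
    s = pow((m * pow(r, e, n)) % n, d, n)  # the only step that touches d
    return (s * pow(r, -1, n)) % n         # equals pow(m, d, n)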

To my memory nobody has ever asked about what SCA measures are or are
not enabled, or how to toggle them, though I do have a FAQ entry about
it, so perhaps people who really wanted serious side-channel
resistance just read that FAQ and moved on to another implementation
without ever bothering to contact me - certainly there are some
self-selection problems with my sampling.

When FlexSecure wrote Botan's ECC implementation for BSI, they
implemented a number of anti-timing attack countermeasures - but they
were being paid to care about that, so this is probably not a valid
datapoint.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: combining entropy

2008-10-24 Thread Jack Lloyd
On Fri, Oct 24, 2008 at 10:23:07AM -0500, Thierry Moreau wrote:

 Do you really trust that no single source of entropy can have knowledge of 
 the other source's output, so it can surreptitiously correlate its own?

 I.e, you are are also assuming that these sources are *independent*.

I do not think one means the other here.

An omniscient malicious RNG source seems quite unlikely in most threat
models. However that is a very different statement from saying that
lacking such an attacker, you can safely assume your 'pools of
entropy' (to quote the original question) are independent in the
information-theoretic sense.

Say you execute (on a Linux machine) two commands, like ifconfig -a
and netstat -s (which print ASCII text with statistics about network
interfaces and network protocols, resp), capturing the output as two
of your entropy sources.

Both have some amount of entropy (perhaps zero if an attacker is on
the machine and runs his commands at the same time as yours - and
perhaps quite a bit more if the local machine happens to be safe). But
they are certainly not statistically independent!  Information in one
will be somewhat reflected in the other (packet counters), and of
course at the macro level all your input bytes have the high bit unset
(they're ASCII), so if you combine them via XOR your output will have
at best .875 bits of entropy per bit.
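
To make that last point concrete (the two strings below are invented
stand-ins for the command output):

a = b"RX packets:102433 errors:0"   # hypothetical ifconfig-style line
b = b"Tcp: 512 active connections"  # hypothetical netstat-style line
x = bytes(i ^ j for i, j in zip(a, b))
# ASCII bytes never have the top bit set, so neither does their XOR:
print(all(byte < 0x80 for byte in x))   # True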

To address IanG's question more directly, my first thought would be to
use something like the design Hugo Krawczyk describes in "On
Extract-then-Expand Key Derivation Functions and an HMAC-based KDF"
(http://www.ee.technion.ac.il/~hugo/kdf/kdf.pdf) or one of the related
PRNG designs he references. Then use the output of the HMAC PRF to
feed the DT vector of an X9.31 PRNG (using block cipher du jour), a
trick AFAIK invented by Peter Gutmann which has always seemed like a
good worst-case-scenario trick to me (for instance, if the code for
the hash's compression function is miscompiled), though at the cost of
extra code/design complexity (and thus points of failure) - as always
there are tradeoffs to make.
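
A rough sketch of the extract-then-expand shape (this is essentially HKDF
with HMAC-SHA256; see the paper for the real analysis and requirements -
the code is illustrative, not the exact construction):

import hashlib, hmac

def extract(salt, inputs):
    # extract: concentrate the (possibly correlated) input material into a
    # fixed-size pseudorandom key
    return hmac.new(salt, b"".join(inputs), hashlib.sha256).digest()

def expand(prk, info, n):
    # expand: stretch the pseudorandom key into n output bytes
    out, t, counter = b"", b"", 1
    while len(out) < n:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        out += t
        counter += 1
    return out[:n]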

-Jack (IANAC)

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: combining entropy

2008-10-24 Thread Jack Lloyd
On Fri, Oct 24, 2008 at 03:20:24PM -0700, John Denker wrote:
 On 10/24/2008 01:12 PM, Jack Lloyd wrote:
 
   is a very different statement from saying that
  lacking such an attacker, you can safely assume your 'pools of
  entropy' (to quote the original question) are independent in the
  information-theoretic sense.
 
 The question, according to the original poster, is not 
 whether it is safe to assume that one of the entropy
 sources can be trusted.  Safe or not, the question explicitly 
 assumed that one of the sources was trusted ... and asked 
 what the consequences of that assumption would be.

Perhaps our seeming disagreement is due to a differing interpretation
of 'trusted'. I took it to mean that at least one pool had a
min-entropy above some security bound. You appear to have taken it to
mean that it will be uniform random?

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Kaminsky finds DNS exploit

2008-07-09 Thread Jack Lloyd
On Wed, Jul 09, 2008 at 05:36:02PM +0100, Ben Laurie wrote:
 Paul Hoffman wrote:
 First off, big props to Dan for getting this problem fixed in a 
 responsible manner. If there were widespread real attacks first, it would 
 take forever to get fixes out into the field.
 However, we in the security circles don't need to spread the "Kaminsky
 finds" meme. Take a look at
 http://tools.ietf.org/wg/dnsext/draft-ietf-dnsext-forgery-resilience/. 
 The first draft of this openly-published document was in January 2007. It 
 is now in WG last call.
 The take-away here is not that Dan didn't discover the problem, but Dan 
 got it fixed. An alternate take-away is that IETF BCPs don't make nearly 
 as much difference as a diligent security expert with a good name.

 Guess you need to tell Dan that - he seems to think he did discover it.

Taking a brief look at what changed in bind, it seems primarily to
involve randomizing the query port, matching both the port and
transaction id instead of just the id, and using RC4 to generate the
transaction ids instead of a pair of very sketchy-looking
(cryptographically speaking) RNGs based on an LCRNG design via Knuth.

Perhaps there is something subtle here that is more dangerous than the
well known problems, and all these source port randomization and
transaction id randomization fixes are just a smokescreen of sorts for
a fix for something Dan found.

Securosis claims [1] "The good news is that due to the nature of this
problem, it is extremely difficult to determine the vulnerability
merely by analyzing the patches", and Dan claims something similar,
offering to share the stage at Defcon with anyone who finds the
bug [2].

A statement from the MaraDNS author [3]:


MaraDNS is immune to the new cache poisoning attack.  MaraDNS has
always been immune to this attack.  Ditto with Deadwood (indeed,
people can use MaraDNS or Deadwood on the loopback interface to
protect their machines from this attack).

OK, basically, this is an old problem DJB wrote about well over seven
years ago.  The solution is to randomize both the query ID and the
source port; MaraDNS/Deadwood do this (and have been doing this since
around the time of their first public releases that could resolve DNS
queries) using a cryptographically strong random number generator
(MaraDNS uses an AES variant; Deadwood uses the 32-bit version of
Radio Gatun).


(But CERT has no reply in their advisory from MaraDNS, so I'm not sure
if this statement was made on the basis of just what is publically
known, or if he was in fact in on the vendor notify list).

-Jack

[1] 
http://securosis.com/2008/07/08/dan-kaminsky-discovers-fundamental-issue-in-dns-massive-multivendor-patch-released/
[2] http://www.doxpara.com/?p=1162
[3] http://marc.info/?l=maradns-listm=121560639013865w=2

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Strength in Complexity?

2008-07-02 Thread Jack Lloyd
On Wed, Jul 02, 2008 at 07:25:36AM -0400, Perry E. Metzger wrote:
 
 [EMAIL PROTECTED] (Peter Gutmann) writes:
  (Actually even that doesn't really explain something like IKE... :-).
 
 Having been peripherally involved in the causation change for IKE, let
 me confess that it was caused by human stupidity destroying the
 alternatives. The author of the much cleaner spec asserted copyright
 and control over it, and fearing lawsuits, people turned to the
 the remaining alternative. A number of us were all to blame for the
 situation.

Out of curiosity, was this other spec Photuris?

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Why doesn't Sun release the crypto module of the OpenSPARC?

2008-06-30 Thread Jack Lloyd
On Fri, Jun 27, 2008 at 12:19:04PM -0700, zooko wrote:
 and probably other commodity products).  Likewise newfangled ciphers like 
 Salsa20 and EnRUPT will be considered by me to be faster than AES (because 
 they are faster in software) rather than slower (because AES might be built 
 into the commodity hardware).

The calculus on AES may change in the nearish future: Intel is adding
AES instructions in upcoming processors [1], and AMD is adding another set
of instructions in SSE5 to assist AES implementations. AMD claims a 5x
speedup for AES using SSE5 versus plain x86-64 instructions [2]; I
have not yet seen performance estimates for the Intel instructions.

-Jack

[1]: http://softwarecommunity.intel.com/articles/eng/3788.htm
[2]: http://www.ddj.com/hpc-high-performance-computing/201803067

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: User interface, security, and simplicity

2008-05-06 Thread Jack Lloyd
On Tue, May 06, 2008 at 03:40:46PM +, Steven M. Bellovin wrote:

 In particular, with TLS the session key can be negotiated between
 two user contexts; with IPsec/IKE, it's negotiated between a user
 and a system.  (Yes, I'm oversimplifying here.)

Is there any reason (in principle) that IPsec/IKE could not be done
entirely in userspace / application space, though?

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Comments on SP800-108

2008-05-05 Thread Jack Lloyd
Hi,

As a standard, this specification is a disaster. Just from a quick
read, I see the following:

"However, alternative orders for the input data fields may be used for
a KDF."

"with a length specified by the function, an algorithm, or a protocol
which uses T as an input."

"In feedback mode, the output of the PRF is computed using the result
of the previous iteration and, optionally, using a counter as the
iteration variable(s)."

With sufficient options, all implementations are non-interoperable. I
think you've managed to reach that point here. As an implementor, my
instinct is to stay well away from this entire mess and just use IEEE
1363's KDF2, which is:

  - simple enough that anyone can implement it easily and without
 interop difficulties, or requiring protocol negotiations (and
 then the implementor has to do the negotiation properly - which
 opens up new avenues for security holes)

  - secure enough that it doesn't matter (ie, that the likelihood
 that a security flaw in the KDF is the critical problem is far
 lower than a security flaw elsewhere in the system)
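
For reference, a minimal sketch of KDF2 as I recall it from IEEE 1363a
(hash of the shared secret, a 4-byte big-endian counter starting at 1,
and optional other info; the parameter names are mine):

import hashlib

def kdf2(shared_secret, out_len, other_info=b"", hash_fn=hashlib.sha256):
    # T = Hash(Z || counter || otherinfo) for counter = 1, 2, ...;
    # concatenate and truncate to the requested length
    out, counter = b"", 1
    while len(out) < out_len:
        out += hash_fn(shared_secret + counter.to_bytes(4, "big") + other_info).digest()
        counter += 1
    return out[:out_len]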

My recommendation: choose something that will work for nearly
everyone, and mandate it directly. For instance, why make the counter
length configurable? In 99% of implementations, the thing that will
make sense is a 32-bit counter (to paraphrase the famous if apocryphal
Bill Gates quote, 4 gigabytes of keying material should be enough for
anybody), but by refusing to mandate this behavior, you force every
implementor and application designer to choose something and then
negotiate on the off chance that some other length was chosen, or that
the other side is using variable length encodings - something which is
allowed by the spec, as best as I can tell, and which opens up some
pretty big (at least theoretical) holes.

I have no comments about the actual security aspects of it; it looks
fine to my eye, but given the interoperability issues listed above I
don't plan on implementing any of these KDFs anyway, so I can't say I
much care whether they are actually secure or not. I would advise you
to remember that crypto does not exist in a vacuum, and should help,
not hinder, the overall security of a system.

Regards,
  Jack Lloyd

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: SSL and Malicious Hardware/Software

2008-04-29 Thread Jack Lloyd
On Mon, Apr 28, 2008 at 10:03:38PM -0400, Victor Duchovni wrote:
 On Mon, Apr 28, 2008 at 03:12:31PM -0700, Ryan Phillips wrote:
 
  What are people's opinions on corporations using this tactic?  I can't
  think of a great way of alerting the user, but I would expect a pretty
  reasonable level of privacy while using an SSL connection at work.  
 
 Expectations of privacy at work vary by jurisdiction and industry. In
 the US, and say in the financial services industry, any such expectations
 are groundless (IANAL).

Most places I have worked (all in the US) explicitly required consent
to more or less arbitrary amounts of monitoring as a condition of
employment.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Cruising the stacks and finding stuff

2008-04-23 Thread Jack Lloyd
On Wed, Apr 23, 2008 at 08:20:27AM -0400, Perry E. Metzger wrote:

 There are a variety of issues. Smart cards have limited capacity. Many
 key agreement protocols yield only limited amounts of key
 material. I'll leave it to others to describe why a rational engineer
 might use fewer key bits, but suffice it to say, there are quite
 rational reasons. I'll agree that if you have no tradeoffs, you might
 as well use longer keys, but if you really have no tradeoffs, you
 would prefer to use a one time pad, too. All real engineering is about
 tradeoffs.

I think one point worth making is that we probably don't really know
how to make a cipher that is secure to, say, 2^512 operations (or
2^1024 or 2^4096 or whatever). For instance if you took Serpent or AES
or Twofish and modified it to support 512-bit keys, I don't believe
the resulting cipher would actually be secure to 2^512
operations... to guess completely at random, I'd say they would be
more like 2^300 or so. (Have any block ciphers with 256-bit
block/512-bit key been proposed/studied? I have not been following FSE
and similar conferences of late)

Making a cipher that uses an N bit key but is only secure to 2^M
operations with M < N is, firstly, considered broken in many circles, as
well as being inefficient: why generate/transmit/store 512 bit keys
when they only provide the security of a ~300 bit (or whatever) key
used with a perfect algorithm (aka an ideal cipher)? Why not use the
better cipher and save the bits?

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Double Encryption Q

2008-04-18 Thread Jack Lloyd
On Fri, Apr 11, 2008 at 04:30:47PM +0200, COMINT wrote:
 Quick system scenario:
 
 You have packet [A].
 
 It gets encrypted using an AES algo in a particular mode and we are
 left with [zA].
 
 More data [B] is added to that encrypted packet.
 
 Now I have [zA]+[B] in one packet and I re-encrypt it with the same
 algo/key/mode.
 
 Have I just compromised the security somehow? I wasn't aware of
 anything but something about this double encryption made something
 ring in my mind so I wanted to double check...

This would certainly cause problems if the particular mode is OFB or
counter mode, since (if you also reuse the IVs) you could have E(zA) == A.
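
A quick demonstration of the counter-mode case (assuming the pycryptodome
package; toy key, and the same key and starting counter reused for both
encryptions, as in the scenario above):

from Crypto.Cipher import AES
from Crypto.Util import Counter

key = bytes(16)  # toy all-zero key, for illustration only

def ctr_encrypt(data):
    ctr = Counter.new(128, initial_value=0)
    return AES.new(key, AES.MODE_CTR, counter=ctr).encrypt(data)

a  = b"sixteen byte msg"
za = ctr_encrypt(a)                        # [zA]
print(ctr_encrypt(za + b"[B]")[:16] == a)  # True: the keystream cancels, exposing A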

If you use a different (independent, unrelated) key/IV, then the
existence of a weakness in this scheme would seem to provide evidence
of an attack on AES, regardless of mode choice.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Protection for quasi-offline memory nabbing

2008-03-21 Thread Jack Lloyd
On Tue, Mar 18, 2008 at 09:46:45AM -0700, Jon Callas wrote:

 What operates like a block cipher on a large chunk?
 Tweakable modes like EME.

Or as a non-patented alternative one could use the Bear/Lion
constructions [1], which can encrypt arbitrary size blocks at
reasonably good speeds (depending on the performance characteristics
of the stream cipher and hash function they are instantiated with).

-Jack

[1] http://www.cl.cam.ac.uk/~rja14/Papers/bear-lion.pdf

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: cold boot attacks on disk encryption

2008-02-21 Thread Jack Lloyd
On Thu, Feb 21, 2008 at 12:10:33PM -0500, Perry E. Metzger wrote:
 
 Ed Felten blogs on his latest research:
 
 http://www.freedom-to-tinker.com/?p=1257
 
 Excerpt:
 
 Today eight colleagues and I are releasing a significant new
 research result. We show that disk encryption, the standard
 approach to protecting sensitive data on laptops, can be defeated
 by relatively simple methods. We demonstrate our methods by using
 them to defeat three popular disk encryption products: BitLocker,
 which comes with Windows Vista; FileVault, which comes with MacOS
 X; and dm-crypt, which is used with Linux.

While they did have some success with recovering an entire AES key
schedule uncorrupted, it seems important to note that the simplistic
nature of the AES and DES key schedules allowed them to recover the
entire original key, with only moderate amounts of work, even after
the state had been somewhat degraded. A cipher with a better key
schedule (Blowfish or Serpent, for instance) would seem to offer some
defense here.

Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: More on in-memory zeroisation

2007-12-14 Thread Jack Lloyd
On Wed, Dec 12, 2007 at 05:27:38PM -0500, Thierry Moreau wrote:
 As a consequence of alleged consensus above, my understanding of the C 
 standard would prevail and (memset)(?,0,?) would refer to an external 
 linkage function, which would guarantee (to the strength of the above 
 consensus) resetting an arbitrary memory area for secret intermediate 
 result protection.

GCC on x86-64 (-O2) compiles this function to the same machine code
regardless of the value of ZEROIZE:

#include <string.h>

int sensitive(int key)
   {
   char buf[16];
   int result = 0;
   size_t j;

   for(j = 0; j != sizeof(buf); j++)
      buf[j] = key + j;

   for(j = 0; j != sizeof(buf); j++)
      result += buf[j];

#if ZEROIZE
   (memset)(buf, 0, sizeof(buf));
#endif

   return result;
   }

Even if (memset) must refer to a function with external linkage (an
analysis I find dubious), there is nothing stopping the compiler from
doing IPA/whole program optimization - especially with a very basic
function like memset (in the code above, if buf is declared volatile,
GCC does do the memset: but it does it by moving immediate zero values
directly to the memory locations, not by actually jumping to any
external function).

Regards,
  Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Password vs data entropy

2007-10-27 Thread Jack Lloyd
On Thu, Oct 25, 2007 at 09:16:21PM -0700, Alex Pankratov wrote:
 Assuming the password is an English word or a phrase, and the 
 secret is truly random, does it mean that the password needs 
 to be 3100+ characters in size in order to provide a proper
 degree of protection to the value ? 

If E(key) = E(text), why not use a one time pad?

 Or, rephrasing, what should the entropy of the password be 
 compared to the entropy of the value being protected (under
 whatever keying/encryption scheme) ? 

Entropy != economic value

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Retailers try to push data responsibilities back to banks

2007-10-05 Thread Jack Lloyd
On Thu, Oct 04, 2007 at 06:48:49PM -0400, Leichter, Jerry wrote:
 Prat Moghe, founder and CTO of Tizor Systems Inc., a Maynard,
 Mass.-based security firm, called the NRF's demand political posturing
 and said it would do little to improve retail security anytime soon.
 
 I think a lot of this is about moving culpability back to the credit
 card companies and saying don't make this my problem alone, Moghe
 said. They seem to have realized that going on the defense as an
 industry doesn't help. There is just more and more they have to do.

Amazingly, Tizor Systems does PCI reviews (actually they seem to do
entirely CA work), and I'm sure Prat would prefer to see the PCI gravy
train stay around. (I don't know the current state of the industry,
but when I was working in a consulting group in 2004-2005, PCI reviews
were our most profitable engagement type by a large margin - and
non-technical enough that you could put a person with a few months of
security training on them and they'd do fine.)

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: New DoD encryption mandate

2007-08-20 Thread Jack Lloyd
On Fri, Aug 17, 2007 at 05:21:16PM -0700, Alex Alten wrote:

 Agreed, for most requirements.  Sometimes one may need to keep keys
 in trusted hardware only.  The only real fly-in-the-ointment is that current
 hash algorithms (SHA-1, SHA-2, etc.) don't scale across multiple CPU
 cores (assuming you need integrity along with your privacy).

The basic algorithms don't, but you can easily enough use multiple CPUs
with a hash tree or hash list. I'd also guess that in many cases you'd
want to hash many files, which offers easy parallelism by spawning a
pool of threads that work off a series of files. If you can afford a
patent license for PMAC, that would work as well.
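
A sketch of the many-files case (file_list is a placeholder; note that
hashlib releases the GIL during large updates, so plain threads do give
real parallelism here):

import hashlib
from concurrent.futures import ThreadPoolExecutor

def sha256_file(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            h.update(block)
    return path, h.hexdigest()

file_list = ["file1.bin", "file2.bin"]  # placeholder paths
with ThreadPoolExecutor() as pool:
    digests = dict(pool.map(sha256_file, file_list))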

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: no surprise - Sun fails to open source the crypto part of Java

2007-05-12 Thread Jack Lloyd
On Fri, May 11, 2007 at 04:42:47PM +0200, Ian G wrote:

 "They also involve some elements of sound and cryptography,"
 said Tom Marble, Sun's OpenJDK ambassador. "We have already
 contacted the copyright holders. We were unable to negotiate
 release under an open-source license," Marble said.

I believe at least some versions of Java used RSADSI's JSAFE for the
low-level crypto code, which would explain why that portion of it
wasn't included.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: DNSSEC to be strangled at birth.

2007-04-05 Thread Jack Lloyd
On Wed, Apr 04, 2007 at 05:51:27PM +0100, Dave Korn wrote:

   Can anyone seriously imagine countries like Iran or China signing up to a
 system that places complete control, surveillance and falsification
 capabilities in the hands of the US' military intelligence?

How is this any different from plain-old-DNS? Except that now the
number of attackers is limited to one - instead of worrying about the
US or China or UK or India or Russia or whoever falsifying DNS
records, you just have to worry about the US. And if/when you catch
them at it, you know exactly who did it.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: A note on vendor reaction speed to the e=3 problem

2006-09-18 Thread Jack Lloyd
On Fri, Sep 15, 2006 at 09:48:16AM -0400, David Shaw wrote:

 GPG was not vulnerable, so no fix was issued.  Incidentally, GPG does
 not attempt to parse the PKCS/ASN.1 data at all.  Instead, it
 generates a new structure during signature verification and compares
 it to the original.

Botan does the same thing for (deterministic) encodings - mostly
because I wrote a decoder for PKCS#1 v1.5, realized it probably had
bugs I wouldn't figure out until too late, and this way the worst
thing that can happen is that a valid signature is rejected for having
some unexpected but legal encoding. Default deny and all that.
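
A sketch of the encode-and-compare idea (not Botan's or GPG's actual code;
expected_digest_info would be the DER DigestInfo for the hash in use):

def verify_pkcs1_v15(sig, e, n, expected_digest_info):
    # Rebuild the single encoding we are willing to accept and compare it
    # byte-for-byte against what the signature recovers to - no parsing,
    # so no parser bugs to exploit.
    k = (n.bit_length() + 7) // 8
    recovered = pow(sig, e, n).to_bytes(k, "big")
    pad_len = k - len(expected_digest_info) - 3
    expected = b"\x00\x01" + b"\xff" * pad_len + b"\x00" + expected_digest_info
    return recovered == expected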

Anyway, it's a lot easier to write that way - my PSS verification code
is probably around twice the length of the PSS generation code, due to
the need to check every stupid little thing.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: EDP (entropy distribution protocol), userland PRNG design

2006-07-02 Thread Jack Lloyd
On Sun, Jul 02, 2006 at 03:25:09AM -0500, Travis H. wrote:
 Going over old emails.
 
 On 10/12/05, Jack Lloyd [EMAIL PROTECTED] wrote:
 I prefer a multi-stage design, as described by various people smarter than 
 I
 am:
 
   source(s) --> mixer --> pool --> extractor --> X9.31
 
 Did you really mean X9.31 and not X9.17?

Yes. IIRC, same thing, just in a different standard. (And I have a
copy of X9.31, while I've never even read X9.17)

-J

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread Jack Lloyd
On Wed, Mar 22, 2006 at 03:29:07PM -0800, Aram Perez wrote:

 * How do you measure entropy? I was under the (false) impression that  
 Shannon gave a formula that measured the entropy of a message (or  
 information stream).

He did give a formula for the entropy of a source; however the calculation is
based on the probabilities of each symbol appearing. Unless you know those, you
can't actually apply the formula. So it is computable in theory, just not in
practice for any source that is at all interesting.

 * Can you measure the entropy of a random oracle? Or is that what  
 both Victor and Perry are saying is intractable?

A random oracle, by definition, produces a completely random output. However,
since random oracles don't actually exist that does not seem to be a terribly
interesting thing to be measuring the entropy of.

 * Are there units of entropy?

Bits are usually the most intuitive/useful unit.

 * What is the relationship between randomness and entropy?

I have a vague feeling this question requires a deeper answer than I'm able to
provide.

 * Does processing an 8 character password with a process similar to  
 PKCS#5 increase the entropy of the password?

No, because there are no additional secrets. Knowledge of the password is all
you need to rederive the final output, thus clearly there is no additional
information (ie, entropy) in the output that was not in the original password.

This ignores the salt, iteration count, and the specification of the algorithm
itself, but those are all (usually) public. So they contribute to the entropy,
they do not contribute to the conditional entropy, which is what we are usually
interested in when thinking about entropy and crypto.

 * Can you add or increase entropy?

Yes. Let's say the contents of tommorrow's NY Times has n bits of entropy (we
probably can't actually calculate n, but we know it is a finite and fixed
value). And the LA Times from the same day will also have some amount of
entropy (call it n'). However, the entropy of the two papers combined would
(probably) not be n+n', but some number x s.t. min(n,n') <= x <= n+n', because
the two papers will report on many of the same issues, carry some of the same
AP stories, and so forth. This redundant information doesn't increase the
entropy (reading the same AP story in a second newspaper wouldn't give you any
new information).

A book you may find interesting is Elements of Information Theory by Cover &
Thomas.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: general defensive crypto coding principles

2006-02-14 Thread Jack Lloyd
On Tue, Feb 14, 2006 at 03:24:09AM +1300, Peter Gutmann wrote:

 1. There are a great many special-case situations where no published protocol
fits.  As the author of a crypto toolkit, I could give you a list as long
as your arm of user situations where no existing protocol can be applied
(I'd prefer not to, because it's a lot of typing).
[...]

I'm also the author of a crypto toolkit, and I'll admit I've been involved in
creating custom security protocols more than once myself. I'm well aware that
this is a legitimate need.

 It's better to design a system that can be used by the average user than one
 that's brittle enough that only geniuses can safely employ it.

I think the source of our different views on this is a result of expectations
with regards to what your average programmer is capable of in terms of secure
protocol design. I have done reviews on probably a dozen or so products that
had a custom crypto component of one sort or another, and there were often
really trivial problems (typically the well-known and well-documented ones that
people have been getting wrong for decades).

At this point I'm generally of the opinion that there are maybe 5% of
programmers who will be careful enough to get it right, and the rest will get
it spectacularly wrong because they won't bother to do anything more than
perhaps skim Applied Cryptography. So, if you're going to mandate just one
technique for everyone, you're better off (IMO) using something that is a bit
trickier but has better optimal bounds, because the 5% will still probably get
it right (and their protocols will be better off for it) and the rest are too
busy getting it wrong in other ways to bother implementing the authenticated
encryption mode incorrectly.

In short, I find it extremely optimistic to think that there is any substantial
population of programmers who could correctly design and implement a
non-trivial and secure crypto protocol without taking a reasonable amount of
time with the existing body of knowledge.

-J

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: general defensive crypto coding principles

2006-02-11 Thread Jack Lloyd
On Fri, Feb 10, 2006 at 07:21:05PM +1300, Peter Gutmann wrote:

 Well, that's the exact problem that I pointed out in my previous message - in
 order to get this right, people have to read the mind of the paper author to
 divine their intent.  Since the consumers of the material in the paper
 generally won't be expert cryptographers (or even inexpert cryptographers,
 they'll be programmers), the result is a disaster waiting to happen.

I would expect that typically implementors would be following a published
standard, which would (well, one would hope) have had expert cryptographers
check it over sometime prior to publication. If your typical application
programmer is just coming up with their own crypto protocol, I personally don't
consider it to be a valid concern because they will with overwhelming odds
completely botch it in any case, and usually in a much less subtle way than
this.

(Actually offhand I can't think of a single non-cryptographer-designed crypto
protocol I've seen that wasn't fundamentally broken, often in a fairly obvious
way. I could believe there have been a few, but the odds seem very much against
it.)

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: general defensive crypto coding principles

2006-02-09 Thread Jack Lloyd
On Thu, Feb 09, 2006 at 05:01:05PM +1300, Peter Gutmann wrote:

 So you can use encrypt-then-MAC, but you'd better be *very*
 careful how you apply it, and MAC at least some of the additional non-message-
 data components as well.

Looking at the definitions in the paper, I think it is pretty clear that that
was their intent. The scheme definitions in section 4 make no provisions for
initialization vectors or any kind of parameterization, so I'm assuming that
they assumed the encryption function will include all that as part of the
output, meaning it will be included as part of the MAC.
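
In code terms it comes out to something like this (just a sketch of the intent;
aes_cbc_encrypt and hmac_sha256 are assumed primitives rather than a real API,
and padding is ignored for brevity):

#include <stddef.h>
#include <string.h>

/* Assumed primitives: */
void aes_cbc_encrypt(const unsigned char* key, const unsigned char iv[16],
                     const unsigned char* in, size_t in_len, unsigned char* out);
void hmac_sha256(const unsigned char* key, size_t key_len,
                 const unsigned char* in, size_t in_len, unsigned char tag[32]);

/* Encrypt-then-MAC with the IV (and anything else the receiver will act on)
   covered by the tag. Output layout: iv || ciphertext || tag */
void seal(const unsigned char* enc_key, const unsigned char* mac_key,
          const unsigned char iv[16],
          const unsigned char* pt, size_t pt_len,   /* multiple of 16 here */
          unsigned char* out)
{
   memcpy(out, iv, 16);
   aes_cbc_encrypt(enc_key, iv, pt, pt_len, out + 16);
   hmac_sha256(mac_key, 32, out, 16 + pt_len, out + 16 + pt_len);
}

Since the tag covers the IV along with the ciphertext, an attacker can't
quietly substitute a different IV without the MAC check failing.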

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: thoughts on one time pads

2006-01-26 Thread Jack Lloyd
On Thu, Jan 26, 2006 at 05:30:36AM -0600, Travis H. wrote:

[...]
 Excuse me?  This would in fact be a _perfect_ way to distribute key
 material for _other_ cryptosystems, such as PGP, SSH, IPSec, openvpn,
 gaim-encryption etc. etc.  You see, he's right in that the key
 distribution problem is the hardest problem for most computer
 cryptosystems.  So the OTP system I described here is the perfect
 complement for those systems; it gives them a huge tug on their
 bootstraps, gets them running on their own power.
[...]
 So my questions to you are:
 
 1) Do you agree with my assessment?  If so, why has every crypto
 expert I've seen poo-pooed the idea?

Your use case above suggests that you are still willing to trust conventional
ciphers to be secure, so, practically speaking, what is the difference between:

Key #1: 128 bits of one time pad
Key #2: AES_{masterkey}(counter++)

I'm not an expert, but the reason I'd call it a "bad idea" (versus just "not
worth the effort", which is all the AES/OTP comparison is suggesting) is it
introduces a need for synchronization, and that can be a hard thing to do
between arbitrary parties on a network.
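
To make Key #2 concrete, it's nothing more than the following (sketch only;
aes_encrypt_block is an assumed single-block primitive):

#include <stdint.h>
#include <string.h>

/* Assumed primitive: one AES-128 block encryption */
void aes_encrypt_block(const unsigned char key[16],
                       const unsigned char in[16], unsigned char out[16]);

/* Key #N is just the encryption of a counter under the shared master key */
void derive_key(const unsigned char master_key[16], uint64_t counter,
                unsigned char out_key[16])
{
   unsigned char block[16] = { 0 };
   memcpy(block, &counter, sizeof(counter));   /* counter in the low 8 bytes */
   aes_encrypt_block(master_key, block, out_key);
}

Both sides still have to agree on the counter, but a counter mix-up here just
means a repeated key for a strong cipher, which is arguably a much less severe
failure than reusing pad bits.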

 2) Assuming my use case, what kind of attacks should I worry about? 
 For example, he might leave the CD sitting around somewhere before
 putting it in his computer.  If it sits around on CD, physical access
 to it would compromise past and future communications.  If he copies
 it to flash or magnetic media, then destroys the CD, we can
 incrementally destroy the pad as it is used, but we have to worry
 about data remanence.

I don't think attacks are the problem, so much as susceptibility to errors. To
even get started, you need a CD of truly random bits, which is fairly
non-trivial to do on many platforms (and it's difficult to test whether your bits
are actually random or just look that way). More importantly, the key
management issues seem annoying and highly prone to catastrophic failure. For
example, I send you a message using the first N bits of the pad, my machine
crashes, I restore from backup (or a filesystem checkpoint), and then my index
into the pad is reset back to the start. Then I resend a second message using
the same pad bits. Problem.

I think your characterization of the possible attacks is pretty fair. But
compare the OTP failure mode ("access to it would compromise past and future
communications") to the failure mode of, say, an RSA-authenticated DH key
exchange, which provides PFS and requires an active attack in order to attack
communications even after the key is compromised. Is OTP so much more secure
than a simple PK-based key exchange that it is worth even this single tradeoff
(not to mention the initial key exchange hassles and the need to store
megabytes of pad with anyone I might want to talk to)?

[...]
 4) For authentication, it is simple to get excellent results from an
 OTP.  You simply send n bytes of the OTP, which an attacker has a
 2^-8n chance in guessing.

That sounds prone to a man in the middle attack; what is to stop someone from
taking your authentication packet with the N bits of unguessable pad, cause
your connection to drop and then authenticating as you using the pad you sent
earlier?

You could probably do a challenge-response authentication based on pad bits
pretty easily, however, though doing it in a way that doesn't require a secure
hash might be a little trickier.

 How do we ensure message integrity?  Is it
 enough to include a checksum that is encrypted with the pad?  Does it
 depend on our method of encipherment?  Assuming the encipherment is
 XOR, is a CRC sufficient, or can one flip bits in the message and CRC
 field so as to cancel each other?

There are some attacks against WEP along those lines (they used RC4 to encrypt
the checksum, instead of a one time pad, but it would end up about the same, I
would think). Using HMAC keyed with pad bits seems a lot more sane to me...

 6) How should one detect and recover from lost, reordered, or partial 
 messages?

I think the fact that this question needs to be asked at all points to one of
the flaws of OTP from a practical standpoint.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenSSL BIGNUM vs. GMP

2006-01-03 Thread Jack Lloyd

Some relevant and recent data: in some tests I ran this weekend (GMP 4.1.2,
OpenSSL 0.9.8a, Athlon/gcc/Linux) RSA operations using GMP were somewhat faster
than ones using OpenSSL even when blinding was used with both (typical
performance boost was 15-20%).

I assume "both of which are needed" should have been "at least one of which
is needed"? AFAIK blinding alone can protect against all (publicly known)
timing attacks; am I wrong about this?
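
For reference, the blinding trick itself is only a few lines; a sketch using
GMP (random state setup and error handling omitted, and this is neither
OpenSSL's nor GMP's own code):

#include <gmp.h>

/* Compute m = c^d mod n with base blinding: the private exponent is never
   applied directly to an attacker-chosen value. */
void rsa_private_blinded(mpz_t m, const mpz_t c, const mpz_t d,
                         const mpz_t e, const mpz_t n, gmp_randstate_t rng)
{
   mpz_t r, r_inv, blinded;
   mpz_init(r);
   mpz_init(r_inv);
   mpz_init(blinded);

   do {
      mpz_urandomm(r, rng, n);                 /* random r in [0, n) */
   } while(mpz_invert(r_inv, r, n) == 0);      /* retry until gcd(r, n) == 1 */

   mpz_powm(blinded, r, e, n);                 /* r^e mod n */
   mpz_mul(blinded, blinded, c);
   mpz_mod(blinded, blinded, n);               /* c * r^e mod n */

   mpz_powm(m, blinded, d, n);                 /* = m * r mod n */
   mpz_mul(m, m, r_inv);
   mpz_mod(m, m, n);                           /* strip off the blinding */

   mpz_clear(r);
   mpz_clear(r_inv);
   mpz_clear(blinded);
}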

-Jack

On Sat, Dec 31, 2005 at 11:04:31AM +, Ben Laurie wrote:
 It appears that one reason GMP may sometimes be faster than OpenSSL for
 RSA is that it seems that GMP does not do blinding or constant time
 arithmetic, both of which are needed to defend against known attacks.
 
 So, if you are going to use GMP for speed, be aware that you may be
 risking your private keys.
 
 Cheers,
 
 Ben.
 
 -- 
 http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
 
 There is no limit to what a man can do or how far he can go if he
 doesn't mind who gets the credit. - Robert Woodruff
 
 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: OpenSSL BIGNUM vs. GMP

2006-01-03 Thread Jack Lloyd
On Tue, Jan 03, 2006 at 10:10:50PM +, Ben Laurie wrote:

 Yes, you are - there's the cache attack, which requires the attacker to
 have an account on the same machine. I guess I shouldn't have called it
 constant time, since its really constant memory access that defends
 against this.
 
 http://www.daemonology.net/papers/htt.pdf

Thanks for the link! Does OpenSSL defend against this attack? I suspect writing
fast modexp code that will (in the paper's words) "avoid any data-dependent or
key-dependent memory access or code path patterns" would be quite non-trivial,
so I would be interested to see how that is implemented. I did a quick scan
through 0.9.8a's crypto/bn but didn't see anything that looked likely, but
OpenSSL is big, so I could easily be missing it. :)

 Incidentally, I think the main component of the difference on Athlon,
 like many other platforms, is simply a question of which library has
 hand-optimised assembler for the platform. That is, it tells us little
 about architectural differences and plenty about whether anyone has been
 bothered to optimise for that particular platform recently.

That's a good point - probably compiling both OpenSSL and GNU MP with no
assembly code would be a more interesting comparison, from an algorithm
standpoint.

However, I think you are confusing two different goals here. I'm guessing that
you want an analysis of GNU MP vs OpenSSL so you can figure out how to make
OpenSSL faster. For that, an algorithm-based analysis is obviously much more
useful. I mostly care about which is faster, now, on the platforms I care
about, and it doesn't really matter why one is faster. For example, if GNU MP
has a faster modexp implementation than OpenSSL on an Athlon because the GNU MP
developers sold their souls to the devil [*] in exchange for some really good
hand-tuned Athlon asm, that's cool with me -- as long as it's fast.

Jack

[*] I make no claims that the GNU MP developers have actually been involved in
any unholy bargains involving souls for code.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: crypto for the average programmer

2005-12-27 Thread Jack Lloyd
On Tue, Dec 27, 2005 at 02:28:07PM +, Ben Laurie wrote:

 Apparently this rather depends on platform and compiler options. I am
 reliably informed that GMP is not always faster.
 
 For those that really care it'd be cool if someone did a careful
 comparison. It would also be interesting to know why they differ.

Thank you for the correction. My statement was primarily on the basis of some
benchmarks I ran at the time I wrote some backend code in Botan to dump crypto
operations to GNU MP and/or OpenSSL when available, and at the time GNU MP
outperformed OpenSSL by a fairly large margin on x86 and Alpha machines (up to
50% on large RSA private key operations; as the keys got smaller the
performance difference reduced, down to basically nothing at 512 bit
keys). However I have since checked my changelogs and realized I must have run
those tests almost two years ago now (which surprised me a bit!), so I'm sure
those results are not particularly reflective of current performance. I'll have
to revisit this and see how things stack up these days on the platforms I care
about.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Standard ways of PKCS #8 encryption without PKCS #5?

2005-12-23 Thread Jack Lloyd

Does anyone know of any 'standard' [*] ways of encrypting private keys in the
usual PKCS #8 format without using password-based encryption? It is obviously
not hard to do, as you can stick whatever you like into the encryptionAlgorithm
field, so it would be easy to specify a plain encryption algorithm OID
(aes256-cbc, or whatever) plus an IV (and possibly a key check value and/or
some optional key label fields). I'm sure this is not the first time someone
has needed such a thing - any references would be useful.

[*]: Standard in this case being at least one implementation/spec has it, and
(preferably) it is reasonably secure/sane

Thanks,
   Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: crypto for the average programmer

2005-12-17 Thread Jack Lloyd
On Fri, Dec 16, 2005 at 05:41:48PM +, Ben Laurie wrote:

 No, OpenSSL is self-contained. There is, IIRC, an engine that uses GMP
 if you want, but its entirely optional; OpenSSL has its own bignum
 implementation that's just as good.

Last I checked, public key operations in OpenSSL were significantly faster
using the GNU MP engine - so "just as good" is perhaps not entirely
accurate. OpenSSL's BN library is still very fast compared to many other MPI
implementations, of course.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Looking for fast KASUMI implementation

2005-12-16 Thread Jack Lloyd

Define "fast" - KASUMI is based heavily on MISTY1. In fact, during a fast scan of
the KASUMI spec, I couldn't see anywhere obvious where it differed from MISTY1
at all. As far as I know, I'm the only person who has even tried writing fast
code for MISTY1, and the result is quite dog-slow compared to most other common
ciphers (to pull some numbers out of the air: around 4.3 MB/sec on an 800 MHz
Athlon, compared with 9.4 MB/sec from AES-128 and 15 MB/sec from 16-round
RC5). Obviously you can do better on a faster processor (and I'm sure there are
some cycles yet to be squeezed out of my MISTY1 code - there are many who can
hand-optimize better than I), but I don't think MISTY1 (or KASUMI) will ever be
very fast in software.

Would a FPGA work instead? That seems like your best bet to me.

-Jack

On Thu, Dec 15, 2005 at 08:24:23AM -0500, james hughes wrote:
 Hello list:
 
 I have research project that is looking for a fast -software-  
 implementation of the KASUMI block cipher.  I have found many papers  
 on doing this in hardware, but nothing in software. While free is  
 better (as is beer), I will consider a purchase.
 
 FYI, KASUMI is the cryptographic engine of the 3GPP.
   http://en.wikipedia.org/wiki/3gpp
 
 Thanks.
 jim
 
 
 -
 The Cryptography Mailing List
 Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: another feature RNGs could provide

2005-12-12 Thread Jack Lloyd
On Mon, Dec 12, 2005 at 12:20:26AM -0600, Travis H. wrote:
 2) While CTR mode with a random key is sufficient for creating a
 permutation of N-bit blocks for a fixed N, is there a general-purpose
 way to create a N-bit permutation, where N is a variable?  How about
 picking a cryptographically strong permutation on N elements, where N
 is not necessarily a power of 2?

You can use the Bear or Lion constructions to form 2^{arbitrary}-bit block
ciphers quite easily.
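
Roughly, Lion splits the block into L || R with |L| equal to the hash output
size and does three unbalanced rounds. The sketch below is from memory (hash
and stream_xor are assumed stand-ins for, say, SHA-1 and a keyed stream
cipher), so check Anderson and Biham's BEAR/LION paper for the exact details:

#include <stddef.h>

#define HASH_LEN 20   /* e.g. SHA-1 sized left half */

/* Assumed primitives:
   hash(): unkeyed hash of in[0..in_len) into out[0..HASH_LEN)
   stream_xor(): XOR a keystream keyed by key[0..HASH_LEN) into buf[0..len) */
void hash(const unsigned char* in, size_t in_len, unsigned char out[HASH_LEN]);
void stream_xor(const unsigned char key[HASH_LEN], unsigned char* buf, size_t len);

static void xor_bytes(unsigned char* out, const unsigned char* a,
                      const unsigned char* b, size_t len)
{
   for(size_t i = 0; i != len; i++)
      out[i] = a[i] ^ b[i];
}

/* Lion encryption: block = L || R, |L| = HASH_LEN, |R| = block_len - HASH_LEN */
void lion_encrypt(const unsigned char k1[HASH_LEN], const unsigned char k2[HASH_LEN],
                  unsigned char* block, size_t block_len)
{
   unsigned char* L = block;
   unsigned char* R = block + HASH_LEN;
   size_t r_len = block_len - HASH_LEN;
   unsigned char tmp[HASH_LEN];

   xor_bytes(tmp, L, k1, HASH_LEN);
   stream_xor(tmp, R, r_len);        /* R ^= S(L ^ K1) */

   hash(R, r_len, tmp);
   xor_bytes(L, L, tmp, HASH_LEN);   /* L ^= H(R) */

   xor_bytes(tmp, L, k2, HASH_LEN);
   stream_xor(tmp, R, r_len);        /* R ^= S(L ^ K2) */
}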

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Encryption using password-derived keys

2005-11-30 Thread Jack Lloyd

The basic scenario I'm looking at is encrypting some data using a
password-derived key (using PBKDF2 with sane salt sizes and iteration
counts). I am not sure if what I'm doing is sound practice or just pointless
overengineering and wanted to get a sanity check.

My inclination is to use the PBKDF2 output as a key encryption key, rather than
using it to directly key the cipher (with the key used for the cipher itself
being created by a good PRNG). For some reason the idea of using it directly
makes me nervous, but not in a way I can articulate, leading me to suspect I'm
worried over nothing.

So, assuming using it as a KEK makes sense: At first I thought to use XOR to
combine the two keys, but realized that could lead to related key attacks (by
just flipping bits in the field containing the encrypted key). That is probably
not a problem with good algorithms, but, then again, why take the chance; so I
was thinking of instead using NIST's AES key wrap (or perhaps a less weirdly
designed variant of it that uses HMAC for integrity checking and AES in CBC
mode for confidentiality).
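
Concretely, the variant I have in mind would look something like this (sketch
only; pbkdf2, aes_cbc_encrypt, and hmac_sha256 are assumed primitives, and the
output layout is made up for illustration):

#include <stddef.h>
#include <string.h>

/* Assumed primitives: */
void pbkdf2(const char* pass, const unsigned char* salt, size_t salt_len,
            unsigned int iterations, unsigned char* out, size_t out_len);
void aes_cbc_encrypt(const unsigned char* key, const unsigned char iv[16],
                     const unsigned char* in, size_t in_len, unsigned char* out);
void hmac_sha256(const unsigned char* key, size_t key_len,
                 const unsigned char* in, size_t in_len, unsigned char tag[32]);

/* Wrap a randomly generated 32-byte data key under a password-derived KEK.
   Output layout (made up for this sketch): iv || wrapped_key || tag */
void wrap_data_key(const char* pass, const unsigned char* salt, size_t salt_len,
                   const unsigned char data_key[32], const unsigned char iv[16],
                   unsigned char out[16 + 32 + 32])
{
   unsigned char kek[64];   /* first half encrypts, second half MACs */
   pbkdf2(pass, salt, salt_len, 10000, kek, sizeof(kek));

   memcpy(out, iv, 16);
   aes_cbc_encrypt(kek, iv, data_key, 32, out + 16);
   hmac_sha256(kek + 32, 32, out, 16 + 32, out + 16 + 32);
}

The flipped-bit/related-key worry with plain XOR goes away here, since the HMAC
covers the wrapped key (and the IV).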

Am I thinking about this far harder than I should?

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: the effects of a spy

2005-11-18 Thread Jack Lloyd
On Thu, Nov 17, 2005 at 12:10:53PM -0500, John Kelsey wrote:

 c.  Maybe they just got it wrong.  SHA0 and SHA1 demonstrate that this
 is all too possible.  (It's quite plausible to me that they have very
 good tools for analyzing block ciphers, but that they aren't or
 weren't sure how to best apply them to hash functions.)  

SHA-* also look very much like the already existing and public MD4 and MD5... I
would be very willing to bet that the NSA's classified hash functions (I assume
it has some, though to be honest I have only ever seen information about block
ciphers) look nothing like SHA. Perhaps their analysis tools apply well to the
ones that they build internally, but did not to an MDx-style hash, and they did
not want to release a design based on some clever design technique of theirs
that the public didn't know about; when SHA was released, Clipper and the
export controls were still in full swing, so it seems pretty plausible that the
NSA wanted to limit how many goodies it gave away.

</speculation>

-Jack


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Pseudorandom Number Generator in Ansi X9.17

2005-11-10 Thread Jack Lloyd
On Thu, Nov 10, 2005 at 10:33:18AM +, Terence Joseph wrote:
 Hi,
 
 The Pseudorandom Number Generator specified in Ansi X9.17 used to be one of 
 the best PRNGs available if I am correct.  I was just wondering if this is 
 still considered to be the case?  Is it widely used in practical situations 
 or is there some better implementation available?  What would be the 
 advantages/disadvantages of modifying the Ansi X9.17 PRNG to use AES 
 instead of 3DES? Is this feasible at all?

Aside from the relatively small internal state, and the state compromise
extension problems noted by Schneier, Wagner, et al, X9.17/X9.31 are AFAIK good
PRNGs. It is very trivial to use AES instead of 3DES (just swap out the
algorithms, and change the size of the various internal values to match the
128-bit block size), and you get a larger keyspace, larger internal state, and
faster operation, so I'd say doing so is a complete win.
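
For concreteness, the generation step with AES substituted in is just the
following (sketch; aes_encrypt_block is an assumed single-block primitive):

#include <stddef.h>

/* Assumed primitive: one AES block encryption under key K */
void aes_encrypt_block(const unsigned char key[16],
                       const unsigned char in[16], unsigned char out[16]);

static void xor16(unsigned char out[16], const unsigned char a[16],
                  const unsigned char b[16])
{
   for(int i = 0; i != 16; i++)
      out[i] = a[i] ^ b[i];
}

/* One X9.31-style generation step: given the key K, a date/time vector DT,
   and the seed V (updated in place), produce a 16-byte output block R. */
void x931_step(const unsigned char K[16], const unsigned char DT[16],
               unsigned char V[16], unsigned char R[16])
{
   unsigned char I[16], T[16];

   aes_encrypt_block(K, DT, I);     /* I = E_K(DT) */
   xor16(T, I, V);
   aes_encrypt_block(K, T, R);      /* R = E_K(I xor V)   <- output */
   xor16(T, R, I);
   aes_encrypt_block(K, T, V);      /* V = E_K(R xor I)   <- new seed */
}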

Technically, X9.17 has been withdrawn by ANSI, but X9.31 contains the exact
same PRNG in Appendix A.2.4. ANSI still requires 2-key 3DES, but NIST allows
the use of 3-key 3DES or of AES with any keylength instead.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: [EMAIL PROTECTED]: Skype security evaluation]

2005-10-26 Thread Jack Lloyd
On Wed, Oct 26, 2005 at 07:47:22AM -0700, Dirk-Willem van Gulik wrote:

 On Mon, 24 Oct 2005, cyphrpunk wrote:
 
  Is it possible that Skype doesn't use RSA encryption? Or if they do,
  do they do it without using any padding, and is that safe?
 
 You may want to read the report itself:
 
   http://www.skype.com/security/files/2005-031%20security%20evaluation.pdf
 
 and perhaps section 3.2.3 (about padding) and 3.2.2 (about how RSA is
 used) may help with this (and what it is used for in section 2).

I just reread those sections and I still don't see anything about RSA
encryption padding either. 3.2.2 just has some useless factoids about the RSA
implementation (but neglects to mention important implementation points, like
if blinding is used, or if signatures are verified before being
released). 3.2.3 describes the signature padding, but makes no mention of the
encryption padding, or even that a padding method is used for encryption.

Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: EDP (entropy distribution protocol), userland PRNG design

2005-10-19 Thread Jack Lloyd
On Tue, Oct 18, 2005 at 12:18:57AM -0500, Travis H. wrote:
 
   source(s) -- mixer -- pool -- extractor -- X9.31
 
 Where can I find out more about the design choices for these stages?

Peter Gutmann has several good papers on RNG design, as have some folks
currently or formerly associated with Counterpane (ie Wagner, Kelsey, Hall,
...). It is worth reading their analysis papers as well as their design papers,
especially the ones that cover fielded PRNG designs.

  I believe most common hardware RNGs produce data at fairly high rates, often
  over 100 kbytes per second.
 
 Some do, some don't.  Depends on the random source they are tapping.

Of course. I was just pointing out that there are many that do produce at that
rate. Since you weren't specific, I assumed it was a COTS hardware RNG.

 However, that's a sealed opaque package, so I don't fully trust it. 
 I've been wondering if there's a way I could use it such that I didn't
 have to fully trust it.  For example, if I could combine several, so
 that an effective attack would require collusion of several parties.

That does not seem very difficult: just sample all of them. As long as your
PRNG is good, an attacker won't be able to do anything by only controlling a
subset of them.

 
  Instead of treating the two entropy sources as somehow different in your 
  mixing
  strategy, just use the HWRNG for most of the inputs, but every tenth sample 
  (or
  whatever), instead use the hash of all the random-looking system data you 
  can
  get ahold of. Only doing it occasionally means there is a reasonable chance
  that sufficient changes have happend to the system since the sample 
  worthwhile
  in terms of entropy gained, and doing a large block of it all at once 
  prevents
  iterative guessing attacks if an attacker can control your HWRNG outputs but
  not your system statistics.
 
 That seems like a very ad-hoc system that treats the HWRNG and
 random-looking system data as somehow different (one is used for 90%
 of the samples, one for 10%).

Sorry, I should have been a little more clear. That 1/10 split was only
supposed to be an example. You would presumably determine the appropriate
sample rate based on an analysis of the system statistics. It is just as you
mentioned, oversampling won't help you generate random bits any faster; you
will get more bits but no more randomness. You should sample each possible
source so as to reach some maximum; presumably you either want to maximize
total entropy over time, or total entropy over bits sampled, or maybe total
entropy over # of samples. In any case, it's highly likely that alternating
between reading 160 bits from a good HWRNG and the SHA1 hash of the output of
`ps` is not going to produce any such maximum.

The point of my suggestion was not that you implement those specific steps, but
that you design your PRNG so that it can make the best use of an arbitrary
number of entropy sources with various (known or estimated) distributions. This
ties in well with the previous point about using multiple HWRNGs.

 
  Encrypting the output using keys generated by the PRNG is a good idea, but 
  you
  presented it in a somewhat confusing way, in that it sounded almost like you
  were doing message transfer. [...]
  At not point do the two sides actually exchange messages,
 
 I don't follow.  I'm transmitting entropy from the source to where it
 is needed; surely this is a message of some kind?

But that is all you are moving - entropy. As best as I could tell from your
original proposal, the two sides never actually shared a key. So while one side
was encrypting stuff and the other side was decrypting, at no point were they
actually exchanging information.

[...]

 
  If
  you want to try to keep the entropy values sent from the box with the HWRNG 
  to
  the client a secret from people on the network, just open up a TLS session.
 
 TLS is SSL, right?

Yes.

 
 Transmitting over SSL would limit the strength to the minimum of the
 strength of the asymmetric and symmetric ciphers.  Using my method
 alone would not involve PK, so would be faster, need less entropy to
 start with, and also the upper bound on strength is the same or
 higher.  What I'm saying is that a chain is only as strong as its
 weakest link, and my protocol has one less link.

However, I don't see how you are protecting the confidentiality of the data at
all in your current design. I was not suggesting TLS as an alternative, but as
a method of achieving what you (apparently?) meant to do. However, if you could
perhaps clarify what you meant with regards to encrypting the data that is
transferred, that might help - I may well have simply misread you. In
particular, how do the two sides agree on an initial key?  I'm afraid I don't
see how it is possible for two parties to use the same key starting out with no
shared knowledge and without any public key operations.

  at little or no extra cost. You can buy a PCI board with a low-end Hifn 
  crypto
 

Re: EDP (entropy distribution protocol), userland PRNG design

2005-10-12 Thread Jack Lloyd
On Wed, Oct 12, 2005 at 04:49:43AM -0500, Travis H. wrote:
 I am thinking of making a userland entropy distribution system, so
 that expensive HWRNGs may be shared securely amongst several machines.
[...]
 Comments?
 --
 http://www.lightconsulting.com/~travis/  --
 We already have enough fast, insecure systems. -- Schneier & Ferguson

I can't say I'm a fan of the idea of having multiple ways of mixing entropy into
the system. In particular, the idea of producing output by XORing your PRNGs
output with the output of a semi-public RNG seems like a bad idea to me,
because an attacker can easily control those values by taking over the web
server or modifying packets in the network, and if they can somehow predict
your PRNG outputs then they will be able to actually control the final output.
The difference between knowing and controlling the PRNG output is a big deal
when you're using it for something like DSA.

I prefer a multi-stage design, as described by various people smarter than I
am:

  source(s) -- mixer -- pool -- extractor -- X9.31

Take the sources, mix it into an entropy pool and then use an extraction
function to derive values from the pool. Then use the values of that to seed a
X9.31 PRNG and produce the final output with that (with the DT values also
being generated by the extractor function). That helps make sure that even if
you make a mistake with the extractor and/or mixer function you still have some
level of protection. For example, even if an attacker can correctly guess every
16th bit of your extractor function, it will still be very difficult for them
to guess the final PRNG outputs. I've found that it is much easier to think
about the two functions as distinct, so you can reason about what specific
properties you want or need the mixer and extractor to have, and it also makes
it simpler to replace one or the other to make different security/speed
tradeoffs.
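
Structurally it is not much code; a skeleton just to show the staging (the
mixer/extractor/X9.31 internals are whatever concrete functions you settle on):

#include <stddef.h>

#define POOL_SIZE 512

/* Each stage is a separate, independently swappable function: */
void mixer_add(unsigned char pool[POOL_SIZE],
               const unsigned char* sample, size_t len);      /* source -> pool */
void extract(const unsigned char pool[POOL_SIZE],
             unsigned char key[16], unsigned char dt[16],
             unsigned char v[16]);                             /* pool -> seed material */
void x931_generate(const unsigned char key[16], unsigned char dt[16],
                   unsigned char v[16],
                   unsigned char* out, size_t out_len);        /* seed -> output */

void prng_add_entropy(unsigned char pool[POOL_SIZE],
                      const unsigned char* sample, size_t len)
{
   mixer_add(pool, sample, len);
}

void prng_output(unsigned char pool[POOL_SIZE], unsigned char* out, size_t out_len)
{
   unsigned char key[16], dt[16], v[16];
   extract(pool, key, dt, v);                /* rekey the X9.31 stage from the pool */
   x931_generate(key, dt, v, out, out_len);
}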

I believe most common hardware RNGs produce data at fairly high rates, often
over 100 kbytes per second. If you have one of those you'll be able to get much
more entropy from that than you will out of regular system sources, especially
as the entropy of those samples usually decreases pretty strongly after the
first sample or two, while the HWRNG keeps producing entropy at the same rate.
Instead of treating the two entropy sources as somehow different in your mixing
strategy, just use the HWRNG for most of the inputs, but every tenth sample (or
whatever), instead use the hash of all the random-looking system data you can
get ahold of. Only doing it occasionally means there is a reasonable chance
that sufficient changes have happened to the system since the last sample to
make it worthwhile in terms of entropy gained, and doing a large block of it
all at once prevents
iterative guessing attacks if an attacker can control your HWRNG outputs but
not your system statistics.

Encrypting the output using keys generated by the PRNG is a good idea, but you
presented it in a somewhat confusing way, in that it sounded almost like you
were doing message transfer. Then I realized you're not actually doing that at
all, just a postprocessing (or preprocessing, in the case of the recipient)
operation using a randomly keyed block cipher (which the X9.31 RNG would also
provide nicely). At no point do the two sides actually exchange messages, so
in this situation, your mention of key distribution is somewhat misleading. If
you want to try to keep the entropy values sent from the box with the HWRNG to
the client a secret from people on the network, just open up a TLS session. TLS
is cheap if you use session resumption, and with self-signed certificates or
anonymous DH there is no key distribution. It makes bootstrapping a little more
difficult, but presumably the client can get at least some entropy via the
traditional means currently available on common platforms.

You could also just solve the problem you mention directly, and try to find a
cheaper HWRNG design. I know people who have built them for a few dollars worth
of stuff at Radio Shack, and apparently VIA, Intel, and AMD have all found them
cheap enough at various times to ship them included in components they've built
at little or no extra cost. You can buy a PCI board with a low-end Hifn crypto
chip on it for less than $80 online.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


SHA-1 results available

2005-02-22 Thread Jack Lloyd

http://theory.csail.mit.edu/~yiqun/shanote.pdf

No real details, just collisions for 80 round SHA-0 (which I just confirmed)
and 58 round SHA-1 (which I haven't bothered with), plus the now famous work
factor estimate of 2^69 for full SHA-1.

As usual, "Technical details will be provided in a forthcoming paper." I'm not
holding my breath.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Passwords can sit on disk for years

2004-06-14 Thread Jack Lloyd
On Mon, Jun 14, 2004 at 11:31:23AM +, [EMAIL PROTECTED] wrote:
 Ben Laurie wrote:
 
  In OpenSSL we overwrite with random gunk for this reason.
 
 What?  No compiler is smart enough to say, The program
 sets these variables but they are never referenced again.
 I'll save time and not set them.

Sure there are. In fact there was a discussion (either here or cypherpunks)
maybe a year or two ago about how Visual C++ has exactly that problem with
memset. Consider the following:

#include <string.h>   /* for memset */

void foo()
{
   char buffer[1024];

   /* do something */

   /* dead store: buffer is never read again, so the compiler may
      legally remove this memset */
   memset(buffer, 0, 1024);
   return;
}

As far as the compiler can tell, that memset has no effect, because as soon as
it returns from the function the stack will go away, so whatever value it may
or may not have doesn't matter (basically - there is no way for you to tell if
that memset executed or not). Since it has no effect, why bother executing it?
It's just a waste of time.

That's because languages are defined in terms of side-effects that are visible
to the program - not in terms of side effects visible to some external
entity. The fact that someone messing around with swap can tell if that memset
executed or not is not something C cares about.
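
The usual workaround (a sketch, not a claim about what OpenSSL does internally)
is to make the writes happen through a volatile pointer, which the compiler is
not allowed to assume are unobservable:

#include <stddef.h>

/* The volatile qualifier forces the compiler to actually perform each store,
   since it must assume the writes could be observed externally. */
static void secure_bzero(void* p, size_t n)
{
   volatile unsigned char* vp = (volatile unsigned char*)p;
   while(n--)
      *vp++ = 0;
}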

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


SSL accel cards

2004-05-25 Thread Jack Lloyd

Does anyone know of an SSL acceleration card that actually works under
Linux/*BSD? I've been looking at vendor web pages (AEP, Rainbow, etc), and
while they all claim to support Linux, Googling around all I find are people
saying "Where can I get drivers? The ones the vendor shipped only work on RedHat
5.2 with a 2.0.36 kernel." (or some similar 4-6 year old system), and certainly
they don't (gasp) make updated versions available for download. Because someone
might... what, steal the driver? Anyway...

What I'm specifically looking for is a PCI card that can do fast modexp, and
that I can program against on a Linux/*BSD box. Onboard DES/AES/SHA-1/whatever
would be fun to play with but not extremely important.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


DH with shared secret

2003-10-03 Thread Jack Lloyd
This was just something that popped into my head a while back, and I was
wondering if this works like I think it does. And who came up with it
before me, because it's way too obvious. It's just that I've never heard of
something along these lines before.

Basically, you share some secret with someone else (call it S).  Then you
do a standard issue DH exchange, but instead of the shared key being
g^(xy), it's g^(xyS)

My impression is that, unless you know S, you can't do a successful MITM
attack on the exchange. Additionally, AFAICT, it provides PFS, since if
someone later recovers S, there's still that nasty DH exchange to deal
with. Of course, after S is known, MITM becomes possible.
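
In GMP terms, the only change on each side versus plain DH is one extra
exponentiation (sketch; parameter/group validation omitted):

#include <gmp.h>

/* One side's computation: x is our ephemeral secret, peer_pub = g^y mod p from
   the other side, S is the pre-shared secret. Both sides end up with g^(xyS). */
void dh_shared_with_secret(mpz_t key, const mpz_t peer_pub, const mpz_t x,
                           const mpz_t S, const mpz_t p)
{
   mpz_powm(key, peer_pub, x, p);   /* g^(xy) mod p, as in plain DH */
   mpz_powm(key, key, S, p);        /* then raise to S: g^(xyS) mod p */
}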

Given the recent climate around here, I'll add that I'm not planning on
using this for anything (I only use TLS, I swear! :P), I just thought it
was a semi-interesting idea.

-Jack

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]