interesting HMAC attack results

2006-09-23 Thread Perry E. Metzger

  http://eprint.iacr.org/2006/319

Cryptology ePrint Archive: Report 2006/319

Forgery and Partial Key-Recovery Attacks on HMAC and NMAC Using Hash Collisions

Scott Contini and Yiqun Lisa Yin

  Abstract. In this paper, we analyze the security of HMAC and NMAC,
  both of which are hash-based message authentication codes. We present
  distinguishing, forgery, and partial key recovery attacks on HMAC and
  NMAC using collisions of MD4, MD5, SHA-0, and reduced SHA-1. Our
  results demonstrate that the strength of a cryptographic scheme can be
  greatly weakened by the insecurity of the underlying hash function.
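[For readers who want the construction in front of them: HMAC builds a MAC from a plain hash via two nested hash calls over XOR-padded keys. A minimal sketch in Python — MD5 chosen only because it is one of the weak hashes the paper exploits; the helper name is mine:]

```python
import hashlib
import hmac

BLOCK = 64  # MD5 block size in bytes

def hmac_md5(key: bytes, msg: bytes) -> bytes:
    """HMAC(K, m) = H((K xor opad) || H((K xor ipad) || m))."""
    if len(key) > BLOCK:             # long keys are hashed first
        key = hashlib.md5(key).digest()
    key = key.ljust(BLOCK, b"\x00")  # then zero-padded to one block
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5c for b in key)
    inner = hashlib.md5(ipad + msg).digest()
    return hashlib.md5(opad + inner).digest()

# Cross-check against the standard library implementation
assert hmac_md5(b"key", b"msg") == hmac.new(b"key", b"msg", hashlib.md5).digest()
```

[The lever the paper's attacks use is the inner call: two messages m, m' that collide under H((K xor ipad) || ...) yield identical MACs.]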

[I heard about this paper from ekr's blog.]
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Exponent 3 damage spreads...

2006-09-23 Thread Peter Gutmann
Simon Josefsson [EMAIL PROTECTED] writes:

>Not using e=3 when generating a key seems like an easy sell.

Almost no-one does this anyway, but I don't think that's much help.

>A harder sell might be whether widely deployed implementations such as TLS
>should start to reject signatures done with an e=3 RSA key.

What do people think, is there sufficient grounds for actually _rejecting_
e=3 signatures?

I can't think of any other way to get people to move away from e=3.  The
problem isn't major implementations that use e=F4 and check signatures properly
(at least as of a week or so back :-), it's the hundreds (or thousands?) of
random obscure implementations and deployments that'll never even hear about
this and will never be fixed, and so will remain vulnerable in perpetuity
without even knowing it.  Unless things break obviously, there's no incentive
(or even apparent need) to fix it.
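[For context, the style of forgery under discussion (after Bleichenbacher's rump-session attack) takes only a few lines and no key material at all. A sketch assuming a 3072-bit modulus, MD5 standing in for the hash, and a broken verifier that checks the padding and DigestInfo but ignores everything after them; all names are mine:]

```python
import hashlib

def cube_root_ceil(n: int) -> int:
    """Smallest integer x with x**3 >= n (binary search)."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < n:
            lo = mid + 1
        else:
            hi = mid
    return lo

MODULUS_BITS = 3072  # assume a 3072-bit modulus; its actual value is never needed
msg = b"hello world"
# Fixed DER prefix of an MD5 DigestInfo, followed by the 16-byte digest
digest_info = bytes.fromhex("3020300c06082a864886f70d020505000410") \
              + hashlib.md5(msg).digest()
prefix = b"\x00\x01\xff\x00" + digest_info     # PKCS#1 v1.5 header + DigestInfo
garbage_bits = MODULUS_BITS - 8 * len(prefix)  # the bits a lax verifier ignores
target = int.from_bytes(prefix, "big") << garbage_bits
sig = cube_root_ceil(target)                   # the forged "signature"
em = sig ** 3                                  # sig**3 < n, so sig**3 mod n == sig**3
em_bytes = em.to_bytes(MODULUS_BITS // 8, "big")
assert em_bytes.startswith(prefix)             # padding and DigestInfo check out;
                                               # only the ignored tail differs
```

[The rounding error of the cube root lands entirely in the ignored tail (it is on the order of 2^2050 here, against 2^2768 bits of slack), which is why the attack needs a small exponent and a generous modulus.]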

Peter.



RE: Exponent 3 damage spreads...

2006-09-23 Thread Peter Gutmann
Leichter, Jerry [EMAIL PROTECTED] writes:
| I don't think it's a problem, you just take the ASN.1 DigestInfo
| value, since the trailing garbage isn't part of the DigestInfo, you
| ignore it.  Specifically, the ASN.1 object is entirely self-contained,
| so you can tell exactly where it ends and what it contains.  Anything
| outside it is beyond the scope of this specification :-).

This is a rather peculiar interpretation of the spec.  If I look at a C
specification and it tells me that an integer is a string of digits, when I
write a C compiler, am I permitted to say that [EMAIL PROTECTED]% can be
parsed as an entirely contained integer, with the @#% beyond the scope of the
specification?

No, because the compiler doesn't use a TLV encoding as ASN.1 does.  Take
something like an MPEG, JPEG, MP3, or some other piece of data that uses a TLV
encoding with a clearly defined (by the data) end.  Cat some extra garbage
onto the end of it, and try and view/play it.  If it's a proper TLV encoding
(rather than a TV encoding, i.e. no explicit length info included) then the
decoder/player will ignore the extra garbage at the end*.  That's the whole
point of TLV encodings, you know a priori when to stop, and stop exactly at
that point.
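[To make the know-a-priori-when-to-stop property concrete, here is a toy DER-style TLV reader (my own sketch, not any particular library's API): the declared length says exactly where the value ends, and whatever follows comes back to the caller, who must then decide whether non-empty leftovers are an error.]

```python
def read_tlv(data: bytes):
    """Parse one DER-style TLV from the front of data; return (tag, value, rest)."""
    tag, length = data[0], data[1]
    if length & 0x80:  # long form: low 7 bits give the count of length octets
        n = length & 0x7f
        length = int.from_bytes(data[2:2 + n], "big")
        header = 2 + n
    else:              # short form: the octet is the length itself
        header = 2
    value = data[header:header + length]
    return tag, value, data[header + length:]

# An OCTET STRING "hi" with garbage appended: the decoder stops exactly on time
tag, value, rest = read_tlv(bytes([0x04, 0x02]) + b"hi" + b"GARBAGE")
assert (tag, value, rest) == (0x04, b"hi", b"GARBAGE")
```

[The dispute in this thread is precisely about the last step: whether a signature verifier may treat a non-empty `rest` as out of scope, or must reject it.]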

Peter.

* There are some that might keep reading in the hope that something else crops
  up, I don't know, but the ones I've looked at don't care if there's extra
  junk at the end.  This goes back at least as far as CP/M, where you could
  have extra ^Z's and other junk at the end of files that apps had to avoid
  ploughing into.
  



Re: Exponent 3 damage spreads...

2006-09-23 Thread Simon Josefsson
Leichter, Jerry [EMAIL PROTECTED] writes:

 Granted, one or more implementations got this wrong.  (Has anyone looked
 to see if all the incorrect code all descends from a common root, way
 back when?)

We have at least three independent widely used implementations that
got things wrong: OpenSSL, Mozilla NSS, and GnuTLS.

However, note that this isn't a single problem; we are talking about
at least two related problems.  Some implementations are vulnerable to
only one of them.

The first problem was ignoring data _after_ the ASN.1 blob.
Vulnerable: OpenSSL, NSS?

The second problem was ignoring data _in_ the ASN.1 blob, in
particular, in the parameters field.  Vulnerable: OpenSSL, GnuTLS,
NSS?

A several year old paper by Kaliski discussed using the ASN.1 OID to
store data in.  It has slightly different properties, but the lesson
in this context is that implementations must properly check the ASN.1
OID field too.
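[One defensive style closes both holes, and the OID channel as well, by not parsing the decrypted block at all: re-encode the single DigestInfo you expect and compare byte for byte. A sketch assuming SHA-1 and PKCS#1 v1.5 padding; the helper names are mine:]

```python
import hashlib

# Fixed DER prefix of a SHA-1 DigestInfo (OID 1.3.14.3.2.26, NULL parameters)
SHA1_PREFIX = bytes.fromhex("3021300906052b0e03021a05000414")

def expected_em(msg: bytes, em_len: int) -> bytes:
    """Build the one and only acceptable encoded message block."""
    t = SHA1_PREFIX + hashlib.sha1(msg).digest()
    ps = b"\xff" * (em_len - len(t) - 3)
    return b"\x00\x01" + ps + b"\x00" + t

def verify(em: bytes, msg: bytes) -> bool:
    # Byte-for-byte comparison: no parser, hence no trailing garbage,
    # no stuffed parameters field, no alternate OID encodings
    return em == expected_em(msg, len(em))
```

[Since there is no BER decoder in the verification path, there is nothing for crafted encodings to exploit; the cost is that every acceptable OID/parameters variant must be enumerated explicitly.]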

 Until we know whether this is *one* mistake that was copied from
 implementation to implementation, or the same mistake made by
 multiple developers, it's really premature to draw any conclusions.

I hope that I convinced you that this isn't an open question.

/Simon



Re: Exponent 3 damage spreads...

2006-09-23 Thread Peter Gutmann
Yet another e=3 attack, although this one is a bit special-case.  As Burt
Kaliski points out in his paper on hash function firewalls,
http://www.rsasecurity.com/rsalabs/staff/bios/bkaliski/publications/hash-firewalls/kaliski-hash-firewalls-ct-rsa-2002.pdf,
if you can control the AlgorithmIdentifier (specifically the object identifier
or OID), you can also inject arbitrary bits into the signature.  This works as
follows:

1. Create your forged e=3 signature using extra chosen garbage data.

2. Register an object identifier for the hash algorithm that contains the
   extra data, thus allowing you to retro-create the forged signature using
   legitimate data.

3. Profit!
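[The reason a registered OID can carry chosen bits is just the DER arc encoding: each component is written base-128 with a continuation bit, so sufficiently large components under an arc you control map onto nearly arbitrary byte strings. A toy illustration; function names are mine:]

```python
def encode_oid_component(n: int) -> bytes:
    """DER base-128 encoding of one OID arc component:
    7 bits per octet, high bit set on every octet but the last."""
    out = [n & 0x7f]
    n >>= 7
    while n:
        out.append((n & 0x7f) | 0x80)
        n >>= 7
    return bytes(reversed(out))

def component_for_bytes(payload: bytes) -> int:
    """Pick the arc component whose encoding carries the given 7-bit payloads."""
    n = 0
    for b in payload:
        n = (n << 7) | (b & 0x7f)
    return n

# The familiar 840 from 1.2.840.113549 encodes as 86 48
assert encode_oid_component(840) == bytes.fromhex("8648")

# An attacker choosing a huge component controls the emitted 7-bit payload
wanted = b"\xde\xad\xbe\xef"
enc = encode_oid_component(component_for_bytes(wanted))
assert [b & 0x7f for b in enc] == [b & 0x7f for b in wanted]
```

[The continuation bits cost one bit per octet, so the control isn't total, but for an attack that only needs some chosen bit pattern inside the signed block, it is more than enough.]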

The use of multiple OIDs to identify a single algorithm is relatively common
(see the OID table for dumpasn1; there are something like a dozen overlapping
OIDs for DSA alone), so all you need to do is get one registered and adopted.
Sure, it's a bit of work, but once that's done no amount of checking will
catch it, since it's a perfectly valid, legitimate OID and encoding.

(I know of at least one registered OID that was back-engineered to contain a
 particularly interesting bit pattern, and I've seen it used in several
 implementations, so this isn't that far-fetched an attack.)

Oh yes, and before the ASN.1-bashing starts again, this affects any encoding
scheme, it's not some ASN.1 problem.

Peter.




Re: Exponent 3 damage spreads...

2006-09-23 Thread Peter Gutmann
Leichter, Jerry [EMAIL PROTECTED] writes:

A several year old paper by Kaliski discussed using the ASN.1 OID to store
data in.

Damn, beat me to it :-).

It has slightly different properties, but the lesson in this context is that
implementations must properly check the ASN.1 OID field too.

The problem is that no amount of checking can catch this.  If you register the
OID or otherwise get it into some standard somewhere, then it's kosher as far
as anyone's concerned.  There's no check that can catch it if you're
required (by a standard, by a client, by bilateral agreement, etc) to accept
that OID.

(There's been at least one case where random OIDs have been used in the past.
 Since it's a pain to register them, a large vendor generated them randomly
 beneath an arc registered to them.  Although this is kind of weird and I'm
 sure was never meant to be done this way, there's nothing inherently invalid
 about this).

Peter.



RE: Exponent 3 damage spreads...

2006-09-23 Thread Leichter, Jerry
| | I don't think it's a problem, you just take the ASN.1 DigestInfo
| | value, since the trailing garbage isn't part of the DigestInfo, you
| | ignore it.  Specifically, the ASN.1 object is entirely
| | self-contained, so you can tell exactly where it ends and what it
| | contains.  Anything outside it is beyond the scope of this
| | specification :-).
| 
| This is a rather peculiar interpretation of the spec.  If I look at a
| C specification and it tells me that an integer is a string of
| digits, when I write a C compiler, am I permitted to say that
| [EMAIL PROTECTED]% can be parsed as an entirely contained integer, with the
| @#% beyond the scope of the specification?
| 
| No, because the compiler doesn't use a TLV encoding as ASN.1 does.
| Take something like an MPEG, JPEG, MP3, or some other piece of data
| that uses a TLV encoding with a clearly defined (by the data) end.
| Cat some extra garbage onto the end of it, and try and view/play it.
| If it's a proper TLV encoding (rather than a TV encoding, i.e. no
| explicit length info included) then the decoder/player will ignore the
| extra garbage at the end*.  That's the whole point of TLV encodings,
| you know a priori when to stop, and stop exactly at that point.
I have to disagree.  The spec describes an entire certificate, not one
field inside of it.  The issue isn't the TLV encoding of the DigestInfo
- fine, that subfield ended.  It's the next level in the hierarchy, the
D data field I think it was called.  TLV-encoded or not, it exists as a
called-out entity, since it is what the signature is run over.  And
we are told that D consists of DigestInfo and a digest value.  That's
it; no extra bytes at the end.  If this were at the end of the
certificate, and the code *completely ignored* the random trailing
junk - that is, it did *not* include it in the signature - then you
might have an argument.

| Peter.
| 
| * There are some that might keep reading in the hope that something
|   else crops up, I don't know, but the ones I've looked at don't care
|   if there's extra junk at the end.  This goes back at least as far as
|   CP/M, where you could have extra ^Z's and other junk at the end of
|   files that apps had to avoid ploughing into.
I would expect a file *player* that pretty much linearly scans a
file, that's designed to be user-friendly, that runs in the
standards-free zone of Windows software, and that doesn't deal with
anything particularly critical, to have a greedy algorithm that
just decodes as it goes along.  Compare to Web browsers and their
casual acceptance of all kinds of broken HTML.  In both cases, I
would expect a validator to be much stricter - and I would certainly
expect a security-relevant piece of software, or even something like
a compiler, to be strict.
-- Jerry



Re: Exponent 3 damage spreads...

2006-09-23 Thread Leichter, Jerry
|  Granted, one or more implementations got this wrong.  (Has anyone
|  looked to see if all the incorrect code all descends from a common
|  root, way back when?)
| 
| We have at least three independent widely used implementations that
| got things wrong: OpenSSL, Mozilla NSS, and GnuTLS.
| 
| However, note that this isn't a single problem; we are talking about
| at least two related problems.  Some implementations are vulnerable to
| only one of them.
| 
| The first problem was ignoring data _after_ the ASN.1 blob.
| Vulnerable: OpenSSL, NSS?
| 
| The second problem was ignoring data _in_ the ASN.1 blob, in
| particular, in the parameters field.  Vulnerable: OpenSSL, GnuTLS,
| NSS?
| 
| A several year old paper by Kaliski discussed using the ASN.1 OID to
| store data in.  It has slightly different properties, but the lesson
| in this context is that implementations must properly check the ASN.1
| OID field too.
| 
|  Until we know whether this is *one* mistake that was copied from
|  implementation to implementation, or the same mistake made by
|  multiple developers, it's really premature to draw any conclusions.
| 
| I hope that I convinced you that this isn't an open question.
I agree that there are two issues, and they need to be treated
properly.  The first - including data after the ASN.1 blob in the
signature computation but then ignoring it in determining the
semantics - is, I'll argue, an implementation error.  You list only
OpenSSL as definitely vulnerable, NSS as ?, so it sounds like
only one definitive example.  Even if NSS has the same problem, one
has to look at code provenance.  OSS efforts regularly share code,
and code to pick apart data fields is hardly the kind of thing that
is going to inspire someone to go out and do it better - people just
share it.

The second - embedded parameter fields - is a much deeper issue.
That's a protocol and cryptographic error.  The implementations
appear to be correctly implementing the semantics implied by the
spec - ignore parameters you don't care about.  This is common
practice, and a fine idea in *most* situations, since it allows
for extensions without breaking existing implementations.  As
we've seen, it's a really bad idea for signed fields with small
exponent, since they have an unexpected malleability.  The
advice to avoid small exponents is fine, but in fact I think this
is a special case of another principle:  Don't act as a signing
oracle.  That was known to be a bad idea quite some time ago,
and it's one reason we sign cryptographic hashes, not raw data.
The curiosity of this bit of misdesign is that it hashes the raw
data, but then, on the way to signing it, turns the field back
into (mainly) raw data!
-- Jerry




Re: Exponent 3 damage spreads...

2006-09-23 Thread James A. Donald

--
: :  10.2.3 Data decoding
: : The data D shall be BER-decoded to give an ASN.1
: : value of type DigestInfo, which shall be separated
: : into a message digest MD and a message-digest
: : algorithm identifier. The message-digest algorithm
: : identifier shall determine the selected
: : message-digest algorithm for the next step.

Leichter, Jerry wrote:
 The only reasonable reading of the text quoted above
 is that the D must consist of, and *only* of, an ASN.1
 value of the given type.

That is not what it says.

It says "shall be decoded to give", not "shall be
decoded to give and only give".

Further, similar text appears in lots of places where
the correct behavior, to allow for future extension, is
to allow for more stuff.

A major design consideration in ASN.1 was *to* allow for
more stuff, in order that multiple versions of the
specification can peacefully coexist.

Therefore, in the context of ASN.1, the correct
interpretation of the specification is to allow for
arbitrary expansion - which is a bad spec.

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 uTSBsEVoGqap3qSR80twUH+gg5Q8MDBgQhB2Wyxw
 4AjRA5gK1azQkXrhC7CakjCPKw7vvSVL7qWID8o/o



Re: IGE mode is broken (Re: IGE mode in OpenSSL)

2006-09-23 Thread Travis H.

On 9/9/06, Adam Back [EMAIL PROTECTED] wrote:

>IGE if this description summarized by Travis is correct, appears to be
>a re-invention of Anton Stiglic and my proposed FREE-MAC mode.
>However the FREE-MAC mode (below described as IGE) was broken back in
>Mar 2000 or maybe earlier by Gligor, Donescu and Iorga.  I recommend
>you do not use it.  There are simple attacks which allow you to
>manipulate ciphertext blocks with XOR of a few blocks and get error
>recovery a few blocks later; and of course with free-mac error
>recovery means the MAC is broken, because the last block is
>undisturbed.
>
>http://groups.google.ca/group/sci.crypt/browse_thread/thread/e1b9339bf9fb5060/62ced37bb9713a39?lnk=st


I don't see why integrity+confidentiality has to cost n log n
operations.  I haven't read the whole paper yet (and the proof is at
the end), but I don't see why you can't append a universal hash
(chosen by a second key, or at random and identified in the plaintext
in some suitable way) of the input to the plaintext prior to
encryption, and get integrity for cheap.  Or are universal hashes
considered cryptographic-weight primitives, and thus this constitutes
a second pass over the plaintext?  I must admit I don't know of any
lower bound on universal hash complexity... wikipedia only mentions
f(x) = ax + b mod p, (p prime) which is clearly less heavy than modexp
and other PK algos, and it looks like you could do it incrementally
over the plaintext x, I think... my intuition tells me this is way
faster than a block cipher.
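[For what it's worth, the incremental evaluation suspected above is just Horner's rule: a polynomial hash over a Mersenne prime costs one multiply and one reduction per symbol, which is indeed far cheaper per byte than a block cipher invocation. A sketch, with constants and names of my own choosing; note that on its own this gives no integrity - the hash value must still be encrypted or otherwise keyed-and-hidden to serve as a MAC:]

```python
P = (1 << 61) - 1  # Mersenne prime; reduction mod P is cheap

def poly_hash(key: int, data: bytes) -> int:
    """Horner evaluation of sum(b_i * key^(n-1-i)) mod P: one pass, O(n)."""
    h = 0
    for b in data:
        h = (h * key + b) % P
    return h

# Incremental: continuing from a prefix's state gives the same answer
# as hashing the whole message in one pass
key = 123456789
h = poly_hash(key, b"hello ")
for b in b"world":
    h = (h * key + b) % P
assert h == poly_hash(key, b"hello world")
```

[Two distinct n-byte messages collide for at most n-1 random keys (a degree-(n-1) polynomial has at most n-1 roots mod P), so the collision probability is roughly n/P - the usual universal-hash guarantee.]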
--
On the Internet noone knows you're a dog - except Bruce Schneier.
Unix guru for rent or hire -- http://www.lightconsulting.com/~travis/
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066  151D 0A6B 4098 0C55 1484



Re: IGE mode is broken (Re: IGE mode in OpenSSL)

2006-09-23 Thread Ben Laurie
Travis H. wrote:
 On 9/9/06, Adam Back [EMAIL PROTECTED] wrote:
 IGE if this description summarized by Travis is correct, appears to be
 a re-invention of Anton Stiglic and my proposed FREE-MAC mode.
 However the FREE-MAC mode (below described as IGE) was broken back in
 Mar 2000 or maybe earlier by Gligor, Donescu and Iorga.  I recommend
 you do not use it.  There are simple attacks which allow you to
 manipulate ciphertext blocks with XOR of a few blocks and get error
 recovery a few blocks later; and of course with free-mac error
 recovery means the MAC is broken, because the last block is
 undisturbed.
 
 http://groups.google.ca/group/sci.crypt/browse_thread/thread/e1b9339bf9fb5060/62ced37bb9713a39?lnk=st

 
 I don't see why integrity+confidentiality has to cost n log n

Not what he said, he said n+log n.

 operations.  I haven't read the whole paper yet (and the proof is at
 the end), but I don't see why you can't append a universal hash
 (chosen by a second key, or at random and identified in the plaintext
 in some suitable way) of the input to the plaintext prior to
 encryption, and get integrity for cheap.

Which is cost kn, k > 1, so kn > n+log n, in the limit. Proof left as an
exercise for the reader.

  Or are universal hashes
 considered cryptographic-weight primitives, and thus this constitutes
 a second pass over the plaintext?  I must admit I don't know of any
 lower bound on universal hash complexity... wikipedia only mentions
 f(x) = ax + b mod p, (p prime) which is clearly less heavy than modexp
 and other PK algos, and it looks like you could do it incrementally
 over the plaintext x, I think... my intuition tells me this is way
 faster than a block cipher.


-- 
http://www.apache-ssl.org/ben.html   http://www.links.org/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff
