RE: A note on vendor reaction speed to the e=3 problem

2006-09-18 Thread Whyte, William
   Anyway, the attack applies even if you throw away the
   ASN.1 data.
 If you ignore the ASN.1 data you expect the hash to be
 in a fixed byte position, so the attack does not apply.

It's correct that the attack doesn't apply if you expect
the hash to be in a fixed byte position. I would say that
it's incorrect that there was no chance of it being screwed 
up in the absence of ASN.1. But I'm happy to agree to
disagree at this point.
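[Editor's note: for readers following the thread, here is a minimal, self-contained sketch in Python of the kind of e=3 forgery under discussion. The 3072-bit block size and the dummy digest are illustrative assumptions; no real key is involved. The idea: fill the region after the hash with attacker-chosen bytes, take an integer cube root, and cube it back. Rounding only disturbs the junk region, so a verifier that ignores trailing data accepts the block under any e=3 key with a large enough modulus.]

```python
SHA1_DIGESTINFO = bytes.fromhex("3021300906052b0e03021a05000414")
EM_LEN = 384  # a 3072-bit block, for illustration

def icbrt(n: int) -> int:
    """Floor of the integer cube root, by binary search."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def forge(digest: bytes) -> int:
    """Return s such that s**3, read as bytes, starts with a valid-looking
    PKCS#1 v1.5 block for `digest`, followed by attacker-controlled junk."""
    prefix = b"\x00\x01\xff\x00" + SHA1_DIGESTINFO + digest
    # Fill the tail with 0xff so rounding the cube root down only
    # disturbs the junk region, never the prefix.
    target = int.from_bytes(prefix + b"\xff" * (EM_LEN - len(prefix)), "big")
    return icbrt(target)

digest = b"\x55" * 20  # stand-in for a SHA-1 digest
s = forge(digest)
em = (s ** 3).to_bytes(EM_LEN, "big")
# A verifier that stops checking after the digest accepts this block:
assert em.startswith(b"\x00\x01\xff\x00" + SHA1_DIGESTINFO + digest)
```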


The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

Re: Exponent 3 damage spreads...

2006-09-18 Thread James A. Donald

David Wagner wrote:
 It seems to me that e=3 is a distraction.  I think
 that these security holes have revealed some more
 fundamental issues here that are independent of the
 value of e you use.

 It seems to me that the problems can be attributed to
 two causes: (a) implementation bugs (failures to
 implement the spec faithfully); and (b) ad hoc
 signature schemes that have never been adequately
 validated. In more detail:

   (a) Any implementation that doesn't check whether
   there is extra junk left over after the hash digest
   isn't implementing the PKCS#1.5 standard correctly.
   That's a bug in the implementation.  Of course, as
   we know, if you use buggy implementations that fail
   to implement the specification faithfully, all bets
   are off.

   (b) The discussion of parameter fields in PKCS#1.5
   signatures illustrates a second, orthogonal problem.
   If your implementation supports appending additional
   parameter fields of some general structure, then you
   have not implemented conventional PKCS#1.5
   signatures as they are usually understood; instead,
   you have implemented some extension.  That raises a
   natural question: Why should we think that the
   extended scheme is still secure?
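[Editor's note: the check point (a) demands can be sketched as follows, in Python with the SHA-1 DigestInfo hard-coded for brevity (the helper name is hypothetical). After the 00 01 FF..FF 00 framing, the DigestInfo and digest must occupy the block exactly to its last byte, so any leftover junk causes rejection.]

```python
SHA1_DIGESTINFO = bytes.fromhex("3021300906052b0e03021a05000414")

def strict_pkcs1_v15_check(em: bytes, digest: bytes) -> bool:
    """Accept only 00 01 FF..FF 00 || DigestInfo || digest, nothing more."""
    if len(em) < 11 or em[0] != 0x00 or em[1] != 0x01:
        return False
    i = 2
    while i < len(em) and em[i] == 0xff:
        i += 1
    # At least eight bytes of padding, then a single 0x00 separator.
    if i - 2 < 8 or i == len(em) or em[i] != 0x00:
        return False
    # The digest must end exactly at the end of the block:
    # no junk may be left over.
    return em[i + 1:] == SHA1_DIGESTINFO + digest
```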

When a protocol is successful, pretty soon it comes in a
large number of variants, all of which have to coexist -
the original version, several upgraded versions,
different interpretations of the spec, buggy
interpretations of the spec, and Microsoft-style
embrace-and-extend interpretations of the spec designed
to deliberately hamstring interoperability.

Therefore one generally makes provisions for future
expansion - for additional fields in a record,
additional types of records.   ASN.1, S-expressions, and
XML permit essentially limitless future fields.  That is
dangerously great flexibility.  On the other hand, TCP
format turned out to permit too little flexibility.

GPG's concept of records seems to me a reasonable way
of providing for future expansion, without requiring the
programmer to handle potentially limitless future
expansion of every possible bit of organized data, a
task at which the programmer is likely to fail.

In general, a communication should say what program and
version it is coming from, so that future programs can
say, "Oh, I am talking to someone old-fashioned, so I
must talk in the old-fashioned dialect."  (Though for
conciseness, this probably should not be in
human-readable form, but in some cryptic set of bytes
agreed to stand for some particular program.)  It
should also be able to specify that it is like some
other program, one that became a de facto standard that
everyone has to be able to interoperate with, saying,
"If you don't recognize me, assume I am this other
program, and it will work well enough."  The
communication should also identify that it is in accord
with some particular standards document (again,
probably with a cryptic set of bytes rather than a
verbose URL), so that the recipient knows that the
communication is in version 1.0 format, or version 1.1,
or the emergency fixed version of 1.1.  If the
standards document were good enough, which it seldom
is, it would not be necessary for the program to
identify itself, or to reference particular concrete
implementations.

But still, we likely need programs that only understand
1.0 format to have some success when they receive 1.1
format, and this is where the expandability of ASN.1 and
the rest is useful - and dangerous.

I would suggest that communication occur in records
that correspond to database records and to C++ objects,
and that some records be defined with provision for
future expansion and other records not so defined,
according to the judgment of the people defining the
original protocol.  With ASN.1, XML, and S-expressions,
*everything* has provision for future expansion, which I
suggest is dangerously excessive.

If the 1.0 protocol contains an error, and they find
that the 1.1 protocol needs some more fields in the
record but no provision has been made for future
expansion, then they either define a new type of record
replacing the old (thereby guaranteeing that 1.0
programs will fail to interoperate when this new record
is used), or define an additional record supplementing
the old, which the 1.0 programs will ignore, handling
only the records they recognize.

Coming back to the case at hand: we should have had a
signature record that allowed only one particular kind
of hash, nothing but the hash, and no future expansion.
Then, when people realized that was a problem, because
unforeseen new hashes would need to be introduced over
time, they should have introduced an incompatible
signature record that defined the hash type and the
hash, and again allowed for no future expansion.
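[Editor's note: that kind of deliberately rigid record might look like the following Python sketch; the type codes and record layout are hypothetical, not from any of the protocols discussed. A record is exactly one hash-type byte plus a digest of the length that type implies, and anything else is rejected.]

```python
# Hypothetical fixed-layout signature record: one hash-type byte, then a
# digest of exactly the length that type implies. No room for expansion.
DIGEST_LEN = {0x01: 20, 0x02: 32}  # e.g. 0x01 = SHA-1, 0x02 = SHA-256

def parse_signature_record(rec: bytes) -> tuple[int, bytes]:
    if len(rec) < 1:
        raise ValueError("empty record")
    hash_type = rec[0]
    if hash_type not in DIGEST_LEN:
        raise ValueError("unknown hash type")   # incompatible by design
    if len(rec) != 1 + DIGEST_LEN[hash_type]:
        raise ValueError("bad record length")   # no trailing junk allowed
    return hash_type, rec[1:]
```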

 James A. Donald

Re: Why the exponent 3 error happened:

2006-09-18 Thread Simon Josefsson
Whyte, William [EMAIL PROTECTED] writes:

 This is incorrect. The simple form of the attack
 is exactly as described above - implementations
 ignore extraneous data after the hash. This
 extraneous data is _not_ part of the ASN.1 data.
 James A. Donald wrote:
But it is only extraneous because ASN.1 *says* it is

 No. It's not the ASN.1 that says it's extraneous, it's the
 PKCS#1 standard. The problem is that the PKCS#1 standard
 didn't require that the implementation check for the
 correct number of ff bytes that precede the BER-encoded
 hash. The attack would still be possible if the hash
 wasn't preceded by the BER-encoded header.

That's not true -- PKCS#1 implicitly requires that check.  PKCS#1 says
the verifier should generate a new encoded message and compare it to
the one recovered from the signature.  See RFC 3447 section 8.2.2.
That solves the problem.

Again, there is no problem in ASN.1 or PKCS#1 that is being exploited
here, only an implementation flaw, even if it is an interesting one.
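[Editor's note: the comparison-style verification Simon refers to (RFC 3447, section 8.2.2) can be sketched like this, with SHA-1 parameters hard-coded and hypothetical helper names. The verifier rebuilds the full encoded message and compares it whole, so no parser is ever exposed to attacker-controlled structure.]

```python
import hmac  # compare_digest gives a full-length, constant-time comparison

SHA1_DIGESTINFO = bytes.fromhex("3021300906052b0e03021a05000414")

def emsa_pkcs1_v15_encode(digest: bytes, em_len: int) -> bytes:
    """Build the canonical 00 01 FF..FF 00 || DigestInfo || digest block."""
    t = SHA1_DIGESTINFO + digest
    ps = b"\xff" * (em_len - len(t) - 3)  # padding fills the whole block
    return b"\x00\x01" + ps + b"\x00" + t

def verify_by_comparison(em_from_signature: bytes, digest: bytes) -> bool:
    """Re-encode and compare: trailing junk can never slip through,
    because every byte of the block is checked."""
    expected = emsa_pkcs1_v15_encode(digest, len(em_from_signature))
    return hmac.compare_digest(expected, em_from_signature)
```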

After reading it, it occurred to me that section 4.2 describes a
somewhat related problem, where the hash OID is modified instead.
That attack requires changes in specifications and implementations, to
have the implementation support the new hash OID.  But it suggests a
potential new problem too: what if implementations don't verify that
the parsed hash OID length is correct?  E.g., an implementation that
uses

memcmp (parsed-hash-oid, sha1-hash-oid,
 MIN (length (parsed-hash-oid), length (sha1-hash-oid)))

to recognize the hash algorithm used in the ASN.1 structure may also
be vulnerable: the parsed-hash-oid may contain trailing garbage that
can be used to forge signatures against broken implementations,
similar to the two attacks discussed so far.  I don't know of any
implementations that do this, though.
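[Editor's note: the MIN-length pitfall above can be illustrated with a small Python analogue (helper names are hypothetical). The truncated comparison accepts an OID with trailing garbage; comparing length and content together does not.]

```python
SHA1_OID = bytes.fromhex("06052b0e03021a")  # DER TLV for OID 1.3.14.3.2.26 (SHA-1)

def oid_matches_buggy(parsed: bytes, expected: bytes) -> bool:
    # Truncated compare, as in the memcmp/MIN idiom quoted above:
    # extra bytes after a matching prefix are silently ignored.
    n = min(len(parsed), len(expected))
    return parsed[:n] == expected[:n]

def oid_matches_fixed(parsed: bytes, expected: bytes) -> bool:
    # Length and content must both match exactly.
    return parsed == expected
```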



RE: Why the exponent 3 error happened:

2006-09-18 Thread Kuehn, Ulrich
I noticed the exact same code being present in the mozilla 1.7.13 source ... I 
wonder what the right response would be. Have us crypto people proof-read 
all relevant source code? Better educate developers?

Interestingly, the attacker's playground between the 0, 1, 0 and the hash gets 
bigger with larger key sizes, so I wonder whether attacks get easier for longer 
keys.


 For another example of just how badly this kind of thing can 
 be done, look at this code excerpt from the fixed Firefox 
 version.  There are two PKCS-1 
 parsing functions, one which returns the hash and its prefix, 
 the other of which is given the hash and asked whether it 
 matches the RSA-signed value.  This is from the latter one:

     /* check the padding that was used */
     if (buffer[0] != 0 || buffer[1] != 1)
         goto loser;
     for (i = 2; i < modulus_len - hash_len - 1; i++) {
         if (buffer[i] == 0)
             break;
         if (buffer[i] != 0xff)
             goto loser;
     }

     /* make sure we get the same results */
     if (PORT_Memcmp(buffer + modulus_len - hash_len, hash,
                     hash_len) != 0)
         goto loser;

     return SECSuccess;
 Here, buffer holds the result of the RSA exponentiation, of 
 size modulus_len, and we are passed hash of size hash_len to compare.
 I don't think this code is used, fortunately.  It will accept 
 anything of the form 0, 1, 0, garbage, hash.  Just goes to 
 show how easy it is to get this kind of parsing wrong.
 (Note, this is from 
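[Editor's note: for contrast, here is what the quoted check would have to do to reject the "0, 1, 0, garbage, hash" shape, sketched in Python rather than C but keeping the same variable names. Every byte between the 00 01 header and the 00 separator must be 0xff, with no early exit, and the separator and digest positions are fixed by the block length.]

```python
def check_pkcs1_padding(buffer: bytes, hash_: bytes) -> bool:
    modulus_len, hash_len = len(buffer), len(hash_)
    if modulus_len < hash_len + 11:
        return False
    if buffer[0] != 0x00 or buffer[1] != 0x01:
        return False
    # No early exit on a zero byte: anything that is not 0xff is fatal.
    for i in range(2, modulus_len - hash_len - 1):
        if buffer[i] != 0xff:
            return False
    # The single 0x00 separator sits immediately before the digest.
    if buffer[modulus_len - hash_len - 1] != 0x00:
        return False
    return buffer[modulus_len - hash_len:] == hash_
```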


Re: A note on vendor reaction speed to the e=3 problem

2006-09-18 Thread Jack Lloyd
On Fri, Sep 15, 2006 at 09:48:16AM -0400, David Shaw wrote:

 GPG was not vulnerable, so no fix was issued.  Incidentally, GPG does
 not attempt to parse the PKCS/ASN.1 data at all.  Instead, it
 generates a new structure during signature verification and compares
 it to the original.

Botan does the same thing for (deterministic) encodings - mostly
because I wrote a decoder for PKCS#1 v1.5, realized it probably had
bugs I wouldn't figure out until too late, and this way the worst
thing that can happen is a valid signature is rejected due to having
some unexpected but legal encoding. Default deny and all that.

Anyway, it's a lot easier to write that way - my PSS verification code
is probably around twice the length of the PSS generation code, due to
the need to check every stupid little thing.

