[EMAIL PROTECTED] (Peter Gutmann) writes:

>>>Changing this could be tricky because there are all sorts of
>>>inconsistencies both in standards and implementations, the standard
>>>practice has been to skip the parameters field because if you don't,
>>>things break.
>>I don't think so.  The contents of the parameters field depends on the hash
>>algorithm.  As far as I know (but I didn't read the scriptures), for normal
>>hashes like SHA-1 the parameters field should not be used.
> It may or may not be used, depending on which standard you follow.  First of
> all, even for the simple case of SHA-1, the parameters can be present or not.
> See the note in RFC 3274:
>   There are two possible encodings for the [...] parameters field which arise
>   from the fact that when the 1988 syntax for AlgorithmIdentifier was
>   translated into the 1997 syntax, the OPTIONAL associated with the
>   AlgorithmIdentifier parameters got lost.  Later it was recovered via a
>   defect report, but by then, everyone thought that algorithm parameters were
>   mandatory.  Because of this, some implementations will encode null
>   parameters as an ASN.1 NULL element and some will omit them entirely (see
>   for example section 12 of CMS [RFC2630]).
> So for both standards and implementations it's pretty much a coin-toss (crap-
> shoot if you're in the US) as to what you'll find there.  Because of this,
> standard practice in implementations has been to skip any parameters.

I see nothing fatal yet.  RFC 2630 says this:

   The AlgorithmIdentifier parameters field is optional.  If present,
   the parameters field must contain an ASN.1 NULL.  Implementations
   should accept SHA-1 AlgorithmIdentifiers with absent parameters as
   well as NULL parameters.  Implementations should generate SHA-1
   AlgorithmIdentifiers with NULL parameters.

   The AlgorithmIdentifier parameters field must be present, and the
   parameters field must contain NULL.  Implementations may accept the
   MD5 AlgorithmIdentifiers with absent parameters as well as NULL
   parameters.

GnuTLS follows this and permits either an absent parameters field or
a field containing an ASN.1 NULL value (0x05 0x00).  In the latest
release, it does not permit anything else.
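As a minimal sketch of that strict check (my own illustration, not
GnuTLS's actual code):

```python
# Illustrative sketch, not GnuTLS source: accept a digest
# AlgorithmIdentifier only if its parameters field is absent or is
# exactly the DER encoding of ASN.1 NULL (0x05 0x00); reject all else.
def digest_parameters_ok(params):
    return params is None or params == b"\x05\x00"

assert digest_parameters_ok(None)                 # absent parameters field
assert digest_parameters_ok(b"\x05\x00")          # explicit ASN.1 NULL
assert not digest_parameters_ok(b"\x05\x81\x00")  # non-canonical NULL: rejected
```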

The attacker can choose one of two encodings: a missing parameters
field, or a NULL parameters field.  That's not enough flexibility to
make the value a perfect cube, I believe.

> But wait, there's more!  Some AlgorithmIdentifiers have parameters that you
> can set to either a large range of fixed values or even arbitrary values.

If so, the implementation will have to parse the parameters field and
make sure it is valid.  But as you note later on, that may not always
be possible...

> OK, so you don't sign IDEA values so this isn't a problem.  However, a related
> AlgoID is the DES one, which for CBC mode has as parameter an IV, which is 64-
> bits of arbitrary user-chosen data.

Now this is interesting.  A parameters field that contains data such
as an IV -- which can't be validated, because it has no internal
structure -- may be abused to make the value a perfect cube.
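To make that concrete (the IV bytes below are my own hypothetical
example, not from any real certificate), here is the DER
AlgorithmIdentifier for desCBC, whose parameter is an 8-byte
OCTET STRING IV that a verifier has no way to constrain:

```python
# Hypothetical illustration: DER AlgorithmIdentifier for desCBC
# (OID 1.3.14.3.2.7).  All eight IV bytes are attacker-chosen and
# still structurally "valid" to any parser.
iv = bytes.fromhex("0123456789abcdef")           # arbitrary attacker data
oid = bytes.fromhex("06052b0e030207")            # OID 1.3.14.3.2.7 (desCBC)
params = b"\x04\x08" + iv                        # OCTET STRING, length 8
alg_id = b"\x30" + bytes([len(oid) + len(params)]) + oid + params
assert len(alg_id) == 19 and alg_id[1] == 0x11   # SEQUENCE of 17 content bytes
```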

> So lets extend this further.  There have been a pile of designs for hash
> algorithms with user-definable parameters.  If these ever get used in
> standards then no doubt there'll be AlgoIDs defined that allow an attacker to
> set arbitrary values in the AlgoID through them.  So the arms-race of trying
> to track invalid data now becomes a problem of proving a negative, i.e.
> proving that there isn't some AlgoID out there somewhere that allows you to
> set one or two parameter bytes to arbitrary values.

Deploying a new hash widely isn't easy, though.  GnuTLS only
supports MD2, MD5, SHA-1 and RIPEMD (and of those, MD2/MD5 are by
default not used to verify signatures).

> But wait, there's more!  From what I understand of the attack, all you need
> for it to work is for the sig.value to be a perfect cube.

The signature is the cube root of a perfect cube, where the perfect
cube is the PKCS#1-padded value 00 01 FF ... FF 00 || ASN.1 DigestInfo.
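For concreteness, here is a sketch of how that PKCS#1 value
(EMSA-PKCS1-v1_5, RFC 3447) is built for SHA-1; the byte layout is
the standard one, the helper name is mine:

```python
import hashlib

# DER prefix of DigestInfo for SHA-1:
# SEQUENCE { SEQUENCE { OID sha1, NULL }, OCTET STRING (20 bytes) }
SHA1_DIGESTINFO_PREFIX = bytes.fromhex("3021300906052b0e03021a05000414")

def emsa_pkcs1_v15_sha1(message, em_len):
    """Build 00 01 FF..FF 00 || DigestInfo for an em_len-byte modulus."""
    t = SHA1_DIGESTINFO_PREFIX + hashlib.sha1(message).digest()
    ps = b"\xff" * (em_len - len(t) - 3)   # padding string, at least 8 bytes
    return b"\x00\x01" + ps + b"\x00" + t

em = emsa_pkcs1_v15_sha1(b"test", 128)     # e.g. a 1024-bit modulus
assert len(em) == 128 and em[:3] == b"\x00\x01\xff"
```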

> To do this, all you need to do is vary a few of the bytes of the
> hash value, which you can do via a simple brute-force search.  So
> even with a perfect implementation that does a memcmp() of a fixed
> binary string for all the data present but the hash, the attack
> still works.

An interesting question is how much work it would take to vary the
input, and thereby the hash inside the PKCS#1 string, until the whole
string becomes a perfect cube.  It probably takes quite a number of
hashes.
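A back-of-the-envelope estimate (my own, not from the thread) suggests
"quite a number" is an understatement: near a 1024-bit value, the gap
between consecutive perfect cubes is about 2^684, vastly more than the
2^160 values a SHA-1 hash can take.

```python
# Rough estimate of the search cost: compare the gap between consecutive
# perfect cubes near a 1024-bit value with the number of SHA-1 outputs.
def icbrt(n):
    """Floor of the integer cube root (binary search)."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

r = icbrt(1 << 1023)                  # cube root of a 1024-bit number
gap = (r + 1) ** 3 - r ** 3           # distance to the next perfect cube
print(gap.bit_length())               # 684: ~2^684 candidates per cube,
                                      # against only 2^160 hash values
```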

> In either of these cases, RSA e=3 is dead.  Obesa cantavit.

I could be seriously confused, but as far as I have understood the
math so far, I would agree.

> So the fix isn't an ongoing penetrate-and-patch arms race to try and filter
> out more and more hard-to-find possibilities, it's to immediately deprecate e=
> 3.  Grab a pile of IETF boilerplate, add a single sentence "Don't use RSA with
> e <= 3" (I'd actually say "<= 17" since there's no good reason not to), and
> apply it as a BCP by reference to SSL/TLS, IPsec, S/MIME, PGP, DNSSEC, and so
> on.  There'll always be broken standards out there that require e=3 (I know of
> at least one that uses e=2, and another that uses raw, unpadded RSA, and
> another that... well you get the idea), but the only quick, sure fix is to
> kill e=3, not to try and anticipate every potential way of trying to use it,
> because you'll never be secure that way.

Wouldn't e=17 border on being problematic as well?  You'd need a
larger modulus, but the approach appears to be the same...  Perhaps
the required modulus becomes large enough that nobody uses those
modulus sizes in practice, though.
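One way to put numbers on this (a sketch of the scale, not a security
analysis): a forged value must be a perfect e-th power, and there are
only about 2^(1024/e) of those below a 1024-bit bound, so the set of
targets shrinks drastically as e grows.

```python
def nth_root(n, e):
    """Floor of the integer e-th root (binary search)."""
    lo, hi = 0, 1 << (n.bit_length() // e + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** e <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

bound = (1 << 1024) - 1
for e in (3, 17):
    print(e, nth_root(bound, e).bit_length())
# 3  -> 342-bit cube roots below a 1024-bit bound
# 17 -> 61-bit 17th roots: vastly fewer targets to hit
```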


The Cryptography Mailing List