As others have mentioned, I don't believe the small RSA exponent (e = 3) is to blame in Bleichenbacher's attack. Indeed, the mathematical problem of computing the cube root of a *fixed*, arbitrary m modulo an RSA modulus n is still considered hard (no one has shown otherwise). What Bleichenbacher demonstrated is that computing the cube root of m' || G, where G can be arbitrary garbage and is sufficiently large, is easy.
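To make that concrete, here is a minimal sketch of the "cube root of a mostly-free block" problem: fix the high-order prefix you want (standing in for padding || ASN.1 || hash of the attacker's message m'), leave the low-order bits free as G, and round the real cube root up. The prefix value and sizes below are purely illustrative.

```python
def icbrt(n):
    """Integer cube root (floor) via Newton iteration."""
    x = 1 << ((n.bit_length() + 2) // 3)
    while True:
        y = (2 * x + n // (x * x)) // 3
        if y >= x:
            return x
        x = y

# hypothetical 3072-bit setting; this prefix is a stand-in for the
# fixed bytes 00 01 FF ... 00 || ASN.1 || hash(m')
prefix = 0x0001FF00DEADBEEF
free_bits = 3072 - prefix.bit_length()  # bits left over for garbage G

lo = prefix << free_bits   # smallest block starting with that prefix
s = icbrt(lo)
if s ** 3 < lo:
    s += 1                 # round up to the next integer cube

forged = s ** 3
# the cube starts with the chosen prefix; the low bits are the
# uncontrolled "garbage" G that a sloppy verifier never checks
assert forged >> free_bits == prefix
```

Because the garbage region is so much wider than 3 * cbrt(lo)^2, the rounded-up cube always lands inside the prefix's range, so s "verifies" as a signature under e = 3 against any modulus of that size, with no private key involved.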

These are two different problems, and the vulnerability comes from the fact that these libraries accept the variable garbage part G. I don't see ASN.1 as being at fault either: the ASN.1 structure simply acts as a value that tells you which hash algorithm to use. If the signed block were of the form padding, hash, header, and implementations went directly to the least significant bytes to retrieve the header (which should be of fixed size) and then retrieve the hash, we wouldn't have this problem. I believe you should put the most sensitive information in the least significant bytes, which are harder to manipulate: Bleichenbacher's attack plays with the most significant bytes, while the least significant bytes come out essentially random in his calculations; he has no control over them.

This reminds me of the RSA least-significant-bit hardness theorem: http://www.wisdom.weizmann.ac.il/~oded/annot/node17.html I have notes explaining it here, section 8.4.1: http://crypto.cs.mcgill.ca/~stiglic/Papers/crypto2.ps The theorem says, roughly, that if you can predict the least significant bit of the plaintext given the corresponding RSA ciphertext, then you can compute the whole plaintext. The theorem doesn't directly apply here (RSA signature verification uses the public operation, not decryption), but it may offer some insight.

The deeper problem is that we (the crypto community) still don't have a good way of writing specs. This is in fact a hard problem, and it doesn't get easier as specs grow more complex. We need simple algorithms and protocols that allow just enough flexibility, and we need a good, precise way to write specs for them. On one side you have theoretical cryptographers and mathematicians who work at an abstract level and develop algorithms and protocols, but don't have much interest in implementing them beyond a prototype.
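A simpler defense than reordering the block, and one that also kills the garbage trick, is to never parse the recovered block at all: recompute the single valid encoding and compare it byte-for-byte. A minimal sketch, with illustrative names and a PKCS#1 v1.5-style layout using the standard SHA-256 DigestInfo prefix:

```python
import hashlib

def expected_block(message: bytes, em_len: int) -> bytes:
    """Recompute the one valid encoded block for this message."""
    # DER DigestInfo prefix for SHA-256, followed by the digest
    digest_info = bytes.fromhex(
        "3031300d060960864801650304020105000420"
    ) + hashlib.sha256(message).digest()
    pad_len = em_len - len(digest_info) - 3
    return b"\x00\x01" + b"\xff" * pad_len + b"\x00" + digest_info

def verify(recovered_block: bytes, message: bytes) -> bool:
    # full-length comparison: no parsing, so no room for a garbage G
    return recovered_block == expected_block(message, len(recovered_block))
```

Since every byte of the block is pinned down, an attacker gets zero free bits to absorb the rounding error of a cube root, and the forgery above no longer verifies.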
On the other end, you have developers who excel in coding and system integration but don't necessarily understand the theoretical background in all its details. Specifications act as a bridge between these two worlds, but that bridge is not very solid today. We need to put a lot more effort into building stronger bridges.

--Anton

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]