Hello,

On Tue, Jul 3, 2012 at 1:56 AM, Michael Nelson <[email protected]> wrote:
> If the target HSM notices that the encrypted blob is corrupted, then it will 
> give you an error message.  This is a leak of information, but that's life.  
> Normally such a covert channel would at most help you to mount a brute force 
> attack, which is impractical owing to the size of the key space and other 
> things.
>
> However, if the target HSM allows you to ask it to unwrap (decrypt) the key 
> under the assumption that pkcs v1.5 padding has been used, then by careful 
> manipulation of the encrypted blob, you can use the failed decryptions to 
> deduce the plain key, after only a few thousand submissions.

If you deal with HSMs and key transport, you probably have decent
procedures in place in addition to more carefully chosen cipher modes.
I would consider physically carrying the HSM(s) in question instead,
if possible and allowed.

> It does not matter whether the symmetric key was encrypted using the pkcs1 
> v1.5 padding. (Matthew Green mentioned this in his post.)  The flaw in the 
> HSM is that it will attempt to use the deprecated padding during the 
> decryption.  Loosely, it looks for certain bytes and tells you whether it 
> found them.

Only if asked to. In addition, it will give you back the imported
symmetric key wrapped under OAEP, encrypted to a weak key pair you
generated yourself with OpenSSL, if *asked to*.
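Loosely, the "looks for certain bytes" check the quoted text describes can be sketched like this (a simplified illustration in Python, not any vendor's actual code; the function name is mine):

```python
def looks_like_pkcs1_v15(padded: bytes) -> bool:
    """Return True if a decrypted block has the PKCS#1 v1.5 shape:
    0x00 0x02 <at least 8 nonzero padding bytes> 0x00 <payload>."""
    if len(padded) < 11:
        return False
    if padded[0] != 0x00 or padded[1] != 0x02:
        return False
    try:
        # Locate the 0x00 separator that ends the padding string.
        sep = padded.index(0x00, 2)
    except ValueError:
        return False
    return sep >= 10  # at least 8 bytes of nonzero padding

# A module that reports *whether* this check passed, blob by blob,
# is exactly the oracle the attack needs.
```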

> It also does not matter whether you are using pkcs11 APIs, and whether you 
> are doing key wrap/unwrap, and whether the data is a key.  Any secret piece 
> of data encrypted under an RSA cert can be potentially extracted, via any 
> kind of crypto module, as long as the module will use the deprecated padding 
> mechanism.

That's a very broad claim. I don't think anybody has questioned that
the authors of the paper optimized a long-known weakness into
something practical, *if the conditions are right*: for example,
uncontrolled access to C_UnwrapKey or C_Decrypt (in PKCS#11 terms, as
that is the API the authors use).

It all works only if the module functions as an oracle the adversary
can query. I don't know the SecurID token, but I do know some of the
other tokens described in the paper. Any reasonable token would
require owner PIN verification before attempting to decrypt.
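The conditions matter because RSA is malleable: from one ciphertext the adversary can cheaply derive related ciphertexts, and each accept/reject answer from the padding check narrows down the plaintext. A toy sketch of that core step (textbook RSA with tiny illustrative parameters, no real padding, nothing vendor-specific):

```python
# Toy RSA key for illustration only; real keys are 1024+ bits.
n, e, d = 3233, 17, 2753          # n = 61 * 53

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

c = enc(42)                        # ciphertext the attacker wants to open
s = 7                              # attacker-chosen multiplier
c_related = (c * pow(s, e, n)) % n # derived without knowing the plaintext

# The target decrypts the related blob to m*s mod n; each padding
# accept/reject answer about such blobs leaks information about m.
assert dec(c_related) == (42 * s) % n
```

With a module that answers thousands of such queries without any access control, the "few thousand submissions" figure from the quoted text becomes plausible.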

> "The vulnerability outlined by the researchers makes it possible (however 
> unlikely) that an attacker with access to the user’s smartcard device and the 
> user’s smartcard PIN could gain access to a symmetric key or other encrypted 
> data sent to the smartcard."
>
> This is a fairish statement.  Though I think "unlikely" needs clarifying.  
> It's not that common for end users to do the importation onto such a device 
> that leads to the attack, so in that sense it's unlikely.  However, if you do 
> do such an import, the attack is likely.  I can certainly imagine 
> provisioning scenarios that are vulnerable, though I can't say if people are 
> actually doing them.  So I'd say that anyone using the tokens for this sort 
> of thing should be warned immediately.

Common end users don't call C_UnwrapKey that often. A common procedure
that would call for C_Decrypt could be S/MIME. And common people don't
control the sender's message format either: if you get a PKCS#1 v1.5
padded message, you need to decrypt it. Last time I checked,
Thunderbird uses PKCS#1 v1.5 by default.


> "This is not a useful attack. The researchers engaged in an academic exercise 
> to point out a specific vulnerability in the protocol, but an attack requires 
> access to the RSA SecurID 800 smartcard (for example, inserted into a 
> compromised machine) and the user’s smartcard PIN. If the attacker has the 
> smart card and PIN, there is no need to perform any attack, so this research 
> adds little additional value as a security finding."
>
> This is quite false.  The legitimate owner of the token, or a security 
> officer provisioning the token, can get information (marked unextractable) he 
> is not supposed to get, PIN or no PIN.  There are definite real world 
> scenarios in which this attack is a serious break.

As the paper deals with PKCS#11, which is a software API, it can tell
much more about host-side software errors than about what is happening
on the device. PKCS#11 is like ISO 7816: it describes a lot but leaves
many things optional or outside the scope of the given document. Have
a look at chapter 7 in PKCS#11 v2.20.

PKCS#11 is a *software API*, usually used to access a given device and
implemented to support the capabilities of that device. It is foolish
to think that the flags set or queried through PKCS#11 (in fact
"sensitive" in this example, as "not extractable" should mean a key
which you can't C_WrapKey in the first place) would actually matter
outside the scope of the single module they target. Think about it:
would a non-repudiation key, generated with OpenSSL and imported into
a device that then displays "never extractable" through PKCS#11,
suffice? I hope not.
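The point is that such flags are module-local metadata, not statements about a key's history. A deliberately naive toy model (the class and method names are mine, and this is not real PKCS#11 semantics, where some attributes are computed rather than supplied; only the attribute names mirror PKCS#11):

```python
class ToyToken:
    """Hypothetical token that stores keys with caller-supplied attributes."""

    def __init__(self):
        self.keys = {}

    def import_key(self, label, key_bytes, attrs):
        # The module records whatever attributes the importer supplies;
        # it cannot know what happened to the key outside this module.
        self.keys[label] = {"value": key_bytes, **attrs}

    def get_attribute(self, label, attr):
        return self.keys[label][attr]

token = ToyToken()
# Key generated elsewhere (e.g. with OpenSSL), then imported with
# flattering attributes:
token.import_key("nr-key", b"\x01" * 32,
                 {"CKA_SENSITIVE": True, "CKA_NEVER_EXTRACTABLE": True})

# The token dutifully reports "never extractable", yet copies of the
# key may exist anywhere: the flag says nothing about provenance.
assert token.get_attribute("nr-key", "CKA_NEVER_EXTRACTABLE") is True
```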

Needless to say, there are countless other ways a (crypto) system can
go bad. Whoever actively uses any of the devices in the described
modes should of course review their systems and evaluate the possible
impact. But IMHO the impact is not that catastrophic.

I hope 12.08 will make it all clear.

Martin
_______________________________________________
cryptography mailing list
[email protected]
http://lists.randombit.net/mailman/listinfo/cryptography