> -----Original Message-----
> From: Michael Richardson <[email protected]>
> Sent: Monday, May 27, 2019 7:30 AM
> To: Jim Schaad <[email protected]>
> Cc: [email protected]; 'Esko Dijk' <[email protected]>; =?iso-8859-
> 1?Q?'G=F6ran_Selander'?= <[email protected]>
> Subject: Re: Pinning of raw public keys in Constrained Vouchers
> 
> 
> Jim Schaad <[email protected]> wrote:
>     >> new element in the constrained voucher.  This I've given the
>     >> mouthful name of: proximity-registrar-sha256-of-subject-public-key-info
>     >> and: pinned-sha256-of-subject-public-key-info
>     >> (knowing that the YANG->SID process will turn this into a small integer).
> 
>     jim> Fixing on a single hash function would probably be frowned upon
>     jim> by the IESG.  The lack of algorithm flexibility would be an issue.
> 
> So extending to other hash functions is just a question of amending the
> YANG.  There aren't hundreds of hash functions, and SID values should
> provide space for at least 30 code points here.  Perhaps the Security
> Considerations could explain how the algorithms can be agile.
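A rough sketch (in Python, with made-up code points standing in for the real YANG->SID assignments) of how a small registry keeps the hash algorithm agile even though the leaf name mentions sha256:

```python
import hashlib

# Hypothetical mapping from small SID-style code points to hash algorithms;
# the real code points would come out of the YANG->SID process.
HASH_ALGS = {1: "sha256", 2: "sha384", 3: "sha512"}

def spki_hash(code_point: int, spki_der: bytes) -> bytes:
    """Hash a DER-encoded SubjectPublicKeyInfo with the negotiated algorithm."""
    alg = HASH_ALGS[code_point]  # KeyError here means an unknown code point
    return hashlib.new(alg, spki_der).digest()

# Any byte string stands in here for a real SPKI.
digest = spki_hash(1, b"\x30\x59...example SPKI bytes...")
assert len(digest) == 32  # SHA-256 digests are 32 bytes
```

Adding a new algorithm is then one registry entry, not a new voucher element.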
> 
>     >> A sha256 hash is 32-bytes in size.
>     >> A 256-bit ECDSA key is about 32-bytes in size.
>     >> An equivalent 2048-bit RSA key is 256 bytes in size.
>     >> So the hash only wins if we use bigger ECDSA keys, or if we use RSA.
>     >> (okay, so there are a few bytes for the subject-public-key-info (SPKI)
>     >> wrapping, which also provides algorithm agility)
>     >>
>     >> We could truncate the hash if we felt this was still secure enough.
>     >> We could add the length of the thing (the RPK) being hashed if that
>     >> would help with pre-image attacks, but that's mostly already there in
>     >> the SPKI encoding, but I suppose an attacker might find a way to
>     >> prepad with nonsense DER.
> 
>     jim> As I remember things:
> 
>     jim> * Adding the random value would make the collision resistance
>     jim> attack simpler for the registrar.  It has some random bytes that
>     jim> do not have to have any structure to create two different public
>     jim> keys with the same hash value.
> 
> Sorry, do you mean, simpler for an attack on the registrar?

No, I mean a simpler attack by the registrar.  

> 
> I'm saying that the subject-public-key-info starts with an
> AlgorithmIdentifier, which is a sequence.  An attacker could add
> additional items to the sequence in order to mount a second pre-image
> attack (usually that requires additional bytes, changing the length of
> the item).  This depends upon the victim verifier ignoring the extra
> bytes somehow; a Postel Principle parser might do that, and if I were
> building a pull-parser for this, I might do that.

Given that this is a DER-encoded item, there is no way for a victim to
ignore extra bytes inside the SPKI structure.  If it does, then it is
parsing completely incorrectly.  You would not even be able to have
trailing bytes that might or might not be part of the SPKI object.  Even
with BER indefinite-length encoding you would not be able to ignore extra
bytes, but you would have bytes to play with, since BER allows different
encodings that represent the same value.
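A minimal sketch (not a full DER parser, helper names are made up) of the strictness described here: a DER item declares its exact length, so anything after the declared end must be rejected rather than ignored.

```python
def der_total_length(buf: bytes) -> int:
    """Return the total encoded length (header + content) of the DER item
    at the start of buf."""
    if len(buf) < 2:
        raise ValueError("truncated DER")
    length = buf[1]
    header = 2
    if length & 0x80:  # long-form length: low 7 bits give the byte count
        n = length & 0x7F
        if n == 0 or len(buf) < 2 + n:
            raise ValueError("bad DER length")
        length = int.from_bytes(buf[2:2 + n], "big")
        header = 2 + n
    return header + length

def strict_spki(buf: bytes) -> bytes:
    """Accept buf only if it is exactly one DER item with no trailing bytes."""
    if der_total_length(buf) != len(buf):
        raise ValueError("trailing bytes after SPKI")
    return buf

# A 5-byte toy SEQUENCE: tag 0x30, length 3, three content bytes.
strict_spki(bytes([0x30, 0x03, 1, 2, 3]))          # accepted
# strict_spki(bytes([0x30, 0x03, 1, 2, 3, 0xFF]))  # would raise: trailing bytes
```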

> 
> In practice, most constrained implementations avoid DER parsing by
> accepting a known sequence of bytes for the subject-public-key-info and
> using memcmp().  I.e., they are extremely conservative in what they
> accept, to the point, one could argue, of violating the specification.

I don't know that I would say that this is a violation of the spec.  You
are asking "does this match what I expect to see?", and since it is DER, a
memcmp is a fine way to do that.  The issue is that you need to make sure
that you have correctly identified the boundary of the SPKI in the
containing object.  That is, you cannot put extra bytes in the surrounding
object and do the memcmp based on the size in memory rather than the size
in the PDU.
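A sketch (hypothetical names) of that boundary rule: compare exactly the bytes the PDU says belong to the SPKI, using the DER-declared length, never whatever else happens to sit in the buffer.

```python
# Stand-in for a real pinned SubjectPublicKeyInfo: a toy DER SEQUENCE.
EXPECTED_SPKI = bytes([0x30, 0x03, 0x0A, 0x0B, 0x0C])

def spki_matches(pdu: bytes, offset: int) -> bool:
    """Compare the SPKI embedded at `offset` in the PDU against the pinned one."""
    length = pdu[offset + 1]  # short-form DER length, for brevity of the sketch
    candidate = pdu[offset:offset + 2 + length]
    # Exact comparison: both the bytes and the DER-declared size must match,
    # so extra bytes in the surrounding object cannot leak into the check.
    return candidate == EXPECTED_SPKI

# SPKI embedded in a PDU with unrelated bytes before and after it.
pdu = b"\x01\x02" + EXPECTED_SPKI + b"\xFF\xFF"
assert spki_matches(pdu, 2)
```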

> 
>     jim> * Adding the random value might make a second pre-image attack
>     jim> more difficult, but I doubt it.  You have more bytes as input,
>     jim> but that is generally not the biggest issue for finding a second
>     jim> pre-image collision.
> 
>     jim> * You don't care about pre-image resistance if you are making
>     jim> the input public.
> 
> What do you mean by the part about public?
> Yes, in DTLS 1.2, the certificates are visible, and an active attacker
> can see them.

Yes, that is what I mean.  There is no issue with the attacker getting the
hashed bytes from the hash structure.  There are some cases where the
content is not available to the attacker, and inverting the hash is what
the attacker wants.  An example of this is deriving key values from
passwords.
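A short sketch contrasting the two settings: pinning a public value, where pre-image resistance buys nothing extra, versus hashing a secret, where inversion resistance is exactly the point (the password and salt here are illustrative).

```python
import hashlib
import os

# 1. Pinning a public value: the SPKI travels in the clear (e.g. DTLS 1.2
#    certificates), so the attacker already has the hash input.
spki = b"\x30\x59...public SPKI bytes..."
pin = hashlib.sha256(spki).digest()

# 2. Hashing a secret: the input (a password) is NOT public, so resisting
#    inversion is the whole job; use a slow, salted KDF, not a bare hash.
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery", salt, 100_000)

assert len(pin) == 32 and len(key) == 32
```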

> 
>     jim> I cannot speak to the level of security that is required for
>     jim> your solution.  However, strength is directly related to the
>     jim> size of the hash value.
> 
> We make certificates by doing sha256 hashes of the subject-public-key-info
> and other names, and having the CA sign that.  I think that it makes no
> sense to truncate the hash going into a 256-bit sized signature, since
> ECDSA signs 256-bit values.  I am not sure how to evaluate the strength
> required here.  It's a CBOR value, so it has a length, and I suppose we
> could define a way to truncate the value in a standard direction, and
> then decide later.

The first sentence has me slightly worried.  Is the CA computing the hash or
is the hash being provided to the CA?  

> 
> I think that a non-truncated hash ought to be as strong as sending the key
> itself, and having two code paths is not a win here.

One attack is a birthday attack, which is O(2^(n/2)) in terms of inputs to
be evaluated.  That is, if you compute 2^(n/2) hash values, you have a
50/50 chance of finding a collision.  This means that it takes 2^128
attempts for a 256-bit hash value, but only 2^64 for a 128-bit hash value,
even if you truncate a SHA-256 value to 128 bits rather than using a
native 128-bit hash function.  The idea that 64-bit authentication values
are fine is the current thinking in the CoRE and ACE groups, as that is
the size of authentication value being used for encryption of items.
Part of this is based on the assumption that these values are only of
interest in the short term and not historically.  TLS, on the other hand,
is worried about the long term (i.e. recording for later use) and thus has
gone with longer authentication values as a general rule.  In COSE we are
saying that a truncated hash is fine for identifying certificates, but
there it is operating as a filter rather than an identifier.
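The birthday bound is easy to see in miniature.  This sketch truncates SHA-256 to 24 bits so the search finishes instantly; the same O(2^(n/2)) scaling is why a 128-bit truncation falls to roughly 2^64 work.

```python
import hashlib

def truncated_hash(data: bytes, bits: int = 24) -> bytes:
    """SHA-256 truncated to the leading `bits` bits (24 here, for the demo)."""
    return hashlib.sha256(data).digest()[: bits // 8]

# Hash sequential inputs until two of them share a 24-bit digest.
# Expected work is about 2^(24/2) = 4096 inputs; 2^16 gives ample margin.
seen = {}
collision = None
for i in range(1 << 16):
    d = truncated_hash(i.to_bytes(4, "big"))
    if d in seen:
        collision = (seen[d], i)
        break
    seen[d] = i

assert collision is not None  # two distinct inputs, same 24-bit digest
a, b = collision
assert a != b
assert truncated_hash(a.to_bytes(4, "big")) == truncated_hash(b.to_bytes(4, "big"))
```

The full 2^24 digest space is never close to exhausted; the collision shows up after only a few thousand inputs, which is the whole point of the birthday argument.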

Jim

> 
> 
> --
> Michael Richardson <[email protected]>, Sandelman Software Works
> -= IPv6 IoT consulting =-
> 
> 


_______________________________________________
Anima mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/anima
