Jim Schaad <[email protected]> wrote:

jim> As I remember things:
jim> * Adding the random value would make the collision resistance attack
jim>   simpler for the registrar. It has some random bytes that do not
jim>   have to have any structure to create two different public keys with
jim>   the same hash value.
>> Sorry, do you mean, simpler for an attack on the registrar?
> No, I mean a simpler attack by the registrar.
Yes, I'd call it the malicious registrar.
The attack would take an existing (nonceless) voucher that pinned the device
to a specific registrar, and then try to create a registrar with a keypair
that hashes the same. This would be used for offline registrars.
If we pin the public key then this cannot happen.
> Given that this is a DER encoded item, there is no way for a victim to
> ignore the extra bytes inside of the SPKI structure. If it does then
> it is completely incorrectly doing the parsing. You would not even be
> able to have trailing bytes that might or might not be part of the SPKI
> object. Even if you did BER indefinite length encoding, you would not be
> able to ignore the extra bytes, but you would have bytes to play with, as
> you can have different encodings that represent the same output value.
Are we agreeing, but using different terms?
So yes, an attacker could resort to non-DER encoding (BER encoding) to give
itself bytes to play with. That would be non-DER though.
My reading of the ASN.1 is that one could include arbitrary additional
parameters in the algorithm SEQUENCE (correctly DER encoded), and use those
bytes to do the pre-image attack... except that it would fail because
constrained devices are using memcmp().
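In Python terms the conservative pledge-side check is just a byte-exact
comparison (the pinned SPKI bytes and names below are illustrative, not from
any spec):

```python
# Hypothetical pledge-side pin: the exact DER bytes of the registrar's
# SubjectPublicKeyInfo (here a made-up P-256 key), baked in beforehand.
PINNED_SPKI_DER = bytes.fromhex(
    "3059301306072a8648ce3d020106082a8648ce3d030107034200"
) + b"\x04" + b"\x11" * 64

def registrar_key_acceptable(presented_spki_der: bytes) -> bool:
    """memcmp()-style byte-exact comparison: any BER re-encoding or
    extra algorithm parameters changes the bytes and simply fails."""
    return presented_spki_der == PINNED_SPKI_DER
```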
>> In practice, most constrained implementations avoid DER parsing by
>> accepting a known sequence of bytes for the subject-public-key-info and
>> use memcmp().
>> I.e. they are extremely conservative in what they accept, to the point of
>> violating the specification one could argue.
jim> I don't know that I would say that this is a violation of the spec.
jim> You are saying "does this match what I expect to see", and since it is
jim> DER a memcmp is a fine way to do this.
Well, because it locks the parameters list to be absent.
   AlgorithmIdentifier  ::=  SEQUENCE {
        algorithm               OBJECT IDENTIFIER,
        parameters              ANY DEFINED BY algorithm OPTIONAL }
Perhaps we have never extended any, but the extension point is right here.
>> We make certificates by doing sha256 hashes of the subject-public-key-info
>> and other names, and having the CA sign that. I think that it makes no
>> sense to truncate the hash going into a 256-bit sized signature, since
>> ECDSA signs 256-bit values. I am not sure how to evaluate the strength
>> required here.
>> It's a CBOR value, so it has a length, and I suppose we could define a
>> way to truncate the value in a standard direction, and then decide later.
> The first sentence has me slightly worried. Is the CA computing the hash
> or is the hash being provided to the CA?
I think that in traditional CA processing, we provide the public key (in a
CSR) to the CA, and it computes the hash. My point here is that if we were
concerned (as I concern myself above) with pre-image attacks against sha256
under signature, then we should be concerned about certificates, period :-)
I think that we can also intelligently advise the pledge to carefully check
that the DER encoding is minimal, and using memcmp() works very well here.
The stupider implementation wins here :-)
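A sketch of what "check that the DER encoding is minimal" could mean for the
outermost TLV (illustrative only; a real constrained pledge would just
memcmp() the whole structure):

```python
def outer_length_is_minimal_der(buf: bytes) -> bool:
    """Reject BER tricks on the outermost TLV: indefinite length,
    non-minimal long-form lengths, and trailing garbage."""
    if len(buf) < 2:
        return False
    first = buf[1]
    if first == 0x80:                  # indefinite length: BER only
        return False
    if first < 0x80:                   # short form, lengths 0..127
        return len(buf) == 2 + first
    n = first & 0x7F                   # long form: n length octets follow
    if len(buf) < 2 + n or buf[2] == 0x00:
        return False                   # leading zero octet is non-minimal
    length = int.from_bytes(buf[2:2 + n], "big")
    if length < 0x80:                  # should have used short form
        return False
    return len(buf) == 2 + n + length
```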
>> I think that a non-truncated hash ought to be as strong as sending the
>> key itself, and having two code paths is not a win here.
> One attack is a birthday attack. This is O(2^(n/2)) in terms of inputs to
> be evaluated. That is, if you compute 2^(n/2) hash values you have a 50/50
> chance of finding a collision. This means that it takes 2^128 attempts
> for a 256-bit hash value, but only 2^64 for a 128-bit hash value, even if
> you truncate a SHA-256 value to 128 bits rather than using a 128-bit hash
> function (such as AES-CBC). The idea that 64-bit authentication values
> are fine is the current thinking in the CoRE and ACE groups, as that is
> the size of authentication value that is being used for encryption of
> items. Part of this is based on the assumption that these values are only
> of interest for short term and not historically. TLS on the other hand is
> worried about long term (i.e. recorded for later use) and thus has gone
> with longer authentication values as a general rule. In COSE we are
> saying that a truncated hash is fine for identifying certificates, but
> that is operating as a filter rather than an identifier.
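Jim's numbers can be sanity-checked with the usual birthday approximation
p ~= 1 - exp(-k^2 / 2^(n+1)):

```python
import math

def collision_probability(hash_bits: int, attempts: int) -> float:
    """Birthday approximation: probability of at least one collision
    among `attempts` uniformly random hash_bits-bit values."""
    return 1.0 - math.exp(-(attempts ** 2) / 2.0 ** (hash_bits + 1))

# At k = 2^(n/2) the exponent is exactly -1/2, giving 1 - e^(-1/2),
# about 0.39, regardless of n (the 50/50 point is near 1.18 * 2^(n/2)).
# So 2^64 attempts already threaten a hash truncated to 128 bits.
```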
I have not paid enough attention to the HMAC algorithms in COSE... just
reading rfc8152. Truncated to 64-bits. Hmm. We wouldn't have a keyed-hash,
just a hash, which makes it more vulnerable to off-line dictionary attacks I
think.
But, I think you are perhaps referring to the kid usage?
In online uses of the voucher, it has a limited time when it is useful,
and in the online usage, there is also a freshness nonce. So it seems that
we could truncate the hash efficiently. It would be inappropriate to
generate off-line nonceless (constrained) vouchers with this method,
but I think that is something we can write down intelligently.
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | IoT architect [
] [email protected] http://www.sandelman.ca/ | ruby on rails [
_______________________________________________
Anima mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/anima
