On Tue, Sep 17, 2024 at 11:36 AM Sophie Schmieg <[email protected]> wrote:


> For the algorithm header field, I am not aware of any reason to have that
> in the token (other than the fact that it historically has been there).
>

Possibly to allow selection of token by algorithm.



> To see why, we first need to look at the public key: the public key can
> never be part of the token, since it is trivial to create a token with a
> valid signature if the attacker gets to choose the public key; they can
> just create the key pair themselves. But conceptually speaking, the
> public key already includes the algorithm: an RSA key, an ECDSA key, and
> an ML-DSA key are not interchangeable, after all.
>

The signature is validated against a trust anchor. In the ECC era, when our
public keys were relatively compact 32- or 66-byte strings, using the key
itself as the trust anchor made sense and followed what we ended up doing
with 1024-bit RSA. However, before we moved to ECC, while we were still
looking at sticking with RSA, we started working out what it would take to
make 15 KB keys work, and we will probably revisit that in the ML-DSA era.

If the trust anchor is instead the digest of the key (or a fingerprint
specifying both the digest and the algorithm), we can get a suitable work
factor out of 65 bytes regardless of the public key scheme. If we play that
game, the trust assertion could contain digests for four different
algorithms. To verify a signature against such a trust assertion, it would
be necessary to present both the public key and its algorithm parameters.

This is a structure I am using to enable support for ECC and PQC crypto in
a single PKI.
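A minimal sketch of that kind of algorithm-bound fingerprint (the names, tag encoding, and choice of SHA-512 are my own illustration, not taken from any specification; a SHA-512 digest plus a short algorithm tag lands in the neighborhood of the 65-byte figure above):

```python
import hashlib

def key_fingerprint(alg_id: str, public_key: bytes, digest: str = "sha512") -> str:
    # Bind both the key algorithm and the digest algorithm into the input,
    # so the same key bytes under two different algorithms can never share
    # a fingerprint.
    h = hashlib.new(digest)
    h.update(alg_id.encode("ascii") + b":" + public_key)
    return f"{digest}:{h.hexdigest()}"

# A trust assertion can then carry fingerprints for several algorithms at
# once, independent of how large the underlying public keys are:
trust_anchor = {
    "ecdsa-p256": key_fingerprint("ecdsa-p256", b"\x04" + b"\x01" * 64),
    "ml-dsa-65":  key_fingerprint("ml-dsa-65", b"\x02" * 1952),
}
```

The point of hashing the algorithm identifier along with the key is that the fingerprint, not the token, is what names the algorithm; a verifier that resolves a key through such an anchor cannot be talked into reinterpreting it.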


A token is a form of PKI assertion and as such is inherently relativist. No
assertion has semantics except relative to something else.


> So if a token says it uses ECDSA as its algorithm, but the public key
> that is supposed to be used for verification is an ML-DSA key, the token
> is clearly malformed. That makes the algorithm field one that only ever
> carries information that is either superfluous (you already knew it had
> to be ML-DSA, because that is your public key's key type) or invalid
> (the algorithm field does not align with the public key). Therefore,
> including the algorithm in the token is never useful.
> But it gets worse: if the application is implemented the wrong way, it
> will take the algorithm field of the token as authoritative and
> essentially reinterpret_cast the public key bytes to the type the header
> field suggested. This way, you get vulnerabilities from casting, say, an
> ECDSA public key into an HMAC key, with the attacker now able to forge
> the MAC, since the public key is known.
>
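The confusion attack described above can be sketched as follows (purely illustrative: the toy verifier, key bytes, and encodings are mine, not any real JOSE library):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_decode(data: bytes) -> bytes:
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

# An (illustrative) ECDSA public key: by definition, known to the attacker.
PUBLIC_KEY = b"\x04" + b"\x42" * 64

def vulnerable_verify(header_b64: bytes, payload_b64: bytes,
                      sig: bytes, key_bytes: bytes) -> bool:
    header = json.loads(b64url_decode(header_b64))
    if header["alg"] == "HS256":
        # BUG: the attacker controls "alg", so the public key bytes get
        # reinterpreted as an HMAC secret.
        mac = hmac.new(key_bytes, header_b64 + b"." + payload_b64,
                       hashlib.sha256).digest()
        return hmac.compare_digest(mac, sig)
    raise ValueError("unsupported alg")  # ECDSA branch omitted in this sketch

# The attacker forges a token by MACing with the well-known public key:
forged_header = b64url(json.dumps({"alg": "HS256"}).encode())
forged_payload = b64url(json.dumps({"admin": True}).encode())
forged_sig = hmac.new(PUBLIC_KEY, forged_header + b"." + forged_payload,
                      hashlib.sha256).digest()
```

The fix is exactly the point made above: derive the algorithm from the key's own type, and never branch on an algorithm field the token itself supplies.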

That is true, and it is one reason why I dislike the use of arbitrary key
identifiers. In my view, key identifiers should not be names; they should
be bound to the key they identify, like OpenPGP key fingerprints are.

That is not the path we took at the time, and I only started going down it
after trying to work out how to handle key identifiers that are names.

The assumption in JOSE was that the specification just defines the
cryptographic envelope format and that may have left rather too much unsaid.
_______________________________________________
COSE mailing list -- [email protected]
To unsubscribe send an email to [email protected]