> On 4 Jan 2024, at 19:37, Orie Steele <[email protected]> wrote:
> 
> Thanks for your comments Neil!
> 
> On Thu, Jan 4, 2024 at 12:47 PM Neil Madden <[email protected]> wrote:
> I’m in two minds about this draft. I’m fairly receptive to it in general, but 
> I think it might be closing the stable door after the horse has already 
> bolted. 
> 
> Some questions and comments that come to mind:
> 
> * A JWK “alg” constraint can only contain a single value. After this spec 
> passes, some algorithms may have two valid identifiers, leaving 
> implementations a choice: advertise one (and risk breaking some clients), 
> publish the key twice with different identifiers (wasteful and potentially 
> a cause of other issues), or drop the algorithm constraint entirely. None 
> of these seem great. 
> 
> I'd argue that dropping "alg" and leaving "alg" polymorphic are basically 
> the same thing (and neither is great).
> 
> It's worth considering the parts of key management that happen before and 
> after you have a key representation.
> 
> I am sure there are other references, but one I often find myself referring 
> to is:
> 
> https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57Pt3r1.pdf
> 
> > A major thrust from Part 1 of this Recommendation is that, in general, keys 
> > shall not be used for multiple cryptographic purposes
> 
> > Maximum cryptoperiods for each key type shall be determined at the KMF in 
> > accordance with the organization’s security policy...
> 
> When best practices are followed, keys are created for a single purpose and 
> a fixed lifespan; they operate in that purpose and are then destroyed.

Right, and the point of associating the "alg" with the key is to ensure that it 
is only used for one thing.
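To make that binding concrete, here's a rough sketch (my own hypothetical helper, not any particular library) of a verifier that refuses to use a JWK under any algorithm other than its declared "alg":

```python
# Hypothetical sketch: enforce a JWK's "alg" binding so the key is only
# ever used with the single algorithm it was created for.

def check_alg_binding(jwk: dict, header_alg: str) -> None:
    """Reject use of a key under any algorithm other than its declared one."""
    key_alg = jwk.get("alg")
    if key_alg is not None and key_alg != header_alg:
        raise ValueError(f"key is bound to {key_alg!r}, not {header_alg!r}")

jwk = {"kty": "OKP", "crv": "Ed25519", "alg": "EdDSA", "x": "..."}
check_alg_binding(jwk, "EdDSA")   # accepted: matches the declared binding
try:
    check_alg_binding(jwk, "ES256")
except ValueError as e:
    print(e)
```

A library that applies this check before every sign/verify call gets the single-purpose property from the key representation alone.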

> 
> It would be more work to change the requirements for "alg" in key 
> representations, or to add crypto periods to key representations.
> 
> I see this document as enabling compliance with best practices, not ensuring 
> that all protocols follow them.

I'm not sure how this document does that at all.

> * In the example given of advertising algorithms in server metadata, I’m not 
> sure how this helps. For compatibility, any server that supports EdDSA is 
> going to have to continue supporting EdDSA or risk breaking existing clients. 
> Likewise, any signature verification client that supports only Ed25519 may 
> still have to support “EdDSA” and filter out any non-Ed25519 keys. 
> 
> A similar issue occurs with secp256k1 and ecdsa today.
> 
> Some implementations normalize to lower-s (and expect it), others don't.
> 
> When you cross test, you get errors in implementations that assume ES256K 
> signatures are always lower-S when they are not, and all for the same ES256K 
> public key (arguably an even worse problem).
> 
> The point being that we could fix this by making "ES256K-LS", and we'd have 
> the same problem with older implementations that advertised ES256K.

I'm not familiar with this issue or what "lower S" refers to here. Do you mean 
some implementations spell the algorithm "Es256K" with a lowercase s?

> I agree with your comment, but I don't see anything better to do than enable 
> more precision, so that implementations that are aware of it, or that come 
> after it's available, can take advantage of it.

Given that an EC/OKP JWK already specifies the curve, this is a case where this 
draft doesn't make things more precise and just causes compatibility issues for 
no gain.
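To spell that out with a sketch (hypothetical mapping, illustrative keys): the key's "kty" and "crv" members combined with "alg" already determine the concrete scheme, with no new identifiers needed.

```python
# Illustrative sketch: for OKP keys the "crv" member already disambiguates
# EdDSA, so key + "alg" is fully specified as-is. The string returned is a
# made-up label for illustration, not a registered identifier.

def fully_specified(jwk: dict, alg: str) -> str:
    """Resolve the concrete signature scheme from a JWK plus its 'alg'."""
    if jwk["kty"] == "OKP" and alg == "EdDSA":
        return f"EdDSA-over-{jwk['crv']}"
    return alg

key_25519 = {"kty": "OKP", "crv": "Ed25519", "x": "..."}
key_448 = {"kty": "OKP", "crv": "Ed448", "x": "..."}
print(fully_specified(key_25519, "EdDSA"))  # EdDSA-over-Ed25519
print(fully_specified(key_448, "EdDSA"))    # EdDSA-over-Ed448
```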

> 
> * Does the usage of “enc” count as not being fully specified? I can well 
> imagine that there are some clients that support, say, RSA-OAEP, but only 
> support 128-bit content encryption algorithms, or only support GCM. So the 
> same issue with not specifying the curve also applies when not specifying the 
> content encryption algorithm. 
> 
> Excellent question.
> 
> See the recent discussion here:
> 
> https://datatracker.ietf.org/meeting/118/materials/slides-118-lamps-attack-against-aead-in-cms-00
> 
> In an ideal world, the crypto is "key AND algorithm" committing; in our 
> current world, we may only be able to signal things in a way that easily 
> enables that kind of commitment, not enforce that it actually happens in all 
> the places it should.

This is not really related to the point I was making, and I'm not sure how that 
attack would apply to JOSE, given that we don't have any unauthenticated cipher 
modes. (One of the good things about "enc" compared to "alg" is that all of the 
choices share the same security goal: AEAD.) The algorithm and encryption mode 
are also both committed to already in the AD.
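Concretely, the JWE protected header, which names both "alg" and "enc", is itself the additional authenticated data fed into the AEAD (RFC 7516, section 5.1), so tampering with either identifier breaks authentication. A minimal sketch of that AAD construction (header values are just examples):

```python
# The JWE AAD is ASCII(BASE64URL(UTF8(protected header))), per RFC 7516.
# Because "alg" and "enc" live in the protected header, both identifiers
# are bound by the content-encryption AEAD's tag.
import base64
import json

protected = {"alg": "RSA-OAEP", "enc": "A256GCM"}
aad = base64.urlsafe_b64encode(
    json.dumps(protected, separators=(",", ":")).encode()
).rstrip(b"=")
# `aad` is what gets passed to the AEAD alongside the ciphertext.
print(aad)
```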

What I mean is that this draft is motivated by saying that algorithms like 
"EdDSA" don't give enough information to be useful in negotiating an algorithm 
because one party might only support the curve Ed25519 while another supports 
only Ed448. Well, exactly the same thing happens with content encryption 
algorithms. Both parties might support "RSA-OAEP" key algorithm, but one 
supports only "A128GCM" and the other only supports "A128CBC-HS256". Following 
the logic of this draft, the issue is that "RSA-OAEP" is not fully specified, 
so we should really add the following additional algorithm identifiers:

RSA-OAEP-A128GCM
RSA-OAEP-A192GCM
RSA-OAEP-A256GCM
RSA-OAEP-A128CBC-HS256
RSA-OAEP-A192CBC-HS384
RSA-OAEP-A256CBC-HS512

Not to mention doing the same for RSA-OAEP-256 and RSA1_5 and ECDH-ES and 
ECDH-ES+A128KW and ...

This is what I mean when I say that "fully specifying" the algorithm results in 
a combinatorial explosion of cipher-suite like identifiers. It doesn't seem 
like a sensible way to do things, which is why TLS 1.3 doesn't do that any 
more. And this is also why OIDC, which is cited in the draft as a motivating 
example, has both metadata fields:

  id_token_encryption_alg_values_supported
  id_token_encryption_enc_values_supported

Probably OIDC should add a similar id_token_signing_crv_values_supported 
metadata field (and so on) to address this properly, rather than trying to 
cram everything into a single algorithm identifier.
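That orthogonal-metadata style of negotiation is simple to implement: advertise each dimension separately and intersect. A sketch (example metadata values; the field names are the real OIDC discovery ones mentioned above):

```python
# Negotiating by per-dimension intersection instead of cipher-suite
# identifiers: each side lists what it supports for each dimension.

server = {
    "id_token_encryption_alg_values_supported": ["RSA-OAEP", "ECDH-ES"],
    "id_token_encryption_enc_values_supported": ["A128GCM", "A256GCM"],
}
client = {
    "id_token_encryption_alg_values_supported": ["RSA-OAEP"],
    "id_token_encryption_enc_values_supported": ["A256GCM", "A128CBC-HS256"],
}

# Intersect each dimension independently; no combined identifiers needed.
choices = {field: sorted(set(server[field]) & set(client[field]))
           for field in server}
print(choices)
```

Adding a "crv" dimension is just one more field, not a multiplication of the identifier space.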

> 
> Addressing what "fully specified" means for HPKE, ECDH-* + AEAD... is harder 
> than addressing signatures... There are principles that are hard to achieve 
> without an ability to fully specify and commit to keys; this is an area where 
> more discussion is probably needed.
> 
> 
> * The draft states that having different algorithm identifiers for different 
> RSA key sizes is not useful, but actually some HSMs only support specific key 
> sizes for RSA, and an implementation may want to restrict key sizes for 
> efficiency reasons (even more so with PQC). 
> 
> Are you suggesting that RSA usage is sorta like P-256 / ES256 vs P-384 / 
> ES384, where some systems would prefer fully specified RSA "alg" values for 
> JOSE / COSE ?

No: in practice everyone uses 2048 or (rarely) 3072. If more key sizes 
became common then this might become an issue. My point is that "fully specified" 
is a vague term that depends on what you consider important and is likely to 
evolve over time.

[snip the rest]

In short, I don't think this draft improves anything, and it makes some things 
worse, so it should be rejected on that basis.

-- Neil
_______________________________________________
jose mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/jose