Thanks for your comments, Neil!

On Thu, Jan 4, 2024 at 12:47 PM Neil Madden <[email protected]> wrote:

> I’m in two minds about this draft. I’m fairly receptive to it in general,
> but I think it might be closing the stable door after the horse has already
> bolted.
>
> Some questions and comments that come to mind:
>
> * A JWK “alg” constraint can only contain a single value. After this spec
> passes some algorithms may have two valid identifiers, leaving
> implementations a choice as to which to advertise (and risk breaking some
> clients) or to publish the key twice with different identifiers (wasteful
> and potentially causes other issues), or to drop the algorithm constraint
> entirely. None of these seem great.
>

I'd argue that dropping "alg" and leaving "alg" polymorphic are basically
the same thing (and neither is great).
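
To make that concrete, here is a minimal Python sketch (the key values are
illustrative placeholders, and "Ed25519" as an "alg" value is the identifier
the draft proposes) showing why a polymorphic "alg" forces consumers to look
past it:

    jwks = {
        "keys": [
            {"kty": "OKP", "crv": "Ed25519", "alg": "EdDSA", "x": "..."},
            {"kty": "OKP", "crv": "Ed448", "alg": "EdDSA", "x": "..."},
        ]
    }

    # Polymorphic "alg": filtering on "alg" alone is not enough; a client
    # that only implements Ed25519 must also inspect "crv".
    usable = [k for k in jwks["keys"]
              if k.get("alg") == "EdDSA" and k.get("crv") == "Ed25519"]

    # Fully specified "alg": the identifier carries all the information,
    # which is exactly what dropping "alg" can never give you.
    usable = [k for k in jwks["keys"] if k.get("alg") == "Ed25519"]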

It's worth considering the parts of key management that happen before and
after you have a key representation.

I am sure there are other references, but one I often find myself referring
to is:

https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57Pt3r1.pdf

> A major thrust from Part 1 of this Recommendation is that, in general,
> keys shall not be used for multiple cryptographic purposes

> Maximum cryptoperiods for each key type shall be determined at the KMF in
> accordance with the organization’s security policy...

When best practices are followed, keys are created for a single purpose and
a fixed lifespan; they operate in that purpose and are then destroyed.

It would be more work to change the requirements for "alg" in key
representations, or to add cryptoperiods to key representations, than to do
what this draft proposes.
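
For illustration, a hedged sketch of what enforcing those two best practices
might look like; note that JWK has no standard cryptoperiod parameters, so
"not_before" / "not_after" below are hypothetical names, not registered
fields:

    from datetime import datetime, timezone

    # Hypothetical key record: single purpose plus cryptoperiod metadata.
    key = {
        "alg": "Ed25519",
        "use": "sig",
        "not_before": datetime(2024, 1, 1, tzinfo=timezone.utc),
        "not_after":  datetime(2025, 1, 1, tzinfo=timezone.utc),
    }

    def usable_for(key: dict, purpose: str, at: datetime) -> bool:
        """Enforce single purpose and a fixed cryptoperiod (SP 800-57)."""
        return (key["use"] == purpose
                and key["not_before"] <= at < key["not_after"])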

I see this document as enabling compliance with best practices, not
ensuring that all protocols follow them.


>
> * In the example given of advertising algorithms in server metadata, I’m
> not sure how this helps. For compatibility, any server that supports EdDSA
> is going to have to continue supporting EdDSA or risk breaking existing
> clients. Likewise, any signature verification client that supports only
> Ed25519 may still have to support “EdDSA” and filter out any non-Ed25519
> keys.
>

A similar issue occurs with secp256k1 and ECDSA today.

Some implementations normalize to lower-s (and expect it), others don't.

When you cross-test, you get errors in implementations that assume ES256K
signatures are always lower-s, because they are not... and that's for the
same ES256K public key (arguably an even worse problem).

The point being that we could fix this by registering "ES256K-LS", and we'd
have the same problem with older implementations that advertised ES256K.
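
For reference, the lower-s normalization in question is tiny; a sketch over
secp256k1, using the curve's published group order:

    # secp256k1 group order (n).
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def normalize_low_s(r: int, s: int) -> tuple[int, int]:
        """Return the low-s form of an ECDSA signature over secp256k1.

        Both (r, s) and (r, n - s) verify against the same key and message;
        implementations that require low-s reject the high-s form.
        """
        if s > N // 2:
            s = N - s
        return r, s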

I agree with your comment, but I don't see anything better to do than enable
more precision, so that implementations that are aware of it, or that arrive
after it is available, can take advantage of it.


> * Does the usage of “enc” count as not being fully specified? I can well
> imagine that there are some clients that support, say, RSA-OAEP, but only
> support 128-bit content encryption algorithms, or only support GCM. So the
> same issue with not specifying the curve also applies when not specifying
> the content encryption algorithm.
>

Excellent question.

See the recent discussion here:

https://datatracker.ietf.org/meeting/118/materials/slides-118-lamps-attack-against-aead-in-cms-00

In an ideal world, the crypto is "key AND algorithm" committing; in our
current world, we may only be able to signal things in a way that easily
enables that kind of commitment, not enforce that it actually happens in
all the places it should.
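
As one illustration (not a registered mechanism), a generic key-and-algorithm
commitment can be layered on by hashing both into a tag that travels with the
ciphertext:

    import hashlib

    def commitment_tag(key: bytes, alg_id: str) -> bytes:
        """Hash the key and algorithm identifier into a tag sent alongside
        the ciphertext; the receiver recomputes and compares, so the same
        ciphertext cannot be silently reinterpreted under a different key
        or algorithm."""
        alg = alg_id.encode()
        # Length-prefix the identifier so (key, alg) encodings cannot collide.
        return hashlib.sha256(len(alg).to_bytes(2, "big") + alg + key).digest()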

Addressing what "fully specified" means for HPKE, ECDH-* + AEAD... is harder
than addressing signatures. There are principles that are hard to achieve
without the ability to fully specify and commit to keys; this is an area
where more discussion is probably needed.


> * The draft states that having different algorithm identifiers for
> different RSA key sizes is not useful, but actually some HSMs only support
> specific key sizes for RSA, and an implementation may want to restrict key
> sizes for efficiency reasons (even more so with PQC).
>

Are you suggesting that RSA usage is sorta like P-256 / ES256 vs P-384 /
ES384, where some systems would prefer fully specified RSA "alg" values for
JOSE / COSE?

On the PQ side, the current JOSE / COSE drafts try to fully parameterize
ML-DSA and SLH-DSA; in other words, the current drafts are already trying
to do what this document suggests.
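
For example, rather than a single polymorphic "ML-DSA" value, the drafts
assign one identifier per parameter set, roughly like the sketch below
(names follow the current drafts and FIPS 204, and may change before
publication):

    # One fully specified identifier per ML-DSA parameter set.
    ML_DSA_ALGS = {
        "ML-DSA-44": {"nist_category": 2},
        "ML-DSA-65": {"nist_category": 3},
        "ML-DSA-87": {"nist_category": 5},
    }

    # A consumer can gate support on the identifier alone:
    def supports(alg: str) -> bool:
        return alg in ML_DSA_ALGS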


>
> If we take this draft to its logical conclusion, we’d surely end up with
> “alg” being more akin to TLS 1.2 ciphersuites. But that’s very different to
> where we are now, and I note that TLS 1.3 has moved in the opposite
> direction: negotiating curves and other parameters externally to the cipher
> suite. Otherwise, we’ll end up with a combinatorial explosion of new
> algorithm identifiers.
>

The JOSE and COSE registries have an "alg" field; in an ideal world, the
number of RECOMMENDED registered options is limited to what is safe to use
today, and what will be safe tomorrow.

Registries grow over time... better that they grow with fully specified
entries than that they grow with a need to implement the Cartesian product
of another registry.

I think there is utility in registries like
https://www.iana.org/assignments/hpke/hpke.xhtml giving us the "à la carte"
options, and in registries like
https://www.iana.org/assignments/cose/cose.xhtml#algorithms giving us a
smaller set of options when using JOSE or COSE (as opposed to doing vanilla
crypto, of the form we see from NIST / CFRG).

"...as you might know, a smorgasbord of standardized algorithms to pick and
choose for anyone's appetite comes with interop and other challenges. "

- https://mailarchive.ietf.org/arch/msg/cfrg/3VDlvosyXMY4Ea1JrJVsnTF6Cao/
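
To put a rough number on the smorgasbord: if JOSE / COSE registered the
Cartesian product of the à la carte HPKE registries, a sketch of the count
from just the current entries (abbreviated here) looks like this:

    from itertools import product

    # Entries from the IANA HPKE registries (abbreviated).
    kems  = ["DHKEM(P-256,HKDF-SHA256)", "DHKEM(P-384,HKDF-SHA384)",
             "DHKEM(P-521,HKDF-SHA512)", "DHKEM(X25519,HKDF-SHA256)",
             "DHKEM(X448,HKDF-SHA512)"]
    kdfs  = ["HKDF-SHA256", "HKDF-SHA384", "HKDF-SHA512"]
    aeads = ["AES-128-GCM", "AES-256-GCM", "ChaCha20Poly1305"]

    # Registering the product as "alg" values would mean 5 * 3 * 3 = 45
    # identifiers from these entries alone.
    print(len(list(product(kems, kdfs, aeads))))  # 45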


> So I think it’s a “no” from me on adopting this draft, and the effort
> should be spent rather fixing the negotiation mechanisms of the protocols
> that are having issues. Because they will need to do that anyway for all
> the degrees of freedom that are still not nailed down by these “fully
> specified” identifiers.
>

It's easier for us to add fully specified algorithms; protocols that don't
have the ability to make use of them will continue to exist.

Even in the existing registries there are "deprecated" and "not
recommended" algorithms that I am sure some protocols can't afford to stop
using.

Enabling more secure protocols is different from fixing insecure protocols
that have ambiguous negotiation.

But I concede that defense in depth should require us to do both.


>
> — Neil
>
> On 2 Jan 2024, at 19:13, Karen ODonoghue <[email protected]> wrote:
>
> 
> JOSE working group members,
>
> This email starts a two week call for adoption for:
>
> https://datatracker.ietf.org/doc/draft-jones-jose-fully-specified-algorithms/
>
> As discussed at the November IETF meeting, with the approved expansion of
> the charter to include maintenance items, this document is now within
> scope.
>
> Please reply to this email with your comments on the adoption of this
> document as a starting point for the related JOSE work item.
>
> This call will end on Wednesday, 17 January 2024.
>
> Thank you,
> JOSE co-chairs


-- 


ORIE STEELE
Chief Technology Officer
www.transmute.industries
