On Sun, Mar 26, 2023 at 10:26:40AM -0400, Manu Sporny wrote:
> On Sun, Mar 26, 2023 at 9:49 AM AJITOMI Daisuke <[email protected]> wrote:
> > Taking Ilari's post into account, I would like to take some time to 
> > reconsider my proposal and your raised issue.
> 
> The following article is a good summary of a modern take on the
> concerns related to "cryptographic agility":
> 
> https://www.blockchaincommons.com/musings/musings-agility/

The problem with a lack of cryptographic agility is that if a component
is broken or proves inadequate, you are in a world of hurt, especially
if the protocol needs to be linearly scalable. And as for a component
proving to be inadequate: quantum computers, anyone?


Of the three problems brought up, versions are worse than
algorithms:

- Versions are much more expensive.
- Versions are much more likely to interact badly.
- Versions are much more vulnerable to downgrade attacks.


And while algorithm agility does have costs, sometimes it is perversely
the lack of agility that makes things expensive. E.g., consider wanting
to use the Edwards25519 curve for signatures in a constrained
environment...

The article does not even give a concrete example of algorithms
interacting badly within the same version (outside of things
deliberately designed to interact badly). This kind of behavior seems
to be incredibly rare, even though the sheer number of possible
combinations inflates the odds. The only example I can think of is the
DH/ECDH interaction in TLS 1.2-.

And the example of a downgrade attack given is a version downgrade
attack, not an algorithm downgrade attack. As hard as algorithm
negotiation is, version negotiation is much harder.

And in response to the statement "No one should have used those
suites after 1999!": Better suites were not registered until 2008.

And the article does not seem to bring up overloading as a solution:
Use the same identifiers with meanings that depend on the key. The
applications/libraries are then forced to consider the key type before
trying operations.
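
A minimal sketch of the idea, using Python and the pyca/cryptography
package purely as an illustration (the single "sign" entry point here
is hypothetical): the same identifier means EdDSA for an Ed25519 key
and ECDSA/SHA-256 for an EC key, and refuses anything else.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec, ed25519

    def sign(key, data: bytes) -> bytes:
        # One identifier, key-dependent meaning: the caller has to know
        # what kind of key it holds before the operation makes sense.
        if isinstance(key, ed25519.Ed25519PrivateKey):
            return key.sign(data)                             # EdDSA
        if isinstance(key, ec.EllipticCurvePrivateKey):
            return key.sign(data, ec.ECDSA(hashes.SHA256()))  # ECDSA
        raise TypeError("no meaning defined for this key type")

    sig = sign(ed25519.Ed25519PrivateKey.generate(), b"message")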


> The design philosophy behind that approach is the notion that a
> non-trivial number of developers that utilize cryptographic libraries
> in application-space are ill equipped to know how to properly choose
> cryptographic parameters, so exposing them to the ability to configure
> those parameters is less safe than choosing good defaults for them.
> Choosing between P256 or RS256 or HS256, or why one would use SHA2-256
> or SHAKE-256, and so on are difficult choices for non-experts.

RS256 and HS256 are very different things, and applications
absolutely require control over that sort of thing.
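
As one concrete example of the control an application needs (PyJWT
shown here only because it is widely known): the verifier pins the
algorithm it expects instead of trusting whatever the token header
claims.

    import jwt  # PyJWT

    def verify_rs256(token: str, rsa_public_pem: bytes) -> dict:
        # algorithms=["RS256"] pins the algorithm; a token whose header
        # says HS256 (or "none") is rejected instead of being verified
        # under rules the attacker chose.
        return jwt.decode(token, rsa_public_pem, algorithms=["RS256"])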

And who cares about SHA-256 versus SHAKE-256 (until one of them gets
broken, and nobody knows which that will be).

Considering the multitude of security issues with JOSE, I don't think
most of them have much to do with poor algorithm choices:

- Libraries somehow managing to use an RSA public key as an HMAC key
  (don't ask me how; see the sketch after this list).
- Bad library API design leading to alg=none being used when it should
  not.
- Trusting untrustworthy in-band keys.
- Picking wrong kinds of algorithms.
- And numerous others where no algorithm is going to save you.
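
For the first item, a minimal sketch of how the confusion plays out
(hand-rolled JOSE-style encoding, no particular library implied): if a
verifier dispatches on the token's alg header and reuses the RSA public
key bytes as the HMAC secret, the attacker can mint a valid-looking
token using nothing but that public key.

    import base64, hashlib, hmac, json

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    # Placeholder PEM; the attacker only needs the verifier's public key.
    public_pem = b"-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----\n"

    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"sub": "admin"}).encode())
    signing_input = (header + "." + payload).encode()

    # Forged "signature": HMAC keyed with the public key bytes, which is
    # exactly what a confused verifier will compute and compare against.
    tag = hmac.new(public_pem, signing_input, hashlib.sha256).digest()
    forged_token = header + "." + payload + "." + b64url(tag)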

And indeed, looking at the JOSE algorithm registry, while there are
some bad algorithms there (e.g., RS1), I would not say any of them is
easy to pick apart if the right kinds of algorithms are chosen.

The COSE registry has considerably worse stuff. E.g., WalnutDSA and
SHA-256/64. Those might actually be easy to pick apart.


One part of "improvement" seen with algorithms in newer stuff is that
newer protocols/versions tends to not have the most horrible stuff
anymore. I have seen export/single-DES (totally broken in 2000!) in
TLS 1.2 (no, not TLS 1.0/1.1) No Earlier Than 2H22. And fair bit of
those also support things like ECDH P-256 and AES-128-GCM.



> Therefore, the "cryptosuites approach" attempts to provide reasonable
> defaults (with new versions released when needed) to those developers
> such that the chances of them trying to work with parameters that they
> don't have the skillset to pick are greatly reduced (or, ideally,
> eliminated). This is the approach that systems like Wireguard have
> taken in the Linux kernel. Reduction in parameter choice in
> cryptographic algorithms also leads to, as has been noted in this
> thread, less fan-out and thus an easier audit surface and a reduced
> attack surface.

The problem with ciphersuites is that it is easy to couple things that
absolutely should not be coupled (and if you don't, the number of
ciphersuites explodes). They also slot in well with very flawed
arguments about "cryptographic strength matching". The end result can
easily be a disaster.
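
A toy illustration of the explosion (the menu below is made up, not any
registry's actual contents): coupled suites are a cross product, while
independently negotiated parameters only add up.

    key_exchange = ["ECDHE-P256", "ECDHE-X25519", "FFDHE", "RSA-kex"]
    authentication = ["RSA", "ECDSA", "PSK"]
    record_protection = ["AES-128-GCM", "AES-256-GCM",
                         "CHACHA20-POLY1305", "3DES-CBC"]

    coupled = (len(key_exchange) * len(authentication)
               * len(record_protection))      # 48 suite code points
    independent = (len(key_exchange) + len(authentication)
                   + len(record_protection))  # 11 independent values
    print(coupled, independent)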

The worst stuff I have seen looks innocent (with some flawed "strength
matching" arguments), with the devil in the details. Not like the
impressive mess that is the TLS 1.2- ciphersuites.

Then ciphersuites also cause problems with configuration. I have
written a TLS library, and in order to keep the crypto configuration
from being a mess, its interface pretends TLS does not have
ciphersuites.
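
A rough sketch of what I mean by that kind of interface (my own
illustration, not the actual API of my library or any other): the
caller sets independent knobs, and the mapping back to coupled
ciphersuite code points stays internal.

    from dataclasses import dataclass, field

    @dataclass
    class TlsConfig:
        groups: list = field(default_factory=lambda: ["x25519", "secp256r1"])
        signatures: list = field(default_factory=lambda: ["ed25519",
                                                          "ecdsa_secp256r1_sha256"])
        ciphers: list = field(default_factory=lambda: ["AES-128-GCM",
                                                       "CHACHA20-POLY1305"])

        def _ciphersuites(self) -> list:
            # Internal only (and deliberately simplified): derive the
            # coupled wire identifiers from the independent cipher knob.
            return ["TLS_" + c.replace("-", "_") + "_SHA256"
                    for c in self.ciphers]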

And Wireguard is not linearly scalable, so it can get away with things
that other protocols, which actually need linear scalability, cannot.




-Ilari

_______________________________________________
jose mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/jose
