On Wed, Jul 05, 2023 at 09:51:37AM +0000, Jeremy O'Donoghue wrote:
> Hi Ilari,
> 
> Response inline below – apologies once more for Outlook threading.
> 
> I’m not seeing convergence to consensus in this thread. At this point
> it seems that we have one group definitely favouring “alg” and
> another group definitely favouring “hkc”, and it seems like no-one is
> changing their mind at this point because there are strong technical
> arguments on both sides.

HPKE sender info instead of "hkc", but otherwise agreed.


> On 29/06/2023, 21:41, "COSE" <[email protected]> wrote:
> 
> 
> On Thu, Jun 29, 2023 at 04:00:09PM +0000, Jeremy O'Donoghue wrote:
> 
> > The short version is that I *strongly* prefer a single “alg” integer
> > to further parameterise the key usage space. The use of a construction
> > like “hkc” is almost certain to complicate interoperability and
> > testing. Large-scale systems may be able to support many options, but
> constrained systems rarely have that luxury. While configuration or
> profile documents can help, there is a risk that over time the number
> of configurations grows large enough that they no longer help very
> much, as everyone chooses his/her favourite.
> 
> 
> [JOD] “hkc” still complicates things further. As an implementer,
> checking the consistency and compatibility of COSE arguments is
> already fairly tedious – adding a special case makes things
> significantly worse.

What kind of consistency checks? I can think of one (the key is
castable into a KEM). However, that check would still be required even
with "alg".

And with a disaggregated KEM, that seems really hard to get wrong,
given that the key cast is an absolutely required step.
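As a sketch (the table and function names are hypothetical, not from any
real library; KEM identifiers follow RFC 9180), the whole check is one
table lookup:

```python
# Hypothetical "cast a COSE key into its KEM" check.  If the key's
# type/curve pair has no KEM, the cast fails and nothing else can
# proceed - which is why this is hard to get wrong.
HPKE_KEM_FOR_KEY = {
    ("EC2", "P-256"):  0x0010,  # DHKEM(P-256, HKDF-SHA256)
    ("EC2", "P-384"):  0x0011,  # DHKEM(P-384, HKDF-SHA384)
    ("OKP", "X25519"): 0x0020,  # DHKEM(X25519, HKDF-SHA256)
}

def kem_for_key(kty, crv):
    """Cast a COSE key (kty, crv) into its HPKE KEM id, or fail loudly."""
    try:
        return HPKE_KEM_FOR_KEY[(kty, crv)]
    except KeyError:
        raise ValueError(f"key {kty}/{crv} is not castable into any supported KEM")
```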

 
> And "hkc" is not meant for highly constrained cases. For example,
> if the application does ciphersuite-per-key-type, then "hkc" is
> completely unnecessary.
> 
> [JOD] HPKE is not for constrained use-cases? I can think of
> constrained use-cases. This seems like a bold claim to make.

Probably "hkc" and HPKE sender info got mixed up.

Constrained use-cases would be using HPKE sender info, since that is
required. However, they would not be using "hkc" in keys, because that
is unnecessary under a tightly defined profile.


> In many cases, particularly if you care strongly about security in
> an Edge device, you will want your COSE implementation to sit in a
> small Enclave/TEE that communicates with a Root of Trust that holds
> the crypto keys and hardware – these are commonly pretty constrained
> environments – an Arm Cortex-M class device with TrustZone-M or an
> equivalent RISC-V MCU, with low hundreds of kiB of memory.

"hkc" is for the sender side, which does not need to run in a TEE, as
it handles public keys.

And for the receiver side, I would "pre-digest" the messages outside
the TEE and then pass the digested results to the TEE (maybe with some
COSE-specific stuff in the TEE to prevent use with non-COSE HPKE),
which then returns the crypto results (probably a session key).

This is to cut down the complexity of the TEE code, where the cost of
bugs ranges from merely expensive (compromised keys) to very expensive
indeed (full system compromise). Since this also simplifies the
interface, the impact is especially great. It also reduces the amount
of data that needs transferring (a perennial problem).
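A rough sketch of that split (all names, the domain-separation label,
and the hash standing in for HPKE decapsulation are illustrative, not
any real TEE API):

```python
import hashlib

def predigest(protected_headers: bytes, payload: bytes) -> bytes:
    # Outside the TEE: condense an arbitrary-size message into a
    # fixed-size digest.  The label is the "COSE-specific stuff" that
    # prevents reuse with non-COSE HPKE (hypothetical value).
    h = hashlib.sha256(b"COSE-HPKE-digest-v1")
    h.update(protected_headers)
    h.update(payload)
    return h.digest()

def tee_derive_session_key(enc: bytes, digest: bytes) -> bytes:
    # Inside the TEE: the private key never leaves; a real implementation
    # would run HPKE decapsulation plus the KDF here.  Only a session
    # key crosses back over the boundary.
    private_key = b"\x01" * 32  # lives only inside the TEE
    return hashlib.sha256(private_key + enc + digest).digest()
```

The point of the shape is that the TEE interface is two fixed-size byte
strings in, one fixed-size byte string out.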


> And having an explicit list would do absolutely nothing to help interop in
> constrained environments. The list would probably cover every KEM, KDF
> and AEAD in HPKE at least once. Which would require the same effort as
> dealing with everything, except with the *additional* complexity of
> dealing with coupling.
> 
> [JOD] I think this is just a question of where you put the complexity.
> In a constrained environment you would likely use a profile to
> constrain the supported algorithm choices.

Sure, constrained environments are expected to profile down.

However, supporting a list of ciphersuites is more complex than
supporting every possible combination of the components of those
ciphersuites.

E.g., if one supports 16-1-1 (P-256/SHA-256/AES-128-GCM) and 17-2-2
(P-384/SHA-384/AES-256-GCM), that is a bit more complex than supporting
every possible combination of P-256, P-384, SHA-256, SHA-384,
AES-128-GCM and AES-256-GCM.
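As an illustrative sketch (component identifiers per the HPKE
registries, everything else made up): the decoupled version is three
independent lookups, while the ciphersuite version needs those same
lookups plus a whitelist of allowed tuples.

```python
# Component tables: any mix of supported parts works.
KEMS  = {0x10: "P-256",       0x11: "P-384"}
KDFS  = {0x01: "HKDF-SHA256", 0x02: "HKDF-SHA384"}
AEADS = {0x01: "AES-128-GCM", 0x02: "AES-256-GCM"}

def lookup_components(kem, kdf, aead):
    # Decoupled: three independent lookups, no special cases.
    return (KEMS[kem], KDFS[kdf], AEADS[aead])

# Ciphersuite whitelist: only the two exact tuples 16-1-1 and 17-2-2
# are allowed, so mixing (e.g. P-256 with SHA-384) must be rejected
# as an extra check on top of the component logic.
SUITES = {(0x10, 0x01, 0x01), (0x11, 0x02, 0x02)}

def lookup_suite(kem, kdf, aead):
    if (kem, kdf, aead) not in SUITES:
        raise ValueError("ciphersuite not allowed")
    return lookup_components(kem, kdf, aead)
```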

 
> > While crypto-agility is of definite and growing importance, it is not
> > unreasonable to restrict the space within which agility is permitted
> > to a subset of the large set of possible options in the name of
> > interoperability, providing options that provide meaningfully different
> > levels of security and/or differently constructed algorithms (hence,
> > presumably, resistant to different classes of attack)
> 
> This does not actually help interoperability.
> 
> Then, COSE has always been frameworky. For such specifications,
> restrictions for interop are inappropriate. Instead, the specs are
> intended to be profiled down or specialized for applications, and in
> that process, appropriate restrictions are defined for interoperability.
> 
> [JOD] I agree that profiles are needed regardless of solution. “alg”
> and “hkc” are proposing different ways to do the same thing, placing
> the effort in slightly different places.

I argue that "alg" has effort that the other does not.


> > If the time to generate new informational RFCs to define new cipher-
> > suites for COSE “alg” is too long, we should address this rather than
> > adopting “hkc” as a workaround.
> 
> It is not just one RFC (and that is a surprising amount of effort), but
> an essentially endless stream of them, every time HPKE adds something,
> and those costs add up.
> 
> [JOD] I am aware of the effort in producing an RFC. What I have seen
> is that large framework RFCs that define something new take a long
> time (some of us on this list are editors of the RATS EAT draft, for
> example, so well aware of this). I have also observed that RFCs which
> extend an existing framework in specific and limited ways can be
> published fairly quickly.

Well, there have been RFCs that should have been simple, but took an
extremely long time for some reason.


> It is probably true that the “alg” approach may require two RFCs
> rather than one when making HPKE changes, but this reflects the fact
> that if you are potentially changing how COSE works (as here),
> consensus is needed in the COSE WG anyway before this can be adopted.

The COSE WG just does not have the expertise to do this.


> > Variety of supportable use-cases: This is exactly the same with both
> > proposals, as far as I can tell. The difference is in the mechanism to
> > define new variations. I regard restricting the default set of
> > operations as a feature and not a bug.
> 
> No, it is not the same. More work for extra complexity, for likely no
> gain.
> 
> [JOD] I’m personally happy to put more work/complexity on the 1% that
> have particular and specialized needs compared to the 99% who do not,
> as previously stated. I can’t see that “alg” makes anything
> impossible, merely more work, although later in thread you suggest
> that it does.

I have actually written code that just can't deal with "alg". The
blocker is that even if the code can deal with everything the HPKE
code supports, it might not know what that is.

And if the code supports multiple ciphersuites, what it likely does
internally is expand the alg into the HPKE triple anyway. Unless
someone has built the road to hell; then you need something more
complex than that.

And if there is just one supported ciphersuite, the simplest thing
is to just ignore the fields in the message and substitute fixed
values.
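Both receiver strategies can be sketched as follows (the 1762 code
point is the hypothetical registration from the example quoted below;
the triple values follow RFC 9180):

```python
# Multi-suite "alg" path: internally the code expands alg back into the
# HPKE (kem, kdf, aead) triple anyway.
ALG_TO_TRIPLE = {
    1762: (0x0020, 0x0001, 0x0001),  # X25519 / HKDF-SHA256 / AES-128-GCM
}

def triple_from_alg(alg):
    return ALG_TO_TRIPLE[alg]

# Single-suite path: ignore whatever the message claims and substitute
# the one fixed triple.
def triple_single_suite(_message_fields):
    return (0x0020, 0x0001, 0x0001)
```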


> <snip>
> > In short, this is exactly the area where I *like* to see expert
> > review.
> 
> That review is going to be less "expert" than what you would like...
> 
> [JOD] Disappointing, if true.

And there's the fact that the experts who would review it are the
very same ones that already did review it.

And then maybe that is a bit colored by me sometimes seeing stuff
reviewed by "experts" that I think is absolute garbage.


> <snip>
> > This also helps hugely with interoperability as it provides a
> > tractable combination of options to test (this is not a trivial factor
> > for product vendors).
> 
> As I explained above, it does not help with interoperability. And
> with regards to testing, it would *increase* the effort required.
> 
> [JOD] As a vendor, I will certainly test layers independently in
> unit testing, but I will also extensively test *all* supported
> options before I ship.

Automated testing can test the combinations real fast.

And if it is actually a constrained system, most likely there are
only 1 or 2 combinations anyway.


> > Ease of Adding a new HPKE KEM: I don’t agree on the assumption that
> > HPKE will be provided as a separate layer. This will doubtless often
> > be the case, but it will not always be true. We certainly don’t
> > “need” to do it.
> 
> HPKE is certainly *specified* as a separate layer. While yeeting the
> parts of HPKE that are never used gets good simplifications, I think
> it is doubtful that mixing in the HPKE implementation will buy much,
> as most of that is irreducible crypto code.
> 
> [JOD] It should reduce parameter checking code. While this is often
> not very much compiled code, it is easy to get wrong.

HPKE requires the numeric identifiers anyway.

And this code is not easy to get wrong. If an implementation supports
multiple of a kind, it needs to branch on the identifier anyway. And if
it supports just one of a kind, it can just hardcode the value.


> > Adding any new algorithm to a system implies change somewhere, and
> > for the same reason that I don’t regard “ease of coding” as an
> > argument in favour of “alg”, I don’t regard “ease of adding a new
> > HPKE KEM” as an argument in favour of “hkc”.
> 
> That change can be just updating the HPKE library. Something that is
> required anyway.
> 
> [JOD] Assumes that you can update libraries individually. This is
> usually not true for enclaves where static bootable images are more
> common. Once you have to generate a new image, there is no benefit
> to enforced separation.

Even with static linking, it is still changing one thing versus
changing two things.

 
> > Incidentally, solutions like dependabot do not address the problem of
> > creating adequate validation suites for cryptographic systems, and
> > more algorithm combinations create a combinatorial explosion in the
> > testing requirements.
> 
> How testing requirements scale depends on coupling.
> 
> That combinatorial explosion *only* happens in presence of coupling.
> 
> [JOD] As noted above, I will not ship anything before testing all of
> the combinations I support in my implementation.

Well, even if one wants to do that, an automated exhaustive check is
very fast.
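For illustration (roundtrip() is a stand-in for a real HPKE seal/open
pair, and the identifier lists are arbitrary), the exhaustive check is
one short loop:

```python
import itertools

KEMS  = [0x10, 0x11, 0x20]
KDFS  = [0x01, 0x02]
AEADS = [0x01, 0x02]

def roundtrip(kem, kdf, aead, msg=b"test"):
    # Placeholder: a real check would seal then open with this triple.
    return msg

# Every combination of decoupled components, checked in one pass:
# 3 KEMs x 2 KDFs x 2 AEADs = 12 cases.
failures = [t for t in itertools.product(KEMS, KDFS, AEADS)
            if roundtrip(*t) != b"test"]
assert not failures
```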


> > Examples
> >
> >
> > > AwesomeHPKEApp.new(suite={kem: DHKEM_X25519_HKDFSHA256(0x20), kdf:
> > > HKDF_SHA256(0x01), aead: AES128GCM(0x01)})
> >
> > > AwesomeHPKEApp.new(suite="HPKEv1-Base-DHKEM(X25519,HKDFSHA256)-HKDFSHA256-AES128GCM")
> >
> > Assuming we had a value of 1762 for “alg” representing
> > “HPKEv1-Base-DHKEM(X25519,HKDFSHA256)-HKDFSHA256-AES128GCM”, I offer
> > you:
> >
> > AwesomeHPKEApp.new(alg=1762)
> >
> > …which seems shorter still. I’m aware this example is facetious,
> > but in reality libraries would pre-define constants that an IDE
> > would likely auto-complete for you (even emacs does such things
> > these days).
> 
> Well, there are some issues with that API, starting from the fact that
> KEM is constrained by the key...
> 
> [JOD] My apologies. I misattributed the original examples above to
> you. I had to follow the message threading by date to verify that it
> didn’t originate in any of your mails.

I think I can guess who did that example without looking. :-)


> My general point is that there are lots of ways to design an API.

Yeah, I still haven't fully settled on the API in my COSE-HPKE
prototype (in fact, I changed it yesterday).




-Ilari

_______________________________________________
COSE mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/cose
