EAT allows use of the different CBOR serializations, just like COSE and CWT, 
so particular deployments can choose what is best for them. It is important to 
continue allowing all serializations in EAT for the reasons they exist in the 
first place.

The main example I can think of is EAT in pure HW (e.g., a TPM-like chip that 
outputs EAT). Outputting fixed-length integers makes that HW simpler.
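As an illustration (my own sketch, not something from the thread): a hardware encoder can emit every integer argument in a fixed 8-byte form, which is valid CBOR even though it is not the preferred (shortest-form) serialization. The function name is hypothetical.

```python
import struct

def encode_uint_fixed(n: int) -> bytes:
    """Encode an unsigned integer as CBOR major type 0 with a fixed
    8-byte argument (additional info 27). Valid CBOR per RFC 8949,
    but not the preferred (shortest-form) serialization."""
    if not 0 <= n < 2**64:
        raise ValueError("out of range for a CBOR uint64")
    # 0x1b = major type 0 (unsigned int) with an 8-byte argument
    return b"\x1b" + struct.pack(">Q", n)

# The preferred serialization of 10 is the single byte 0x0a; the
# fixed-width form is 9 bytes, but the encoder needs no
# shortest-form selection logic -- simpler for pure hardware.
assert encode_uint_fixed(10) == bytes.fromhex("1b000000000000000a")
```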

EAT goes one step further than COSE and CWT by pointing out that different 
serializations can cause interoperability issues, and it advises that a 
profile specifying the serialization be created for each use case. (Note that 
serialization variation is minor compared to algorithm selection and key 
identification and distribution.)

It seems true to me that there are CBOR serialization variants that would not 
interoperate. That sounds a little bad and messy...

In reality, I don’t think it is very bad, because it is very easy to support 
preferred serialization and because it is possible to create a decoder that 
supports all the serializations. Supporting all of them doesn’t increase RAM 
or CPU usage much. We don’t hear any complaints from the real world about 
this, and CBOR is getting close to ten years old.
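To make the "decoder that supports all serializations" point concrete, here is a minimal sketch (mine, not from the thread) of an unsigned-integer decoder that accepts every argument width RFC 8949 allows. The extra generality over a shortest-form-only decoder is just a few lines and a small lookup table:

```python
def decode_uint(data: bytes) -> int:
    """Decode a CBOR unsigned integer (major type 0), accepting any
    of the argument encodings RFC 8949 allows: an immediate value in
    the initial byte, or a 1-, 2-, 4-, or 8-byte argument."""
    ib = data[0]
    if ib >> 5 != 0:
        raise ValueError("not major type 0 (unsigned int)")
    ai = ib & 0x1F  # additional information (low 5 bits)
    if ai < 24:
        return ai  # value encoded directly in the initial byte
    widths = {24: 1, 25: 2, 26: 4, 27: 8}
    if ai not in widths:
        raise ValueError("reserved/ill-formed additional info")
    return int.from_bytes(data[1:1 + widths[ai]], "big")

# Four different serializations of the number 10 all decode the same:
for hexenc in ("0a", "180a", "19000a", "1a0000000a"):
    assert decode_uint(bytes.fromhex(hexenc)) == 10
```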

LL


> On Feb 22, 2022, at 6:50 AM, Carsten Bormann <[email protected]> wrote:
> 
> Hi Anders,
> 
>> The WebAuthn/FIDO specification details CBOR serialization requirements
> 
> (As does COSE *for its internally constructed signing inputs*, not for what 
> goes over the wire.)
> 
>> while the EAT draft specifies multiple alternatives.  
> 
> Maybe we need to fix that then.
> 
>> There must be a reason for that.  
> 
> The spirit is willing, but the flesh is weak.
> Well, actually, the spirit is the problem.
> We need to get better in the willpower to nail down unneeded choices.
> (Of which JSON vs. CBOR is one.)
> 
>> To cope with (and potentially enforce/verify), different CBOR serialization 
>> variants, CBOR tools typically provide options: 
>> https://github.com/peteroupc/CBOR-Java/blob/master/api/com.upokecenter.cbor.CBOREncodeOptions.md
> 
> This is a bit of a Cadillac implementation with lots of options, many of 
> which have to do with API variants as opposed to encoding options.
> None of the latter ones will get in the way of EAT interoperability.
> 
>> The proposal is simply defining something like an "I-CBOR" that could serve 
>> as the foundation for future standards like EAT.
> 
> I-JSON was necessary because JSON implementations claim to have ingested 
> something and then give you something else, unless you stay in the fold of 
> I-JSON.
> I’m not aware of a similar problem for CBOR, so I don’t know why we’d need 
> I-CBOR.
> 
> Yes, because of historical artifacts we have different 
> deterministic/“canonical” encoding rules — but that is of interest only where 
> you *need* deterministic encoding.  COSE did the right thing and minimized 
> that surface so it actually doesn’t matter which ones you are using.  (CTAP2 
> actually did that too, IIRC, they just wrote down some additional rules that 
> they don’t actually need.  But I didn’t look at this for a while.)
> 
> If you really do need deterministic encoding, it’s right there in STD94 
> (RFC8949).  You need to remember that deterministic encoding spans all the 
> way to the application, so slapping an I-something label on the encoder is 
> not going to give you actual interoperability if you really do need 
> deterministic encoding.
> 
> Do we?
> 
> Grüße, Carsten
> 
> _______________________________________________
> COSE mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/cose

