Dear David et al,
I believe we all know IETF's position on this topic by now.
Anyway, there is considerable interest in maintaining the core quality of
text-based messages (human readability) for signed data as well, which has led
to various quests for alternative solutions.
To give some perspective on the complexity, my C#/.NET implementation [1]
weighs in at 30 KB of executable code.
It took three complete revisions and five calendar years to get to this point,
but it was (hopefully) worth it :-)
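For list members who haven't looked at the scheme itself, the core idea is
plain: recursive serialization with object properties sorted by their UTF-16
code units and numbers emitted in ECMAScript's shortest round-trip form. The
C# fragment below is just an illustration for this message (it is NOT code
from [1], and its string escaping and number formatting are simplified
approximations of the real rules):

    using System;
    using System.Collections.Generic;
    using System.Globalization;
    using System.Linq;
    using System.Text;

    static class CanonicalizerSketch
    {
        // Values are modeled as: null, bool, double, string,
        // IDictionary<string, object> (objects), IEnumerable<object> (arrays).
        public static string Serialize(object value)
        {
            switch (value)
            {
                case null: return "null";
                case bool b: return b ? "true" : "false";
                // The scheme mandates ECMAScript's shortest round-trip
                // serialization; "R" is only an approximation of that.
                case double d: return d.ToString("R", CultureInfo.InvariantCulture);
                case string s: return Quote(s);
                case IDictionary<string, object> obj:
                    // Sort property names by UTF-16 code units (ordinal order)
                    var members = obj.OrderBy(p => p.Key, StringComparer.Ordinal)
                                     .Select(p => Quote(p.Key) + ":" + Serialize(p.Value));
                    return "{" + string.Join(",", members) + "}";
                case IEnumerable<object> arr:
                    return "[" + string.Join(",", arr.Select(Serialize)) + "]";
                default: throw new ArgumentException("Unsupported type");
            }
        }

        // Simplified escaping; the real scheme also uses \b, \f, \n, \r, \t
        static string Quote(string s)
        {
            var sb = new StringBuilder("\"");
            foreach (char c in s)
            {
                if (c == '"' || c == '\\') sb.Append('\\').Append(c);
                else if (c < ' ') sb.AppendFormat("\\u{0:x4}", (int)c);
                else sb.Append(c);
            }
            return sb.Append('"').ToString();
        }
    }

Fed an object modeled as {"b":1,"a":true} this emits {"a":true,"b":1}, which
is exactly the property that makes independently produced serializations hash
(and thus sign) identically.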
Thanks,
Anders
[1] https://github.com/cyberphone/json-canonicalization/tree/master/dotnet
On 2018-10-22 04:47, David Waite wrote:
On Oct 19, 2018, at 10:55 PM, Anders Rundgren <[email protected]> wrote:
There is also a (very active) W3C community group working with a similar
concept, so clear-text signatures based on JSON canonicalization will probably
reach the market regardless of IETF's position on the matter:
https://w3c-dvcg.github.io/ld-signatures/
This document is not on a standards track. The group which published it is not
a standards group. So this isn't currently on track to reach the market as an
IETF or W3C standard.
It is also based on RDF canonicalization; it canonicalizes a specific format
expressed in JSON, not arbitrary JSON.
That isn't to say that there couldn't be multiple canonicalization formats
supported, possibly built as filters so you could combine them, as existed
with XML-DSIG. But then you have a compatibility matrix to deal with.
This still doesn't solve the problem that many internal representations of JSON
will not preserve the full data set you are representing in your canonical form.
From my development experience with XML-DSIG, the most common cause is that
tools discard information they do not understand, such as properties which
aren't part of the public schema. For developers who do not understand the
additional restrictions a cleartext signature places on them, this causes
issues that only come up late, in broad interoperability testing, and that can
only be resolved through per-vendor workarounds or reimplementation with
different tools. This actually occurred multiple times across different
implementations, sometimes when leveraging libraries that did claim to
preserve the full information set of the XML document.
The one million random and "edge-case" JSON Numbers used for testing the
devised JSON canonicalization scheme uncovered flaws in other systems as well...
https://github.com/dotnet/coreclr/issues/17467
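For the curious, a test of this kind is conceptually trivial; the sketch below
(again just an illustration, not the actual test suite, which also includes
hand-picked edge values) generates random IEEE 754 bit patterns and requires
bit-exact serialize/parse round-trips:

    using System;
    using System.Globalization;

    class NumberRoundTripTest
    {
        static void Main()
        {
            var rng = new Random(1);  // fixed seed for reproducibility
            var bits = new byte[8];
            for (int i = 0; i < 1_000_000; i++)
            {
                rng.NextBytes(bits);
                double d = BitConverter.ToDouble(bits, 0);
                // NaN and Infinity are not valid JSON Numbers
                if (double.IsNaN(d) || double.IsInfinity(d)) continue;
                string text = d.ToString("R", CultureInfo.InvariantCulture);
                double parsed = double.Parse(text, CultureInfo.InvariantCulture);
                if (BitConverter.DoubleToInt64Bits(parsed) !=
                    BitConverter.DoubleToInt64Bits(d))
                    Console.WriteLine("Round-trip failure: " + text);
            }
        }
    }

On a runtime affected by the issue above, a (small) fraction of the generated
values trips the equality check.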
This is yet another case against canonical JSON, is it not? Or rather, how are
people expected to deal with intermittent interoperability failures until a new
language runtime release that revises the numerical print and parse functions
comes out?
-DW