We had a whole working group on JSON, which I was hoping would solve the
problem of duplicate members (by forbidding them).  That would have caused
the parser community to get more strict.

Unfortunately, that didn't happen, largely due to concern over streaming
generators/parsers that can't keep track of what keys they've seen.
Whether you agree with that or not, that's the IETF consensus in the
current JSON RFC.

The JOSE documents are not the place to solve JSON's duplicate-key
problems.  We tried to solve that problem and decided not to.

So we have two choices: Either deal with the limitations of the tool, or
throw it away and use another.  The limitations here are really not bad.
There are no attacks that would be enabled by duplicate keys, and there are
no other tools that provide what we need.  Regardless of the merits of
I-JSON, the software base doesn't exist, and this spec isn't going to make
it exist (but if it does come to be, then implementations can use it!).
The current text basically says, "use an off-the-shelf parser that works
the way most off-the-shelf parsers work."  It accommodates the limitations
of reality.
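To make the limitation concrete, here is a sketch of the behavior most
off-the-shelf parsers exhibit (last value wins) and how little code it
takes to reject duplicates when an implementation wants to.  Python's
standard-library json module is used purely as one example of such a
parser; the reject_duplicates hook is illustrative, not part of any spec:

```python
import json

# Most off-the-shelf parsers silently keep one value per duplicate key.
# Python's stdlib json keeps the last occurrence:
doc = '{"alg": "none", "alg": "HS256"}'
print(json.loads(doc))  # {'alg': 'HS256'}

# An implementation that wants strictness needs only a small hook,
# not a new parser:
def reject_duplicates(pairs):
    obj = {}
    for key, value in pairs:
        if key in obj:
            raise ValueError(f"duplicate key: {key!r}")
        obj[key] = value
    return obj

try:
    json.loads(doc, object_pairs_hook=reject_duplicates)
except ValueError as e:
    print("rejected:", e)
```

Parsers without such a hook would need patching, which is the crux of
the deployment argument below.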

Continuing to tilt at this windmill will just result in either
killing JOSE deployment or creating a portion of the spec that is
universally ignored.  Moreover, it will further demonstrate how
out-of-touch the IETF is with reality.

--Richard




On Thu, Sep 18, 2014 at 4:06 AM, Tero Kivinen <[email protected]> wrote:

> Tim Bray writes:
> > The chance of the JOSE working group moving the vast world of
> > deployed JSON infrastructure rounds to 0.00.  Thus putting a MUST
> > reject in here would essentially say you can’t use well-debugged
> > production software, and would be a really bad idea.
>
> And are none of those jose parsers open source? If any of them is open
> source, then someone who wants to use jose could take one, fix it to
> reject duplicates, and still use that well-debugged production
> software with a small patch; they would just need to add a regression
> test case for the new patch and rerun the normal regression tests to
> confirm everything else still works.
>
> If all of them are closed-source software that you cannot patch, then
> it might be better for people to write a proper open source parser
> that actually tries to be secure.
>
> > On the other hand, if JOSE specified that producers’ messages MUST
> > conform to I-JSON, and a couple other WGs climbed on that bandwagon,
> > and the word started to get around, I wouldn’t be surprised if a few
> > of the popular JSON implementations added an I-JSON mode.  That
> > would be a good thing and lessen the attack surface of all
> > JSON-based protocols (which these days, is a whole lot of them).
>
> And if we say MUST reject structures with duplicate keys, that would
> perhaps push them even harder, especially as vendors that really want
> to be conformant would start asking for it.
>
> On the other hand, I think most of the vendors would just file a
> request for the fix but continue using the relaxed parser, regardless
> of what we write in the specification here. At least if we say MUST,
> they will hopefully put the feature request in. If we say SHOULD,
> they will not...
> --
> [email protected]
>
> _______________________________________________
> secdir mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/secdir
> wiki: http://tools.ietf.org/area/sec/trac/wiki/SecDirReview
>
_______________________________________________
jose mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/jose