Hi Phillip,

On Tue, Apr 19, 2016 at 02:51:27PM -0400, Phillip Hallam-Baker wrote:
> In the meeting, I proposed that we make the use of JSON in ACME
> something that can be easily shared across multiple Web Services.
> 
> In a few years time we are quite likely to have multiple JSON Web
> services that people want to use together. Implementing them as a
> suite is going to be easiest if there is a consistent style. Some of
> the encoding choices in ACME right now make implementation in
> statically typed languages rather less convenient than they could be.
> In particular it is really difficult to parse data streams when the
> type descriptor for an object is a field inside the object that can
> occur last rather than being first on the wire.

I was one of the remote people who thought several of the things you
said at the meeting had merit and were worth discussing further here,
so thanks for writing this up.

But I also thought you were talking about something more than what you
have proposed here now - so could you perhaps clarify a little: did
some of us read more into what you were saying than you actually had
in mind, or have you backed away from some of those suggestions after
further discussion since the meeting?

In particular one of the things that I thought seemed important was
moving _more_ of the protocol into the JSON.  Right now we have a
bit of a mish-mash where part of the data you need to complete some
task is in the JSON, and part of it is squirrelled away in the HTTP
headers.  And there are at least a few places where that creates real
problems (like where an issued certificate might have more than one
valid issuer chain, just to pick one that JSHA and I had looked at).

I got the impression you were hoping to make this more transport
agnostic (which seems like a useful and fairly easy thing to do if
we are going to change things to the degree you are talking here).


> Looking at the impact of the changes in the spec, I don't think the
> changes should delay things as I think they actually make the document
> easier to read. At the moment there are things in sections 5 and 6
> that seem in the wrong place. Promoting the "type" field to be the
> name of the thing makes section 7 a lot clearer.

That's not without its own cost though, but more on that below ...

 
> I can make the edits myself of course. But thought I would check up on
> the nature of the edits first and see if my approach was acceptable.
> 
> 
> I see the following areas needing change:
> 
> 
> 5.3.  Request URI Type Integrity
> 
> Reword to specify the mapping of requests onto JSON messages, each
> request is a single object consisting of one field whose tag specifies
> the request type and whose value is an object containing the request
> data.

I understand the idea of what you are aiming for with this sort of
change - but the idea of a protocol full of JSON objects that are
mandated to strictly have only one field worries me a bit.

It seems more like an admission that "JSON is the wrong way to encode
this" (or perhaps "I'm using the wrong tool to decode JSON") than
a genuine advantage to the encoding.  And I'm not of the opinion
that JSON is the wrong format to encode this in.

JSON objects can have multiple fields, and those fields are unordered.
I don't think you can really escape that, and I don't think we can
reasonably "fix" it without inventing something that "isn't JSON".

There may be smarter ways we can arrange some of these structures,
for good reasons, but I'm not sure that reinventing ASN.1 in JSON is
a plan I'd happily endorse.


And I do think there *are* ways that we need to rearrange some of
these request structures.  In particular, overloading "reg" to both
make requests and request changes is troublesome.  I think it needs
to be split into separate query and change resources.

The obvious problem case that evidenced this in testing against a real
implementation is that if you send:

 { "resource": "reg", "delete": true }

to the current Boulder implementation, it will happily return a 202
status, with the current registration resource as if you'd submitted
a "trivial payload" to actually query that state - with no indication
that something went wrong and the request you actually issued had
failed.

I think that's a serious impediment to ever being able to extend the
registration resource functions in any backward-compatible way and
we should address that now while we still have the chance.  A request
to change something should never be misinterpreted as a request to do
something entirely different to that "successfully".



> Move table from here into section 6. Because section 5 should specify
> the layering, it should be ACME agnostic. Later on, this is the
> section we will pull out and build on as a re-usable element in other
> protocols (e.g. LURK).
> 
> 
> 5.X  Discuss GET vs POST
> 
> While we are about it, there should be some grounding in section 5 for
> the GET/POST distinction made in section 6.
> 
> I think the document as a whole would be cleaner if these were set out
> as two options in section 5 then section 6 describes which are
> assigned to which.
> 
> 
> 
> 6.  Certificate Management
> 
> One of the things I found very confusing when trying to implement was
> that there are tables in 5.3 and 6.1 that seem to have the same
> information but obviously don't as one has eight entries and the other
> has six.
> 
> I think these need to be merged.
> 
> 
> Example now becomes:
> 
> { "new-reg": {
>      "contact": [
>        "mailto:[email protected]",
>        "tel:+12025551212"
>      ],
>      "agreement": "https://example.com/acme/terms",
>      "authorizations": "https://example.com/acme/reg/1/authz",
>      "certificates": "https://example.com/acme/reg/1/cert"
>    }}

I agree there are several places in the text where you either need to go
back and forth to get the full description of something, or where the
examples appear (or are) conflicting.  I think I noted some of those
in the nits I posted earlier, and agree they should be fixed, but that's
mostly editorial, not an actual change to the spec.


> 6.1.2 Authorization Objects
> 
>    The structure of an ACME authorization resource is a JSON
> object with exactly one field that specifies resource type and whose
> value is an object containing the resource parameters.
> 
>    {
>      "status": "valid",
>      "expires": "2015-03-01T14:09:00Z",
>
>      "identifier": {
>        "dns": { "value": "example.org" }
>      },
>
>      "challenges": [
>        { "http-01": {
>            "status": "valid",
>            "validated": "2014-12-01T12:05:00Z",
>            "keyAuthorization": "SXQe-2XODaDxNR...vb29HhjjLPSggwiE"
>        }}
>      ]
>    }
> 
> 7.  Identifier Validation Challenges
> 
> Here, just delete the 'type' fields and instead put the type value in
> the section headings.

The example you give here highlights the concern I expressed above.
In:

  "identifier": { "dns": { "value": "example.org" } },

by getting rid of the "type" field, you may have made things easier
for your parser (while it is hardcoded with *all* of the valid field
names that might appear in an "identifier" object) but you've in turn
made it harder for everyone else to know what to do if they don't
recognise that field name, and difficult or impossible for any future
extension to that structure to occur.

My code can no longer just go 'there's a value in the type field that
I don't recognise, but I know what "type" means and I can safely just
ignore that'.  Now I have a field name that I don't recognise, or have
any idea what to do with, and have no "critical" flag to tell me if
I can safely ignore that or not.

And I have a structure to parse with lots of redundancy.  Why have
"dns" be an object that itself only has one field in it?  Why bother
to wrap it in "identifier" if "dns" defines it as an identifier ...


It wouldn't be impossible to fix those problems, but if the fix for
them is to define lots of objects that may only have one field, then
I don't really see that as being an improvement, and this isn't
really JSON anymore, it's more like Yet Another Text Representation
Of ASN.1, which I think would burn bridges for future extension that
we'd be better off not burning.

There may be a way to get what you want and have this still be JSON
with all the nice properties that come with that, but I don't think
the above example is it.


You still *have* to parse the JSON text to ensure it is valid JSON
before you can make any assumption about what "dns" actually means
or whether the data that follows it can safely be stuffed into a
static data type.  And once you've done that, you have the "type"
field to know what the static type is ...


I do think there is some scope (and real need) to tweak some of
these structures, and I have a lot of natural sympathy for anything
which would make them easier to parse correctly and safely in a
broad range of code, but I think the above change just trades one
set of problems for a different (and larger) set of them and pushes
the problem from being a relatively widely solved one ("how do I
parse JSON") to being one that every implementor will need to code
logic for ("how do I handle unknown _fields_, and not just unknown
values in them").

But maybe I'm still just missing something about everything you have
in mind here, so I'm keeping an open mind still, and do think there
are other very good reasons we might tweak some of these structures.


  Cheers,
  Ron


_______________________________________________
Acme mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/acme