On Wed, Apr 20, 2016 at 6:25 PM, Ron <[email protected]> wrote:
>
> Hi Phillip,
>
> On Tue, Apr 19, 2016 at 02:51:27PM -0400, Phillip Hallam-Baker wrote:
>> In the meeting, I proposed that we make the use of JSON in ACME
>> something that can be easily shared across multiple Web Services.
>>
>> In a few years time we are quite likely to have multiple JSON Web
>> services that people want to use together. Implementing them as a
>> suite is going to be easiest if there is a consistent style. Some of
>> the encoding choices in ACME right now make implementation in
>> statically typed languages rather less convenient than they could be.
>> In particular it is really difficult to parse data streams when the
>> type descriptor for an object is a field inside the object that can
>> occur last rather than being first on the wire.
>
> I was one of the remote people who thought several of the things you
> said at the meeting had merit and were worth discussing further here,
> so thanks for writing this up.
>
> But I also thought you were talking about something more than what you
> have proposed here now - so could you perhaps clarify a little - did
> some of us read more into what you were saying than you actually had
> in mind, or have you backed away from some of those suggestions after
> further discussion after the meeting?
Yes and no. When I originally got the speaking slot, 'submit to last
call' was not on the agenda. So the proposal I am making is slightly
more modest.

If we were six months out from last call, then I would absolutely
propose splitting the draft into two documents, a platform document
and an ACME protocol document. In fact I already have such a draft:

https://tools.ietf.org/html/draft-hallambaker-json-web-service-02

But I don't want my proposal to be critical path for last call. So
what I have done instead is to implement my version of ACME according
to the layered model I describe and work out the minimal changes that
would be necessary to make ACME consistent.

What I would like to avoid is what hit us with SAML, where the people
who insisted on one approach to layering in Version 1.0 discovered it
didn't really work, and it forced us to a Version 2.0. Oops.

> In particular one of the things that I thought seemed important was
> moving _more_ of the protocol into the JSON. Right now we have a
> bit of a mish-mash where part of the data you need to complete some
> task is in the JSON, and part of it is squirrelled away in the HTTP
> headers.

I am trying to work out where things are happening. Right now it looks
as if we have things happening in the HTTP headers that are not
actually covered by the authentication at all. Or that could just be
me misreading the draft; either way it is confusing.

> And there's a few places at least where that creates real
> problems (like where an issued certificate might have more than one
> valid issuer chain, just to pick one that JSHA and I had looked at).
>
> I got the impression you were hoping to make this more transport
> agnostic (which seems like a useful and fairly easy thing to do if
> we are going to change things to the degree you are talking here).

That is still my objective. Though I agree that the first cut might
actually be more of a 'discover bits of the spec that were assumed a
bit too much'.
> I understand the idea of what you are aiming for with this sort of
> change - but the idea of a protocol full of JSON objects that are
> mandated to strictly have only one field worries me a bit.
>
> It seems more like an admission that "JSON is wrong way to encode
> this" (or perhaps "I'm using the wrong tool to decode JSON") than
> a genuine advantage to the encoding. And I'm not of the opinion
> that JSON is the wrong format to encode this in.

I don't see that as being the case. Rather, we have a series of
protocol messages, and each protocol message is of a specific type. We
should state the type in JSON. So my ideal would be:

"tag" : { ... object ... }

But it seems to be easier to interface these to JSON support code when
wrapped up as an object:

{ "tag" : { ... object ... } }

Now, in a future version of the spec, I could well imagine that we
might want to relax that one-field requirement so that one
authentication could cover two or more messages, or we might have
arrays of objects instead of singletons. So if we were requesting a
thousand device certs at a time, we could do that. What I am saying is
that for the purposes of ACME/1.0 we only need singleton requests and
responses.

> JSON objects can have multiple fields, and those fields are unordered.
> I don't think you can really escape that, and I don't think we can
> reasonably "fix" that, without inventing something that "Isn't JSON".

Or you can use nesting, because the tag always precedes the value.

> There may be smarter ways we can arrange some of these structures,
> for good reasons, but I'm not sure that reinventing ASN.1 in JSON is
> a plan I'd happily endorse.

I see ASN.1 as a terrible implementation of a good idea. The idea of
having one data encoding is good. The one they chose is horrible, and
DER worst of all.

> And I do think there *are* ways that we need to rearrange some of
> these request structures. In particular, overloading "reg" to both
> make requests and request changes is troublesome.
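To make the parsing point concrete, here is a rough sketch (Python;
the message names and decoder functions are invented for illustration
and are not part of the draft) of why the outer-tag form is convenient:
the single top-level key names the type, so a parser can pick a decoder
before looking inside the object, instead of scanning the whole object
for a "resource" field that may arrive last on the wire.

```python
import json

# Hypothetical per-type decoders; names are invented for illustration.
def decode_register(body):
    return ("register", body.get("contact"))

def decode_authorize(body):
    return ("authorize", body.get("identifier"))

DECODERS = {
    "register": decode_register,
    "authorize": decode_authorize,
}

def parse_message(text):
    """With the outer-tag convention, the single key names the type,
    so the decoder is chosen before inspecting the body."""
    obj = json.loads(text)
    if len(obj) != 1:
        raise ValueError("expected exactly one top-level tag")
    (tag, body), = obj.items()
    decoder = DECODERS.get(tag)
    if decoder is None:
        # Unknown tag: reject rather than guess.
        raise ValueError("unsupported message type: " + tag)
    return decoder(body)

msg = '{ "register": { "contact": ["mailto:[email protected]"] } }'
print(parse_message(msg))
```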
> I think it needs
> to be split into separate query and change resources.
>
> The obvious problem case that evidenced this in testing of a real
> implementation is that if you send:
>
> { "resource": "reg", "delete": true }
>
> to the current Boulder implementation, it will happily return a 202
> status, with the current registration resource as if you'd submitted
> a "trivial payload" to actually query that state - with no indication
> that something went wrong and the request you actually issued had
> failed.
>
> I think that's a serious impediment to ever being able to extend the
> registration resource functions in any backward compatible way and
> we should address that now while we still have the chance. A request
> to change something should never be misinterpreted as a request to do
> something entirely different to that "successfully".

I think that is spot on, and it comes down to 'criticality', one of
those issues that people hate to raise because it has been so badly
misunderstood in the past. I am pretty sure that a large fraction of
those BULLRUN dollars were spent muddying the water with chatter about
criticality.

There are two ways that you might want a legacy application to react
when faced with an extension:

1) Ignore the additional fields and continue as normal.

2) Reject the request; do not attempt to process it.

This is of course what the X.509 'CRITICAL' bit is intended to
achieve. But people misinterpreted 'CRITICAL' as meaning 'important'
and have as a result come up with idiotic specs that require CAs to
cause legacy systems to break just for the hell of it. And probably
not coincidentally, that idiotic spec delayed deployment of a scheme
that could have blocked the FLAME attack on Microsoft's internal CA.

When we faced this issue in SAML, I had to disguise the criticality
bit as 'Conditions' so that folk didn't complain. Sometimes the
desired legacy behavior is to have it reject the request; sometimes it
is to carry on.
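The Boulder failure mode described above can be sketched like this
(this is not Boulder's actual code; the field set and handlers are
invented for illustration). A handler that silently drops unknown
fields turns a change request into a query that "succeeds", whereas a
strict handler makes the failure visible:

```python
# Fields the sketched "reg" handler claims to understand; a
# hypothetical set, not taken from the draft.
KNOWN_REG_FIELDS = {"resource", "contact", "agreement"}

def handle_reg_lenient(request, current_reg):
    """Mimics the reported behavior: unknown fields such as "delete"
    are silently ignored, so the request degrades into a plain query
    and returns 202 with the current registration."""
    return 202, current_reg

def handle_reg_strict(request, current_reg):
    """Rejects a request containing fields it does not understand, so
    a future "delete" extension can never be mistaken for a
    successful no-op query."""
    unknown = set(request) - KNOWN_REG_FIELDS
    if unknown:
        return 400, {"error": "unrecognized fields",
                     "fields": sorted(unknown)}
    return 202, current_reg

req = {"resource": "reg", "delete": True}
reg = {"contact": ["mailto:[email protected]"]}
print(handle_reg_lenient(req, reg))  # looks like success
print(handle_reg_strict(req, reg))   # failure is visible
```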
This maps very nicely onto the nested JSON scheme:

* To cause additional fields to be ignored, keep the object type
  identifier tag the same.

* To cause legacy software to reject, change the object type
  identifier tag.

> The example you give here highlights the concern I expressed above.
> In:
>
> "identifier": { "dns": { "value": "example.org"} },
>
> by getting rid of the "type" field, you may have made things easier
> for your parser (while it is hardcoded with *all* of the valid field
> names that might appear in an "identifier" object) but you've in turn
> made it harder for everyone else to know what to do if they don't
> recognise that field name, and difficult or impossible for any future
> extension to that structure to occur.

The rule 'ignore field names you don't understand' seems to fit the
JSON approach. If a field is mandatory, you need to understand it.

Let's say that we want to introduce a new identifier type:

"identifier": { "new": { "value": "example:id"} },

The service can't understand the identifier type that it needs to
understand to process the request, so the response here is 'I don't
support that request'.

If however we wanted to add some additional information into the
identifier, such as a DNSSEC trust root (no, I don't know why you
would; pretend), we have:

"identifier": { "dns": { "value": "example.org",
    "dnssec" : "wh2iiy24iy2iuwrywiuy3riu2h3riuh==" } },

Legacy services can simply ignore the additional information.

I think this works very nicely with the challenge types. I can imagine
us adding lots of additional fields over time that are not intended to
break backwards compatibility. But we do have that option if it is
needed.

> My code can no longer just go 'there's a value in the type field that
> I don't recognise, but I know what "type" means and I can safely just
> ignore that'. Now I have a field name that I don't recognise, or have
> any idea what to do with, and have no "critical" flag to tell me if
> I can safely ignore that or not.
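The two behaviors described above can be sketched in a few lines
(Python; the handler is invented for illustration, assuming the nested
identifier form from the examples): an unknown tag is rejected
outright, while unknown fields inside a known tag are simply ignored.

```python
import json

# Identifier types this sketched legacy service understands.
SUPPORTED_IDENTIFIER_TYPES = {"dns"}

def process_identifier(identifier):
    """Reject unknown identifier *types* (the tag), but ignore
    unknown *fields* inside a type we do support."""
    (id_type, body), = identifier.items()
    if id_type not in SUPPORTED_IDENTIFIER_TYPES:
        # Changed tag -> legacy software must refuse the request.
        return "unsupported identifier type: " + id_type
    # Same tag with extra fields (e.g. "dnssec") -> use what we know.
    return "validating dns name " + body["value"]

extended = json.loads(
    '{"dns": {"value": "example.org", "dnssec": "wh2i=="}}')
new_type = json.loads('{"new": {"value": "example:id"}}')
print(process_identifier(extended))  # extra field ignored
print(process_identifier(new_type))  # unknown tag rejected
```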
So we need to add the above explanation into section 5 and then
identify the areas where we might want the option of breaking legacy
systems rather than suffer their best efforts.

> And I have a structure to parse with lots of redundancy. Why have
> "dns" be an object that itself only has one field in it? Why bother
> to wrap it in "identifier" if "dns" defines it as an identifier ...

The question is whether this is an extension point or not. If it
isn't, then we can remove the hierarchy, so

{ "blah" : "....",
  "identifier": { "dns": { "value": "example.org"} },

becomes

{ "blah" : "....",
  "dns": { "value": "example.org"} },

> It wouldn't be impossible to fix those problems, but if the fix for
> them is to define lots of objects that may only have one field, then
> I don't really see that as being an improvement, and this isn't
> really JSON anymore, it's more like Yet Another Text Representation
> Of ASN.1, which I think would burn bridges for future extension that
> we'd be better off not burning.

I think we just need to understand where we want to put extension
points. Right now, anywhere we have a "type" tag, the nested approach
seems right.

ASN.1 certainly has a terrible approach to extensibility. The scheme I
outline looks to me as if it maintains the JSON approach of 'add
fields at will' while still having enough mechanism there to be able
to tell legacy services when they should not attempt to do their best
with stuff they do not understand.

_______________________________________________
Acme mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/acme
