On Thu, May 4, 2017 at 4:35 PM, Watson Ladd <[email protected]> wrote:
> Dear all,
>
> Applications have always had to deal with the occasional replay,
> whether from an impatient user or a broken connection at exactly the
> wrong time.

Unfortunately this isn't the case :( Not all applications are
user-driven in that way, and some are very careful about how they
retry. In the review I go through the example of an update to an
eventually consistent data store. In that case, the application will
typically try to update, and if that update times out, it will wait a
period, check for a conflict or a success, and only then retry. At no
point is a replay normal or expected.

One could argue that Zero-RTT is not for applications like this, but
why not? They certainly do benefit from the speed-up. And so in the
review I go through reasons why it is likely that they will use it
anyway, ranging from how subtle and hard the review is, to inadvertent
enabling of the feature on something like an L4 TLS proxy which has no
awareness of the underlying protocol.

> But they've generally been rare, so human-in-the-loop
> responses work. Order the same book twice? Just return one of them,
> and if you get an overdraft fee, ouch, we're sorry, but nothing we can
> do.
>
> 0-RTT is opt-in per protocol, and what we think of per application.
> But it isn't opt-in for web application developers. Once browsers and
> servers start shipping, 0-RTT will turn on by accident or deliberately
> at various places in the stack.

+1!

> At the same time idempotency patterns using unique IDs might require
> nonidempotent backend requests. But this is an easier problem than if
> we had nonidempotent browser requests: backends are much more
> controlled.
>
> If you are willing to buffer 0-RTT until completion before going to
> the thing that makes the response, you can handle this problem for the
> responsemaker. This will work for most applications I can think of,
> and you need to handle large, drawn out requests anyway.
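To make the careful-retry pattern above concrete, here's a rough
sketch of the eventually-consistent-store case. The client and method
names (apply_update, read_current) are hypothetical, not from any real
library; the point is only that a timeout leads to a check, never to a
blind resend:

```python
import time

def careful_update(store, key, value, max_attempts=3):
    """Update an eventually consistent store without blind replays.

    On a timeout we do NOT immediately resend; we back off, check
    whether the first attempt actually landed (or lost a conflict),
    and only retry when we know it did not. `store` is a hypothetical
    client exposing apply_update() and read_current().
    """
    for attempt in range(max_attempts):
        try:
            return store.apply_update(key, value)
        except TimeoutError:
            time.sleep(0.01 * 2 ** attempt)  # back off before checking
            if store.read_current(key) == value:
                return value                 # first attempt succeeded
            # update was lost (or lost a conflict); safe to retry
    raise RuntimeError("update did not converge")
```

At no point in that loop is the same write replayed without first
consulting the store, which is exactly the behaviour that a
transport-level 0-RTT replay bypasses.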
> This sounds
> like it would work as a solution, but I am sure there are details I
> haven't discovered yet.

I think you're right; and we could enforce it in TLS by encrypting
0-RTT under a key that isn't transmitted until 1-RTT. But this would
also defeat the point of the speed-up, especially for CDNs. It really
does speed things up for them to be able to send the request to the
origin right away.

> In conclusion I think there is some thought that needs to go into
> handling 0-RTT on the web, but it is manageable. I don't know about
> other protocols, but they don't have the same kinds of problem as the
> web does with lots of request passing around. Given the benefits here,
> I think people will really want 0-RTT, and we are going to have to
> figure out how to live with it.

+1 I think it's manageable too. Just have servers check for dupes :)

--
Colm
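For what "check for dupes" might look like: a minimal sketch of
server-side duplicate suppression, assuming each request carries a
unique, client-chosen ID (the request_id field and the in-process
`seen` set are illustrative assumptions; a real deployment needs a
shared, expiring store bounded to the replay window):

```python
class DedupeHandler:
    """Suppress replayed requests by remembering request IDs.

    Assumes every request carries a unique client-chosen ID. This
    in-process set is only a sketch: real servers would need a
    shared cache with expiry matched to the 0-RTT replay window.
    """
    def __init__(self):
        self.seen = set()

    def handle(self, request_id, action):
        if request_id in self.seen:
            return "duplicate"   # replay: do not re-run the action
        self.seen.add(request_id)
        return action()          # first sighting: run it
```

The hard parts, of course, are making `seen` consistent across a fleet
of servers and bounding how long IDs must be remembered.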
_______________________________________________
TLS mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/tls
