On Thu, Jun 1, 2017 at 5:22 PM, Eric Rescorla <e...@rtfm.com> wrote:

> I've just gone through this thread and I'm having a very hard time
> understanding what the actual substantive argument is about.
>
> Let me lay out what I think we all agree on.
>

This is a good summary; I just have a few clarifications ...


> 1. As long as 0-RTT is declinable (i.e., 0-RTT does not cause
>    connection failures) then a DKG-style attack where the client
>    replays the 0-RTT data in 1-RTT is possible.
>

This isn't what I call a replay. It's a second request, but the client is in
control of it. That distinction matters because the client can modify the
request if it needs to be unique in some way, and that turns out to be
important for some cases.
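To make that concrete, here's a minimal sketch (plain Python, with a
hypothetical connection object; none of these names come from a real TLS
API) of a client that mints a fresh per-attempt token when its early data is
declined and it has to resend over 1-RTT:

    import os

    def attempt_request(conn, payload):
        # Sketch only: 'conn' is a hypothetical connection object with
        # try_early_data() and send() methods, not a real TLS library API.

        # First attempt: send the request as 0-RTT early data, with a fresh
        # per-attempt unique token attached by the application.
        token = os.urandom(16).hex()
        if conn.try_early_data(payload, token=token):
            return token

        # The server declined the early data, so the request goes out again
        # over 1-RTT. The client controls this retransmission and can mint a
        # new token, making the two attempts distinguishable to the server.
        token = os.urandom(16).hex()
        conn.send(payload, token=token)
        return token

The point is simply that the retransmission is the client's own action, so
the client gets a chance to make the second attempt distinct.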

> 2. Because of point #1, applications must implement some form
>    of replay-safe semantics.
>

Yep; though note that in some cases those replay-safe semantics themselves
depend on uniquely identifiable requests. For example, a protocol that
depends on client-side versioning, or the token-binding case.
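As a toy illustration of what I mean (the store and method names here are
made up for the example), a client-side-versioning scheme is replay-safe
precisely because each request carries an identifier the server can compare:

    class VersionedStore:
        # Toy key-value store with a compare-and-set, used to illustrate
        # client-side versioning as a replay-safe semantic. A replayed
        # request carries the same expected_version, so the second copy
        # fails the comparison and is rejected instead of applied twice.
        def __init__(self):
            self._data = {}  # key -> (version, value)

        def conditional_put(self, key, value, expected_version):
            current_version, _ = self._data.get(key, (0, None))
            if expected_version != current_version:
                raise ValueError("stale or already-applied request")
            self._data[key] = (current_version + 1, value)
            return current_version + 1

Take away the expected_version and the scheme stops being replay-safe; so
"just make your application replay-safe" quietly assumes the client can make
its requests uniquely identifiable.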


> 3. Allowing the attacker to generate an arbitrary number of 0-RTT
>    replays without client intervention is dangerous even if
>    the application implements replay-safe semantics.
>

Yep.


> 4. If implemented properly, both a single-use ticket and a
>    strike-register style mechanism make it possible to limit
>    the number of 0-RTT copies which are processed to 1 within
>    a given zone (where a zone is defined as having consistent
>    storage), so the number of accepted copies of the 0-RTT
>    data is N where N is the number of zones.
>

This is much better than the total anarchy of allowing completely unlimited
replay, and it does reduce the risk from side-channels, throttles, etc., but
I wouldn't consider it a proper implementation, or secure. Importantly, it
gets us back to a state where clients may have no control over a
deterministic outcome.

Some clients need idempotency tokens that are consistent across duplicate
requests; this approach works OK for them. Other kinds of clients need
tokens that are unique to each request attempt; this approach doesn't work
for them. That's the qualitative difference.
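For reference, here's roughly how I read the mechanism in point #4 (a sketch
only; the zone layout and ticket identifiers are my assumptions): each zone
keeps its own strike register, so a given ticket's early data is accepted at
most once per zone, i.e. at most N times globally. Nothing in it gives the
client a per-attempt unique token, which is the gap I'm pointing at.

    class ZoneStrikeRegister:
        # One register per zone (a zone == consistent storage). A given
        # ticket's 0-RTT data is accepted at most once within the zone.
        def __init__(self):
            self._seen = set()

        def accept_early_data(self, ticket_id):
            if ticket_id in self._seen:
                return False  # duplicate within this zone: reject the 0-RTT
            self._seen.add(ticket_id)
            return True

    # With N independent zones, copies of the same 0-RTT flight steered to
    # different zones are each accepted once: N acceptances in total.
    zones = [ZoneStrikeRegister() for _ in range(3)]
    assert sum(z.accept_early_data("ticket-abc") for z in zones) == 3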

I'd also add that the suggested optimization here is clearly intended to
support globally resumable session tickets that are not scoped to a single
site. That's a worthy goal; but it's unfortunate that in the draft design it
also means that 0-RTT sessions would be globally scoped. That seems bad to
me because it's so hostile to forward secrecy, and hostile to protecting the
most critical user data. What's the point of having FS for everything except
the requests, where the auth details often are, and which can usually be
used to generate the response? Synchronizing keys that can de-cloak an
arbitrary number of such sessions to many data centers spread out across the
world seems self-defeating. I realize that it's common today, and I've built
such systems, but at some point we have to decide whether FS matters or it
doesn't. Are users and their security auditors really going to live with
that? What is the point of rolling out ECDHE so pervasively only to undo
most of the benefit?

Maybe a lot of this dilemma could be avoided if the PSKs used for regular
resumption and the PSKs used for 0-RTT encryption were separate, with the
latter scoped more narrowly and with use-at-most-once semantics.
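Something along these lines is the shape I have in mind (purely a sketch of
the idea, not a wire-format proposal; the derivation labels and helper names
are invented): the resumption PSK stays broadly scoped and long-lived, while
the 0-RTT PSK is derived separately, scoped to the issuing zone, and
discarded after first use.

    import hashlib
    import hmac
    import os

    def derive_key(secret, label):
        # Stand-in for a real KDF; only meant to show the key separation.
        return hmac.new(secret, label, hashlib.sha256).digest()

    class TicketIssuer:
        # Sketch: issue a broadly scoped resumption PSK alongside a
        # separate, zone-scoped, use-at-most-once 0-RTT PSK.
        def __init__(self, zone_id):
            self.zone_id = zone_id          # bytes, e.g. b"zone-1"
            self._unused_early_psks = set()

        def issue(self):
            master = os.urandom(32)
            resumption_psk = derive_key(master, b"resumption")
            early_psk = derive_key(master, b"early data|" + self.zone_id)
            self._unused_early_psks.add(early_psk)
            return resumption_psk, early_psk

        def accept_early_data(self, early_psk):
            # Use-at-most-once: the 0-RTT PSK is forgotten after first use.
            if early_psk in self._unused_early_psks:
                self._unused_early_psks.remove(early_psk)
                return True
            return False

    issuer = TicketIssuer(b"zone-1")
    resumption_psk, early_psk = issuer.issue()
    assert issuer.accept_early_data(early_psk) is True
    assert issuer.accept_early_data(early_psk) is False  # replayed 0-RTT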

> 5. Implementing the level of coherency to get #4 is a pain.
>
> 6. If you bind each ticket to a given zone, then you can
>    limit the number of accepted 0-RTT copies to 1
>    (for that zone) and accepted 1-RTT copies to 1 (because
>    of the DKG attack listed above).
>

Yep! Agreed :)

-- 
Colm