On Thu, Jun 1, 2017 at 1:50 PM, Victor Vasiliev <vasi...@google.com> wrote:

> I am not sure I agree with this distinction.  I can accept the difference
> in terms of how much the attacker can retry -- but we've already agreed
> that bounding that number is a good idea.  I don't see any meaningful
> distinction in other regards.
>

It's not just a difference in the number of duplicates. With retries, the
client maintains some control, so it can do things like impose delays and
update request IDs. Bill followed up with a directly relevant example from
Token Binding, where the retry intentionally has a different token value.
That kind of control is lost with attacker-driven replays.
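
To make the client-control point concrete, here's a rough Go sketch
(names and headers are made up for illustration, not from any real
deployment) of what only the legitimate client can do across retries:
pick the request ID, decide whether to keep it so the server can
de-duplicate, and impose its own delay between attempts. A replayed
0-RTT flight can do none of these.

    package retrydemo

    import (
        "bytes"
        "crypto/rand"
        "encoding/hex"
        "net/http"
        "time"
    )

    // newRequestID mints a fresh random idempotency token. Only the
    // real client can generate or replace this value; an attacker
    // replaying captured 0-RTT bytes is stuck with the original.
    func newRequestID() string {
        b := make([]byte, 16)
        if _, err := rand.Read(b); err != nil {
            panic(err)
        }
        return hex.EncodeToString(b)
    }

    // doWithRetry keeps the same ID across its own retries so the
    // server can de-duplicate; a careful client could equally mint a
    // new ID on retry (as in the token binding example, where the
    // retry carries a different value) to supersede the old attempt.
    func doWithRetry(url string, body []byte) (*http.Response, error) {
        id := newRequestID()
        var resp *http.Response
        var err error
        for attempt := 0; attempt < 3; attempt++ {
            req, reqErr := http.NewRequest("POST", url, bytes.NewReader(body))
            if reqErr != nil {
                return nil, reqErr
            }
            req.Header.Set("Idempotency-Key", id)
            resp, err = http.DefaultClient.Do(req)
            if err == nil {
                return resp, nil
            }
            time.Sleep(time.Duration(attempt+1) * time.Second) // client-chosen delay
        }
        return nil, err
    }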

But even if we focus on just the number, there is something special about
allowing literally zero replays of a 0-RTT section: it is easy for users
to confirm, audit, and test. If there's a hard guarantee that 0-RTT "MUST"
never be replayable, then I feel like we have a hope of producing a viable
0-RTT ecosystem. Plenty of providers may screw this up, or try to cut
corners, but if we can ensure that they get failing grades in security
testing tools, or maybe even browser warnings, then we can corral things
into a zone of safety. Otherwise, with no such mechanism, I fear that bad
operators will cause the entire 0-RTT feature to be tainted and turned off
entirely by clients over time.
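
A black-box replay audit needs nothing more than this; a rough sketch in
Go (the 100ms first-flight timing is an assumption, and detecting the
duplicate side effect is up to the application under test): record the
client's first flight off the wire, which with 0-RTT carries the
ClientHello plus the early data, then resend those exact bytes on a
fresh connection. A conforming server must refuse to act on them twice.

    package replaycheck

    import (
        "io"
        "net"
        "time"
    )

    // CaptureFirstFlight proxies one TCP connection from a real
    // client to serverAddr, recording everything the client sends in
    // the first 100ms -- assumed here to be the whole first flight.
    func CaptureFirstFlight(listenAddr, serverAddr string) ([]byte, error) {
        ln, err := net.Listen("tcp", listenAddr)
        if err != nil {
            return nil, err
        }
        defer ln.Close()
        client, err := ln.Accept()
        if err != nil {
            return nil, err
        }
        defer client.Close()
        server, err := net.Dial("tcp", serverAddr)
        if err != nil {
            return nil, err
        }
        defer server.Close()

        var flight []byte
        buf := make([]byte, 32*1024)
        client.SetReadDeadline(time.Now().Add(100 * time.Millisecond))
        for {
            n, rerr := client.Read(buf)
            if n > 0 {
                flight = append(flight, buf[:n]...)
                server.Write(buf[:n]) // keep the real handshake going
            }
            if rerr != nil {
                break // deadline hit: treat the first flight as complete
            }
        }
        client.SetReadDeadline(time.Time{})
        go io.Copy(server, client) // let the original request finish
        io.Copy(client, server)
        return flight, nil
    }

    // Replay resends the captured flight verbatim. The audit passes
    // only if the application never observes the request twice.
    func Replay(serverAddr string, flight []byte) error {
        conn, err := net.Dial("tcp", serverAddr)
        if err != nil {
            return err
        }
        defer conn.Close()
        _, err = conn.Write(flight)
        return err
    }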

>
> Sure, but this is just an argument for making N small.  Also, retries
> can be directed to arbitrary nodes.
>

This is absolutely true, but see my point about client control.
Regardless, it is a much more difficult attack to carry out: intercepting
and rewriting a whole TCP connection vs. grabbing a 0-RTT section and
sending it again.


>
>
>> What concerns me most here is that people are clearly being confused by
>> the TLS 1.3 draft into misunderstanding how this interacts with 0-RTT.
>> For example, the X-header trick that one experimental deployment
>> innovated, deriving an idempotency token from the binder, doesn't
>> actually work, because it doesn't protect against the DKG attack. We're
>> walking into rakes here.
>>
>
> Of course it doesn't protect against the DKG attack, but nothing at that
> layer actually does.
>
> This sounds like an issue with the current wording of the draft.  As I
> mentioned, I believe we should be very clear on what the developers
> should and should not expect from TLS.
>

Big +1 :)


>>> So, in other words, since we're now just bargaining about the value
>>> of N, operational concerns are fair game.
>>>
>>
>> They're still not fair game imo, because there's a big difference between
>> permitting exactly
>> one duplicate, associated with a client-driven retry, and permitting huge
>> volumes of replays. They enable different kinds of attacks.
>>
>>
> Sure, but there's a space between "one" and "huge amount".
>

It's not just quantitative, it's qualitative too. But now I'm duplicating
myself more than once ;-)


>> Well in the real world, I think it'll be pervasive, and I even think it
>> /should/ be. We should make 0-RTT that safe and remove the sharp edges.
>>
>
> Are you arguing that non-safe requests should be allowed to be sent via
> 0-RTT?  Because that actually violates reasonable expectations of
> security guarantees for TLS, and I do not believe that is acceptable.
>

I'm just saying that it absolutely will happen, and I don't think any kind
of lawyering about the HTTP spec and REST will change that. Folks use GETs
for non-idempotent, side-effect-bearing APIs a lot, and those folks don't
generally understand TLS or have anything to do with it. I see no real
chance of that changing, and it's a bit of self-deception to think it's
realistic that there will be these super-careful 0-RTT deployments where
everyone from the web server administrator to the high-level application
designer is coordinating and fully aware of all of the implications. It
crosses layers that are traditionally quite far apart.

So with that in mind, I argue that we have to make TLS transport as secure
as possible by default, while still delivering 0-RTT because that's such a
beneficial improvement.


>>> I do not believe this to be the case.  The DKG attack is an attack
>>> that allows for a replay.
>>>
>>
>> It's not. It permits a retry. The difference here is that the client is
>> in full control. It can decide to delay, to change a unique request ID,
>> or even not to retry at all. But the legitimate client generated the
>> first attempt, it can be signaled that it wasn't accepted, and then it
>> generates the second attempt. If it really, really needs to, it can
>> even reason about the complicated semantics of the earlier request
>> being possibly re-submitted later by an attacker.
>>
>
> That's already not acceptable for a lot of applications -- and by
> enabling 0-RTT for non-safe HTTP requests, we would be pulling the rug
> out from under them.
>

Yep; but I think /this/ risk is manageable and tolerable. Careful clients,
as in the token binding case, can actually mitigate this; I've outlined
the scheme. Careless clients, like browsers, can mostly ignore it; since
they retry so easily anyway, it's no worse.

But there is *no* proposed mitigation for replayable 0-RTT, so I don't
think that risk is manageable. I'm just trying to make a data-driven
decision here. If someone presents an alternative mitigation (besides
forbidding replays), I'll change my mind.


> Throttling POST requests is fine -- they shouldn't go over 0-RTT, since
> they are not idempotent.  Throttling GET requests in this manner is at
> odds with RFC 7231.
>

Throttling GET requests happens all the time and is an important security
and fairness measure used by many deployed systems. Replayable 0-RTT would
break it. That's not ok.

I don't think it is at odds with RFC 7231 ... which also defines the 503
status code.
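
To be concrete about what's at stake, here's roughly what such a
throttle looks like; a hand-rolled sketch in Go (the numbers and the
per-address key are made up; real systems key on accounts or client IPs
and share state across nodes). Every replayed 0-RTT GET drains the
victim's bucket just like a real request, and replays sprayed across
nodes dodge any single node's counters entirely.

    package throttle

    import (
        "net/http"
        "sync"
        "time"
    )

    // bucket is a crude token bucket: capacity tokens, refilled at
    // ratePerSec.
    type bucket struct {
        tokens float64
        last   time.Time
    }

    type Throttle struct {
        mu         sync.Mutex
        buckets    map[string]*bucket
        capacity   float64
        ratePerSec float64
    }

    func New(capacity, ratePerSec float64) *Throttle {
        return &Throttle{
            buckets:    map[string]*bucket{},
            capacity:   capacity,
            ratePerSec: ratePerSec,
        }
    }

    func (t *Throttle) allow(key string) bool {
        t.mu.Lock()
        defer t.mu.Unlock()
        now := time.Now()
        b, ok := t.buckets[key]
        if !ok {
            b = &bucket{tokens: t.capacity, last: now}
            t.buckets[key] = b
        }
        b.tokens += now.Sub(b.last).Seconds() * t.ratePerSec
        if b.tokens > t.capacity {
            b.tokens = t.capacity
        }
        b.last = now
        if b.tokens < 1 {
            return false
        }
        b.tokens--
        return true
    }

    // Middleware answers 503 -- the status RFC 7231 defines for
    // exactly this -- when a client is over its GET budget.
    func (t *Throttle) Middleware(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Method == http.MethodGet && !t.allow(r.RemoteAddr) {
                w.Header().Set("Retry-After", "1")
                http.Error(w, "throttled", http.StatusServiceUnavailable)
                return
            }
            next.ServeHTTP(w, r)
        })
    }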


> Incidentally, guarantee C does solve the throttling problem -- if you
> get N replays, you get a promise of 1/(N+1) of the throttled resource
> available to you.  Deployments which do this with GETs may want to
> deploy the measures to make N very small.  Also, since they already keep
> track of the state for throttling purposes, they might add a strike
> register on top of that.

One could implement a strike register like that, much as one would a
coordinated throttling system; the two have some things in common. Though
it crosses layers.
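
For what it's worth, here's the shape of the thing; a sketch in Go
(assuming the server can hand us a unique per-ClientHello value such as
the binder, and a single node -- sharing or sharding this across nodes
is the layer-crossing part):

    package strikeregister

    import (
        "sync"
        "time"
    )

    // Register remembers every 0-RTT binder seen inside the
    // anti-replay window and refuses duplicates, which is what gives
    // the literal-zero-replays guarantee on one node.
    type Register struct {
        mu     sync.Mutex
        seen   map[string]time.Time
        window time.Duration
    }

    func New(window time.Duration) *Register {
        return &Register{seen: map[string]time.Time{}, window: window}
    }

    // FirstUse reports whether binder is new within the window;
    // callers must reject the early data (forcing a 1-RTT retry)
    // when it is not.
    func (r *Register) FirstUse(binder string) bool {
        r.mu.Lock()
        defer r.mu.Unlock()
        now := time.Now()
        for b, t := range r.seen { // lazy eviction; fine for a sketch
            if now.Sub(t) > r.window {
                delete(r.seen, b)
            }
        }
        if _, dup := r.seen[binder]; dup {
            return false
        }
        r.seen[binder] = now
        return true
    }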

>> I'm all for bounded replay protection, with the bounds being 1 ;-)
>>
>
> Why not 2?  What is the difference between 1 and 2?  2 and 3?  3 and 24?
> None of the proposed attacks distinguishes 1 replay and N replays in
> qualitative ways (as opposed to the giant semantic leap between 0 and 1
> replays); only in how much damage you can do, or how much information
> you can extract from the side channel.
>

You keep discarding my points about client control and focusing just on
the number, but there are both quantitative and qualitative differences.
That kind of argument isn't very productive and doesn't move us forward.
But I've answered above, too.

-- 
Colm
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls
