Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Colm MacCárthaigh
On Sunday, 13 March 2016, Eric Rescorla  wrote:
>
>
> 1. Nothing requires applications to use this feature at all. First, servers
> need to advertise it and are free to (a) not offer clients the ability to
> send 0-RTT data and (b) refuse to accept it if clients send it. Moreover,
> everyone I know of who is considering building a 1.3 library intends to
> provide that data to the server via a separate API, so the server will have
> to work to get it.
>

Security is very difficult to judge and measure, but speed is very easy.
This sets up a sort of "race to the bottom" where providers may feel
pressured to respond and enable an unsafe feature, because the speed
benefit is more apparent than the loss of security.  There's a real
trade-off; we should favor the S in TLS :)



-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] tls - Requested sessions have been scheduled for IETF 95

2016-03-13 Thread "IETF Secretariat"
Dear spt,

The session(s) that you have requested have been scheduled.
Below is the scheduled session information followed by
the original request. 

tls Session 1 (2:30:00)
Tuesday, Morning Session I 1000-1230
Room Name: Atlantico B size: 125
-
tls Session 2 (2:30:00)
Thursday, Morning Session I 1000-1230
Room Name: Atlantico C size: 225
-



Request Information:


-
Working Group Name: Transport Layer Security
Area Name: Security Area
Session Requester: spt

Number of Sessions: 2
Length of Session(s):  2.5 Hours, 2.5 Hours
Number of Attendees: 120
Conflicts to Avoid: 
 First Priority: tokbind tcpinc stir saag rtcweb opsawg httpauth httpbis dane 
cfrg cose jose acme trans uta capport curdle
 Second Priority: sacm oauth sidr i2nsf



Special Requests:
  Please avoid LURK BOF.
-

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Colm MacCárthaigh
On Sun, Mar 13, 2016 at 4:14 AM, Stephen Farrell 
wrote:

> With 0rtt, I think it also becomes a dangerous
> implement. So, that's my personal opinion, while not wearing an
> AD hat.
>

+1 to this, for 0-RTT as outlined in the draft. But to expand a little:

* Losing forward secrecy for "GET" requests and user cookies seems like a
very bad privacy trade-off. Collection of wire data, combined with a bug or
an attack that could disclose the ECDH parameters, does not seem so
far-fetched these days.

* I imagine that 0RTT data may be intended for pre-fetching hints, e.g.
"I'm requesting URL foo as user bar, so please get a start on fetching the
resources I need". That suggests trivial application-level "is the cache
warm" side channels, or "let's trash the whole cache" DoS attacks, would
both be common.

* A vast number of APIs built on top of TLS/HTTPS are not idempotent or
replay-safe. Even carefully designed APIs may have subtle side effects, such
as a user being charged for the request, or the request counting towards a
throttle of some kind. Even ones as silly as "there go your 10 free nytimes
articles", or booting an opponent out of your less-than-friendly sudoku
tournament.

* The draft calls out that early data needs to be handled differently than
regular TLS data. This reminds me of the problems with renegotiable
client authentication - similar to what Marsh Ray highlighted a long time
ago. What if a request or action spans both the early and the regular data
sections? What would a "safe" API look like for an SSL/TLS library layer?

It seems like the benefits of 0-RTT boil down to two things:

1. Increased throughput.
2. Giving the recipient a chance to get a head-start on servicing some kind
of request.

For 1, raw data throughput could be improved by envelope-encrypting the
early data and transferring the envelope key only once the session has
been fully authenticated. Not that interesting to most people, but it works. I
can't think of a good way to achieve 2 without all of the problems that
have been mentioned.
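
To make the envelope idea for 1 concrete, here is a minimal sketch (Python,
using the "cryptography" package; send_early and send_authenticated are
hypothetical transport hooks, not any real TLS library API): the bulk
payload rides in the replayable 0-RTT flight under a fresh key, and the key
itself is only released once the handshake has completed.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def send_request_with_envelope(early_payload, send_early, send_authenticated):
    # Fresh per-request envelope key; not derived from the resumption secret.
    envelope_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    sealed = AESGCM(envelope_key).encrypt(nonce, early_payload, None)

    # The sealed blob goes out as 0-RTT "early data": an attacker can replay
    # it, but the server cannot act on it without the key.
    send_early(nonce + sealed)

    # Only after the handshake completes (full authentication, fresh keys)
    # does the client release the envelope key over the 1-RTT channel.
    send_authenticated(envelope_key)

This recovers the throughput win (the bytes are already in flight) without
letting the server act on replayable data, but it does nothing for the
head-start case in 2.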

At a minimum I'd suggest that the draft not call it "data", but maybe
instead "hint", and make it so that early_data and application_data are not
supposed to appear sequentially in a stream. E.g. someone calling TLSRead()
or whatever should never expect to see the early_data; it's a deliberate
and separate stream.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Harlan Lieberman-Berg
Bill Cox  writes:
>> Most server admins won't be reading the TLSv1.3 spec.  They're going to
>> see "shiny feature added specifically in this version that makes it
>> faster!" with *maybe* a warning that there are risks, which they'll
>> dismiss because "if it was so insecure, they wouldn't have included it
>> in the protocol in the first place."  Unless 0-RTT can be fixed, it
>> looks like an attractive nuisance.
>
> I agree.  Instead of dropping 0-RTT, I think we should make it easy for
> admins to learn about what is involved in using 0-RTT in ways we believe
> are secure.  [snip]

I agree, with a slight tweak in wording, Bill.  I think that we
/should/ drop any parts of 0-RTT where we are not confident that an
admin who blindly enables the functionality in TLS 1.3 will avoid
harming themselves.

More generally, I strongly believe that TLS 1.3 should not
provide options which we think should be restricted to "admins who know
what they're doing".  These end up hurting us down the line (cf EXPORT
cipher suites.)

I think we should ship the parts of 0-RTT that we believe are
intrinsically safe for the vast majority of the internet to enable and
use on day 1.

Sincerely,

-- 
Harlan Lieberman-Berg
~hlieberman

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Limiting replay time frame of 0-RTT data

2016-03-13 Thread Jeffrey Walton
On Sat, Mar 12, 2016 at 12:45 PM, Karthikeyan Bhargavan
 wrote:
> Hi Kyle,
>
> In my talk at TRON, I was also concerned by potential attacks from allowing
> unlimited replay of 0-RTT data. I recommended that TLS 1.3 servers should
> implement replay protection using a cache, but requiring clients to provide
> a timestamp in the client random is a great idea. Perhaps this would also
> allow TLS 1.3 servers to detect clients whose clocks are too out-of-sync
> with

Be careful here. The whole point of 0-RTT is to start consuming data
before it's authenticated, to save a few milliseconds of network time.
All those parameters are controlled by the attacker.

We never worried about the extra round trip using satellite comms with
0.5 to 1.0 second delays. Additionally, most of my delays in fetching
web pages seem to come from Google APIs and third-party ads being
served. 0-RTT seems to be a solution looking for a problem.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Erik Nygren
On Sun, Mar 13, 2016 at 12:21 PM, Eric Rescorla  wrote:

>
> On Sun, Mar 13, 2016 at 3:51 PM, Yoav Nir  wrote:
>
>>
>> > On 13 Mar 2016, at 4:45 PM, Salz, Rich  wrote:
>> >
>> >> I also think it is prudent to assume that implementers will turn on
>> replayable
>> >> data even if nobody has figured out the consequences.
>> >
>> > I very much agree.  Customers, particularly those in the mobile field,
>> will look at this and say "I can avoid an extra RTT?  *TURN IT ON*" without
>> fully understanding, or perhaps even really caring about, the security
>> implications.
>>
>> Perhaps, and I think IoT devices are likely to do so as well.
>>
>> Is OpenSSL going to implement this? Are all the browsers?
>>
>
> There are already patches in preparation for this for NSS and I expect
> Firefox to implement it, as long as we have any indication that a
> reasonable number of servers will accept it.
>

I share some of the concerns expressed in this thread that 0RTT has the
risk of becoming an attractive nuisance.  Once browsers start supporting
it, server operators will feel competitive pressure to support it.  And
this will then put additional pressure onto more browsers to support it,
possibly pushing the edges where it is safe.

For example, when is an HTTP GET safe vs. not safe, and who makes that call?
Especially when you have a browser that assumes GET is idempotent
and can be sent as 0RTT early data, a web developer who's never heard of
the word "idempotent" and builds an app where GET has side effects but
assumes it's safe since it's over TLS, and a server operator who is just
turning on a vendor-provided feature their users requested - not to
mention perhaps a CDN or load-balancing TLS-terminating proxy that
doesn't have a good way to convey the risks across two connections.  One
idea for HTTP that I'm increasingly in favor of is to define a new
method ("GET0"?) and require that browsers use one of these new methods in
the early-data request, to expose this as high in the stack as possible.
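
As a rough sketch of "exposing it high in the stack" (Python; the
is_early_data flag and the GET0 method are illustrative - how the TLS layer
signals early data to the application is exactly the part that would need
to be defined):

# Hypothetical front-end check: refuse to act on replayable early data
# unless the request uses a method explicitly defined as safe for 0RTT.
SAFE_EARLY_METHODS = {"GET0"}

def handle_request(method, is_early_data, process, defer_to_1rtt):
    if is_early_data and method not in SAFE_EARLY_METHODS:
        # Nobody vetted this method for replay; wait for the handshake.
        return defer_to_1rtt()
    return process()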

It seems like there is a level of diminishing returns here if we compare
some of the options (not meant to be strictly ordered below):

1) We have old-school cleartext over TCP which has 1RTT before client data
(due to SYN/SYNACK)
2) We have TCP + TLS 1.[012], where the SYN/SYNACK plus the 2RTT handshake
means 3RTT before client data.
3) We have TCP TFO + TLS 1.3 1RTT which yields 1RTT before client data  (or
back to the same as #1 for resumption)
4) We have TCP (no TFO) + TLS 1.3 0RTT which also yields 1RTT before client
data for resumption
5) Then there's TCP TFO + TLS 1.3 0RTT which yields 0RTT before client data
for resumption
6) And finally there's old-school cleartext TCP TFO which has 0RTT before
client data, but which people are very hesitant to use for HTTP due to
replay issues.

Of these, #3 and #4 yield similar performance (with some limitations around
#3, such as requiring the same server IP or server IP prefix in some newer
drafts).

From this, a few thoughts jump to mind:

* We get many of the same benefits as 0RTT by using TCP TFO (TCP Fast Open)
and TLS 1.3 1RTT mode together as we get when using TLS 1.3 0RTT and stock
TCP.  TFO seems much safer here since its replay risks are at a lower level
that should be safe for TLS outside of the 0RTT context.  Note that there
are some issues with middleboxes sometimes breaking badly (blocking all
connections from a client IP for 30 seconds) when a client tries to use TFO,
as discussed recently in TCPM, but we may all want to focus some effort on
getting those fixed.
* We'll almost certainly want to make sure that any UDP-based protocol
(DTLS 1.3 or QUIC-over-TLS-1.3) can do a true 1RTT handshake safely in a
common case.  (ie, in a way that mirrors TCP TFO + TLS 1.3 1RTT.)  I
suspect this will be the bare minimum for getting QUIC to switch to use TLS
1.3.
* It seems like the risks around TLS 1.3 0RTT and TFO are similar (with TCP
being a protocol not trying to provide security properties).  If people
have been very wary of enabling TFO for cleartext HTTP due to risks from
duplicated packets, shouldn't we be even more worried about TLS 1.3 0RTT,
since the next-layer-up semantic issues and risks are similar but TLS 1.3
0RTT potentially has even fewer mitigators?  (eg, we don't cryptographically
bind the server IP to the request the client is making --- although
that might be an interesting addition to help make TLS 1.3 0RTT safer?)


There may also be some hacks that make TLS 1.3 0RTT marginally safer,
although I'm sure there are situations where they don't work and they may
just provide a false sense of security:
* Have the client include a time-delta-relative-to-PSK-issuance, as Martin
suggested.  (To allow the server to bound the duration of replay attacks.)
* Include the server IP in the client_hello for 0RTT  (to prevent replays
against different clusters).  There are a bunch 

Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Scott Schmit
On Sun, Mar 13, 2016 at 11:14:13AM +, Stephen Farrell wrote:
> First, with no hats, if the WG were to have a poll on whether
> or not to include 0rtt in TLS1.3, then as a participant in the
> work here, I'd be firmly arguing to leave it out entirely. I
> really think an over-emphasis on reducing latency for browsers
> is going to bite us (and the Internet) in the ass in the same
> ways that emphasising interop over security has in the past with
> fallbacks to older, worse versions of TLS/SSL, with all their
> inherent flaws and bits of e.g. crappy "export" crypto support.
> Absent 0rtt, TLS1.3 seems to me to be an excellent step forward
> in security. With 0rtt, I think it also becomes a dangerous
> implement. So, that's my personal opinion, while not wearing an
> AD hat.

I think you're exactly right.  Let's look at the vulnerabilities in TLS:
- Renegotiation attack
  - Optimization to establish a new TLS session without reestablishing a
TCP/IP connection.  Broken repeatedly despite analysis, etc.
  - TLSv1.3's answer?  Drop renegotiation.
- BEAST
  - CBC chained off the last block sent so that an explicit IV need not
be sent on the wire.  Another optimization.
  - TLSv1.3's answer?  Drop CBC.
- CRIME / BREACH
  - Compression was supported to speed up connections (less data to
process/transfer).  Optimization, again.
  - TLSv1.3's answer?  Drop compression.
- Lucky Thirteen / Padding timing attacks / POODLE
  - Years later, servers are still vulnerable because insecure
algorithms & versions remain enabled, even with the publicity around
POODLE
  - TLSv1.3's answer?  Drop the bad cipher suites and stop offering
versions below 1.0.
- RC4
  - RC4 is known to be weak, but it continues to be widely used
  - SSL/TLS included it because analysis said it was secure as used.
  - TLSv1.3's answer?  Drop RC4.
- FREAK / Logjam (downgrade attacks)
  - This happened because TLS vendors left obsolete export algorithms
implemented, and server admins left them enabled.
  - TLSv1.3's answer?  Drop the insecure cipher suites.
- DROWN
  - Why is this even an issue?  Because people STILL haven't turned off
SSLv2 *twenty* *years* after it was known to be insecure!!
  - TLSv1.3's answer?  Drop compatibility with SSLv2.

So why are we adding a protocol optimization known from the start to be
insecure (or less secure than you'd expect from using a PFS cipher
suite)?

What percentage of servers that have a perceived need for 0-RTT will be
able to securely use and benefit from this feature as their
infrastructure is actually implemented?

If almost everyone should turn it off, why are we including it?

Most server admins won't be reading the TLSv1.3 spec.  They're going to
see "shiny feature added specifically in this version that makes it
faster!" with *maybe* a warning that there are risks, which they'll
dismiss because "if it was so insecure, they wouldn't have included it
in the protocol in the first place."  Unless 0-RTT can be fixed, it
looks like an attractive nuisance.

Let's leave it out.

-- 
Scott Schmit


smime.p7s
Description: S/MIME cryptographic signature
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Ilari Liusvaara
On Sun, Mar 13, 2016 at 11:14:13AM +, Stephen Farrell wrote:
> 
> I've been worried about this for a while now, but the recent
> thread started by Kyle Nekritz [3] prompted me to send this
> as I think that's likely just the tip of an iceberg. E.g., I'd
> be worried about cross-protocol attacks one might be able to
> try with JS in a browser if the JS can create arbitrary HTTP
> header fields which I think is the case. I'm also worried about
> things like EAP-TLS and RADIUS/Diameter if used via TLS etc
> where we don't necessarily have the right people active on this
> list. While I don't have any concrete attacks, the ability to
> create replayable data smells really really bad to me and I've
> no idea how we can honestly be confident we've done a good job
> on TLS1.3 while such smells linger.

Also, it occurs to me that problems can arise if one tries to
combine 0-RTT data with ALPN. The 0-RTT data block is probably
only appropriate for one of the protocols...

If it is HTTP/2 vs. HTTP/1.1 and you get it wrong, the connection
will break (the HTTP/2 connection preface). I wonder if one is so
fortunate with some other protocol pairs...
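
A sketch of the obvious server-side guard (Python; the ticket.alpn field is
an assumption - it presumes the server recorded, when issuing the ticket,
which ALPN protocol the client was speaking):

def accept_early_data(ticket, negotiated_alpn):
    # If the early data was serialized for a different protocol (say,
    # HTTP/1.1 bytes arriving on a connection that negotiates HTTP/2),
    # reject it and fall back to 1-RTT rather than feeding the wrong parser.
    return ticket.alpn == negotiated_alpn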

Hmm... That got me some ideas...

> I'd also note that my overall impression of the TRON w/s was that
> researchers thought 1rtt was mostly ready, but that there was
> no similar confidence in 0rtt. I also don't think "another TRON" is
> the answer here, as we'd not have the right people in the room
> who'd know the consequences of replay for all instances of /TLS.

TLS 1.3 1-RTT is just boring, unless you are trying to do something
at least a bit screwy, like mixing pure-PSK and client-auth.

No such luck with 0-RTT. There is all sorts of cryptographic screwiness
in there too (though getting rid of DH-0RTT should eliminate that).


Also, it occurs to me that very few protocols are even nearly as
vulnerable to these kinds of issues as HTTP is, including cases where
one end is speaking HTTP but the other is not...



-Ilari

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Salz, Rich

> Personally, I think we should start without 0 RTT until we have a better
> understanding of what the consequences are.

For those who don't know, Kurt has been on the openssl-dev team (longer than
me), but is just quieter and more modest about it :)

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Eric Rescorla
On Sun, Mar 13, 2016 at 3:51 PM, Yoav Nir  wrote:

>
> > On 13 Mar 2016, at 4:45 PM, Salz, Rich  wrote:
> >
> >> I also think it is prudent to assume that implementers will turn on
> replayable
> >> data even if nobody has figured out the consequences.
> >
> > I very much agree.  Customers, particularly those in the mobile field,
> will look at this and say "I can avoid an extra RTT?  *TURN IT ON*" without
> fully understanding, or perhaps even really caring about, the security
> implications.
>
> Perhaps, and I think IoT devices are likely to do so as well.
>
> Is OpenSSL going to implement this? Are all the browsers?
>

There are already patches in preparation for this for NSS and I expect
Firefox to implement it, as long as we have any indication that a
reasonable number of servers will accept it.

-Ekr


> (only the first one is directed specifically at you, Rich…)
>
> Yoav
>
>
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Kurt Roeckx
On Sun, Mar 13, 2016 at 04:51:49PM +0200, Yoav Nir wrote:
> 
> Is OpenSSL going to implement this?

Personally, I think we should start without 0 RTT until we have a
better understanding of what the consequences are.


Kurt

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Salz, Rich

> Is OpenSSL going to implement this? Are all the browsers?
> 
> (only the first one is directed specifically at you, Rich…)

I can answer the second question more easily :)  Yes the browsers will.

OpenSSL is unlikely to have TLS 1.3 before end of 2016 and I don't know what 
we'll do.  Right now we're finishing up our major 1.1 release, due in a month 
or so.  And then we'll figure out our 1.3 story.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Stephen Farrell


On 13/03/16 14:49, Eric Rescorla wrote:
>  Perhaps we could start by actually sponsoring some of those
> reviews. Given that HTTP is the primary customer for 0-RTT, perhaps
> Mark or Martin would be willing to start a review there?

I think that'd really help esp. if we can get folks looking at a
range of protocols that use TLS.

For the web, I'm pretty confident that the analysis will be done
and done well - that is the main use-case motivating this after
all. But getting that started now would still be great.

Getting a few non-web cases analysed would be great too if we
can figure a way to help get that to happen.

Cheers,
S.



smime.p7s
Description: S/MIME Cryptographic Signature
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Yoav Nir

> On 13 Mar 2016, at 4:45 PM, Salz, Rich  wrote:
> 
>> I also think it is prudent to assume that implementers will turn on 
>> replayable
>> data even if nobody has figured out the consequences.
> 
> I very much agree.  Customers, particularly those in the mobile field, will 
> look at this and say "I can avoid an extra RTT?  *TURN IT ON*" without fully 
> understanding, or perhaps even really caring about, the security 
> implications. 

Perhaps, and I think IoT devices are likely to do so as well.

Is OpenSSL going to implement this? Are all the browsers?

(only the first one is directed specifically at you, Rich…)

Yoav



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Eric Rescorla
On Sun, Mar 13, 2016 at 3:41 PM, Stephen Farrell 
wrote:

>
> Hiya,
>
> On 13/03/16 14:01, Eric Rescorla wrote:
> >
> > This is not an accurate way to represent the situation. Those WGs can
> safely
> > move from TLS 1.2 to 1.3 *as long as they don't use 0-RTT*.
>
> I agree your 2nd sentence but not your 1st.
>
> I also think it is prudent to assume that implementers will turn on
> replayable data even if nobody has figured out the consequences.


That may well be true, but I don't believe that it allows us to make
progress. We already know that there are conditions in which 0-RTT is
unsafe. That's why the specification has extensive caveats around its use.
Therefore, we (collectively) can either:

- Not specify it at all.
- Specify it and provide warnings that people should only use it in certain
  circumstances and attempt to delineate these circumstances.

In my original message, I proposed that we restrict the use of 0-RTT to
settings where it has been explicitly profiled. Your response seems to be
that people will turn it on even if we do so. But if that's your position,
then there's no point in doing any analysis, because we already know that there
are cases where it's not safe, which is why we are warning them against
using it in those cases.

In any case, I'm a little surprised by your assertion in a previous message
that WGs expect us to do this analysis. That's not been the relationship
we've had with WGs in the past; rather, we document the properties we
are providing and they have to determine whether those properties are
appropriate. Perhaps we could start by actually sponsoring some of those
reviews. Given that HTTP is the primary customer for 0-RTT, perhaps
Mark or Martin would be willing to start a review there?

-Ekr
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Salz, Rich
>  I also think it is prudent to assume that implementers will turn on 
> replayable
> data even if nobody has figured out the consequences.

I very much agree.  Customers, particularly those in the mobile field, will 
look at this and say "I can avoid an extra RTT?  *TURN IT ON*" without fully 
understanding, or perhaps even really caring about, the security implications. 
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Stephen Farrell

Hiya,

On 13/03/16 14:01, Eric Rescorla wrote:
> 
> This is not an accurate way to represent the situation. Those WGs can safely
> move from TLS 1.2 to 1.3 *as long as they don't use 0-RTT*.

I agree with your 2nd sentence but not your 1st.

I also think it is prudent to assume that implementers will turn on
replayable data even if nobody has figured out the consequences.

Cheers,
S.



smime.p7s
Description: S/MIME Cryptographic Signature
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Eric Rescorla
On Sun, Mar 13, 2016 at 2:51 PM, Stephen Farrell 
wrote:
>
> > That allows
> > the
> > experts in those protocols to do their own analysis, rather than somehow
> > making it the responsibility of the TLS WG. I agree that this is a sharp
> > object
> > and I'd certainly be happy to have such a requirement in 1.3.
>
> So again, I totally understand the reluctance to consider all of the
> foo/TLS options within the TLS WG. And I don't even know how one
> might get that done if one wanted. (Hence my asking the WG.)
>
> However, it is the TLS WG that is introducing the dangerous implement
> and as part of a protocol revision that is mainly intended to improve
> security. It seems fair to say that that may be a surprise for folks
> who just want to use TLS.
>
> My guess would be that if we say to all the WG's doing foo/TLS that
> they need to write a new document before they safely can move from
> TLS1.2 to TLS1.3,


This is not an accurate way to represent the situation. Those WGs can safely
move from TLS 1.2 to 1.3 *as long as they don't use 0-RTT*.

-Ekr
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Stephen Farrell

Hiya,

I mostly agree with what you wrote about specific mitigating factors
for specific potential threats.

For this thread though, maybe we're better off talking about how we might
get to where we can be confident that replayable data in TLS1.3 won't
cause significant harm. I'm happy to talk more about the specifics as
well, but it's maybe better to try to figure out what a good set of goals
would be first (or in parallel).

I think this bit maybe captures where you and I might most come from
different perspectives:

On 13/03/16 12:38, Eric Rescorla wrote:
> 
> Yes, this might be the case, though as above, I note that nothing is making
> those protocols use this feature, 

Sure. Except we *know* that people will see a new API and will use
that to go faster, even if they ought not. TBH, I really don't think
the API approach is sufficient by itself.

> and the specification is quite clear about
> the risks involved here. Rather than requiring some open-ended exploration

"requiring open-ended" doesn't match what I'm asking for. I can
understand your concern though.

I'm trying to figure out how we (the IETF) get to where we are
confident that we understand the important impacts of introducing this
dangerous implement. As I said, I am puzzled as to how we get there, and
am asking that the WG help figure that out before we reach the point
where tackling the issues we find gets harder.

> of the impact on every protocol of this feature, I think it would be far
> more
> sensible to require that protocols not use 0-RTT absent some specific
> application layer profile describing when and why it is safe. 

I think some such statement would help for sure. I'm not sure though if
it's really sufficient. Even if 0rtt were to be in a separate RFC (I
forget if the WG considered that), the issue is that libraries will all
include it (as it'll make the web go faster) so implementers are liable
to turn it on anywhere really.

> That allows
> the
> experts in those protocols to do their own analysis, rather than somehow
> making it the responsibility of the TLS WG. I agree that this is a sharp
> object
> and I'd certainly be happy to have such a requirement in 1.3.

So again, I totally understand the reluctance to consider all of the
foo/TLS options within the TLS WG. And I don't even know how one
might get that done if one wanted. (Hence my asking the WG.)

However, it is the TLS WG that is introducing the dangerous implement
and as part of a protocol revision that is mainly intended to improve
security. It seems fair to say that that may be a surprise for folks
who just want to use TLS.

My guess would be that if we say to all the WG's doing foo/TLS that
they need to write a new document before they safely can move from
TLS1.2 to TLS1.3, they'll answer back that the TLS WG should have done
some or all of that before introducing the dangerous implement.

There are at least two bad approaches here I think. One is "higher
layers need to do all the work to figure out if moving from TLS1.2 to
TLS1.3 is safe." Another is "the TLS WG needs to figure out and say if
foo/TLS1.3 is safe for all foo."

My issue is that I don't know what middle-ground here is reasonable
and gives us sufficient confidence that TLS1.3 with the dangerous
implement is safe enough.

Cheers,
S.



smime.p7s
Description: S/MIME Cryptographic Signature
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] analysis of wider impact of TLS1.3 replayable data

2016-03-13 Thread Eric Rescorla
On Sun, Mar 13, 2016 at 12:14 PM, Stephen Farrell  wrote:
>
>
> However, if I'm in the rough about the above, (which seems
> to me to be the case now) then my job as AD when I get a publication
> request that includes 0rtt, will include figuring out if that's
> safe or not. And I've no clue how I'll do that unless the WG
> have already done some analysis of the many, many protocols
> that use TLS. Note that I do not consider "use a different API"
> to be a sufficient answer here (it is necessary, but not
> sufficient).


It seems to me that there are several important mitigating factors here.

1. Nothing requires applications to use this feature at all. First, servers
need to advertise it and are free to (a) not offer clients the ability to
send 0-RTT data and (b) refuse to accept it if clients send it. Moreover,
everyone I know of who is considering building a 1.3 library intends to
provide that data to the server via a separate API, so the server will have
to work to get it.

2. The replay issues are mostly problematic in cases that have trouble
maintaining consistent state (primarily distributed systems).
Non-distributed systems can maintain a replay cache and refuse to accept
traffic which appears to be a replay. This does not reduce the risk to
zero, but it significantly reduces it to state-loss issues, which are
easier to control for. So it would probably be useful to document some
anti-replay mechanism based on the ClientHello for such protocols.
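
For a single (non-distributed) server, such a mechanism could be as simple
as the following sketch (Python; the digest choice and acceptance window are
illustrative assumptions, not anything the draft specifies):

import hashlib
import time

class ReplayCache:
    """Remembers ClientHellos that carried early data, for a bounded window."""

    def __init__(self, window_seconds=10.0):
        self.window = window_seconds
        self.seen = {}  # ClientHello digest -> time first seen

    def first_use(self, client_hello_bytes):
        now = time.monotonic()
        # Evict entries older than the window to bound memory.
        self.seen = {d: t for d, t in self.seen.items()
                     if now - t < self.window}
        digest = hashlib.sha256(client_hello_bytes).digest()
        if digest in self.seen:
            return False  # looks like a replay: refuse the early data
        self.seen[digest] = now
        return True

A server would only accept 0-RTT data when first_use() returns True;
anything else falls back to 1-RTT.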


> I've been worried about this for a while now, but the recent
> thread started by Kyle Nekritz [3] prompted me to send this
> as I think that's likely just the tip of an iceberg. E.g., I'd
> be worried about cross-protocol attacks one might be able to
> try with JS in a browser if the JS can create arbitrary HTTP
> header fields which I think is the case.


It's not generally the case without server consent. CORS prohibits the use
of any non-simple headers in CORS requests without preflight. See:

https://fetch.spec.whatwg.org/#concept-main-fetch
and
https://fetch.spec.whatwg.org/#simple-header




> I'm also worried about
> things like EAP-TLS and RADIUS/Diameter if used via TLS etc
> where we don't necessarily have the right people active on this
> list.


Yes, this might be the case, though as above, I note that nothing is making
those protocols use this feature, and the specification is quite clear about
the risks involved here. Rather than requiring some open-ended exploration
of the impact on every protocol of this feature, I think it would be far
more sensible to require that protocols not use 0-RTT absent some specific
application layer profile describing when and why it is safe. That allows
the experts in those protocols to do their own analysis, rather than somehow
making it the responsibility of the TLS WG. I agree that this is a sharp
object and I'd certainly be happy to have such a requirement in 1.3.

-Ekr
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Limiting replay time frame of 0-RTT data

2016-03-13 Thread Martin Thomson
I assume that you would run a tight tolerance on the 0RTT resumption by
saving the client's clock error in the ticket.  That way only clients
with bad drift get no 0RTT. To do that, all sessions need time.

I do not see how the server can do this in general without client help. But
the solution also addresses Brian's concern. The client doesn't need
absolute time, just relative time: how long since the ticket was issued.
Then it's an addition to the psk_identity only.
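
A sketch of that relative-time check (Python; the encoding, field placement
and 10-second tolerance are assumptions for illustration - the point is only
that the server compares the client-reported ticket age against its own
record of when it issued the ticket):

import struct
import time

def client_ticket_age(ticket_issued_at):
    # Client side: "milliseconds since the ticket was issued", by the
    # client's own clock, carried alongside the psk_identity.
    age_ms = int((time.time() - ticket_issued_at) * 1000)
    return struct.pack("!I", age_ms)

def server_allows_early_data(reported_age, server_issue_time, tolerance_ms=10000):
    (age_ms,) = struct.unpack("!I", reported_age)
    actual_ms = int((time.time() - server_issue_time) * 1000)
    # A large disagreement means bad client clock drift or a delayed replay;
    # either way, force the request back to 1-RTT.
    return abs(actual_ms - age_ms) <= tolerance_ms
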
On 13 Mar 2016 1:49 PM, "Erik Nygren"  wrote:

> That does seem like a good idea to include a client time stamp in the 0RTT
> flow to let the server force 1RTT in the case where this is too far off as
> this bounds the duration of the replay window.  (I suspect we'll find a
> whole range of other similar attacks using 0RTT.)  An encrypted client
> timestamp could presumably be probed by the server.  (ie, if the server
> response is a different size for timestamp-expired vs timestamp-not-expired
> an attacker could keep probing until they change?)  That does seem like
> more effort, however.
>
>Erik
>
>
> On Sat, Mar 12, 2016 at 7:56 AM, Eric Rescorla  wrote:
>
>> Hi Kyle,
>>
>> Clever attack. I don't think it would be unreasonable to put a low
>> granularity time stamp in the
>> ClientHello (and as you observe, if we just define it it can be done
>> backward compatibly)
>> or as you suggest, in an encrypted block. With that said, though couldn't
>> you
>> also just include the information in the HTTP header for HTTP? Do you
>> think this is a sufficiently
>> general issue that it merits a change to TLS.
>>
>> -Ekr
>>
>>
>> On Fri, Mar 11, 2016 at 9:21 PM, Kyle Nekritz  wrote:
>>
>>> Similar to the earlier discussion on 0.5-RTT data, I’m concerned with
>>> the long term ability to replay captured 0-RTT early data, and the attack
>>> vectors that it opens up. For example, take a GET request for an image to a
>>> CDN. This is a request that seems completely idempotent, and that
>>> applications will surely want to send as 0-RTT data. However, this request
>>> can result in a few things happening:
>>> 1) Resource unavailable
>>> 2) Resource cached locally at edge cluster
>>> 3) Cache miss, resource must be fetched from origin data center
>>> #1 can easily be differentiated by the length of the 0.5-RTT response
>>> data, allowing an attacker to determine when a resource has been
>>> deleted/modified. #2 and #3 can also be easily differentiated by the timing
>>> of the response. This opens up the following attack: if an attacker knows a
>>> client has requested a resource X_i in the attacker-known set {X_1, X_2,
>>> ..., X_n}, an attacker can do the following:
>>> 1) wait for the CDN cache to be evicted
>>> 2) request {X_1, X_2, …, X_(n/2)} to warm the cache
>>> 3) replay the captured client early data (the request for X_i)
>>> 4) determine, based on the timing of the response, whether it
>>> resulted in a cache hit or miss
>>> 5) repeat with set {X_1, X_2, …, X_(n/2)} or {X_(n/2 + 1), X_(n/2 +
>>> 2), …, X_n} depending on the result
>>> This particular binary search example is a little contrived and requires
>>> that no-one else is requesting any resource in the set, however I think it
>>> is representative of a significant new attack vector that allowing
>>> long-term replay of captured early data will open up, even if 0-RTT is only
>>> used for seemingly simple requests without TLS client authentication. This
>>> is a much different threat than very short-term replay, which is already
>>> somewhat possible on any TLS protocol if clients retry failed requests.
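
To spell out the search loop being described, a sketch (Python; warm_cache,
wait_for_eviction, replay_captured_early_data and looks_like_cache_hit are
stand-ins for the attacker's capabilities, not real APIs):

def identify_requested_resource(candidates, warm_cache, wait_for_eviction,
                                replay_captured_early_data, looks_like_cache_hit):
    # candidates is the attacker-known set {X_1, ..., X_n}.
    while len(candidates) > 1:
        wait_for_eviction()                      # 1) let the CDN cache cool
        half = candidates[:len(candidates) // 2]
        warm_cache(half)                         # 2) request half the set
        timing = replay_captured_early_data()    # 3) replay the 0-RTT flight
        if looks_like_cache_hit(timing):         # 4) hit => X_i is in `half`
            candidates = half                    # 5) recurse on that half
        else:
            candidates = candidates[len(candidates) // 2:]
    return candidates[0]
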
>>>
>>> Given this, I think it is worth attempting to limit the time frame that
>>> captured early data is useful to an attacker. This obviously doesn’t
>>> prevent replay, but it can mitigate a lot of attacks that long-term replay
>>> would open up. This can be done by including a client time stamp along with
>>> early data, so that servers can choose to either ignore the early data, or
>>> to delay the 0.5-RTT response to 1.5-RTT if the time stamp is far off. This
>>> cuts down the time from days (until the server config/session ticket key is
>>> rotated) to minutes or seconds.
>>>
>>> Including the client time also makes a client random strike register
>>> possible without requiring an unreasonably large amount of server-side
>>> state.
>>>
>>> I am aware that client time had previously been removed from the client
>>> random, primarily due to fingerprinting concerns, however these concerns
>>> can be mitigated by
>>> 1) clients can choose to not include their time (or to include a random
>>> time), with only the risk of their .5-RTT data being delayed
>>> 2) placing the time stamp in an encrypted extension, so that it is not
>>> visible to eavesdroppers
>>>
>>>
>>> Note: it’s also useful for the server to know which edge cluster the
>>> early data was intended for, however this is already