Re: [TLS] WG Adoption for TLS Trust Expressions

2024-04-26 Thread Kyle Nekritz
I think it’s very important we provide a standard to efficiently solve this 
problem. Without one, I think it’s nearly inevitable that server operators will 
have to deploy complex ClientHello fingerprinting logic for certificate 
selection to maintain widespread client compatibility (which also may only be 
feasible for large operators, and will harm the TLS ecosystem), and that we 
will be adding roadblocks to deployment of more modern trust stores.

There are still details to work out, but I support adoption of the draft as a 
good starting point.

Kyle Nekritz

From: TLS  On Behalf Of Devon O'Brien
Sent: Tuesday, April 23, 2024 4:37 PM
To: tls@ietf.org
Cc: Bob Beck 
Subject: [TLS] WG Adoption for TLS Trust Expressions


After sharing our first draft of TLS Trust 
Expressions<https://datatracker.ietf.org/doc/draft-davidben-tls-trust-expr/> 
and several discussions across a couple of IETFs, we’d like to proceed with a 
call for working group adoption of this draft. We are currently prototyping 
trust expressions in BoringSSL & Chromium and will share more details when 
implementation is complete.


As we mentioned in our message to the mailing list from January, our primary 
goal is to produce a mechanism for supporting multiple subscriber 
certificates<https://github.com/davidben/tls-trust-expressions/blob/main/explainer.md>
 and efficiently negotiating which to serve on a given TLS connection, even if 
that ends up requiring significant changes to the draft in its current state.


To that end, we’re interested in learning whether WG members support adoption 
of this deployment model and the currently-described certificate negotiation 
mechanism or if they oppose adoption (and why!).


Thanks!

David, Devon, and Bob

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] consensus call: draft-ietf-tls-ticketrequests

2020-03-05 Thread Kyle Nekritz
No.

FWIW, we (at my day job) run TLS 1.3 at large scale for server-to-server RPC 
communication, and have seen no appreciable performance overhead from ticket 
refresh on resumption.

I also have no objection to Martin's proposal.

Kyle

-Original Message-
From: TLS  On Behalf Of Sean Turner
Sent: Wednesday, March 4, 2020 11:07 AM
To: TLS List 
Subject: [TLS] consensus call: draft-ietf-tls-ticketrequests

one more time ...

All,

The purpose of this message is to help the chairs judge consensus on the way 
forward for draft-ietf-tls-ticketrequests. The issue at hand is whether the 
client-initiated ticket request mechanism [0] should be modified to add support 
for ticket reuse, see [1] lines 160-214. As we see it, the way forward involves 
either one draft or two. To that end, we would like your input (YES or NO) on 
the following question by 2359 UTC 18 March 2020:

 Must the ticket reuse use case be addressed in draft-ietf-tls-ticketrequests?

Full disclosure: RFC 8446 recommends against ticket reuse to help protect 
clients from passive observers correlating connections [2]. The PR supports 
ticket reuse for use cases such as server-to-server connections with fixed 
source addresses and no connection racing; if adopted, the WG will need to 
ensure that the security considerations are properly documented.

Note: There have been at least three threads on this draft [3][4][5]. Please, 
let’s try to avoid re-litigating the points made therein.

Joe & Sean

[0] https://datatracker.ietf.org/doc/draft-ietf-tls-ticketrequests/
[1] https://github.com/tlswg/draft-ietf-tls-ticketrequest/pull/18
[2] https://tools.ietf.org/html/rfc8446#appendix-C.4
[3] https://mailarchive.ietf.org/arch/msg/tls/2cpoaJRushs09EFeTjPr-Ka3FeI/
[4] https://mailarchive.ietf.org/arch/msg/tls/-7J3gMmpHNw9t3URzxvM-3OaTR8/
[5] https://mailarchive.ietf.org/arch/msg/tls/FjhqbYYTwzgiV9weeCuxn0tHxPs/
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Enforcing Protocol Invariants

2018-06-14 Thread Kyle Nekritz
That’s definitely a possibility if using a single key that never changes. With 
periodically rolling new keys, I’m not sure the risk is much different than 
with periodically rolling new versions. Ossifying on updated versions of either 
requires the middlebox to take a hard dependency on having the updated version 
available. Updating for arbitrary changes in the protocol is more complex than 
just updating a config for an encryption key, but I suspect we could also roll 
out an updated key mapping much faster than a new version that required TLS 
library code changes.

Using a negotiated key from a previous connection completely avoids that issue 
(for example the server sends an encrypted extension with identifier X and key 
Y, which the client remembers for future connections).
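
The previously-negotiated mapping could be sketched as follows. This is a hypothetical client-side cache (all names are illustrative, not from any spec or draft): the server hands out an (identifier, key) pair in an encrypted extension, and the client remembers it per server for use on later connections.

```python
# Sketch (hypothetical API): a client-side cache of anti-ossification
# keys learned from servers on previous connections. The server sends
# (identifier, key) in an encrypted extension; the client stores it
# per-server and offers the identifier on later ClientHellos.

class AntiOssificationCache:
    def __init__(self):
        self._by_server = {}   # server name -> (identifier, key)

    def remember(self, server_name, identifier, key):
        """Called when a server's EncryptedExtensions carried a mapping."""
        self._by_server[server_name] = (identifier, key)

    def offer(self, server_name):
        """Identifier to put in the next ClientHello, or None."""
        entry = self._by_server.get(server_name)
        return entry[0] if entry else None

    def key_for(self, server_name, identifier):
        entry = self._by_server.get(server_name)
        if entry and entry[0] == identifier:
            return entry[1]
        return None

cache = AntiOssificationCache()
cache.remember("example.com", identifier=0x1234, key=b"\x00" * 32)
assert cache.offer("example.com") == 0x1234
```

Like a PSK, the cached entry is useless to a middlebox that was not party to the earlier connection.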

From: Steven Valdez 
Sent: Thursday, June 14, 2018 10:35 AM
To: Kyle Nekritz 
Cc: David Benjamin ;  
Subject: Re: [TLS] Enforcing Protocol Invariants

This scheme probably isn't sufficient by itself, since a middlebox just has to 
be aware of the anti-ossification extension and can parse the server's response 
by decrypting it with the known mapping (either from the RFC or fetching the 
latest updated mapping), and then ossifying on the contents of the 'real' 
ServerHello. To keep the ServerHello from ossifying, you'll need to change the 
serialization and codepoints of the ServerHello at each rolling version.

On Wed, Jun 13, 2018 at 8:29 PM Kyle Nekritz <knekr...@fb.com> wrote:
I think there may be lower overhead ways (that don’t require frequent TLS 
library code changes) to prevent ossification of the ServerHello using a 
mechanism similar to the cleartext cipher in quic. For example, a client could 
offer an “anti-ossification” extension containing an identifier that 
corresponds to a particular key. The identifier->key mapping can be established 
using a couple of mechanisms, depending on the level of defense desired against 
implementations that know about this extension:
* static mapping defined in RFC
* periodically updated mapping shared among implementations
* negotiated on a previous connection to the server, similar to a PSK
This key can then be used to “encrypt” the ServerHello such that it is 
impossible for a middlebox without the key to read (though would not add actual 
confidentiality and would probably involve AEAD nonce reuse). There are a couple 
of options to do this:
* Simply replace the plaintext record layer for the ServerHello with an 
encrypted record layer, using this key (this would not be compatible with 
existing middleboxes that have caused us trouble)
* Put a “real” encrypted ServerHello in an extension in the “outer” plaintext 
ServerHello
* Send a fake ServerHello (similar to how we encapsulate HelloRetryRequest in a 
ServerHello), and then send a real ServerHello in a following encrypted record
All of these would allow a server to either use this mechanism or negotiate 
standard TLS 1.3 (and the client to easily tell which one is in use).

With the small exception of potentially updating the identifier->key mapping, 
this would not require any TLS library changes (once implemented in the first 
place), and I believe would still provide almost all of the benefits.

From: TLS <tls-boun...@ietf.org> On Behalf Of 
David Benjamin
Sent: Tuesday, June 12, 2018 12:28 PM
To: tls@ietf.org
Subject: [TLS] Enforcing Protocol Invariants

Hi all,

Now that TLS 1.3 is about done, perhaps it is time to reflect on the 
ossification problems.

TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may be 
incrementally rolled out in an existing compliant TLS 1.2 deployment. Yet we 
had problems. Widespread non-compliant servers broke on the TLS 1.3 
ClientHello, so versioning moved to supported_versions. Widespread 
non-compliant middleboxes attempted to parse someone else’s ServerHellos, so 
the protocol was further hacked to weave through their many defects.

I think I can speak for the working group that we do not want to repeat this 
adventure again. In general, I think the response to ossification is two-fold:

1. It’s already happened, so how do we progress today?
2. How do we avoid more of this tomorrow?

The workarounds only answer the first question. For the second, TLS 1.3 has a 
section which spells out a few protocol 
invariants<https://tlswg.github.io/tls13-spec/draft-ietf-tls-tls13.html#rfc.section.9.3>.
 It is all corollaries of existing TLS specification text, but hopefully 
documenting it explicitly will help. But experience has shown specification 
text is only necessary, not sufficient.

For extensibility problems in servers, we have 
GREASE<https://urldefense.proofpoint.com/v2/url?u=https-3A__tools.ie

Re: [TLS] Enforcing Protocol Invariants

2018-06-13 Thread Kyle Nekritz
I think there may be lower overhead ways (that don’t require frequent TLS 
library code changes) to prevent ossification of the ServerHello using a 
mechanism similar to the cleartext cipher in quic. For example, a client could 
offer an “anti-ossification” extension containing an identifier that 
corresponds to a particular key. The identifier->key mapping can be established 
using a couple of mechanisms, depending on the level of defense desired against 
implementations that know about this extension:
* static mapping defined in RFC
* periodically updated mapping shared among implementations
* negotiated on a previous connection to the server, similar to a PSK
This key can then be used to “encrypt” the ServerHello such that it is 
impossible for a middlebox without the key to read (though would not add actual 
confidentiality and would probably involve AEAD nonce reuse). There are a couple 
of options to do this:
* Simply replace the plaintext record layer for the ServerHello with an 
encrypted record layer, using this key (this would not be compatible with 
existing middleboxes that have caused us trouble)
* Put a “real” encrypted ServerHello in an extension in the “outer” plaintext 
ServerHello
* Send a fake ServerHello (similar to how we encapsulate HelloRetryRequest in a 
ServerHello), and then send a real ServerHello in a following encrypted record
All of these would allow a server to either use this mechanism or negotiate 
standard TLS 1.3 (and the client to easily tell which one is in use).

With the small exception of potentially updating the identifier->key mapping, 
this would not require any TLS library changes (once implemented in the first 
place), and I believe would still provide almost all of the benefits.
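
The "encrypt the ServerHello under a known key" idea could be sketched as below. This is illustrative only: the mapping table and function names are hypothetical, and the HMAC-derived XOR keystream stands in for a real record protection scheme purely to show the data flow. As noted above, this adds no actual confidentiality; a real design would use a proper AEAD construction.

```python
# Illustrative sketch only: "encrypting" a ServerHello under a key selected
# by the client's offered identifier. The keystream construction here is a
# stand-in, NOT real record protection; it only demonstrates that a
# middlebox without the identifier->key mapping cannot parse the response.
import hashlib
import hmac

# Hypothetical static mapping, e.g. as it might be defined in an RFC.
KNOWN_MAPPING = {0x0001: b"rfc-defined-static-key-32-bytes!"}

def _keystream(key, length):
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap_server_hello(identifier, server_hello):
    key = KNOWN_MAPPING[identifier]
    ks = _keystream(key, len(server_hello))
    return bytes(a ^ b for a, b in zip(server_hello, ks))

def unwrap_server_hello(identifier, wrapped):
    return wrap_server_hello(identifier, wrapped)  # XOR is its own inverse

sh = b"\x02\x00\x00\x2cplaceholder-ServerHello-bytes"
assert unwrap_server_hello(0x0001, wrap_server_hello(0x0001, sh)) == sh
```

With the static-RFC variant of the mapping, a determined middlebox could still learn the key; the periodically-updated or previously-negotiated variants described above raise the bar accordingly.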

From: TLS  On Behalf Of David Benjamin
Sent: Tuesday, June 12, 2018 12:28 PM
To:  
Subject: [TLS] Enforcing Protocol Invariants

Hi all,

Now that TLS 1.3 is about done, perhaps it is time to reflect on the 
ossification problems.

TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may be 
incrementally rolled out in an existing compliant TLS 1.2 deployment. Yet we 
had problems. Widespread non-compliant servers broke on the TLS 1.3 
ClientHello, so versioning moved to supported_versions. Widespread 
non-compliant middleboxes attempted to parse someone else’s ServerHellos, so 
the protocol was further hacked to weave through their many defects.

I think I can speak for the working group that we do not want to repeat this 
adventure again. In general, I think the response to ossification is two-fold:

1. It’s already happened, so how do we progress today?
2. How do we avoid more of this tomorrow?

The workarounds only answer the first question. For the second, TLS 1.3 has a 
section which spells out a few protocol 
invariants.
 It is all corollaries of existing TLS specification text, but hopefully 
documenting it explicitly will help. But experience has shown specification 
text is only necessary, not sufficient.

For extensibility problems in servers, we have 
GREASE.
 This enforces the key rule in ClientHello processing: ignore unrecognized 
parameters. GREASE enforces this by filling the ecosystem with them. TLS 1.3’s 
middlebox woes were different. The key rule is: if you did not produce a 
ClientHello, you cannot assume that you can parse the response. Analogously, we 
should fill the ecosystem with such responses. We have an idea, but it is more 
involved than GREASE, so we are very interested in the TLS community’s feedback.

In short, we plan to regularly mint new TLS versions (and likely other 
sensitive parameters such as extensions), roughly every six weeks matching 
Chrome’s release cycle. Chrome, Google servers, and any other deployment that 
wishes to participate, would support two (or more) versions of TLS 1.3: the 
standard stable 0x0304, and a rolling alternate version. Every six weeks, we 
would randomly pick a new code point. These versions will otherwise be 
identical to TLS 1.3, save maybe minor details to separate keys and exercise 
allowed syntax changes. The goal is to pave the way for future versions of TLS 
by simulating them (“draft negative one”).
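
One way the rolling code point selection might look is sketched below. The proposal above only says a new random code point is picked roughly every six weeks; the deterministic hash-based derivation, seed string, and epoch arithmetic here are assumptions for illustration, and the allocated-point set is a stand-in.

```python
# Sketch: derive a rolling version code point per six-week epoch, skipping
# already-allocated points. The derivation and epoch length are assumptions;
# the proposal only specifies "randomly pick a new code point" each cycle.
import hashlib
import time

# Stand-in set of allocated code points (includes the 26/40 collision example).
ALLOCATED = {0x0301, 0x0302, 0x0303, 0x0304, 0x001a, 0x0028}

def rolling_version(now=None, epoch_seed=b"draft-negative-one"):
    now = time.time() if now is None else now
    epoch = int(now // (6 * 7 * 24 * 3600))  # six-week epochs
    counter = 0
    while True:
        d = hashlib.sha256(epoch_seed + epoch.to_bytes(8, "big")
                           + counter.to_bytes(4, "big")).digest()
        candidate = int.from_bytes(d[:2], "big")
        if candidate not in ALLOCATED:
            return candidate
        counter += 1  # rare: re-derive on collision with an allocated point

v = rolling_version(now=0)
assert v not in ALLOCATED
```

Any participating deployment computing the same derivation would land on the same alternate code point for a given epoch, without out-of-band coordination.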

Of course, this scheme has some risk. It grabs code points everywhere. Code 
points are plentiful, but we do sometimes have collisions (e.g. 26 and 40). The 
entire point is to serve and maintain TLS’s extensibility, so we certainly do 
not wish to hamper it! Thus we have some 

Re: [TLS] 2nd WGLC: draft-ietf-tls-tls13

2017-07-18 Thread Kyle Nekritz
Timestamps outside the expected window can happen due to variances in RTT, 
client clock skew, etc. (we see around 0.1% of clients outside of a 30s window 
for example). Not likely to happen on a given connection, but it certainly 
happens enough that you don’t want to abort the connection (rather than 
allowing 1-RTT) without reason. I’m not sure I understand the desire to abort 
these connections. What value does it provide? The timestamp mechanism does not 
provide protection against a malicious client that has possession of the PSK 
key.
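
The freshness check under discussion can be sketched as follows. Function and parameter names are hypothetical; the 30s window matches the figure mentioned above, and the key point is that an out-of-window timestamp falls back to 1-RTT rather than aborting the connection.

```python
# Sketch of the ticket-age freshness check: the server estimates when the
# ClientHello "should" have arrived from the ticket age and treats a large
# mismatch as a possible replay. Out-of-window 0-RTT is rejected (falling
# back to 1-RTT), not treated as a fatal error.
def zero_rtt_disposition(ticket_issued_at, client_ticket_age_ms, now,
                         window_s=30.0):
    expected_arrival = ticket_issued_at + client_ticket_age_ms / 1000.0
    skew = abs(now - expected_arrival)   # RTT variance + client clock skew
    return "accept-0rtt" if skew <= window_s else "reject-0rtt-continue-1rtt"

assert zero_rtt_disposition(1000.0, 5_000, 1005.2) == "accept-0rtt"
assert zero_rtt_disposition(1000.0, 5_000, 1100.0) == "reject-0rtt-continue-1rtt"
```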

If the server is 100% sure that a CH is replayed it is more reasonable to abort 
the connection, but I think it should be explicit to the client that that is 
the reason for the error (ie using a new alert type rather than 
illegal_parameter) so the client can know to retry without 0-RTT. I’m also 
slightly concerned that it allows a passive attacker to cause connection 
failures if it can front-run a copy of the CH to the server, and I still don’t 
think it is providing much value.

From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Benjamin Kaduk
Sent: Tuesday, July 18, 2017 2:11 PM
To: Eric Rescorla 
Cc:  
Subject: Re: [TLS] 2nd WGLC: draft-ietf-tls-tls13

On 07/18/2017 08:07 AM, Eric Rescorla wrote:



On Wed, Jul 12, 2017 at 3:39 PM, Benjamin Kaduk wrote:

That is, in this case, the CH+0RTT data can be replayed by an observer once 
enough time has elapsed that the expected_arrival_time is within the window, 
similar to one of the reordering attacks mentioned elsewhere.  We could add the 
CH to the strike register in this case, which would bloat its storage somewhat 
and have entries that take longer than the window to expire out.

I don't have a good sense for how often we expect postdated CHs to occur and 
whether the ensuing breakage would be manageable, but I'm not sure that we've 
thought very hard as a group about this question.

I think post-dated CHs are going to happen pretty often based on what I understand 
from
Kyle and others. I wouldn't be comfortable with hard fail, especially given 
that this
just seems like the dual of the other case. Adding the CH to the list seems like
a problem because it might stay forever.


The "stay forever" part is awkward, yes.  It would be great if Kyle/etc. could 
say a bit about why post-dated seems likely on the list, but I guess for the 
purposes of WGLC we can consider this comment resolved.

-Ben
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-22 Thread Kyle Nekritz
The stateless technique certainly doesn’t solve all issues with replay. Neither 
do the other techniques, unless a fairly unrealistic (imo, in most use cases) 
retry strategy is used. But the stateless technique is definitely an 
improvement over no anti-replay mechanism at all (for instance it reduces the 
possible number of rounds of a cache probing attack, assuming the cache TTL > 
replay window). Which mechanisms to use, and whether to enable 0-RTT in the 
first place (or PSK mode at all), should be decided considering the tradeoff 
between security/performance/implementation constraints, etc. In the case of 
DNS, most DNS security protocols (dnssec, etc.) do allow this kind of replay so 
I think it is a pretty reasonable tradeoff to consider.

Additionally, I think the stateless technique is quite useful as a 
defense-in-depth mechanism. I highly doubt all deployments will end up 
correctly implementing a thorough anti-replay mechanism (whether accidentally 
or willfully). The stateless method is very cheap, and can be implemented 
entirely within a TLS library even in a distributed setup, only requiring 
access to an accurate clock. I’d much rather deployments without a robust and 
correct anti-replay mechanism break down to allowing replay over a number of 
seconds, rather than days (or longer).

Kyle

From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Colm MacCárthaigh
Sent: Sunday, May 21, 2017 10:29 PM
To: Eric Rescorla 
Cc: tls@ietf.org
Subject: Re: [TLS] Security review of TLS1.3 0-RTT



On Sun, May 21, 2017 at 3:47 PM, Eric Rescorla wrote:
- Clients MUST NOT use the same ticket multiple times for 0-RTT.

I don't understand the purpose of this requirement. As you note below,
servers are ultimately responsible for enforcing it, and it's not clear to
me why clients obeying it makes life easier for the server.

I think clients should duplicate them sometimes, just to keep servers on their 
toes ;-) this is what we talked about maybe being a grease thing .. if at all.

- Servers MUST NOT accept the same ticket with the same binder multiple
  times for 0-RTT (if any part of ClientHello covered by binder is
  different, one can assume binders are different). This holds even
  across servers (i.e., if server1 accepts 0-RTT with ticket X and
  binder Y, then server2 can not accept 0-RTT with ticket X and binder
  Y).

I assume that what you have in mind here is that the server would know
which tickets it was authoritative for anti-replay and would simply reject
0-RTT if it wasn't authoritative? This seems like it would significantly cut
down on mass replays, though it would of course still make application-level
replay a problem.

I'm happy to write this up as part of the first two techniques. I'd be
interested in hearing from others in the WG what they think about:

1. Requiring it.
2. Whether they still want to retain the stateless technique.

I'm for requiring it, and for removing the stateless technique ... because it 
prevents the side-channel and DOS attacks and those seem like the most serious 
ones (and also, the new ones).

So far each case where we've thought "Actually stateless might be ok in this 
case" ... like the example of DNS ... turns out not to be safe when examined 
more closely (in DNSes case it would compromise privacy because caches could be 
probed).

--
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-04 Thread Kyle Nekritz
Yep, I think your PR is in the right direction.

> I have been basically assuming that you can't really do TLS without a 
> real-time clock, but maybe that's wrong?

Well, it’s possible, although I do not know if anyone actually does (and of 
course certificate validation would be a little challenging). Maybe someone 
from the embedded crowd can enlighten us.

> "For identities established externally an obfuscated_ticket_age of 0 SHOULD 
> be used, and servers MUST ignore the value."

Ah, I had missed that. I think the MUST might be a little strong there, even 
without direct involvement of the server in issuing the external PSK the ticket 
age can still be useful, for example using a similar method we use with Zero 
Protocol (Time-bound 0-RTT data section of 
https://code.facebook.com/posts/608854979307125/building-zero-protocol-for-fast-secure-mobile-connections/
 ).

From: Eric Rescorla [mailto:e...@rtfm.com]
Sent: Thursday, May 4, 2017 5:37 PM
To: Kyle Nekritz <knekr...@fb.com>
Cc: Colm MacCárthaigh <c...@allcosts.net>; tls@ietf.org
Subject: Re: [TLS] Security review of TLS1.3 0-RTT



On Thu, May 4, 2017 at 2:27 PM, Kyle Nekritz <knekr...@fb.com> wrote:
> 1. A SHOULD-level requirement for server-side 0-RTT defense, explaining both 
> session-cache and strike register styles and the merits of each.

First, a point of clarification, I think two issues have been conflated in this 
long thread:
1) Servers rejecting replayed 0-RTT data (using a single use session 
cache/strike register/replay cache/some other method)

There are definitely cases (i.e. application profiles) where this should be 
done. I think a general case HTTPS server is one. But I don’t think this is 
strictly necessary across the board (for every application using 0-RTT at all). 
DNS was brought up earlier in this thread as an example of a protocol that is 
likely quite workable without extra measures to prevent replay.

We already state “Protocols MUST NOT use 0-RTT data without a profile that 
defines its use.”. We could also describe methods that may be used to provide 
further replay protection. But I don’t think it’s appropriate to make a blanket 
requirement that *all* application protocols should require it.

I also consider it quite misleading to say TLS 1.3 is insecure without such a 
recommendation. Uses of TLS can be insecure, that does not mean the protocol 
itself is. It’s insecure to use TLS without properly authenticating the server. 
Some users of TLS do not do this correctly. I’d actually argue that it is 
easier to mess this up than it is to mess up a 0-RTT deployment (and it can 
result in worse consequences). That doesn’t mean we should require a particular 
method of authentication, for all uses of TLS.

I think this is basically right. In the PR I just posted, I spent most of my 
time describing the
mechanisms and used a SHOULD-level requirement to do one of the mechanisms.
I think there's a bunch of room to wordsmith the requirement. Perhaps we say:

- You can't do 0-RTT without an application profile
- Absent the application profile saying otherwise you SHOULD/MUST do one of 
these mitigations?


2) Preventing clients from sending 0-RTT data multiple times (on separate 
connections) using the same PSK (for forward secrecy reasons)

I think this should be allowed. Otherwise, clients will not be able to retry 
0-RTT requests that fail due to an unknown network error prior to receiving a 
NST (if they are out of cached PSKs). I’d expect the need for these retries to 
be larger with 0-RTT data, particularly when 0-RTT data is sent without even a 
transport roundtrip (in the case of TFO or QUIC). Servers are definitely not 
required to accept multiple 0-RTT connections using the same PSK, but I don’t 
think clients should be banned from attempting.

I agree, and the PR I provided doesn't attempt to do so.


> 4. I would add to this that we recommend that proxy/CDN implementations 
> signal which data is 0-RTT and which is 1-RTT to the back-end (this was in 
> Colm's original message).

I’m not sure that the TLS 1.3 spec is the right place to make recommendations 
for this. I can see several reasonable approaches here, for example:
- Adding some kind of application level annotation (for example an HTTP header)
- Robustly preventing replay on the 0-RTT hop
- Sending proxied early data with a different TLS ContentType, etc.
I don’t see a need to specifically endorse any particular method here.

I think Colm has also agreed we shouldn't do this and it's not in my PR.



There was also a point brought up about the use of ticket_age without 0-RTT. 
I’m not aware of any use for ticket_age other than 0-RTT replay protection. I 
believe that ticket_age is sent with all PSKs mostly out of 
convenience/consistency. I don’t really have an objection to the current 
method, but I also wouldn’t be opposed to moving the ticket age to the early 
data

Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-04 Thread Kyle Nekritz
> 1. A SHOULD-level requirement for server-side 0-RTT defense, explaining both 
> session-cache and strike register styles and the merits of each.

First, a point of clarification, I think two issues have been conflated in this 
long thread:
1) Servers rejecting replayed 0-RTT data (using a single use session 
cache/strike register/replay cache/some other method)

There are definitely cases (i.e. application profiles) where this should be 
done. I think a general case HTTPS server is one. But I don’t think this is 
strictly necessary across the board (for every application using 0-RTT at all). 
DNS was brought up earlier in this thread as an example of a protocol that is 
likely quite workable without extra measures to prevent replay.

We already state “Protocols MUST NOT use 0-RTT data without a profile that 
defines its use.”. We could also describe methods that may be used to provide 
further replay protection. But I don’t think it’s appropriate to make a blanket 
requirement that *all* application protocols should require it.
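
For the cases where replay rejection is warranted, the single-use strike register mentioned above can be sketched like this. Names and the pruning strategy are assumptions for illustration; entries only need to live for the replay window, since older ClientHellos are already rejected by the ticket-age freshness check.

```python
# Sketch of a single-use strike register: remember each accepted binder
# for the length of the replay window and reject repeats. A repeated
# binder means a repeated (ticket, ClientHello) pair, i.e. a replay.
import time

class StrikeRegister:
    def __init__(self, window_s=30.0):
        self.window_s = window_s
        self._seen = {}   # binder -> time first accepted

    def try_accept(self, binder, now=None):
        now = time.time() if now is None else now
        # Prune entries older than the window; ClientHellos that old are
        # already rejected by the freshness check, so they need not be kept.
        self._seen = {b: t for b, t in self._seen.items()
                      if now - t < self.window_s}
        if binder in self._seen:
            return False          # replay: reject 0-RTT, continue 1-RTT
        self._seen[binder] = now
        return True

reg = StrikeRegister()
assert reg.try_accept(b"binder-1", now=0.0) is True
assert reg.try_accept(b"binder-1", now=1.0) is False   # replayed
```

In a distributed deployment the register must be shared (or sharded by ticket) across the servers authoritative for a given ticket, which is what makes it heavier than the stateless technique.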

I also consider it quite misleading to say TLS 1.3 is insecure without such a 
recommendation. Uses of TLS can be insecure, that does not mean the protocol 
itself is. It’s insecure to use TLS without properly authenticating the server. 
Some users of TLS do not do this correctly. I’d actually argue that it is 
easier to mess this up than it is to mess up a 0-RTT deployment (and it can 
result in worse consequences). That doesn’t mean we should require a particular 
method of authentication, for all uses of TLS.

2) Preventing clients from sending 0-RTT data multiple times (on separate 
connections) using the same PSK (for forward secrecy reasons)

I think this should be allowed. Otherwise, clients will not be able to retry 
0-RTT requests that fail due to an unknown network error prior to receiving a 
NST (if they are out of cached PSKs). I’d expect the need for these retries to 
be larger with 0-RTT data, particularly when 0-RTT data is sent without even a 
transport roundtrip (in the case of TFO or QUIC). Servers are definitely not 
required to accept multiple 0-RTT connections using the same PSK, but I don’t 
think clients should be banned from attempting.

> 2. Document 0-RTT greasing in draft-ietf-tls-grease

I’m not convinced this is actually a productive thing to do in the same manner 
as grease, particularly if servers are taking anti-replay measures (in which 
case I see this being useful in a TLS testing tool like ssllabs, but not in 
actual deployments). I think we can leave this discussion for later though.

> 3. Adopt PR#448 (or some variant) so that session-id style implementations 
> provide PFS.

Sounds good to me.

> 4. I would add to this that we recommend that proxy/CDN implementations 
> signal which data is 0-RTT and which is 1-RTT to the back-end (this was in 
> Colm's original message).

I’m not sure that the TLS 1.3 spec is the right place to make recommendations 
for this. I can see several reasonable approaches here, for example:
- Adding some kind of application level annotation (for example an HTTP header)
- Robustly preventing replay on the 0-RTT hop
- Sending proxied early data with a different TLS ContentType, etc.
I don’t see a need to specifically endorse any particular method here.


There was also a point brought up about the use of ticket_age without 0-RTT. 
I’m not aware of any use for ticket_age other than 0-RTT replay protection. I 
believe that ticket_age is sent with all PSKs mostly out of 
convenience/consistency. I don’t really have an objection to the current 
method, but I also wouldn’t be opposed to moving the ticket age to the early 
data extension, so that it is only sent along with 0-RTT data.
It also seems a little under-specified what implementations unable to compute a 
reasonable ticket age should send (for example in the case of a device without 
a real time clock, or with an external PSK).

From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Eric Rescorla
Sent: Wednesday, May 3, 2017 11:13 PM
To: Colm MacCárthaigh 
Cc: tls@ietf.org
Subject: Re: [TLS] Security review of TLS1.3 0-RTT

[Deliberately responding to the OP rather than to anyone in particular]

Hi folks,

I'm seeing a lot of back and forth about general philosophy and the
wisdom of 0-RTT but I think it would be useful if we focused on what
changes, if any, we need to make to the draft.

I made some proposals yesterday
(https://www.ietf.org/mail-archive/web/tls/current/msg23088.html).

Specifically:
1. A SHOULD-level requirement for server-side 0-RTT defense, explaining
both session-cache and strike register styles and the merits of each.

2. Document 0-RTT greasing in draft-ietf-tls-grease

3. 

Re: [TLS] WGLC: draft-ietf-tls-tls13-19

2017-03-28 Thread Kyle Nekritz
I raised this before in PR #693 
(https://www.ietf.org/mail-archive/web/tls/current/msg21600.html).

I'm not sure it makes sense to rename this to legacy while other parts of the 
document still refer to it. But I'm definitely in favor of deprecating it.

From: TLS  on behalf of Dave Garrett 

Sent: Tuesday, March 28, 2017 4:42 PM
To: tls@ietf.org
Subject: Re: [TLS] WGLC: draft-ietf-tls-tls13-19
    
On Tuesday, March 28, 2017 02:23:33 am Kaduk, Ben wrote:
> Should Alert.level be Alert.legacy_level?

Yep. Trivial to fix, so quick PR filed for it.


Dave

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] (no subject)

2017-02-15 Thread Kyle Nekritz
I have a small preference for option 1. I think in a way #2 and #3 require a 
special case as well for the PSK binder transcript. Unless we consider the 
truncated ClientHello and the rest of the ClientHello separate messages, the 
handshake hash will have to move backwards after computation of the binders 
(removing ClientHello[truncated] and adding the full ClientHello). Allowing the 
hash to solely move forward allows for a bit simpler implementation.

From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Eric Rescorla
Sent: Thursday, February 9, 2017 4:18 PM
To: tls@ietf.org
Subject: [TLS] (no subject)

Hi folks,

We need to close on an issue about the size of the
state in the HelloRetryRequest. Because we continue the transcript
after HRR, if you want a stateless HRR the server needs to incorporate
the hash state into the cookie. However, this has two issues:

1. The "API" for conventional hashes isn't designed to be checkpointed
   at arbitrary points (though PKCS#11 at least does have support
   for this.)
2. The state is bigger than you would like b/c you need to store both
   the compression function and the "remainder" of bytes that don't
   fit in [0]

Opinions differ about how severe all this is, but it's certainly
unaesthetic, and it would be nice if the state that was stored in
the HRR cookie was just a hash output. There seem to be three
major approaches for this (aside from "do nothing").

1. Special case HRR and say that the transcript is either

   CH || SH    (no HRR)

 or

   Hash(CH1) || HRR || CH ... (HRR)  [1]


2. Pre-hash the messages, so that the handshake hash
   becomes:

   Handshake_hash_N = Hash(Hash(msg_1) || Hash(msg_2)
   ... Hash(msg_N))

3. Recursively hash, so that the handshake hash becomes:

   Handshake_hash_N= Hash(Handshake_hash_N-1 || msg_N)

[As Antoine Delignat-Lavaud points out, this is basically making
a new Merkle-Damgard hash with H as the compression function.]


I've posted PR#876, which implements version #2, but we could do any one of the
three, and they all have the same state size. The argument for #1 seems to be
that it's the minimal change, and also the minimal overhead, and the
argument against is that it's non-uniform because CH1 is treated
differently.  We might imagine making it seem more uniform by also
hashing HRR but that doesn't make the code any simpler. Versions #2
and #3 both are more uniform but also more complicated changes.

The arguments for #2 versus #3 are that #3 is somewhat faster
(consider the case where you have a short message to add, #2 always
needs to run the compression function twice whereas #3 can run it
once). However, with #3 it is possible to take a hash for an unknown
transcript and create a new hash that matches that unknown transcript
plus an arbitrary suffix.  This is already a property of the M-D
hashes we are using but it's worse here because those hashes add
padding and length at the end before finalizing, so an extension
wouldn't generally reflect a valid handshake transcript, whereas in
this case you get to append a valid message, because the padding is
added with every finalization stage. I don't know of any reason
why this would be a security issue, but I don't have any proof it's
not, either.

I'd like to get the WG's thoughts on how to resolve this issue over the next
week or so so we can close this out.

-Ekr

[0] The worst-case overhead for SHA-256 is > 64 bytes and for SHA-512
it’s > 128 bytes. The average is half that.

[1] We actually need to do something to make it injective, because
H(CH1) might look like a handshake message, but that should be easy.




Re: [TLS] Deprecating alert levels

2016-10-24 Thread Kyle Nekritz
+1 to both Martin and ekr, I think simplifying these alerts with clearly 
defined behavior for each alert description is the best way forward.

Kyle 

-Original Message-
From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Martin Thomson
Sent: Wednesday, October 19, 2016 10:18 PM
To: Eric Rescorla 
Cc: tls@ietf.org
Subject: Re: [TLS] Deprecating alert levels

On 20 October 2016 at 05:28, Eric Rescorla  wrote:
>> 2.  Are there cases, such as unrecognized name. where it is useful to 
>> indicate that an alert is not fatal?  If so how should this case be handled?
>
>
> I think this alert was a mistake :)

The approach in NSS is to tolerate it, but it's an exception.  I'm happier with a lone
exception than with atrophied and redundant alert levels continuing as they 
are.  I'd prefer to take the PR, with a minor amendment noting the hazard 
caused by unrecognized_name(112).  Clients that intend to accept TLS 1.2 and 
lower probably have to ignore warning alerts until they see that the server is 
doing TLS 1.3 or higher.



Re: [TLS] Deprecating alert levels

2016-10-17 Thread Kyle Nekritz
> You are suggesting that end_of_early_data and close_notify will be marked 
> "fatal".

Yes, technically they would have no alert level (since alert level is 
deprecated), but as far as bytes on the wire changes with this PR, that's 
correct.

> could you expand on why it's a problem?

Alert level is not conveying any additional information since there is one (and 
only one) alert level each alert type must be sent as. Having a separate field 
that does not convey additional information is only providing an opportunity 
for implementations to misuse it and create subtle bugs (for example if one 
implementation ignores warning level alerts, while another incorrectly sends 
alerts defined as fatal at warning level).

> This list is already missing the warning-level "unrecognized_name" alert, and 
> such a change would imply that all new/unrecognized alerts are going to be 
> treated as fatal forever (i.e. that no new warning-level alerts can ever be 
> defined).

That alert is currently defined as a fatal alert (see section 6.2 in the 
current draft). RFC 6066 also states "It is NOT RECOMMENDED to send a 
warning-level unrecognized_name(112) alert, because the client's behavior in 
response to warning-level alerts is unpredictable.", which I think illustrates 
the problem. Allowing new non-fatal alerts to be added later would require that 
existing clients ignore unknown warning alerts, which I think is somewhat 
dangerous.

-Original Message-
From: Martin Thomson [mailto:martin.thom...@gmail.com] 
Sent: Sunday, October 16, 2016 5:53 AM
To: Kyle Nekritz <knekr...@fb.com>
Cc: tls@ietf.org
Subject: Re: [TLS] Deprecating alert levels

I'm sympathetic to this, but just to be clear...

You are suggesting that end_of_early_data and close_notify will be marked 
"fatal".

WFM.

On 15 October 2016 at 08:07, Kyle Nekritz <knekr...@fb.com> wrote:
> After PR #625 all alerts are required to be sent with fatal AlertLevel 
> except for close_notify, end_of_early_data, and user_canceled. Since 
> those three alerts all have separate specified behavior, the 
> AlertLevel field is not serving much purpose, other than providing 
> potential for misuse. We
> (Facebook) currently receive a number of alerts at incorrect levels 
> from clients (internal_error warning alerts, etc.). I propose 
> deprecating this field to simplify implementations and require that any 
> misuse be ignored.
>
>
>
> PR: https://github.com/tlswg/tls13-spec/pull/693
>
>
>
> Kyle
>
>


[TLS] Deprecating alert levels

2016-10-14 Thread Kyle Nekritz
After PR #625 all alerts are required to be sent with fatal AlertLevel except 
for close_notify, end_of_early_data, and user_canceled. Since those three 
alerts all have separate specified behavior, the AlertLevel field is not 
serving much purpose, other than providing potential for misuse. We (Facebook) 
currently receive a number of alerts at incorrect levels from clients 
(internal_error warning alerts, etc.). I propose deprecating this field to 
simplify implementations and require that any misuse be ignored.
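A toy sketch of the proposed receive-side rule (alert codes and names here are illustrative, matching the draft at the time, not normative): the level byte is parsed but ignored, and whether the alert tears down the connection is decided purely by its description.

```python
# Alert descriptions with defined non-fatal behavior in the draft; every
# other description is treated as fatal regardless of the level byte.
CLOSURE_ALERTS = {0, 1, 90}  # close_notify, end_of_early_data, user_canceled

def handle_alert(alert: bytes) -> str:
    _legacy_level, description = alert[0], alert[1]  # level deliberately unused
    return "closure" if description in CLOSURE_ALERTS else "fatal"
```

This way an internal_error mistakenly sent at warning level is still treated as fatal, and no implementation can change peer behavior by toggling the level byte.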

PR: https://github.com/tlswg/tls13-spec/pull/693

Kyle


Re: [TLS] ALPN with 0-RTT Data

2016-10-12 Thread Kyle Nekritz
Reordering the ALPN offer has a couple advantages:
* It explicitly defines the protocol that the 0-RTT data is using on that 
connection. Without this, both the client and the server must independently 
store the ALPN in use (of course the server can put it in the ticket). While 
this should work if implemented properly, there is nothing in the protocol that 
enforces they match before the server accepts the data. If the client ALPN 
offer does happen to change, it’s even possible for the selected ALPN to be one 
that the client didn’t even offer.
* Without it, if the client knows out-of-band (or learns over the application 
protocol) that the server supports multiple protocols, it will not be able to 
use its current connection to start up a 0-RTT connection over the new protocol.
* I think realistically for many clients the protocol used to send 0-RTT data 
will end up being the only protocol that can be used on that connection, even 
if 0-RTT is rejected. Rejected 0-RTT data can’t be resent 1-RTT if a different 
application protocol is used, and it’s difficult API-wise to tell the higher 
layer “Your http/2 early data failed, but you can send http/1.1 requests if you 
want”. Thus it makes sense for these clients to advertise the 0-RTT data’s 
application protocol as the most preferred.

I’m not sure how this makes protocol transitions awkward. I’d still expect 
clients to choose the application protocol that was previously negotiated, so 
preferring h3 wouldn’t cause all h2 0-RTT to go away.

Kyle

From: Eric Rescorla [mailto:e...@rtfm.com]
Sent: Wednesday, October 12, 2016 4:03 PM
To: David Benjamin <david...@chromium.org>
Cc: Kyle Nekritz <knekr...@fb.com>; tls@ietf.org
Subject: Re: [TLS] ALPN with 0-RTT Data



On Wed, Oct 12, 2016 at 1:01 PM, David Benjamin <david...@chromium.org> wrote:
My interpretation was:

1. Client and server remember the previous selected ALPN protocol in the 
session.

2. The client may offer whatever ALPN protocols it likes. It does not need to 
match the previous offer list, though it presumably will unless you've got a 
persistent session cache or so.

3. The client assumes that session's ALPN protocol was selected for purposes of 
minting 0-RTT data.

4. The server must decline 0-RTT if it choses a different ALPN protocol. This 
can be implemented by just doing ALPN negotiation as normal and declining 0-RTT 
if the result does not match. (If client and server prefs have not changed, 
0-RTT will work. If prefs have changed, 0-RTT will miss but future sessions 
will start being 0-RTT-able. I think this is probably the sanest behavior.)

5. The client performs the usual checks on the selected ALPN protocol (must be 
one of the advertised ones). In addition, it enforces that, if 0-RTT was 
accepted, the protocol must match the session one.

This matches the behavior I intended in the spec (and the one NSS implements).

-Ekr


Pinning on the most preferred one causes awkward transitions when the most 
preferred ALPN protocol is not the same as the most commonly deployed one. If 
we ever define, say, h3, we want that one in front of h2 presumably, but we 
wouldn't want to lose 0-RTT against all the h2 servers out there.

I don't think we should be reorder preferences based on the sessions we are 
offering. That makes it much harder to reason about the behavior of preference 
lists.

David

On Wed, Oct 12, 2016 at 3:49 PM Kyle Nekritz <knekr...@fb.com> wrote:
Currently the draft specifies that the ALPN must be "the same" as in the 
connection that established the PSK used with 0-RTT, and that the server must 
check that the selected ALPN matches what was previously used. I find this 
unclear if

1) the client should select and offer one (and only one) application protocol

2) the client can offer multiple protocols, but use the most preferred one 
offered for 0-RTT data

3) the client must send the exact same ALPN extension as in the previous 
connection, but must use the ALPN previously selected by the server (even if it 
was not the client's first offer).



To clarify this we can instead

* allow the client to offer whatever ALPN extension it wants

* define that the 0-RTT data uses the client's most preferred application 
protocol offer (and the server must pick this ALPN if it accepts 0-RTT), 
similar to using the first PSK offer if multiple are offered

* recommend that the client uses the same application protocol that was used on 
the previous connection.



PR: https://github.com/tlswg/tls13-spec/pull/681



Kyle




[TLS] ALPN with 0-RTT Data

2016-10-12 Thread Kyle Nekritz
Currently the draft specifies that the ALPN must be "the same" as in the 
connection that established the PSK used with 0-RTT, and that the server must 
check that the selected ALPN matches what was previously used. I find this 
unclear if
1) the client should select and offer one (and only one) application protocol
2) the client can offer multiple protocols, but use the most preferred one 
offered for 0-RTT data
3) the client must send the exact same ALPN extension as in the previous 
connection, but must use the ALPN previously selected by the server (even if it 
was not the client's first offer).

To clarify this we can instead
* allow the client to offer whatever ALPN extension it wants
* define that the 0-RTT data uses the client's most preferred application 
protocol offer (and the server must pick this ALPN if it accepts 0-RTT), 
similar to using the first PSK offer if multiple are offered
* recommend that the client uses the same application protocol that was used on 
the previous connection.

PR: https://github.com/tlswg/tls13-spec/pull/681

Kyle



Re: [TLS] Asking for certificate authentication when doing 0-RTT

2016-05-24 Thread Kyle Nekritz
What is the rationale for restricting a change in certificate? If the server 
has a new certificate that the client would accept with a full handshake, what 
threat is added by also accepting that certificate with a PSK handshake?

Requiring the certificate to remain the same will make rollout of a new 
certificate more challenging (even with longer-lived certificates), 
particularly on distributed servers where the update is not immediate 
fleet-wide. It will also add noise to metrics when rolling out a new 
certificate when ideally you want everything relatively constant (to be 
confident the new certificate is working properly).

Kyle

-Original Message-
From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Martin Thomson
Sent: Tuesday, May 24, 2016 5:00 PM
To: Ilari Liusvaara 
Cc: tls@ietf.org
Subject: Re: [TLS] Asking for certificate authentication when doing 0-RTT

On 20 May 2016 at 12:41, Ilari Liusvaara  wrote:
> On Wed, May 18, 2016 at 10:10:29AM -0400, Martin Thomson wrote:
>> I just posted this:
>>
>> https://datatracker.ietf.org/doc/draft-thomson-tls-0rtt-and-certs/
>>
>> It's fairly self explanatory.  The idea is to create a way to signal 
>> that the client wants the server to re-authenticate itself, even if 
>> it successful in using a pre-shared key.
>
> - How is the capability signaled? New flag bits in session ticket
>   for these ciphersuites?


I just uploaded -01 that corrects this oversight.

I have raised https://github.com/martinthomson/tls-0rtt-and-certs/issues/1
which tracks whether certificates might change.



Re: [TLS] Narrowing the replay window

2016-03-29 Thread Kyle Nekritz
I think this will better account for the round trip delay if the elapsed_time 
is defined on the client as the time since the request for the session ticket 
(in other words, the time since the client hello was sent). That way both the 
server computed time and the client reported time will include 1 round trip.
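That accounting can be sketched as follows (hypothetical names; `window` absorbs clock skew and network jitter). Because the server-side ticket age and the client-reported elapsed time each include one round trip, the server can compare them directly instead of trying to estimate the RTT.

```python
def early_data_plausible(ticket_issued_at: float, now: float,
                         client_elapsed: float, window: float = 10.0) -> bool:
    # Server-side age: ticket issuance -> ClientHello receipt. If the client
    # measures elapsed time from when it sent the ClientHello that requested
    # the ticket, both measurements span one round trip, so they should agree
    # to within clock skew/jitter, not to within an unknown RTT.
    server_elapsed = now - ticket_issued_at
    return abs(server_elapsed - client_elapsed) <= window
```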

-Original Message-
From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Martin Thomson
Sent: Tuesday, March 29, 2016 6:29 AM
To: tls@ietf.org
Subject: [TLS] Narrowing the replay window

https://github.com/tlswg/tls13-spec/pull/437

In short, have the client report the time since it received the configuration.  
Then have the server reject early data if the time doesn't match.

I think that this is a relatively easy change to make.  Now, your exposure to 
replay is much less.

It's not ironclad, since the server needs to account for a round trip, but I 
think we could probably get the window down to single-digit seconds.



Re: [TLS] analysis of wider impact of TLS1.3 replayabe data

2016-03-14 Thread Kyle Nekritz
If a client nonce cache is used then the threat is essentially the same as with 
ordinary retries.

As far as forward secrecy, yes, the 0-RTT data loses some forward secrecy. I 
think this is a reasonable trade off for a lot of use cases. Currently, TLS 1.2 
implementations commonly use session tickets to improve performance. This 
actually sacrifices more forward secrecy (the whole connection, instead of just 
the initial client->server 0-RTT flight), for a smaller performance gain (it 
doesn’t even save a roundtrip compared with TLS false start). 0-RTT has a 
smaller forward secrecy cost and larger benefit compared to session tickets in 
use today.

Kyle

From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Colm MacCárthaigh
Sent: Monday, March 14, 2016 2:29 PM
To: Subodh Iyengar 
Cc: tls@ietf.org
Subject: Re: [TLS] analysis of wider impact of TLS1.3 replayabe data



On Mon, Mar 14, 2016 at 11:04 AM, Subodh Iyengar wrote:
Like Kyle mentioned the thing that 0-RTT adds to this is infinite 
replayability. As mentioned in the other thread we have ways to reduce the 
impact of infinite replayable data for TLS, making it reasonably replay safe.

That too is a mis-understanding. The deeper problem is that a third party can 
do the replay, and that forward secrecy is gone for what likely is sensitive 
data. Neither is the case with ordinary retries.

--
Colm


Re: [TLS] Limiting replay time frame of 0-RTT data

2016-03-14 Thread Kyle Nekritz
As others have said I do think adding client time solves much more than just 
this specific issue. Running a client nonce cache at scale also likely requires 
client time in order to limit the storage needed to a reasonable amount.

I like the idea of including time since receiving the session ticket instead of 
absolute time. This allows for clients with high clock skew but still an 
accurate steady clock to use 0-RTT, and likely for tighter bounds to be placed 
on the time.  The natural place to put this would be the client hello, where it 
would be unencrypted, which would leak the time of ticket issuance. How 
concerning is this, is it worth encrypting this time?
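A toy sketch of such a time-bounded strike register (all names hypothetical): entries only need to live as long as the acceptance window, so storage is proportional to the window size rather than to the ticket lifetime.

```python
import time

class StrikeRegister:
    """Accept 0-RTT only if the reported client time is within `window`
    seconds of server time and the nonce hasn't been seen in that window."""

    def __init__(self, window: float = 30.0):
        self.window = window
        self.seen = {}  # nonce -> expiry time

    def accept(self, nonce: bytes, client_time: float, now: float = None) -> bool:
        now = time.time() if now is None else now
        # Drop expired entries so state stays bounded by the window.
        self.seen = {n: t for n, t in self.seen.items() if t > now}
        if abs(now - client_time) > self.window or nonce in self.seen:
            return False
        self.seen[nonce] = now + self.window
        return True
```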

Kyle

From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Martin Thomson
Sent: Sunday, March 13, 2016 5:36 AM
To: Erik Nygren <erik+i...@nygren.org>
Cc: tls@ietf.org
Subject: Re: [TLS] Limiting replay time frame of 0-RTT data


I assume that you would run a tight tolerance on the 0RTT resumption by saving 
the client's clock error in the ticket.  That way only clients with bad drift
get no 0RTT. To do that all sessions need time.

I do not see how the server can do this in general without client help. But the 
solution also addresses Brian's concern. The client doesn't need absolute time, 
just relative time: How long since the ticket was issued. Then it's an addition 
to the psk_identity only.
On 13 Mar 2016 1:49 PM, "Erik Nygren" <erik+i...@nygren.org> wrote:
That does seem like a good idea to include a client time stamp in the 0RTT flow 
to let the server force 1RTT in the case where this is too far off as this 
bounds the duration of the replay window.  (I suspect we'll find a whole range 
of other similar attacks using 0RTT.)  An encrypted client timestamp could 
presumably be probed by the server.  (ie, if the server response is a different 
size for timestamp-expired vs timestamp-not-expired an attacker could keep 
probing until they change?)  That does seem like more effort, however.
   Erik

On Sat, Mar 12, 2016 at 7:56 AM, Eric Rescorla <e...@rtfm.com> wrote:
Hi Kyle,

Clever attack. I don't think it would be unreasonable to put a low granularity 
time stamp in the
ClientHello (and as you observe, if we just define it it can be done backward 
compatibly)
or as you suggest, in an encrypted block. With that said, though couldn't you
also just include the information in the HTTP header for HTTP? Do you think 
this is a sufficiently
general issue that it merits a change to TLS.

-Ekr


On Fri, Mar 11, 2016 at 9:21 PM, Kyle Nekritz <knekr...@fb.com> wrote:
Similar to the earlier discussion on 0.5-RTT data, I’m concerned with the long 
term ability to replay captured 0-RTT early data, and the attack vectors that 
it opens up. For example, take a GET request for an image to a CDN. This is a 
request that seems completely idempotent, and that applications will surely 
want to send as 0-RTT data. However, this request can result in a few things 
happening:
1) Resource unavailable
2) Resource cached locally at edge cluster
3) Cache miss, resource must be fetched from origin data center
#1 can easily be differentiated by the length of the 0.5-RTT response data, 
allowing an attacker to determine when a resource has been deleted/modified. #2 
and #3 can also be easily differentiated by the timing of the response. This 
opens up the following attack: if an attacker knows a client has requested a 
resource X_i in the attacker-known set {X_1, X_2, ..., X_n}, an attacker can do 
the following:
1) wait for the CDN cache to be evicted
2) request {X_1, X_2, …, X_(n/2)} to warm the cache
3) replay the captured client early data (the request for X_i)
4) determine, based on the timing of the response, whether it resulted in a 
cache hit or miss
5) repeat with set {X_1, X_2, …, X_(n/2)} or {X_(n/2 + 1), X_(n/2 + 2), …, 
X_n} depending on the result
This particular binary search example is a little contrived and requires that 
no-one else is requesting any resource in the set, however I think it is 
representative of a significant new attack vector that allowing long-term 
replay of captured early data will open up, even if 0-RTT is only used for 
seemingly simple requests without TLS client authentication. This is a much 
different threat than very short-term replay, which is already somewhat 
possible on any TLS protocol if clients retry failed requests.

Given this, I think it is worth attempting to limit the time frame that 
captured early data is useful to an attacker. This obviously doesn’t prevent 
replay, but it can mitigate a lot of attacks that long-term replay would open 
up. This can be done by including a client time stamp along with early data, so 
that servers can choose to either ignore the early data, or to delay the 
0.5-RTT response to 1.5-RTT if the time stamp is far off.

Re: [TLS] analysis of wider impact of TLS1.3 replayabe data

2016-03-14 Thread Kyle Nekritz
I think that idempotency is a relatively straightforward idea, it seems 
reasonable to trust that the application layer will only send 0-RTT data that 
is idempotent in terms of server-side state. I also don't think it's that much 
different than the current state of TLS. Providing strict replay protection 
over an unreliable network link requires that requests are never retried in the 
case of network failure - which is not practical in most applications. If the 
application doesn't already have some way to provide idempotency to requests 
that change state, it's likely already in a bad place (see 
http://blog.valverde.me/2015/12/07/bad-life-advice/).

However, to safely use 0-RTT (in the current draft), the 0-RTT data needs a 
stricter property than idempotency, it needs to be completely replay-safe. 
Unlike TLS now, where the opportunity for replay is linear in relation to the 
number of times a client is willing to retry a request, an attacker can replay 
a request an unlimited number of times.

There's two capabilities a passive eavesdropper gains from this that concern me 
the most:
1) Changes to the response's length and timing can be observed over time
2) The response's timing can be statistically analyzed (since the request 
can be replayed an unlimited number of times)
Mitigating these at the application layer seems much harder, if not impossible, 
to get right.

Adding a client time indicator (as I suggested in the other thread) makes this 
harder, but it certainly doesn't solve either problem. A server-side client 
nonce cache does solve these problems, which I think may be required to use 
0-RTT safely. That said, I echo Bill's comments about the need for 0-RTT in TLS 
1.3. Speed is very important.

Kyle

From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Bill Cox
Sent: Sunday, March 13, 2016 6:23 PM
To: Scott Schmit 
Cc: tls@ietf.org
Subject: Re: [TLS] analysis of wider impact of TLS1.3 replayabe data

On Sun, Mar 13, 2016 at 2:23 PM, Scott Schmit wrote:

So why are we adding a protocol optimization known from the start to be
insecure (or less secure than you'd expect from using a PFS cipher
suite)?

If you require PFS with resolution on the order of seconds to minutes rather 
than hours to days, you probably do not want to use tickets either.  The ticket 
decryption key rotation schedule limits PFS.  0-RTT resumes do not make this 
worse.

What percentage of servers that have a perceived need for 0-RTT will be
able to securely use and benefit from this feature as their
infrastructure is actually implemented?

Well, Google already sees a significant fraction.  Going back to 1-RTT would be 
a significant downgrade.

If almost everyone should turn it off, why are we including it?

Almost every small site on the Internet should turn it off, but the large sites 
that want to enable it could make up a large fraction of all traffic.

Most server admins won't be reading the TLSv1.3 spec.  They're going to
see "shiny feature added specifically in this version that makes it
faster!" with *maybe* a warning that there are risks, which they'll
dismiss because "if it was so insecure, they wouldn't have included it
in the protocol in the first place."  Unless 0-RTT can be fixed, it
looks like an attractive nuisance.

I agree.  Instead of dropping 0-RTT, I think we should make it easy for admins 
to learn about what is involved in using 0-RTT in ways we believe are secure.  
The two modes I am aware of that are potentially as secure as TLS 1.2 session 
resumption are:

- Do 0-RTT session resumption using a session cache, using the ticket as the 
session ID.  This should have the same security as TLS 1.2 resume, right?
- At the HTTP app layer, make all requests that change state transaction based 
with unique transaction numbers, so replay attacks fail to change server state. 
 Done successfully, this should be more secure than TLS 1.2 resumption, 
shouldn't it?

Are we aware of other secure ways to do 0-RTT?

Bill


[TLS] Limiting replay time frame of 0-RTT data

2016-03-11 Thread Kyle Nekritz
Similar to the earlier discussion on 0.5-RTT data, I'm concerned with the long 
term ability to replay captured 0-RTT early data, and the attack vectors that 
it opens up. For example, take a GET request for an image to a CDN. This is a 
request that seems completely idempotent, and that applications will surely 
want to send as 0-RTT data. However, this request can result in a few things 
happening:
1) Resource unavailable
2) Resource cached locally at edge cluster
3) Cache miss, resource must be fetched from origin data center
#1 can easily be differentiated by the length of the 0.5-RTT response data, 
allowing an attacker to determine when a resource has been deleted/modified. #2 
and #3 can also be easily differentiated by the timing of the response. This 
opens up the following attack: if an attacker knows a client has requested a 
resource X_i in the attacker-known set {X_1, X_2, ..., X_n}, an attacker can do 
the following:
1) wait for the CDN cache to be evicted
2) request {X_1, X_2, ..., X_(n/2)} to warm the cache
3) replay the captured client early data (the request for X_i)
4) determine, based on the timing of the response, whether it resulted in a 
cache hit or miss
5) repeat with set {X_1, X_2, ..., X_(n/2)} or {X_(n/2 + 1), X_(n/2 + 2), 
..., X_n} depending on the result
This particular binary search example is a little contrived and requires that 
no-one else is requesting any resource in the set, however I think it is 
representative of a significant new attack vector that allowing long-term 
replay of captured early data will open up, even if 0-RTT is only used for 
seemingly simple requests without TLS client authentication. This is a much 
different threat than very short-term replay, which is already somewhat 
possible on any TLS protocol if clients retry failed requests.
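The binary search in steps 1-5 can be sketched abstractly (the warming and probing callbacks below are stand-ins for the real CDN interactions: warming evicts and re-fills the cache with a chosen subset, probing replays the captured early data and reports whether the response timing looked like a cache hit).

```python
def locate_replayed_resource(candidates, warm_cache, probe_is_hit):
    """Narrow an attacker-known candidate set by halving. Each iteration:
    warm one half of the set (step 2), replay the captured 0-RTT data
    (step 3), and recurse on whichever half the timing implicates (4-5)."""
    while len(candidates) > 1:
        half = candidates[:len(candidates) // 2]
        warm_cache(half)
        if probe_is_hit():
            candidates = half
        else:
            candidates = candidates[len(candidates) // 2:]
    return candidates[0]
```

Against a set of n resources, the captured flight only needs to be replayed about log2(n) times, which is exactly why unbounded replayability is more dangerous than a bounded number of client retries.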

Given this, I think it is worth attempting to limit the time frame that 
captured early data is useful to an attacker. This obviously doesn't prevent 
replay, but it can mitigate a lot of attacks that long-term replay would open 
up. This can be done by including a client time stamp along with early data, so 
that servers can choose to either ignore the early data, or to delay the 
0.5-RTT response to 1.5-RTT if the time stamp is far off. This cuts down the 
time from days (until the server config/session ticket key is rotated) to 
minutes or seconds.

Including the client time also makes a client random strike register possible 
without requiring an unreasonably large amount of server-side state.

I am aware that client time had previously been removed from the client random, 
primarily due to fingerprinting concerns, however these concerns can be 
mitigated by
1) clients can choose to not include their time (or to include a random time), 
with only the risk of their .5-RTT data being delayed
2) placing the time stamp in an encrypted extension, so that it is not visible 
to eavesdroppers


Note: it's also useful for the server to know which edge cluster the early data 
was intended for, however this is already possible in the current draft. In 
ECDHE 0-RTT server configs can be segmented by cluster, and with tickets, the 
server can store cluster information in the opaque ticket.