Re: [TLS] Fwd: New Version Notification for draft-davidben-tls-trust-expr-00.txt

2023-10-20 Thread Colm MacCárthaigh
This is very awesome. Just some quick thoughts:

This extension seems very useful in internal environments with their
own proprietary PKI. The first bullet in the intro does get at this,
but I think it still undersells just how compelling this extension
would be. FSIs/banks, governments, and technology companies often have
CAs with short validity periods and have to perform rotations
frequently. This extension makes that much more manageable and safer.
I think there's a strong internet stability argument here ... many
large organizational outages have been caused by mishandling these
rotations.

In the security / privacy considerations:

1. Since this extension can appear in the CH in the clear, should we
consider the case where network operators may use it to "enforce"
support of particular trust anchors? For example, a firewall may
reject connections based on the trust anchors that clients do or don't
support.

2. Is this another fingerprint, at least at the system level? For
example, corporate users whose organizations add their own TAs may
become fingerprintable as coming from those organizations.

Why sort TrustStores first by name-length, and then lexicographically?
Just seems a little unnecessarily complex and unfriendly to indices.

On Thu, Oct 19, 2023 at 8:38 AM David Benjamin  wrote:
>
> Hi all,
>
> We just published a document on certificate negotiation. It's a TLS 
> extension, which allows the client to communicate which trust anchors it 
> supports, primarily focused on use cases like the Web PKI where trust stores 
> are fairly large. There is also a supporting ACME extension, to allow CAs to 
> provision multiple certificate chains on a server, with enough metadata to 
> match against what the client sends. (It also works in the other direction 
> for client certificates.)
>
> The hope is this can build towards a more agile and flexible PKI. In 
> particular, the Use Cases section of the document details some scenarios 
> (e.g. root rotation) that can be made much more robust with it.
>
> It's very much a draft-00, but we're eager to hear your thoughts on it!
>
> David, Devon, and Bob
>
> ---------- Forwarded message ---------
> From: 
> Date: Thu, Oct 19, 2023 at 11:36 AM
> Subject: New Version Notification for draft-davidben-tls-trust-expr-00.txt
> To: Bob Beck , David Benjamin , Devon 
> O'Brien 
>
>
> A new version of Internet-Draft draft-davidben-tls-trust-expr-00.txt has been
> successfully submitted by David Benjamin and posted to the
> IETF repository.
>
> Name: draft-davidben-tls-trust-expr
> Revision: 00
> Title: TLS Trust Expressions
> Date: 2023-10-19
> Group: Individual Submission
> Pages: 35
> URL:  https://www.ietf.org/archive/id/draft-davidben-tls-trust-expr-00.txt
> Status:   https://datatracker.ietf.org/doc/draft-davidben-tls-trust-expr/
> HTML: 
> https://www.ietf.org/archive/id/draft-davidben-tls-trust-expr-00.html
> HTMLized: https://datatracker.ietf.org/doc/html/draft-davidben-tls-trust-expr
>
>
> Abstract:
>
>This document defines TLS trust expressions, a mechanism for relying
>parties to succinctly convey trusted certification authorities to
>subscribers by referencing named and versioned trust stores.  It also
>defines supporting mechanisms for subscribers to evaluate these trust
>expressions, and select one of several available certification paths
>to present.  This enables a multi-certificate deployment model, for a
>more agile and flexible PKI that can better meet security
>requirements.
>
>
>
> The IETF Secretariat
>
>



--
Colm



Re: [TLS] External PSK design team

2020-01-20 Thread Colm MacCárthaigh
Interested, as it happens - this is something I've been working on at Amazon.

On Mon, Jan 20, 2020 at 8:01 PM Sean Turner  wrote:
>
> At IETF 106, we discussed forming a design team to focus on external PSK 
> management and usage for TLS. The goal of this team would be to produce a 
> document that discusses considerations for using external PSKs, privacy 
> concerns (and possible mitigations) for stable identities, and more developed 
> mitigations for deployment problems such as Selfie. If you have an interest 
> in participating on this design team, please reply to this message and state 
> so by 2359 UTC 31 January 2020.
>
> Cheers,
>
> Joe and Sean



-- 
Colm



Re: [TLS] Broken browser behaviour with SCADA TLS

2018-07-04 Thread Colm MacCárthaigh
On Wed, Jul 4, 2018 at 8:15 AM, David Benjamin 
wrote:
>
> Indeed. The bad feedback was not even at a 2048-bit minimum, but a mere
> 1024-bit minimum. (Chrome enabled far more DHE ciphers than others, so we
> encountered a lot of this.) 2048-bit was completely hopeless. At the time
> of removal, 95% of DHE negotiations made by Chrome used a 1024-bit minimum.
> See here for details:
> https://groups.google.com/a/chromium.org/d/msg/blink-dev/
> ShRaCsYx4lk/46rD81AsBwAJ
>

From the server side: we found that enforcing a 2048-bit minimum was
unworkable; it breaks clients that will negotiate DHE but then fail when
the key exchange happens, including some versions of Java. Because the
breakage happens post-handshake, there was little recourse to fix it. We
did look at fingerprinting the clients and trying to use a different size
for those, but even that led to too high an error rate. So we removed DHE
in general and use ECDHE for forward secrecy.
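
To illustrate (a hypothetical sketch using OpenSSL's cipher-string API;
not what we actually ran):

    /* Keep ECDHE for forward secrecy; strip all finite-field DHE
     * suites so incompatible clients never negotiate them. */
    #include <openssl/ssl.h>

    int prefer_ecdhe_only(SSL_CTX *ctx)
    {
        return SSL_CTX_set_cipher_list(ctx,
                   "ECDHE+AESGCM:ECDHE+CHACHA20:!DHE:!aNULL:!eNULL");
    }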

-- 
Colm


Re: [TLS] TLS 1.2 and sha256

2018-06-11 Thread Colm MacCárthaigh
Just to add to this excellent answer ... there is the signature on the
certificates used, which is independent of the cipher suite you
negotiate but also commonly uses SHA-256. Truly moving away from SHA-256
would require CAs, browsers, etc. to adopt something new there too.

On Mon, Jun 11, 2018 at 2:52 PM, David Benjamin 
wrote:

> In both TLS 1.2 and TLS 1.3, SHA-256 isn't hardcoded per se. It's a
> function of the cipher suite you negotiate (and also, separately, the
> signature algorithm you negotiate). That said, in practice, both are pretty
> solidly dependent on SHA-256. Most options involve it. AES-128-GCM and
> ChaCha20-Poly1305 are currently paired with SHA-256 while only AES-256-GCM
> is paired with SHA-384.
>
> We could certainly define new cipher suites for either of TLS 1.2 and TLS
> 1.3 as needed. But defining a new cipher suite for TLS 1.2 doesn't
> magically deploy it for all existing TLS 1.2 servers. Those servers must
> deploy new code, at which point updating your TLS library to include it
> would also pull in TLS 1.3 anyway (or whatever the latest TLS version is by
> then).
>
> So I think there will likely be no point in bothering with TLS 1.2
> allocations at that point. More options means more combinatorial complexity
> for implementations, which means our rather limited collective
> resources in this space get spread even more thinly.
>
> David
>
> On Mon, Jun 11, 2018 at 5:25 PM Daniel Migault <
> daniel.miga...@ericsson.com> wrote:
>
>> Hi,
>>
>> TLS 1.2 uses sha256 as the PRF hash function. When sha256 is no longer
>> considered secure, I am wondering if we can reasonably envision
>> deprecating sha256 for TLS 1.2, or if TLS 1.2 will at that time be
>> deprecated in favor of TLS 1.X, X >= 3?
>>
>> In other words, I am wondering how much we can assume TLS 1.2 is
>> associated with sha256.
>>
>> Yours,
>> Daniel
>>
>>


-- 
Colm


Re: [TLS] Breaking into TLS to protect customers

2018-03-19 Thread Colm MacCárthaigh
It's true that breaking open cleartext runs counter to the mission of
end-to-end TLS, but it also seems like operators are going to do it if they
can. Whether by staying on plain RSA, using static-DH, MITM through
installing a private trusted CA, or exporting session secrets, they can
certainly do it.  I worry that we'll lose a good opportunity to improve the
security and transparency of things by turning our nose up at that.

Here's a straw-man suggestion:

Suppose we took on a draft that adds a new optional handshake message. The
message could go immediately before FINISHED, in either direction (or
both), and contain an encrypted version of the soon-to-be-session-key.  For
back-compat: it could be encrypted with RSA, or whatever else the endpoints
want to support. This is basically what STEK encrypted tickets look like
today with TLS1.2 anyway, though usually with symmetric encryption, so it's
not that wild a departure.

Obviously this breaks forward secrecy, and allows passive tapping and
session hi-jacking, but then that's the point. But I think there are some
security advantages too:

1/ By making the functionality part of the handshake transcript, it is
unforgeably evident to both sides that it is happening. The proponents of
this functionality claim that everything is opt-in, but this gives some
cryptographic teeth to it.

2/ clients and browsers could easily consider such sessions insecure by
default. This would mean that adopters would have to deploy configurations
and mechanisms to enable this functionality, similar to - but beyond - how
private root CAs can be inserted.

Wouldn't those be good properties to have? Compared to servers secretly
exporting transcripts or session keys, or just having a private root CA
installed which breaks all sorts of certificate verification
infrastructure?

I think the existence of a "standard" here could also serve to encourage
providers to do things the more transparent way, and create less likelihood
of a mass-market of products that could also be used for more surreptitious
tapping.


On Mon, Mar 19, 2018 at 12:32 AM, Daniel Kahn Gillmor  wrote:

> On Thu 2018-03-15 20:10:46 +0200, Yoav Nir wrote:
> >> On 15 Mar 2018, at 10:53, Ion Larranaga Azcue 
> wrote:
> >>
> >> I fail to see how the current draft can be used to provide visibility
> >> to an IPS system in order to detect bots that are inside the bank…
> >>
> >> On the one hand, the bot would never opt-in for visibility if it’s
> >> trying to exfiltrate data…
> >
> > The presumption is that any legitimate application would opt-in, so
> > the IPS blocks any TLS connection that does not opt in.
>
> Thanks for clarifying the bigger picture here, Yoav.
>
> So if this technology were deployed on a network where not all parties
> are mutually trusting, it would offer network users a choice between
> surveillance by the network on the one hand (opt-in) and censorship on
> the other (opt-out and be blocked).  Is that right?
>
> Designing mechanism for the Internet that allows/facilitates/encourages
> the network operator to force this choice on the user seems problematic.
> Why do we want this for a protocol like TLS that is intended to be used
> across potentially adversarial networks?
>
> datacenter operators who want access to the cleartext passing through
> machines they already control already have mechanisms at their disposal
> to do this (whether they can do so at scale safely without exposing
> their customers' data to further risks is maybe an open question,
> regardless of mechanism).
>
> Mechanisms that increase "visibility" of the cleartext run counter to
> the goals of TLS as an end-to-end two-party secure communications
> protocol.
>
> Regards,
>
>  --dkg
>


-- 
Colm


Re: [TLS] Possible timing attack on TLS 1.3 padding mechanism

2018-03-01 Thread Colm MacCárthaigh
There's another, more cache-friendly approach too, which is to record the
position of the highest non-zero byte, in order, as decryption occurs
(while that cache line of plaintext is still in cache). I found that a bit
easier to implement in constant time too, because it's easy to generate an
all-1s mask that's conditional on a non-zero value.
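
For illustration, a minimal sketch of that mask trick (illustrative C,
not the exact code I used):

    #include <stddef.h>
    #include <stdint.h>

    /* Visit every byte in order and remember the index of the last
     * non-zero one (the content-type byte under TLS 1.3 padding).
     * Every byte is examined, so timing is independent of how much
     * padding there is. */
    static size_t last_nonzero_index(const uint8_t *pt, size_t len)
    {
        size_t index = 0;
        for (size_t i = 0; i < len; i++) {
            uint8_t v = pt[i];
            /* 1 if v != 0, else 0, without branching */
            uint8_t nonzero = (uint8_t)((v | (uint8_t)(0 - v)) >> 7);
            /* all-1s mask when nonzero == 1, all-0s otherwise */
            size_t mask = (size_t)0 - (size_t)nonzero;
            index = (mask & i) | (~mask & index);
        }
        return index;
    }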

On Thu, Mar 1, 2018 at 1:52 PM, Paterson, Kenny 
wrote:

> Hi,
>
> I've been analysing the record protocol spec for TLS 1.3 a bit,
> specifically the new padding mechanism. I think there's a possible timing
> attack on a naïve implementation of de-padding. Maybe this is already known
> to people who've been paying more attention than me!
>
> Recall that the padding mechanism permits an arbitrary number of 00 bytes
> to be added after the plaintext and content type byte, up to the max record
> size. This data is then encrypted using whichever AEAD scheme is specified
> in the cipher suite. This padding scheme is quite important for TLS 1.3
> because the current AEAD schemes do leak the length of record plaintexts.
> There should be no padding oracle style attack possible because of the
> integrity guarantees of the AEAD schemes in use.
>
> The idea for the timing attack is as follows.
>
> The natural way to depad (after AEAD decryption) is to remove the 00 bytes
> at the end of the plaintext structure one by one, until a non-00 byte is
> encountered. This is then the content type byte. Notice that the amount of
> time needed to execute this depadding routine would be proportional to the
> number of padding bytes. If there's some kind of response record for this
> record, then measuring the time taken from reception of the target record
> to the appearance of the response record can be used to infer information
> about the amount of padding, and thereby, the true length of the plaintext
> (since the length of the padded plaintext is known from the ciphertext
> length).
>
> The timing differences here would be small. But they could be amplified by
> various techniques. For example, the cumulative timing difference over many
> records could allow leakage of the sum of the true plaintext lengths. Think
> of a client browser fetching a simple webpage from a browser. The page is
> split over many TLS records, each of which is individually padded, with the
> next GET request from the client being the "response record". (This is a
> pretty simplistic view of how a web browser works, I know!). The total
> timing difference might then be sufficient for webpage fingerprinting, for
> example.
>
> I'm not claiming this is a big issue, but maybe something worth thinking
> about and addressing in the TLS 1.3 spec.
>
> There's at least a couple of ways to avoid the problem:
>
> 1. Do constant-time depadding - by examining every byte in the plaintext
> structure even after the first non-00 byte is encountered.
> 2. Add an explicit padding length field at the end of the plaintext
> structure, and removing padding without checking its contents. (This should
> be safe because of the AEAD integrity guarantees.)
>
> Option 2 is probably a bit invasive at this late stage in the
> specification process. Maybe a sentence or two on option 1 could be added
> to the spec.
>
> Thoughts?
>
> Cheers,
>
> Kenny
>
>
>
>



-- 
Colm


Re: [TLS] Last Call: (The Transport Layer Security (TLS) Protocol Version 1.3) to Proposed Standard

2018-02-19 Thread Colm MacCárthaigh
Since this IETF-LC/IESG process is a good chance to get a sanity check, I'd
like to boil down what I think are the nits and risks with 0-RTT, and if
others want to weigh in they can. I'll state my own position at the bottom.

Broadly, I think there are three issues with 0-RTT:

   1) The TLS 1.3 draft allows for 0-RTT data, including things like
requests and headers, to be replayed by attackers.

   2) 0-RTT data, again including requests and headers, has no
cryptographic guarantee of forward secrecy and will likely be protected by
symmetric session ticket encryption keys (STEKs) that can be used quite
broadly, with no limits on re-use or rotation, relying on vendors being
able to share and revoke keys frequently and securely. Basically: if a
vendor's STEK is compromised, then an unbounded number of end-user requests
and headers can be decrypted. This obviously defeats the goal of achieving
forward secrecy.

  3) While no working attack has been found, some cryptographers and
protocol experts believe that the 0-RTT exchange is overly-complex and a
source of risk. Kenny Paterson made the most prominent statement (
http://bristolcrypto.blogspot.com/2017/03/pkc-2017-kenny-paterson-accepting-bets.html
), but I've heard it echoed at several IACR events. It is definitely true
that 0-RTT resumption complicates the TLS state machine and creates unusual
conditions such as needing to restart messaging sequences.

The TLS-WG was chartered with "aiming for one roundtrip for a full
handshake and one or zero roundtrip for repeated handshakes. The aim is
also to maintain current security features" (
https://datatracker.ietf.org/wg/tls/charter/).  But with these 3 issues,
there is clearly a trade-off between security (the S in TLS) and speed.

Issue 3 is a matter of judgement; my personal judgement is that we will see
implementation bugs due to state machine complexity, but there's no
evidence that the cryptographic and protocol semantics are not robust.

With regards to issues 1 and 2, the latest TLS draft makes it possible to
achieve both of these aims. Through the use of single-use session tickets,
it is possible to provide anti-replay and forward-secrecy properties for
0-RTT data. I'm grateful for the changes that were introduced for this.

At the same time though, most vendors have stated that they don't plan to
do that and instead have designed around limited replay time windows,
non-transactional strike registers, and non-forward secure tickets. This is
what I expect to see deployed, and already see with some TLS1.3 deployment
experiments.  TLS1.3 could be more restrictive here; limiting the size of
session tickets to smaller than the size of session state would effectively
forbid any kind of session encoding, which would force the issue, but
several vendors are against it because it doesn't align with current
practices and it incurs the cost of server-side caching. For balance, in
the last year I have heard from most vendors that they do plan to implement
some anti-replay mitigation, beyond simple time-windowing, which goes some
way to protecting users from throttle limits.

I am disappointed by the unfortunate preference for cost-saving over robust
security. Good cryptography usually costs money, or else we'd still be
using RC4. I do think that we will see security and correctness issues due
to replays interacting with non-idempotent services and throttling
configurations. While it's true that browsers can be made to replay
requests already, there are many web and non-HTTP services that are
certainly not tolerant of replays. Secondly, I think that it is inevitable
that vendor security compromises will disclose troves of user requests,
passwords, and credit cards to decryption; but this is perhaps more of a
nation-state-adversary-level risk. Some more detail on attacks related to
issues 1/ and 2/ is available in the security review of 0-RTT data:
https://github.com/tlswg/tls13-spec/issues/1001 .


After all of that, here's my own position:

I strongly support the current TLS1.3 draft progressing to RFC status. I
work at Amazon, where one of our leadership principles is "Disagree and
Commit" (https://www.amazon.jobs/principles); the idea is that it's
important to make yourself heard, but also to move forward and not be
endlessly bogged down. I've been vocal about 0-RTT risks, and certainly
heard and understood, and those concerns have been reflected in generous
changes to the draft. I'm happy that it's possible to build a
forward-secret, non-repayable 0-RTT implementation and that's what I'm
doing. I wish everyone else would too, but that's not consensus; others
have a different weighting for the trade-offs between speed, security and
cost and those views are also legitimate.

But my more important reason for supporting it is that overall TLS1.3 is
much, much better than TLS1.2, including in regards to forward secrecy,
which is now guaranteed for all non-0-RTT data. I still believe that it will
meaningfully increase the 

Re: [TLS] 3rd WGLC: draft-ietf-tls-tls13

2018-01-14 Thread Colm MacCárthaigh
Thanks for the abundant generosity of patience, but I didn't mean that I
wanted to add a note to the text of the I-D; there's been enough delay and
I'm excited to see this progress. I just meant "add a note" in my e-mail
;-) Though I do like your terse note, it's right to the point.

On Sun, Jan 14, 2018 at 9:47 PM, Eric Rescorla <e...@rtfm.com> wrote:

> Hi Colm,
>
> Thanks for your note. This seems straightforward to handle before IETF-LC.
>
> Maybe something like:
> "Note: many application layer protocols implicitly assume that replays are
> handled at lower levels. Tailure to observe these precautions may exposes
> your application to serious risks which are difficult to assess without a
> thorough top-to-bottom analysis of the application stack"?
>
> -Ekr
>
>
> On Sun, Jan 14, 2018 at 12:15 PM, Colm MacCárthaigh <c...@allcosts.net>
> wrote:
>
>>
>> Back during the previous last call, I felt really guilty about bringing
>> up the 0-RTT stuff so late. Even though middle boxes turned out to be a
>> bigger problem to deal with anyway, I just want to say
>> that I'm really grateful for the 0-RTT related changes in the document and
>> for the time and effort that went into all that. I think those changes are
>> sufficient to make a TLS1.3 implementation that handles 0-RTT in a
>> forward-secret, secure and safe way. The changes represent a good
>> compromise between having a secure state and supporting vendors who want to
>> be a bit more loose because their application environment can tolerate it
>> and forward secrecy is not as valuable to their users. Thanks especially to
>> ekr for inventing the fixes, for stewarding the clarifications, and for
>> being awesome about it.
>>
>> At the same time, I just want to add a small note of caution to vendors;
>> if you're going to accept 0-RTT, trying to cut corners by tolerating
>> replays - even a little - is really likely to bite you! I've found even more
>> examples of application protocols and web protocols that implement
>> transactions. Also, if the secrecy of trillions and trillions of users' web
>> requests is going to rest on how well session ticket encryption keys are
>> managed, protected, rotated and revoked, we really owe it to users to come
>> up with some collective guidance for vendors on how to do that well.
>>
>>
>> On Fri, Jan 12, 2018 at 9:10 PM, Sean Turner <s...@sn3rd.com> wrote:
>>
>>> All,
>>>
>>> This is the 3rd working group last call (WGLC) announcement for
>>> draft-ietf-tls-tls13; it will run through January 26th.  This time the WGLC
>>> is for version -23 (https://datatracker.ietf.org/
>>> doc/draft-ietf-tls-tls13/).  This WGLC is a targeted WGLC because it
>>> only addresses changes introduced since the 2nd WGLC on version -21, i.e.,
>>> changes introduced in versions -22 and -23.  Note that the editor has
>>> kindly included a change log in s1.2 and the datatracker can also produce
>>> diffs (https://www.ietf.org/rfcdiff?url1=draft-ietf-tls-tls13-21
>>> &url2=draft-ietf-tls-tls13-23).  In general, we are considering all other
>>> material to have WG consensus, so only critical issues should be raised
>>> about that material at this time.
>>>
>>> Cheers,
>>>
>>> spt
>>
>>
>>
>> --
>> Colm
>>


-- 
Colm


Re: [TLS] 3rd WGLC: draft-ietf-tls-tls13

2018-01-14 Thread Colm MacCárthaigh
Back during the previous last call, I felt really guilty about bringing up
the 0-RTT stuff so late. Even though middle boxes turned out to be a bigger
problem to deal with anyway, I just want to say that I'm
really grateful for the 0-RTT related changes in the document and for the
time and effort that went into all that. I think those changes are
sufficient to make a TLS1.3 implementation that handles 0-RTT in a
forward-secret, secure and safe way. The changes represent a good
compromise between having a secure state and supporting vendors who want to
be a bit more loose because their application environment can tolerate it
and forward secrecy is not as valuable to their users. Thanks especially to
ekr for inventing the fixes, for stewarding the clarifications, and for
being awesome about it.

At the same time, I just want to add a small note of caution to vendors; if
you're going to accept 0-RTT, trying to cut corners by tolerating replays -
even a little - is really likely to bite you! I've found even more examples
of application protocols and web protocols that implement transactions.
Also, if the secrecy of trillions and trillions of users' web requests is
going to rest on how well session ticket encryption keys are managed,
protected, rotated and revoked, we really owe it to users to come up with
some collective guidance for vendors on how to do that well.


On Fri, Jan 12, 2018 at 9:10 PM, Sean Turner  wrote:

> All,
>
> This is the 3rd working group last call (WGLC) announcement for
> draft-ietf-tls-tls13; it will run through January 26th.  This time the WGLC
> is for version -23 (https://datatracker.ietf.org/doc/draft-ietf-tls-tls13/).
> This WGLC is a targeted WGLC because it only addresses changes introduced
> since the 2nd WGLC on version -21, i.e., changes introduced in versions -22
> and -23.  Note that the editor has kindly included a change log in s1.2 and
> the datatracker can also produce diffs (https://www.ietf.org/rfcdiff?
> url1=draft-ietf-tls-tls13-21&url2=draft-ietf-tls-tls13-23).  In general,
> we are considering all other material to have WG consensus, so only
> critical issues should be raised about that material at this time.
>
> Cheers,
>
> spt



-- 
Colm


Re: [TLS] A closer look at ROBOT, BB Attacks, timing attacks in general, and what we can do in TLS

2018-01-08 Thread Colm MacCárthaigh
On Mon, Jan 8, 2018 at 6:29 AM, Hubert Kario  wrote:
>
> except that what we call "sufficiently hard plaintext recovery" is over
> triple
> of the security margin you're proposing as a workaround here
>
> 2^40 is doable on a smartphone, now
> 2^120 is not doable on a supercomputer, and won't be for a very long time
>

This isn't how these kinds of attacks work. 2^40 would be small for
something that could be attacked in parallel by a very large computing
system, but it's an absolutely massive difficulty factor against a live
on-line attack. Can you propose a credible mechanism where an attacker
would be able to mount, say, billions (to use the low end) of repeated
connections without detection? And that's before they extract the signal.
And since the delay can't be avoided, the attack also costs thousands of
years of attacker-controlled computer time: a billion serialized
connections at ~20 seconds of average delay is over 600 years of waiting.
I have much, much more confidence in that simple kind of defense giving me
real-world security than I do in my code being absolutely perfect, which
I've never achieved.

I'm being stubborn and replying here because I want to argue against a
common set of biases I've seen that I think are harmful to real-world
security. Making attacks billions to trillions of times harder absolutely
does protect real-world users, and we shouldn't be biasing simply for what
we think the research community will take seriously or not scoff at, but
for what will actually protect users.  Those aren't the same.

I'll give another example: over the last few years we have significantly
*regressed* on the real-world security of TLS by moving to AES-GCM and
ChaCha20. Both of their cipher suites leak the exact content length and
make content-fingerprinting attacks far easier than they were previously
(CBC blocks made this kind of attack exponentially more expensive). The
current state is that passive tappers with relatively unsophisticated
databases can de-cloak a high percentage of HTTPS connections. This
compromises secrecy, the main user benefit of TLS. That staggers me, but
it's also an uninteresting attack to the research community; it's long
been known about and isn't going to result in much publication or research
grants.



> > This bears repeating: attempting to make OpenSSL rigorously constant time
> > made it *less* secure.
>
> yes, on one specific hardware type, because of a bug in implementation
>
> I really hope you're not suggesting "we shouldn't ever build bridges
> because
> this one collapsed"...
>
> also, for how long was it *less* secure? and for how long was it
> vulnerable to
> Lucky13?


I'm saying that trade-offs are complicated and that constant-time "rigor"
isn't worth it sometimes. Adding ~500 lines of hard-to-follow,
hard-to-maintain code with no systematic way to confirm that it stays
correct was a mistake, and it led to a worse bug. Other implementations
chose simpler approaches, such as code balancing, that were close to
constant-time but not rigorously so. I think the latter was ultimately
smarter; all code is dangerous because bugs can be lurking in its midst,
and those bugs can be really serious, like memory disclosure and remote
execution, so leaning towards simple and easy-to-follow code should be
heavily weighted. So when we see the next bug like Lucky13, which was
un-exploitable against TLS but still publishable and research-worthy, we
should lean towards simpler fixes rather than complex ones, while also
just abandoning whatever algorithm is affected and replacing it.


> > Delaying to a fixed interval is a great approach, and emulates how
> clocking
> > protects hardware implementations, but I haven't yet been able to succeed
> > in making it reliable. It's easy to start a timer when the connection is
> > accepted and to trigger the error 30 seconds after that, but it's hard to
> > rule out that a leaky timing side-channel may influence the subsequent
> > timing of the interrupt or scheduler systems and hence exactly when the
> > trigger happens. If it does influence it, then a relatively clear signal
> > shows up again, just offset by 30 seconds, which is no use.
>
> *if*
>
> in other words, this solution _may_ leak information (something which you
> can
> actually test), or the other solution that _does_ leak information, just
> slowly so it's "acceptable risk"
>

Sorry, I'll try to be more clear: A leak in a fixed-interval delay would be
catastrophic, because it will result in a very clean signal, merely offset
by the delay. A leak in a random-interval delay will still benefit from the
random distribution and require many samples.

-- 
Colm


Re: [TLS] A closer look at ROBOT, BB Attacks, timing attacks in general, and what we can do in TLS

2018-01-04 Thread Colm MacCárthaigh
On Thu, Jan 4, 2018 at 4:17 AM, Hubert Kario  wrote:

> > No, I strongly disagree here. Firstly, frustrating attackers is a good
> > definition of what the goal of security is. Some times increasing costs
> for
> > attackers does come at the cost of making things harder to analyze or
> debug,
> > but we shouldn't make the latter easier at the expense of the former.
>
> No, the goal of security is to stop attacks from being successful, not
> make them harder. Making attack harder is security through obscurity.
> Something that definitely doesn't work for open source software.
>

Unless you're shipping one-time pads around, cryptography is founded on
making successful attacks highly improbable, but not impossible. There are
measures of the likelihood of key and plaintext recovery for all of the
established algorithms. The delay approach is no different, and the risk
can be expressed in mathematical ways.  The numbers are lower, for sure;
delays can add a security factor of maybe up to 2^40, but that's still
very, very effective and, unlike encryption or hashes, does not have to
withstand long-term attacks.

This bears repeating: attempting to make OpenSSL rigorously constant time
made it *less* secure. The LuckyMinus20 bug was much worse than the Lucky13
bug the code was trying to fix. It would have been better to leave it
un-patched (at least for TLS, maybe not DTLS). A delay in the error case,
on the other hand, would have made either issue un-exploitable in the real
world. Evaluating that trade-off takes a lot of "grey area" analysis,
though; one has to have a sense of judgement for how much risk a complex
code change is "worth", being mindful that complex code changes come with
their own risks.

> honestly, I consider this approach completely misguided. If you are OK with
> tying up a socket for 30 seconds, simply start a timer once you get the
> original client hello (or the first message of second flight, in TLS 1.2),
> close the socket if the handshake is not successful in 30 seconds. In case
> of errors, send nothing, let it timeout. The only reason why this approach
> to constant time error handling is not used is because most people are not
> ok with tying up resources for so long.
>

This is real code we use in production; thankfully errors are very
uncommon, but connections also cost very little, in part due to work done
for DDOS and trickle attacks, a different kind of security problem.

Delaying to a fixed interval is a great approach, and emulates how clocking
protects hardware implementations, but I haven't yet been able to succeed
in making it reliable. It's easy to start a timer when the connection is
accepted and to trigger the error 30 seconds after that, but it's hard to
rule out that a leaky timing side-channel may influence the subsequent
timing of the interrupt or scheduler systems and hence exactly when the
trigger happens. If it does influence it, then a relatively clear signal
shows up again, just offset by 30 seconds, which is no use.
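
For what it's worth, the skeleton of the fixed-deadline timer itself is
simple, e.g. on Linux with timerfd (a sketch; the hard part, per the
above, is everything the timer interacts with):

    #include <sys/timerfd.h>
    #include <time.h>
    #include <unistd.h>

    /* Arm a 30-second deadline when the connection is accepted; the
     * event loop (not shown) surfaces any handshake error only when
     * this timer fd becomes readable. */
    int arm_error_deadline(void)
    {
        int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
        if (tfd < 0) {
            return -1;
        }
        struct itimerspec deadline = {
            .it_value = { .tv_sec = 30, .tv_nsec = 0 },
        };
        if (timerfd_settime(tfd, 0, &deadline, NULL) < 0) {
            close(tfd);
            return -1;
        }
        return tfd; /* poll it alongside the connection's socket */
    }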

-- 
Colm


Re: [TLS] A closer look at ROBOT, BB Attacks, timing attacks in general, and what we can do in TLS

2018-01-03 Thread Colm MacCárthaigh
On Wed, Jan 3, 2018 at 3:45 AM, Hubert Kario  wrote:
>
> > *Second: hide all alerts in suspicious error cases*
> > Next, when the handshake does fail, we do two non-standard things. The
> > first is that we don't return an alert message, we just close the
> > connection.
> >
> > *Third: mask timing side-channels with a massive delay*
> > The second non-standard thing we do is that in all error cases, s2n
> behaves
> > as if something suspicious is going on and in case timing is involved, we
> > add a random delay. It's well known that random delays are only partially
> > effective against timing attacks, but we add a very very big one. We
> wait a
> > random amount of time between a minimum of 10 seconds, and a maximum of
> 30
> > seconds.
>
> Note that both of those things only _possibly_ frustrate attackers while
> they
> definitely frustrate researchers trying to characterise your
> implementation as
> vulnerable or not. In effect making it seem secure while in reality it may
> not
> be.
>

No, I strongly disagree here. Firstly, frustrating attackers is a good
definition of what the goal of security is. Sometimes increasing costs for
attackers does come at the cost of making things harder to analyze or
debug, but we shouldn't make the latter easier at the expense of the
former.

In practical terms; it's not that big a deal. For the purposes of research
it's usually easy to remove the delays or masking - I did it myself
recently to test various ROBOT detection scripts; turning off the delay was
a one line code patch and I was up and running, it hardly hindered research
at all.

Similarly, we've already had to reduce the granularity of TLS alerts, so
TLS experts and analysts are used to having to dive into code and
step-throughs to debug some kinds of problems. I can't count how many hours
I've spent walking through why a big opaque blob didn't precisely match
another big opaque blob. We could make all of that easier by logging and
alerting all sorts of intermediate states, but it would be a terrible
mistake because it would leak so much information to attackers.

As for delays possibly making an attacker's job harder: I'm working on some
firmer, signal-analysis-based grounding for the approach, as the impact
varies depending on the amount of noise present, as well as the original
distribution of measurements due to ordinary scheduling and network jitter,
but the approach certainly makes timing attacks take millions to trillions
more attempts, and can push real-world timing leaks well into unexploitable
territory.

-- 
Colm


Re: [TLS] A closer look at ROBOT, BB Attacks, timing attacks in general, and what we can do in TLS

2017-12-14 Thread Colm MacCárthaigh
On Thu, Dec 14, 2017 at 5:01 PM, Hanno Böck <ha...@hboeck.de> wrote:

> On Thu, 14 Dec 2017 16:45:57 -0800
> Colm MacCárthaigh <c...@allcosts.net> wrote:
>
> > But what would that look like? What would we do now, in advance, to
> > make it easy to turn off AES? For example.
>
> I think this is the wrong way to look at it.
>
> From what I'm aware nobody is really concerned about the security of
> AES. I don't think that there's any need to prepare for turning off AES.
>

Well, DJB is a notable concerned critic of AES and its safety in some
respects ... but I was using AES as kind of a worst-case scenario since so
many things do depend on it and it's especially hard to leave. I'm not
aware of some ground-breaking cryptanalysis :) But I do think the question
is worth having an answer for. I think we *do* need to prepare for turning
off AES; there's always a chance we might have to.

-- 
Colm


Re: [TLS] A closer look at ROBOT, BB Attacks, timing attacks in general, and what we can do in TLS

2017-12-14 Thread Colm MacCárthaigh
Bringing this back to TLS-WG territory. Deprecating algorithms is hard work
and can take a long time. Having been through MD5, RC4, 3DES, SHA1
deprecations and CBC de-prioritisations, it was a lot of work and network
effects work against rapid changes.

What else could we be doing here? One option might be to say that
implementations should always maintain at least two active algorithms for
everything, both used with some frequency, and hence both likely to be
optimized, the goal being to be able to turn off one at a moment's notice
with no availability or performance impact.

But what would that look like? What would we do now, in advance, to make it
easy to turn off AES? For example.


On Thu, Dec 14, 2017 at 2:58 PM, Watson Ladd  wrote:

> Let's not forget defense 0: migrating away from broken algorithms
> (which means turning them off). The fact that we didn't switch MTI
> away from RSA encryption in TLS 1.1 after these attacks were
> disclosed, or even in TLS 1.2, means that we've got a very long time
> before some sites can turn off these algorithms. Given that some
> places can't turn off SSL v3, it's not clear we can ever turn off a
> widely implemented protocol.
>
> Sincerely,
> Watson Ladd
>



-- 
Colm


[TLS] A closer look at ROBOT, BB Attacks, timing attacks in general, and what we can do in TLS

2017-12-14 Thread Colm MacCárthaigh
TLS folks,

A few weeks ago the s2n team got a mail from US CERT asking us to take a
look for any Bleichenbacher attack issues and get back to them. We
didn't have any issues (thankfully!), but it was a good opportunity for us
to review how we defend against BB and other related attacks.  The notice
was of course based on the excellent work of Hanno Böck, Juraj Somorovsky,
and Craig Young.

Based on the notice, we also went looking in some other code-bases and
implementations we have access to, and did find issues, both classic BB98
style attacks, and small timing side channels. We've worked with vendors
and code owners in each case and fixes are in place, new releases made, and
so on.

Based on our experiences with all of this over the last few weeks, I'd like
to summarize and throw out a few suggestions for making TLS stacks more
defensive and robust against problems of this class. One or two may even be
worth considering as small additions to the forthcoming TLS RFC, and I'd
love to get feedback on that.

*First: the basic specific defense against BB98*
In s2n we went with the boring, basic, per-the-RFC defense against BB98. We
pre-generate a random PMS:

https://github.com/awslabs/s2n/blob/master/tls/s2n_client_key_exchange.c#L64

then we over-write this PMS only if RSA succeeds:

https://github.com/awslabs/s2n/blob/55a641dc29d17780620c16713854f1c9fd31f7ce/crypto/s2n_rsa.c#L165

since it's important not to leak timing, we use a special
"constant_time_copy_or_dont" routine to do the over-write:

https://github.com/awslabs/s2n/blob/master/utils/s2n_safety.c#L70

we then allow the handshake to proceed, and fail it later:

https://github.com/awslabs/s2n/blob/088240d081953131deefb27a70fa6f728156f0cf/tls/s2n_client_finished.c#L35

So far everything is standard, though we did notice in reviewing some other
implementations that things can be unnecessarily complicated around
handling the first step. I'm including the code links here just as an
example that literally following the RFC guidance is fairly doable, but it
helps a lot to have a constant-time copy-or-not kind of routine to make it
much easier.
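
For readers who don't want to click through, the core of such a routine
looks something like this (an illustrative sketch, not the exact s2n
code):

    #include <stddef.h>
    #include <stdint.h>

    /* Copy src over dst when dont == 0, leave dst untouched otherwise.
     * The same loads and stores happen either way, so the choice is
     * invisible to a timing observer. */
    static void constant_time_copy_or_dont(uint8_t *dst, const uint8_t *src,
                                           size_t len, uint8_t dont)
    {
        uint8_t nonzero = (uint8_t)((dont | (uint8_t)(0 - dont)) >> 7);
        uint8_t mask = (uint8_t)(nonzero - 1); /* 0xff: copy, 0x00: don't */

        for (size_t i = 0; i < len; i++) {
            dst[i] = (uint8_t)((src[i] & mask) | (dst[i] & (uint8_t)~mask));
        }
    }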

*Second: hide all alerts in suspicious error cases*
Next, when the handshake does fail, we do two non-standard things. The
first is that we don't return an alert message, we just close the
connection. This is intentional, and even though it violates the written
standards, we believe it's better to optimize for frustrating attackers and
not leaking information Vs making occasional debugging by TLS experts
easier. I'd be interested in the thoughts of others here.  We did find
other implementations that just close() a connection, and we've never
noticed inter-operability problems. Should we tolerate this in the RFC?

*Third: mask timing side-channels with a massive delay*
The second non-standard thing we do is that in all error cases, s2n behaves
as if something suspicious is going on and in case timing is involved, we
add a random delay. It's well known that random delays are only partially
effective against timing attacks, but we add a very, very big one: we wait
a random amount of time between a minimum of 10 seconds and a maximum of 30
seconds, with the delay being granular to nanoseconds. This puts a delay of
20 seconds on average on every potential attacker measurement (though
obviously they can parallelize) and adds roughly 2^34 possible delay values
(about 34 bits of timing noise) to each measurement. The effectiveness of
this technique depends on the size and distribution of the timing channel
being leaked, as well as the distribution of measurement latency generally,
but if the timing leak is, say, 10 microseconds, then the delay likely
increases attacker difficulty by a factor of probably hundreds of billions.
I'd be interested in thoughts on this too, as a general recommendation that
something TLS stacks can/should do when there's an error is to mask it with
a big random delay.

The main "downside" of this delay is that it can tie up resources - but
we're not as concerned with that. Firstly, an attacker can always cause a
connection to stall for 30 seconds, and even ordinary network conditions
like packet loss can trigger that. That's why we chose 30 seconds as our
upper bound. Second, connections are cheap, especially in
asynchronous/epoll driven models, or any model that isn't one-connection
per process. We prefer to have the defense in depth. It's worth noting that
this protection also helps with other timing attacks, such as Lucky13 (read
https://aws.amazon.com/blogs/security/s2n-and-lucky-13/ for my write up on
how the earlier version of this, with microsecond granularity, wasn't
enough to defeat Lucky13, but still made it millions of times harder for an
attacker) or even AES timing issues.

The remainder of this mail doesn't concern changes that could be made in the
TLS draft, but rather just other information that may be useful for
implementors. Sorry if I'm abusing the purpose of the list a little, but
hopefully the last item 

Re: [TLS] Publication of draft-rhrd-tls-tls13-visibility-00

2017-10-23 Thread Colm MacCárthaigh
On Mon, Oct 23, 2017 at 3:30 PM, Benjamin Kaduk  wrote:
>  There are no doubt folks here would claim that the writing has been on the 
> wall for
> five years or more that static RSA was out and forward secrecy was on
> the way in, and that now is the right time to draw the line and drop the
> backwards compatibility.In fact, there is already presumed WG
> consensus for that position, so a strong argument indeed would be needed
> to shift the boundary from now.  I won't say that no such argument can
> exist, but I don't think we've seen it yet.

I don't have too strong an interest in this thread, it's not going
anywhere, and I don't mind that. But I do want to chime in and point
out that forward secrecy is not completely on the way in. With STEK-based
0-RTT, it sounds like many implementors are happy to see users'
requests, cookies, passwords and other secret tokens protected only by
symmetric keys that are widely shared across many machines and
geographic boundaries, with no defined key schedule, usage
requirements or forward secrecy. Clearly, the consensus has been
willing to accept that trade-off, and there is definite wiggle room.

-- 
Colm



Re: [TLS] draft-ietf-tls-tls13-21: TLS 1.3 record padding removal leaks padding size

2017-08-15 Thread Colm MacCárthaigh
On Tue, Aug 15, 2017 at 1:55 PM, Hubert Kario <hka...@redhat.com> wrote:
> On Tuesday, 15 August 2017 00:55:50 CEST Colm MacCárthaigh wrote:
>> On Mon, Aug 14, 2017 at 8:16 PM, Hubert Kario <hka...@redhat.com> wrote:
>> > the difference in processing that is equal to just few clock cycles is
>> > detectable over network[1]
>>
>> The post you reference actually says the opposite; "20 CPU cycles is
>> probably too small to exploit"
>
> exactly what we though about cbc padding at the time TLS 1.1 was published...

I'm not going to defend the poor design of TLS1.1 padding, but it does
remain unexploitable over real-world networks. The Lucky13 attack that
you reference is practical against DTLS, but not TLS. It is worth
understanding the nuance, because the differences can help us continue
to make TLS more robust and hint at where to optimize. The property that
has protected TLS vs. DTLS is non-replayability, so it's important
we keep that.

>> ... and even today with very low
>> latency networks and I/O schedulers it remains very difficult to
>> measure that kind of timing difference remotely.
>
> simply not true[1], you can measure the times to arbitrary precision with any
> real world network connection, it will just take more tries, not infinite
> tries

Surely the Nyquist limits apply? The fundamental resolution of
networks is finite. Clock cycles are measured in partial billionths of
a second, but even 10Gbit/sec networks use framing (85 bytes minimum)
in a way that gives you a resolution of around 70 billionths of a
second (85 bytes is 680 bits, which takes ~68 ns to serialize at
10^10 bits/sec). Nyquist says that to measure a signal you need a
sampling resolution twice that of the signal itself ... that's about 2
orders of magnitude of distance to cover in this case.

>> But per the post, the
>> larger point is that it is prudent to be cautious.
>
> exactly, unless you can show that the difference is not measurable, under all
> conditions, you have to assume that it is.
>
>> > When you are careful on the application level (which is fairly simple when
>> > you just are sending acknowledgement message), the timing will still be
>> > leaked.
>> There are application-level and tls-implementation-level approaches
>> that can prevent the network timing leak. The easiest is to only write
>> TLS records during fixed period slots.
>
> sure it is, it also limits available bandwidth and it will always use that
> amount of bandwidth, something which is not always needed

Constant-time schemes work by taking the maximum amount of time in
every case. This fundamentally reduces the throughput, because small
payloads don't get a speed benefit.

> we are not concerned if the issue can be workarouded, we want to be sure that
> the TLS stack does not undermine application stack work towards constant time
> behaviour

The TLS stack can take a constant amount of time to encrypt/decrypt a
record, regardless of padding length, but it's very difficult to see
how it can pass data to/from the application in constant time; besides
the approach I outlined, which you don't like.

Note that these problems get harder with larger amounts of padding.
Today the lack of padding makes passive traffic analysis attacks very
easy. It's extremely feasible for an attacker to categorize request
and content lengths (e.g. every page on Wikipedia) and figure out what
page a user is browsing. That's a practical attack that definitely
works today, and it's probably the most practical and most serious
attack that we do know works. The fix for that attack is padding, and
quite large amounts are needed to defeat traffic analysis. But that
will make the timing challenges harder. In that context: it's
important to remember; so far those timing attacks have not been
practical. We don't want to optimize for the wrong problem.

-- 
Colm



Re: [TLS] draft-ietf-tls-tls13-21: TLS 1.3 record padding removal leaks padding size

2017-08-14 Thread Colm MacCárthaigh
On Mon, Aug 14, 2017 at 8:16 PM, Hubert Kario  wrote:
> the difference in processing that is equal to just few clock cycles is
> detectable over network[1]

The post you reference actually says the opposite; "20 CPU cycles is
probably too small to exploit" ... and even today with very low
latency networks and I/O schedulers it remains very difficult to
measure that kind of timing difference remotely. But per the post, the
larger point is that it is prudent to be cautious.

> When you are careful on the application level (which is fairly simple when you
> just are sending acknowledgement message), the timing will still be leaked.

There are application-level and tls-implementation-level approaches
that can prevent the network timing leak. The easiest is to only write
TLS records during fixed period slots.

For example, suppose your process can handle 100 connections
concurrently; you can then divide 1 ms into 100 slots of 10
microseconds each. Every 1 ms a writer thread or process
'visits' each connection (you may use an epoll/kqueue-driven I/O loop
or similar for this) during its fixed slot and sends its pending
output.
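
A rough sketch of that writer loop (illustrative; flush_pending_records
is a hypothetical stand-in for writing a connection's buffered records):

    #include <stdint.h>
    #include <time.h>

    #define SLOTS     100
    #define PERIOD_NS 1000000ULL          /* 1 ms period    */
    #define SLOT_NS   (PERIOD_NS / SLOTS) /* 10 us per slot */

    /* Hypothetical: write this connection's buffered TLS records. */
    extern void flush_pending_records(int conn);

    void writer_loop(void)
    {
        struct timespec ts;
        for (;;) {
            clock_gettime(CLOCK_MONOTONIC, &ts);
            uint64_t ns = (uint64_t)ts.tv_sec * 1000000000ULL
                        + (uint64_t)ts.tv_nsec;

            /* only the current slot's owner may write, so send times
             * carry no per-record processing signal */
            flush_pending_records((int)((ns % PERIOD_NS) / SLOT_NS));

            /* sleep until the next slot boundary */
            uint64_t next_ns = (ns / SLOT_NS + 1) * SLOT_NS;
            struct timespec until = {
                .tv_sec  = (time_t)(next_ns / 1000000000ULL),
                .tv_nsec = (long)(next_ns % 1000000000ULL),
            };
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &until, NULL);
        }
    }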

-- 
Colm



Re: [TLS] 32 byte randoms in TLS1.3 hello's

2017-07-26 Thread Colm MacCárthaigh
On Wed, Jul 26, 2017 at 1:57 PM, Martin Rex  wrote:
>
> Through the 10x compression of the RDRAND output, which will provably
> create an incredibly huge amount of collisions, the attacker will be
> unable to identify any particular output values of RDRAND.
>
> Your conceived attack could only work under the condition that
> 10 RDRAND consecutive outputs are always fully deterministic, and
> that also the seed used by RDRAND will be fully deterministic to
> the attacker -- or can otherwise be learned out-of-band by the attacker
> -- while at the same time this property will remain invisible to
> all external randomness tests.
>

I think this is pretty easy for some conceivable attacks. Suppose that your
adversary makes RDRAND a DRBG seeded with wall-clock time and device-ID.
That's going to produce deterministic output, but it will pass all of the
tests and appear to be random. If the attacker knows your device-ID, or
even just a range of possible device-IDs, they can confirm via a match.

The 10x compression would just make them do a bit more work.


-- 
Colm


Re: [TLS] 32 byte randoms in TLS1.3 hello's

2017-07-26 Thread Colm MacCárthaigh
On Wed, Jul 26, 2017 at 11:58 AM, Martin Rex  wrote:

> With RDRAND, you would use e.g. SHA-256 to compress 10*256 = 2560 Bits of
> a black-box CPRNG output into a 256-bit _new_ output that you
> actually use in communication protocols.
>

If the relation between the RDRAND input and the output of your function is
fixed, then your attacker can just do the same thing. It doesn't help at
all really. You have to mix RDRAND with something else that is unknowable
to the attacker as part of the process.

-- 
Colm


Re: [TLS] 32 byte randoms in TLS1.3 hello's

2017-07-26 Thread Colm MacCárthaigh
On Wed, Jul 26, 2017 at 6:22 AM, Martin Rex  wrote:

> Since you also have no idea whether and how the internal hardware design
> behind Intel RDRAND is backdoored, you should not be using any of its
> output without an at least 10x cryptographic compression in any case.
>

Obviously your CPU can fully compromise you locally, so that's not very
interesting to think about. But for remote attacks, like the one you
describe here, where an adversary may use predictable PRNG output, it is
probably better to mix RDRAND output with something else. There are a few
layers of defense here, such as multi-source NRBGs, or personalization
strings. Those significantly distance the output from the RDRAND. The kind
of compression you mention here can be easily precomputed and tables
generated by someone with a large amount of resources, since it's a pure
function.

In BoringSSL, and s2n, we mix RDRAND in as part of the reseeding. But the
initial seed came from urandom (which is not pure RDRAND). In s2n, we also
use personalization strings to provide another degree of defense.
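
As an illustrative sketch of that kind of mixing (assuming x86 RDRAND
intrinsics, Linux getrandom() and OpenSSL's one-shot SHA256, compiled
with -mrdrnd; this is not the actual BoringSSL or s2n reseed path):

    #include <immintrin.h>
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/random.h>

    /* Fold 32 bytes of RDRAND output together with 32 bytes of OS
     * entropy; predicting the seed requires predicting both inputs. */
    int mixed_seed(uint8_t out[SHA256_DIGEST_LENGTH])
    {
        uint8_t buf[64];

        if (getrandom(buf, 32, 0) != 32) {
            return -1;
        }
        for (int i = 0; i < 4; i++) {
            unsigned long long r;
            if (_rdrand64_step(&r) != 1) {
                return -1;
            }
            memcpy(buf + 32 + 8 * i, &r, sizeof(r));
        }
        SHA256(buf, sizeof(buf), out);
        return 0;
    }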


-- 
Colm


Re: [TLS] 32 byte randoms in TLS1.3 hello's

2017-07-24 Thread Colm MacCárthaigh
On Mon, Jul 24, 2017 at 8:29 AM, Watson Ladd  wrote:

> Don't use bad prngs. And don't buy products from vendors who ship back
> doors and refuse to come completely clean when confronted.
>

Just yesterday DJB posted a blog post about AES-CTR-DRBG, one of the most
widely-used PRNGs, and he points out a security optimization that can be
applied to it, one that escaped years of review. The optimization only
applies if you're generating large chunks of random data, so doesn't apply
to TLS, where the chunks are small; but it's still interesting that we're
still finding improvements and problems in this area.

The PRNG sits at the very bottom of the security of TLS, and biases there
have the potential to break everything, including back in time; they could
defeat PFS and uncloak years' worth of data. We don't always know what's
bad at the time that we are using it; e.g. arc4random was considered fine
for years.

I think it's wise to take some measures to handle the "Well, if it were
broken, how would we add defense in depth ...".

-- 
Colm


Re: [TLS] 32 byte randoms in TLS1.3 hello's

2017-07-24 Thread Colm MacCárthaigh
On Mon, Jul 24, 2017 at 8:15 AM, Stephen Farrell 
wrote:

> Now if some TLS1.3 deployment were affected by a dual-ec
> attack, it'd seem like the -21 version of Random might be
> even better than the TLS1.2 version, for the attacker.
>

I think the fix for this is really at the application level; if you want
defense-in-depth against PRNG problems, it's probably best to use separate
RNG instances for public data (e.g. client_random, server_random,
explicit_IV) and for secret data (keys) so that a leak in the public data
doesn't compromise the private one. We do this in s2n, and I think
BouncyCastle does it too.
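
As a toy illustration of that split (assumed design, not s2n's or
BouncyCastle's actual code), the point is simply two independently seeded
generators, so that bias observed in public values reveals nothing about
the state that produces keys:

    import hashlib
    import os

    class ToyDRBG:
        # A toy hash-ratchet DRBG for illustration; not a vetted design.
        def __init__(self, seed):
            self.state = hashlib.sha256(seed).digest()

        def generate(self, n):
            out = b""
            while len(out) < n:
                self.state = hashlib.sha256(self.state + b"ratchet").digest()
                out += hashlib.sha256(self.state + b"output").digest()
            return out[:n]

    public_rng = ToyDRBG(os.urandom(48))  # client_random, explicit IVs
    secret_rng = ToyDRBG(os.urandom(48))  # keys and key shares
    client_random = public_rng.generate(32)
    key_material = secret_rng.generate(32)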

A protocol-level fix probably isn't as helpful because the attacker can
make more connections and collect more data to derive more and more
information about the RNG state anyway.

-- 
Colm


Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-20 Thread Colm MacCárthaigh
On Thu, Jul 20, 2017 at 10:21 AM, Stephen Farrell  wrote:

> >
> > It is odd ... and I'm being deliberately provocative to get at the
> > doublethink. It is impossible to consider this mode wiretapping and not
> > claim the browsers support it today. Plainly, they do.
>
> In what sense does a browser "support" a server leaking a private
> key via some proprietary interface? That makes zero sense to me
> and seems sheer hyperbole, as you said yourself. And not useful.
>

I think it's most useful to focus on the real world, and what works and
doesn't work in the real world, and the implications for user security,
again in the real world.  I don't think it's useful to engage in
disconnected arguments and debate that lack that focus.

In the real world, the kind of wiretapping you are talking about works
today, people are using it, so plainly it is supported. Let's stay grounded
in that reality.

> happening, and is not already possible.
>
> When using crypto in a network protocol, it is impossible to know
> or not know that a peer is implementing crypto well or badly.
>

It absolutely is possible to know that a peer is implementing crypto badly;
for example if they use RC4, or have static DH parameters, those can be
detected. I can't fathom why those concerned with the problems of static-DH
are not advocating for dynamic-DH as a requirement. At a minimum that seems
like the most concrete real-world action that will prevent it. If nothing
is done, then static-DH remains possible.
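
A hedged sketch of what that client-side check could look like (invented
bookkeeping, not from any TLS stack): remember the server key shares seen
per host, and flag a repeat across connections.

    seen_shares = {}

    def check_server_key_share(host, key_share_bytes):
        shares = seen_shares.setdefault(host, set())
        if key_share_bytes in shares:
            # A repeated share across connections suggests static DH.
            print("warning: %s reused a DH key share" % host)
        shares.add(key_share_bytes)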


> > It will also continue to be
> > possible to MITM traffic, if you have the RSA key, and some dh-static
> > opponents even advocate for this. I have seen no intellectually
> consistent
> > explanation for why that is ok,
>
> I never said it was "ok." I said it wasn't part of the TLS RFCs.
>
> The point here is to not specify mechanisms that enable wiretapping
> to the extent that is feasible


Two observations:

1. As static-DH makes plain, TLS1.3 continues to enable this kind of
wiretapping. Are you proposing a modification? Do you then agree with my
suggestion that static-DH be actively forbidden and that clients can reject
duplicate DH params?

2. My concern is to specify things that will improve real-world security.
This should always be our goal.

> (you seem to assume that all uses
> of crypto "support" wiretapping, since it's always possible for an
> implementation to leak keys - that's gibberish IMO, but I'm using
> the term in your sense in this paragraph).


That's not gibberish and it's a really really important point. It *is*
always possible to leak keys, or plaintext. We can't ignore that. That's
part of the security model. Our task is then to make it as unlikely as
possible and to accommodate the needs of users in ways that discourage it.

> why that won't be abused by coercive
> > authorities,
>
> It could be. It'd still be abuse IMO.
>

I think it's a lot less likely that signaled, opt-in infrastructure would
be used by coercive authorities. It works ok in the corporate cases, where
they control both ends, but coercers couldn't just show up and say "Use a
static DH and give us the key" to anyone. Instead they'd have to say
"enable this option that browsers don't enable by default". This doesn't
seem realistic.

-- 
Colm


Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-20 Thread Colm MacCárthaigh
On Thu, Jul 20, 2017 at 9:55 AM, Stephen Farrell <stephen.farr...@cs.tcd.ie>
wrote:

> On 20/07/17 17:43, Colm MacCárthaigh wrote:
> > that's the term that people keep applying,
>
> That term was appropriate for draft-green as justified in [1]
> That's disputed by some folks but it's the correct term.
>

If you maintain that draft-green is Wiretapping, then that is also the
correct term for what is happening today. Today, operators are using a
static key, used on the endpoints they control, to decrypt traffic
passively. The only difference is DH vs RSA, and I know you'll agree that's
not material.

> Claiming that current browsers "support" wiretapping is plain
> odd to me and not that useful for the discussion but isn't a
> TLS protocol issue as we don't currently have, and don't have
> a WG proposal for a draft-green like wiretapping API as part
> of the TLS WG set of RFCs.
>

It is odd ... and I'm being deliberately provocative to get at the
doublethink. It is impossible to consider this mode wiretapping and not
claim the browsers support it today. Plainly, they do.

We don't live in an abstract theoretical world in which this is not already
happening, and is not already possible. It will also continue to be
possible to MITM traffic, if you have the RSA key, and some dh-static
opponents even advocate for this. I have seen no intellectually consistent
explanation for why that is ok, why that won't be abused by coercive
authorities, and hence why it is not better to have something in between;
that gives providers what they claim to need, but not the coercers.

-- 
Colm


Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-20 Thread Colm MacCárthaigh
On Thu, Jul 20, 2017 at 12:44 AM, Salz, Rich  wrote:

> It’s like saying “all browsers that support TLS support wiretapping
> because of the static RSA key exchange.”
>
>
>
> It’s a little disingenuous
>


It sure is! And hyperbolic, but that's the term that people keep applying,
so it's clarifying to use it consistently whenever we talk about this.

While I'm at it, I can't make sense of:

"Using the RSA key to decrypt traffic to your server is wire-tapping."
"Using the RSA key to impersonate and MITM your server isn't wire-tapping."

We'll still support the latter, which is much worse than the former :( I
can't see how offering something /between/ the two, more secure than the
latter, isn't a net improvement on where we'll be with TLS1.3.

-- 
Colm


Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-20 Thread Colm MacCárthaigh
On Wed, Jul 19, 2017 at 11:40 PM, Andrei Popov 
wrote:

> Hi Colm,
>
>
>
>- Today browsers do turn on wiretapping support in the normal case.
>There's nothing they can do about it, and it works right now.
>
> This is news to me; which browsers do this (so that I can avoid using
> them)?
>

Like I said, all of them. I don't know of a single browser that forces
DH-only and insists on unique DH parameters today, and it wouldn't be
practical. So if we're going to call it wire-tapping when an operator who
holds the server's private key uses that key to decrypt traffic, then in
those terms all browsers currently have support for that turned on, as
it's part of existing versions of TLS.

-- 
Colm


Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-19 Thread Colm MacCárthaigh
On Wed, Jul 19, 2017 at 3:50 PM, Stephen Farrell 
wrote:

> That is a perfect example of the hideous dangers of all of this.
> The implication in the above is that browsers would/should turn
> on wiretapping support in the normal case.
>

Today browsers do turn on wiretapping support in the normal case. There's
nothing they can do about it, and it works right now.

If static-DH is permitted (and I don't mean if we release a document
describing it, I mean if we don't forbid static DH parameters), this will
also continue to be the case. My take: I think we should forbid static DH
for this reason.

Next, if proxies are deployed as the mechanism, this will also continue to
be the case. Again, nothing a browser can do, and I argue that real-world
security is left much much worse for users too.

On the other hand, if we standardize a signaled, opt-in, mechanism; then
browsers have more fine-grained options. I suspect that browsers would NOT
support this by default, just as they don't accept private CAs by default.
Instead the browser would have to be configured per a corporate policy. But
they could /also/ choose to disable incognito mode in such circumstances,
to be more fair to end-users. It's an example of something that can't be
done today at all.

Such a mode is likely fine for the corporate users and what they want, but
is not so useful for intelligence agencies and so on, precisely because it's
signaled and a bit more transparent. In real-world terms, I would regard it
as much /less/ likely to create the kind of MITM infrastructure that's
useful for that case.

-- 
Colm


Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-19 Thread Colm MacCárthaigh
On Wed, Jul 19, 2017 at 10:55 AM, Ted Lemon <mel...@fugue.com> wrote:

> Sorry, the more I think about how to do this in a way that doesn't make
> things worse, the less faith I have that it is possible.   But if you know
> of a way to do it, I certainly don't oppose you doing it.   I'm not an
> expert: the fact that I don't see how to do it doesn't mean it can't be
> done.
>

I think it was Nick who had the idea of adding a message to the TLS
transcript that encrypts the PMS or session key under a public key. This
had some advantages:

* Static-DH can be banned and clients can check for changing DH parameters.
* The technique would be signaled and opt-in to clients; they can terminate
the connection if they don't want it. Clients could insist on being
configured to support it, not support it in incognito mode, etc.
* The PMS/key could be encrypted under a different key than the key pair
used to authenticate the server; this means that the servers needn't have a
key that can decrypt these transcripts. It can be kept offline. It also
means that the investigation team needn't have access to the server's
certificate private key. Much better all-round.
* It can still work with tcpdump/wireshark etc., but is not very useful to
three-letter agencies.
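
As a rough sketch of the escrow-encryption step (illustrative only; the
message layout, signaling, and key management are not specified anywhere,
and this just uses RSA-OAEP from the Python cryptography package):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Hypothetical escrow key pair, held offline by the investigation
    # team; deliberately distinct from the server's TLS credentials.
    escrow_private = rsa.generate_private_key(public_exponent=65537,
                                              key_size=3072)
    escrow_public = escrow_private.public_key()

    def escrow_record(session_key):
        # Encrypt the session key under the escrow public key; the
        # result is what a signaled, opt-in message could carry.
        return escrow_public.encrypt(
            session_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(),
                         label=None))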


>
> On Wed, Jul 19, 2017 at 7:45 PM, Colm MacCárthaigh <c...@allcosts.net>
> wrote:
>
>>
>>
>> On Wed, Jul 19, 2017 at 10:38 AM, Ted Lemon <mel...@fugue.com> wrote:
>>
>>> The problem is that the actual solution to this problem is to accept
>>> that you aren't going to be able to decrypt the streams, and then figure
>>> out what to do instead.   Which is work the proponents of not doing that
>>> are not interested in doing, understandably, and which is also not the work
>>> of this working group.
>>>
>>> I'm skeptical that there is a way for this working group to solve the
>>> proposed problem, but if there is, it involves figuring out a way to do
>>> this that doesn't make it easy to MiTM *my* connections, or, say, those
>>> of activists in dangerous places.
>>>
>>
>> I find this a very bizarre outcome that works against our collective
>> goals. If there is no mechanism at all, then it is quite likely that
>> organizations will use static-DH or stay on TLS1.2. Those are bad options,
>> in my opinion, because there's no signaling or opt-in to the client. We can
>> do much better than that.  Client opt-ins are far from academic. For
>> example, browser's incognito modes may refuse to support such sessions, if
>> they knew what was going on.
>>
>> It's also bad if organizations end up deploying static-DH and that means
>> we can't do things like checking for changing DH parameters.
>>
>> It seems like we would be rejecting a good opportunity to make what the
>> network operators want work in a better and more secure way, while making
>> it harder for passive observers and coercive authorities, to use the same
>> mechanism for other purposes. What do we gain? beyond a hollow moral
>> victory.
>>
>> --
>> Colm
>>
>
>


-- 
Colm


Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-19 Thread Colm MacCárthaigh
On Wed, Jul 19, 2017 at 10:38 AM, Ted Lemon  wrote:

> The problem is that the actual solution to this problem is to accept that
> you aren't going to be able to decrypt the streams, and then figure out
> what to do instead.   Which is work the proponents of not doing that are
> not interested in doing, understandably, and which is also not the work of
> this working group.
>
> I'm skeptical that there is a way for this working group to solve the
> proposed problem, but if there is, it involves figuring out a way to do
> this that doesn't make it easy to MiTM *my* connections, or, say, those
> of activists in dangerous places.
>

I find this a very bizarre outcome that works against our collective goals.
If there is no mechanism at all, then it is quite likely that organizations
will use static-DH or stay on TLS1.2. Those are bad options, in my opinion,
because there's no signaling or opt-in to the client. We can do much better
than that.  Client opt-ins are far from academic. For example, browser's
incognito modes may refuse to support such sessions, if they knew what was
going on.

It's also bad if organizations end up deploying static-DH and that means we
can't do things like checking for changing DH parameters.

It seems like we would be rejecting a good opportunity to make what the
network operators want work in a better and more secure way, while making
it harder for passive observers and coercive authorities, to use the same
mechanism for other purposes. What do we gain? beyond a hollow moral
victory.

-- 
Colm


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-19 Thread Colm MacCárthaigh
On Wed, Jul 19, 2017 at 9:26 AM, Blumenthal, Uri - 0553 - MITLL <
u...@ll.mit.edu> wrote:
>
> Same question. At some point in time you need to decide to start examining
> all the traffic. At that point you can start capturing its plaintext. The
> proposed alternative seems to be capturing the ciphertext and the key so
> the ciphertext can be decrypted later – which makes no sense to me.
>

That's not what I've seen. Instead, I see administrators creating port
mirrors on demand and then filtering the traffic they are interested in
using standard tcpdump rules, and I see MITM boxes that selectively decrypt
some traffic to look inside it and apply some kind of security filtering.
In the former case, DNS lookups and IP/port destinations are commonly used
to trigger some suspicions too.


> They are, though it's a big change. I think we can do better than logs; a
> mechanism that's in TLS itself could be opt-in and user-aware, and so less
> likely to be abused in other situations. There's also some basic security
> model advantages to encrypting the PMS under a public-private key pair, and
> one that isn't using the private key that the servers themselves hold.
>
>
>
> To use the key you need to have the corresponding ciphertext stored.
>

That's not how the tcpdump/wireshark approach usually works. You give it
the private key and it decrypts the TLS connection as it's happening.

-- 
Colm


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-19 Thread Colm MacCárthaigh
On Wed, Jul 19, 2017 at 7:48 AM, Watson Ladd  wrote:
>
> On Jul 17, 2017 12:29 PM, "Roland Dobbins"  wrote:
>
> On 17 Jul 2017, at 21:11, Watson Ladd wrote:
>
>> How do you detect unauthorized access separate from knowing what
>> authorization is?
>>
>
> I think we're talking at cross purposes, here.  Can you clarify?
>
>
> You said you need to look at packets to see unauthorized access. How do
> you know that access is unauthorized unless the authorization system is doing
> the monitoring?
>

Over the years I've met with businesses who have these kinds of setups.
The way it usually works is that the analysis is secondary and based on a
suspicion of some kind. For example: if an employee is suspected of insider
trading, or stealing proprietary data, then the administrators may take the
extreme measure of inspecting all of their traffic. This is why many
corporate environments have those "No expectation of privacy" disclaimers.

Another example is where traffic to a set of suspicious destinations is
subject to a higher level of scrutiny. For example, maybe traffic bound for
well known file sharing services.

I've never seen an environment with pervasive always-on monitoring;
creating a trove of plaintext would be a net security negative, and
organizations rarely have the resources it would take to keep or analyze
all of it anyway.

>> Yes, but you'll rot13 or rot 128 the file first. Why wouldn't you?
>>
>
> Many don't.  And being able to see rot(x) in the cryptostream has value.
>
>
> As the IRA pointed out to the Prime Minister, she needed to get lucky
> every time.
>

Where I come from, if you're quoting the IRA to support an argument, nobody
takes you seriously.


> The tools that network engineers and security personnel need analyze
> network traffic.  Logs are insufficient.
>
>

They are, though it's a big change. I think we can do better than logs; a
mechanism that's in TLS itself could be opt-in and user-aware, and so less
likely to be abused in other situations. There's also some basic security
model advantages to encrypting the PMS under a public-private key pair, and
one that isn't using the private key that the servers themselves hold.

-- 
Colm


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-16 Thread Colm MacCárthaigh
On Sun, Jul 16, 2017 at 2:08 AM, Ted Lemon  wrote:

> What it means for users to be denied the benefits of TLS 1.3 is that they
> don't get, for example, perfect forward secrecy.  Since the proposal was to
> do away with that anyway, but for all users, not just some users, that
> doesn't seem like it is better than just continuing to use TLS 1.2.
>

DH by default is just one benefit of TLS1.3; there are many others, or else
we wouldn't be shipping it with so many changes and improvements. Otherwise
there would be no TLS1.3, only a deprecation of the non-PFS cipher
suites. But that plainly isn't the case.

The main one I'm concerned about is me having to support non-TLS1.3 clients
;-) 1RTT key exchange is worth it alone.

-- 
Colm


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-16 Thread Colm MacCárthaigh
On Sun, Jul 16, 2017 at 1:52 AM, Salz, Rich  wrote:

> I would also like to understand why TLS 1.2 is not sufficient for, say,
> the next five years.
>

It probably is ... but isn't that the problem? If the answer is "Just let
them stay on TLS1.2", I find it very hard to interpret the arguments
against all of this as resulting in anything other than grandstanding.
Clearly the users would be no better off, and would also end up denied the
other benefits of TLS1.3.

This seems self-defeating, when there is so easy a path that may improve
things for all cases (forbid static-DH, add an opt-in mechanism instead).

-- 
Colm


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-16 Thread Colm MacCárthaigh
On Sun, Jul 16, 2017 at 12:59 AM, Stephen Farrell  wrote:
>
> (*) I am not asking that people tell me that "pcap+key-leaking"
> might work, but for them to describe when that works but nothing
> else works. And that has to include the details of what it is
> they can only find in the recovered cleartext that cannot be
> detected without access to cleartext using this particular
> method.
>

Of course other techniques could work; every system involved, from the
network devices to the endpoints, is practically Turing complete. For me,
the more interesting question is really whether the providers/users are
likely to take on the costs of doing it differently, or whether they are
more likely to block TLS1.3 and stay on legacy crypto. Given everything
I've read, I think the latter is more likely.

> > Are you skeptical that existing network operators don't do this kind
> > of decryption?
>
> I believe that people do this kind of key-leak+pcap decryption.
>
> People do all sorts of other unwise things too (myself included,
> and fairly frequently;-), that is not a reason to encourage more
> of "it" for any "it."
>

Well, they have the keys, and they have the desire, so I expect them to do
it by some means. The question then becomes not about promoting or
encouraging it, but how we may limit the damage and achieve the best
outcome. I don't understand how proxies are a better solution, as I've
outlined; they have drastically worse security properties.

> I would also note that the "use a proxy" argument seems to me
> to mostly be offered as a strawman counter-argument by folks
> who would like to break TLS via static DH, and doesn't seem to
> be a common argument offered by those against breaking TLS.
> (Adding proxies is of course another way to break TLS, depending
> on how and where it's done.)
>

I can't make sense of this paragraph. A static-DH solution has no need for
proxies, so I'm unclear on why a static-DH proponent would suggest using
them. If proxies aren't the alternative, then what is? Are you suggesting
none? That's the brinkmanship approach, which is valid of course; you can
risk TLS1.3 adoption. And the pcap operators may flinch first, or you may.
But is it really wise to ask your opponent for evidence that they won't
flinch first?


> > I'm not skeptical of that at all, but would be interested in what
> > acceptable evidence would look like.
>
> I'm not sure of the phrase "acceptable evidence" but regardless
> of that:
>
> TLS is an important protocol, extremely widely used. For any attempt
> to weaken or break TLS, I think the onus is on the proponents of the
> break-TLS proposal to produce convincing evidence that their scheme
> will at least be a net positive, considering the entire ecosystem
> that is dependent on TLS. And even if there is evidence that a scheme
> would be a net positive, it may still be a bad idea, if the negative
> aspects of the scheme have serious enough impacts in some use-cases
> for TLS.
>
> That's a pretty high bar, yes. And so it should be. I'm not at all
> clear it can be cleared, ever.
>

Real world: They have the keys, so they can break FS by using proxies if
they want. If they do that, and they likely would, everyone is much /worse/
off because now there's less FS, more plaintext floating around, and more
exploitable software floating around. Is that really a sensible outcome?


>
> > Though I'll point out again: TLS 1.3 is the new thing that we want
> > to gain adoption, so really we should be looking for evidence that
> > it's /not/ a burdensome change.
>
> Sure, that is another fine thing to do. It'd be helped along if we
> had evidence about the precise scenarios in which the pcap+key-leak
> wiretapping is the only possible usable approach. That hasn't been
> described on the list. (It has been asserted that such scenarios
> exist, and it has been claimed that we should all know and accept
> all this already, but those were TBBA non-arguments.)
>

Imagine you're blindfolded, with your finger on a button that fires a gun.
The gun might be pointed at you, it might be pointed at your opponent. Your
opponent has no blindfold. Your argument is "Tell me if the gun is pointed
at me or I'm going to push the button". Now if the gun is pointed at them,
can you really trust them? And if it's pointed at you, why should they care
to help you out?

-- 
Colm


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-15 Thread Colm MacCárthaigh
On Sat, Jul 15, 2017 at 12:12 PM, Salz, Rich  wrote:

> > On the public internet, it's increasingly common for traffic to be MITMd
> in the form of a CDN.
>
> A CDN is not a middlebox, it is not a MITM.  It is a site that the origin
> has hired to act as a "front" to it.
>

Don't take it as a criticism; I've built two CDNs, and I think they are an
awesome and important part of the internet. But CDNs certainly are middle
boxes; they sit between the origin and the client. A box, in the middle.

What I'm trying to get at is the inconsistency of logic we are applying. So
far responses on the mailing list have been saying "Don't use pcap, instead
run proxies". For some reason we find proxies less distasteful, even though
they have unbounded capability to destroy forward secrecy, even though they
must be in-line and hence subject to exploit, even though it comes at
massive cost (in my opinion), even though it's much harder to use proxies
to examine plaintext in a forensic and selective way. Not only is this very
unlikely to be an answer that will work for the enterprise network folks,
if they did take our advice, it would actually be /worse/ security than
what they have today. That has to be a bizarre outcome to promote. For
what? Moral purity?

With regard to CDNs, that's more illogic: why are we so against a key being
shared to decrypt session keys, but fine with a key being shared to
facilitate total impersonation? I can't make sense of it.

PS: I expect everyone who argues against facilitating PCAP decryption on
the ground of "Forward secrecy is a must have" to make identical demands of
0-RTT, which can do much more damage to FS.

-- 
Colm


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-15 Thread Colm MacCárthaigh
On Fri, Jul 14, 2017 at 11:12 PM, Daniel Kahn Gillmor  wrote:

>  * This proposed TLS variant is *never* acceptable for use on the public
>Internet.  At most it's acceptable only between two endpoints within
>a datacenter under a single zone of administrative control.
>

>  * Forward secrecy is in general a valuable property for encrypted
>communications in transit.


> If there's anyone on the list who disagrees with the above two
> statements, please speak up!
>

I agree with the second statement, but I don't really follow the logic of
the first. On the public internet, it's increasingly common for traffic to
be MITMd in the form of a CDN. Many commenters here have also responded
"Just use proxies". I don't get how that's better.

A proxy sees all of the plaintext, not just selected amounts. All of the
same coercion and compromise risks apply to a proxy too, but since it
undetectably sees everything, that would seem objectively worse from a
security/privacy risk POV.

Or put another way: if these organizations need to occasionally inspect
plaintext, would I prefer that it's the kind of system where they have to
go pull a key from a store, and decrypt specific ciphertexts on demand
offline, or do I want them recording plaintext *all* of the time inline? It
seems utterly bizarre that we would collectively favor the latter. We end
up recommending the kinds of systems that are an attacker's dream.

Here's what I'd prefer:

 * Don't allow static DH. In fact, forbid it, and recommend that clients
check for changing DH params.
 * For the pcap-folks, define an extension that exports the session key or
PMS, encrypted under another key. Make this part of the post-handshake
transcript.
 * pcap-folks can do what they want, but clients will know and can issue
security warnings if they desire. Forbidding static DH enforces this
mechanism, and we can collectively land in a better place than we are
today.


-- 
Colm


Re: [TLS] chairs - please shutdown wiretapping discussion...

2017-07-10 Thread Colm MacCárthaigh
On Mon, Jul 10, 2017 at 8:14 AM, Nikos Mavrogiannopoulos 
wrote:

> Certainly, but that doesn't need to happen on this working group, nor
> protocols which implement similar solutions need to be called TLS.
>

I'll belabor this point: rather than thinking about what these providers
are owed, which is nothing, it is better to think about what is best for
TLS overall. Selfishly, I have a strong preference to see TLS1.3 succeed
and that within a matter of years, we no longer have to support TLS1.2 or
earlier versions.

If some networks and operators feel that they can't feasibly use TLS1.3,
they're very likely to stay on the older versions. We could consider
brinkmanship; and see who blinks first if we try to disable the older
versions anyway, but that's a gambit that often makes hostages out of
innocent users, and can end up serving to taint TLS1.3 with reliability
issues and hold back its adoption.

It's clear that there is a strong distaste here for the kind of MITM being
talked about, and many wish not to give it any kind of stamp of approval
within the standard, lest that itself taint TLS1.3 with security
concerns. Proxies are proposed as a work-around instead, as that avoids any
changes to the protocol. But this seems like cutting off our noses to spite
our faces. Proxies tend to be always-on and render plaintext much more
accessible than a tcpdump tap. Proxies are also inline, read-write, and
subject to exploit in a worse way than a tcpdump tap (which can be network
isolated). In real security terms, I absolutely buy that proxies would be
worse for overall security and all of the properties that TLS is supposed
to provide, in some environments. That would seem like a bizarre conclusion.

-- 
Colm


Re: [TLS] chairs - please shutdown wiretapping discussion...

2017-07-09 Thread Colm MacCárthaigh
On Sat, Jul 8, 2017 at 6:04 PM, Eric Mill  wrote:

>
> Stating that proxies are not viable for enterprise organizations due to
> the scale and complexity of their network environments is subjective,
> generally not well-detailed, and much more open to skepticism.
>
> The burden on the proposers should be to address this skepticism, and to
> justify to the working group why enterprises that are large enough and
> well-funded enough to have such vast and complex networks cannot invest in
> upgrading those networks to an approach that doesn't rely on directly
> weakening their own connection security and potentially the security of
> others' through the unintended consequences of formalizing this RFC.
>

TLS1.3 isn't a debate, or a legal argument. It's an actual thing in the
world that we'd like to see succeed and be as pervasive as possible. The
folks reporting that it won't work are doing us a favor; they don't owe us
anything.

So when those users show up saying "This won't work for me", it is better
to have a very open mind and make every attempt to understand them. If
their explanations are not clear, then burrow further. Be charitable and
lean as heavily towards why they may be right, search for good reasoning in
/their/ favor, and state it as well as it can possibly be presented. Only
on those terms try to tackle it with alternatives.

If the presenters are wrong, and the skepticism is merited, that approach
will still work. But if they happen to be right, it makes the alternatives
or adaptations more clear, or the necessity for them more obvious.
Dismissing concerns with trivial and shallow analysis can serve to diminish
the success of TLS1.3, because the users don't need to adopt it, and can
end up blocking it and creating a failure of "TLS 1.3 doesn't work in XXX
environments".

-- 
Colm


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-09 Thread Colm MacCárthaigh
On Sat, Jul 8, 2017 at 9:27 AM, Watson Ladd  wrote:
>
> > They also don’t want to install TLS proxies all over the place.  That’s a
> > large extra expense for them.
>
> Nginx exists. What's the blocker?


Here's how these networks work today:

* Key servers are configured to use RSA KX, no DH.
* In some cases, all outbound (e.g. internet) connectivity is also proxied
via such a server. Clients are made to trust a private CA for this purpose.
* Not all data is logged or stored, and it's almost certainly not kept in
the plain, as that would increase overall risk.
* Admins use port-mirrors and tools like tcpdump to investigate/scan
suspicious flows from time to time, or as part of a targeted investigation.
Occasionally it might also be used for debugging. The RSA keys can be used
to render the connections plain on demand.
* That doesn't mean that the RSA private keys are readily available, they
are often very tightly controlled.

Migrating to proxies would:

* Be a very big operational change. Gotta get nginx on all of the boxes, is
that even possible?
* Completely change the access mechanisms, invalidate almost all of the
operational controls.
* Probably more than double the basic compute costs associated with
encryption.
* Create more sensitive environments where plaintext is floating around.


That doesn't mean that these vendors/operators are owed a solution, or an
easy-to-insert more-or-less-compatible-with-today mechanism. But it does
help assess whether they are really likely to adopt TLS1.3 to begin with.

-- 
Colm


Re: [TLS] Closing on 0-RTT

2017-06-26 Thread Colm MacCárthaigh
On Sun, Jun 25, 2017 at 11:43 PM, Ilari Liusvaara 
wrote:

> I understood that the cache probing attack requires much less replays
> than the other side-channel ones. And furthermore, distributing the
> replays among zones makes the attack easier (because replay with the
> cached data hot doesn't tell that much).
>

In practice, with real-world HTTP caches, one replay is often sufficient.
That's because, in addition to the faster load time, you can look at the
cache headers (like max-age) to pinpoint that it was the replay that put
the item in the cache. This would work with DNS too, where TTL or RRSET
cycling leaks more information in the same way.
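
To illustrate why one replay can suffice, here's a hypothetical
attacker-side probe (fetch is an assumed HTTP client; the Age header is
standard HTTP caching):

    import time

    def replay_filled_cache(fetch, url, replay_time):
        resp = fetch(url)  # attacker's own probe of the resource
        # Age is the number of seconds the object has sat in cache, so
        # it pinpoints when the object was inserted.
        age = int(resp.headers.get("Age", "0"))
        inserted_at = time.time() - age
        # If insertion coincides with our replay, it was the replayed
        # request (not some earlier one) that populated the cache.
        return abs(inserted_at - replay_time) < 2.0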

Using more zones does help, and if the attacker were targeting a busy
cache, then it can certainly help to weed out the noise and increase the
likelihood of finding a zone/node where the cache is empty to begin with.

-- 
Colm


Re: [TLS] Separate APIs for 0-RTT

2017-06-14 Thread Colm MacCárthaigh
On Wed, Jun 14, 2017 at 3:23 PM, David Benjamin 
wrote:

> That is, it is not the identity of the bytes that matters much. It's
> whether the connection has been confirmed when you perform an unsafe
> action. I believe this still satisfies the properties we want, but without
> breaking standard interfaces. Very near the TLS stack, at the point where
> the record boundary abstraction starts leaking (it's common to only give
> you back a single record on read), either API is equally easy to provide.
> The looser phrasing is needed for composition once you start going up a
> layer or to.
>

Suppose a request, or a frame, spans two different client certificate
authentication contexts (or unauthenticated, and authenticated); how is
that handled today? Or is it just forbidden?

-- 
Colm


Re: [TLS] Separate APIs for 0-RTT

2017-06-13 Thread Colm MacCárthaigh
On Tue, Jun 13, 2017 at 11:04 AM, Benjamin Kaduk  wrote:

> I have been operating under the impression that at least some application
> profiles for early data will require that certain application protocol
> requests (e.g., something like HTTP POST) must be rejected at the
> application layer as "not appropriate for 0-RTT data", which requires the
> application to know if the request was received over 0-RTT data.
>


That's a really good point; you've changed my mind. It's obviously a good
idea to return a 5XX to a POST over 0-RTT and that would need this.

-- 
Colm


Re: [TLS] Closing on 0-RTT

2017-06-12 Thread Colm MacCárthaigh
On Mon, Jun 12, 2017 at 11:19 AM, Salz, Rich  wrote:

> I agree with this.  Which is why I prefer separate streams for early data,
> and some kind of signaling to the content provider that is clear and
> unambiguous.  I don't know how to do that when, say, the intermediary/CDN
> has a persistent connection to the backend...
>

I've given this some thought, and I think it might be unworkable to have
some kind of end-to-end 0-RTT. A simple example is that a CDN might want to
make a slightly different request, with extra headers, towards the origin
than the request that came in from the browser. For instance, the CDN might
add an If-None-Match: or an If-Modified-Since. But those may not fit within
the 0-RTT size limit.

It gets really complicated across layering boundaries to have the CDN only
accept 0-RTT if the origin also does, and if the request towards the origin
will also fit, and so on. I think the CDN would have to defer accepting the
0-RTT from the browser until the origin accepted the 0-RTT from the CDN,
which defeats a lot of the intended speed/throughput benefit.

Through this I've come to the conclusion that separate streams create more
problems than they solve, and robust replay mitigation is a better answer.

-- 
Colm


Re: [TLS] Closing on 0-RTT

2017-06-12 Thread Colm MacCárthaigh
On Mon, Jun 12, 2017 at 11:19 AM, Salz, Rich  wrote:

> > The one case here where I'd really argue for a "MUST" is middle-boxes
> > like CDNs. The concern I have is if someone has an application that uses
> > throttling or is vulnerable to a resource-exhaustion problem and goes and
> > puts a CDN in front of it, it's not obvious that enabling TLS 1.3 could
> > open them to a new kind of DOS attack.
>
> A CDN is not a middle box.  It *is* origin as far as the end-users are
> concerned, because of the business relationship between the CDN and the
> content provider.  Or, if you don't like that reasoning, then it's not a
> middlebox as the IETF uses the term.
>
> If the intermediary is vulnerable to the resource attacks, that's the
> intermediary's issue.
>

[ Browser ] <> [ CDN ] <> [ Origin ]

Sorry - I'm not trying to be inflammatory here, it's just a descriptive
term. All I mean is that the CDN is a box in the middle, as in that
diagram.  Here's what I imagine:

* Operator A operates the origin, and they incorporate throttling as a
routine security feature.
* Operator B operates the CDN, and they offer TLS 1.3 as a feature, without
replay protection.
* Customer enables TLS 1.3 on the CDN, because they want the speed benefit.
Seems totally reasonable!
* If the CDN caches the requests, then the customer is now vulnerable to a
new cache-analysis vulnerability.
* If the CDN doesn't cache the requests, then the customer is now
vulnerable to a new DOS vulnerability, in that the origin can be tipped
over or locked out via the throttling.

In this setup I say middle-box because the CDN is proxying requests. The
latter problem here is created for the origin, but by the CDN. It's a
really awful externality, because the CDN has a lot of incentive not to
invest in real replay protection and to hand-wave the issue away. That's my
real core interoperability concern.

> > We've already seen CDNs enable TLS 1.3 with unintentionally broken 0-RTT
> > mitigations, so that's clear evidence that the existing guidance isn't
> > sufficient. I think it would help manage the interoperability risks if we
> > can point out to their customers that the configuration is unambiguously
> > broken. Or at least, it helps to flag it as a security issue, which makes
> > it more likely to get fixed. Absent this, the operators of "backend"
> > applications would have to live with risk that is created by the upstream
> > CDN providers for their own convenience. That seems like a really bad
> > interoperability set up.
>
> I agree with this.  Which is why I prefer separate streams for early data,
> and some kind of signaling to the content provider that is clear and
> unambiguous.  I don't know how to do that when, say, the intermediary/CDN
> has a persistent connection to the backend...
>

That doesn't seem to be what some have deployed in the experimental
deployments. There seems to be remarkably little traction for the separate
streams.

-- 
Colm


Re: [TLS] Closing on 0-RTT

2017-06-12 Thread Colm MacCárthaigh
On Sun, Jun 11, 2017 at 8:18 AM, Eric Rescorla  wrote:

> Here's what I propose to do:
>
> - Describe the attacks that Colm described.
>
> - Distinguish between replay and retransmission
>
> - Mandate (SHOULD-level) that servers do some sort of bounded
>   (at-most-N times) anti-replay mechanism and emphasize that
>   implementations that forbid replays entirely (only allowing
>   retransmission) are superior.
>
> - Describe the stateless mechanism as a recommended behavior but not
>   as a substitute for the other mechanisms. As Martin Thomson has
>   pointed out, it's a nice pre-filter for either of these other
>   mechanisms.
>
> - Clarify the behavior you need to get PFS.
>
> - Require (MUST) that clients only send and servers only accept "safe"
>   requests in 0-RTT, but allow implementations to determine what is
>   safe.
>
> Note: there's been a lot of debate about exactly where this stuff
> should go in the document and how it should be phrased.  I think these
> are editorial questions and so largely my discretion.
>

First of all, thanks for doing this; that all sounds great! The TLS spec is
obviously monumentally important to the internet, and years of hard work
aren't made easier by late-coming changes; that shouldn't be thankless.


> Here's what I do not intend to do.
>
> - Mandate (MUST-level) any anti-replay mechanism. I do not believe
>   there is any WG consensus for this.
>

The one case here where I'd really argue for a "MUST" is middle-boxes like
CDNs. The concern I have is if someone has an application that uses
throttling or is vulnerable to a resource-exhaustion problem and goes and
puts a CDN in front of it, it's not obvious that enabling TLS 1.3 could
open them to a new kind of DOS attack.

We've already seen CDNs enable TLS 1.3 with unintentionally broken 0-RTT
mitigations, so that's clear evidence that the existing guidance isn't
sufficient. I think it would help manage the interoperability risks if we
can point out to their customers that the configuration is unambiguously
broken. Or at least, it helps to flag it as a security issue, which makes
it more likely to get fixed. Absent this, the operators of "backend"
applications would have to live with risk that is created by the upstream
CDN providers for their own convenience. That seems like a really bad
interoperability set up.

I'd argue for at-most-once protection here, since that's the only way a
client can make deterministic decisions, and it's also easier to audit and
GREASE. But there doesn't seem to be consensus around that. At the moment,
I feel that's a bit like the lack of consensus the "Clean coal" industry
has on global warming though, because it seems to be an argument rooted in
operational convenience rather than actual security. This is not the
standard we apply in other cases; we wouldn't listen to those who say that
it's ok to keep RC4 or MD5 in the standard because the problems are small
and the operational performance benefit is worth it. My spidey-sense is
that these attacks will get better and more refined over time.

Nevertheless, /some/ guaranteed replay protection would be better than
none, particularly in this case. So if it's at-most-N, and N is small
enough to at least avoid many throttling cases, that's something worth
taking, even though it does leave open the easier cache-analysis attacks.

> - Design a mechanism to allow the server to tell the client that it
>   either (a) enforces strong anti-replay or (b) deletes PSKs after
>   first use. Either of these seem like OK ideas, but they can be added
>   to NST as extensions at some future time, and I haven't seen a lot
>   of evidence that existing clients would consume these.
>

This can happen totally outside of the protocol too; as in, an operator can
advertise it as a feature. Likely most useful for the forward secrecy case.

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-06-04 Thread Colm MacCárthaigh
On Fri, Jun 2, 2017 at 2:25 PM, Eric Rescorla  wrote:
>
> Sure. For the sake of clarity, I'm going to suggest we call:
>
> - replay == the attacker re-sends the data with no interaction
> with the client
> - retransmission == the client re-sends (possibly with some slight
> changes)
>

O.k., cool.


> 7. With the current design, clients have no way of knowing what, if any,
>anti-replay mechanisms the servers are using. Thus, they cannot be
>sure that servers are ensuring at-most-once semantics for the 0-RTT
>data (at-most-twice if the client retransmits in response to 0-RTT
>failure) [0]. This makes it difficult for clients to know what is
>safe to send in 0-RTT.
>
> 8. The more broadly distributed the information required to process
>a session ticket (on the server), the worse the FS situation is,
>with session tickets encrypted under long-lived keys being the
>worst.
>
> I note that you suggest separating out 0-RTT tickets and resumption
> tickets, but I don't actually see how that changes matters. As Ilari
> notes, it is possible to say that a ticket cannot be used for 0-RTT
> and if you have a ticket which can be used for resumption globally
> but for 0-RTT at just one site, the server can implement that policy
> unilaterally.
>

Yep, that's right, and should work.


-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-06-02 Thread Colm MacCárthaigh
On Thu, Jun 1, 2017 at 5:22 PM, Eric Rescorla  wrote:

> I've just gone through this thread and I'm having a very hard time
> understanding what the actual substantive argument is about.
>
> Let me lay out what I think we all agree on.
>

This is a good summary; I just have a few clarifications ...


> 1. As long as 0-RTT is declinable (i.e., 0-RTT does not cause
>connection failures) then a DKG-style attack where the client
>replays the 0-RTT data in 1-RTT is possible.
>

This isn't what I call a replay. It's a second request, but the client is
in control of it. That distinction matters because the client can modify it
if it needs to be unique in some way, and that turns out to be important
for some cases.
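
A hedged sketch of that client control (the connection API here is
invented for illustration): on a 0-RTT rejection the client can mint a
fresh token before retrying, which a replayed copy of the bytes cannot do.

    import uuid

    def send_request(conn, request):
        request.headers["Idempotency-Key"] = str(uuid.uuid4())
        if conn.try_early_data(request):   # hypothetical 0-RTT attempt
            return
        # 0-RTT was declined: this is a retry the client controls, so
        # it can delay, re-token, or abandon the request entirely.
        request.headers["Idempotency-Key"] = str(uuid.uuid4())
        conn.send_after_handshake(request)  # hypothetical 1-RTT send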

> 2. Because of point #1, applications must implement some form
>of replay-safe semantics.
>

Yep; though note that in some cases those replay-safe semantics themselves
actually depend on uniquely identifiable requests. For example, a protocol
that depends on client-side versioning, or the token-binding case.


> 3. Allowing the attacker to generate an arbitrary number of 0-RTT
>replays without client intervention is dangerous even if
>the application implements replay-safe semantics.
>

Yep.


> 4. If implemented properly, both a single-use ticket and a
>strike-register style mechanism make it possible to limit
>the number of 0-RTT copies which are processed to 1 within
>a given zone (where a zone is defined as having consistent
>storage), so the number of accepted copies of the 0-RTT
>data is N where N is the number of zones.
>

This is much better than the total anarchy of allowing completely unlimited
replay, and it does reduce the risk for side-channels, throttles etc, but I
wouldn't consider it a proper implementation or secure. Importantly it gets
us back to a state where clients may have no control over a deterministic
outcome.

Some clients need idempotency tokens that are consistent for duplicate
requests; this approach works ok then. Other kinds of clients need tokens
that are unique to each request attempt; this approach doesn't work ok in
that case. That's the qualitative difference.

I'd also add that the suggested optimization here is clearly to support
globally resumable session tickets that are not scoped to a single site.
That's a worthy goal; but it's unfortunate that in the draft design it also
means that 0-RTT sections would be globally scoped. That seems bad to me
because it's so hostile to forward secrecy, and hostile to protecting the
most critical user-data. What's the point of having FS for everything
except the requests, where the auth details often are, and which can
usually be used to generate the response? Synchronizing keys that can
de-cloak an arbitrary number of such sessions to many data centers spread
out across the world, seems just so defeating. I realize that it's common
today, I've built such systems, but at some point we have to decide that FS
either matters or it doesn't. Are users and their security auditors really
going to live with that? What is the point of rolling out ECDHE so
pervasively only to undo most of the benefit?

Maybe a lot of this dilemma could be avoided if the PSKs that can be
used for regular resumption and for 0-RTT encryption were separate, with
the latter being scoped smaller and with use-at-most-once semantics.
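
In sketch form (hypothetical ticket fields and a hypothetical derive()
helper; nothing like this is in the draft), that separation might look
like:

    def issue_tickets(session, zone_id):
        # Resumption ticket: usable globally, never for early data.
        resumption = {
            "psk": session.derive(b"resumption"),
            "scope": "global",
            "allow_early_data": False,
        }
        # Early-data ticket: bound to one zone, redeemable at most once.
        early_data = {
            "psk": session.derive(b"early-data"),
            "scope": zone_id,
            "allow_early_data": True,
            "single_use": True,
        }
        return resumption, early_data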

> 5. Implementing the level of coherency to get #4 is a pain.
>
> 6. If you bind each ticket to a given zone, then you can
>get limit the number of accepted 0-RTT copies to 1
>(for that zone) and accepted 1-RTT copies to 1 (because
>of the DKG attack listed above).
>

Yep! Agreed :)

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-06-01 Thread Colm MacCárthaigh
On Thu, Jun 1, 2017 at 1:50 PM, Victor Vasiliev  wrote:

> I am not sure I agree with this distinction.  I can accept the difference
> in
> terms of how much attacker can retry -- but we've already agreed that
> bounding
> that number is a good idea.  I don't see any meaningful distinction in
> other
> regards.
>

It's not just a difference in the number of duplicates. With retries, the
client maintains some control, so it can do things like impose delays and
update request IDs. Bill followed up with an exactly relevant example from
Token Binding where the retry intentionally has a different token value.
That kind of control is lost with attacker-driven replays.

But even if we focus on just the number, there is something special about
allowing zero literal replays of a 0-RTT section: it is easy for users to
confirm/audit/test. If there's a hard guarantee that 0-RTT "MUST" never be
replayable, then I feel like we have a hope of producing a viable 0-RTT
ecosystem.
but if we can ensure that they get failing grades in security testing
tools, or maybe even browser warnings, then we can corral things into a
zone of safety. Otherwise, with no such mechanism, I fear that bad
operators will cause the entire 0-RTT feature to be tainted and entirely
turned off over time by clients.

>
> Sure, but this is just an argument for making N small.  Also, retrys can
> also
> be directed to arbitrary nodes.
>

This is absolutely true, but see my point about the client control.
Regardless, it is a much more difficult attack to carry out: intercepting
and rewriting a whole TCP connection, vs. grabbing a 0-RTT section and
sending it again.


>
>
>> What concerns me most here is that people are clearly being confused by
>> the TLS 1.3 draft into mis-understanding how this interacts with 0-RTT. For
>> example the X-header trick, to derive an idempotency token from the binder,
>> that one experimental deployment innovated doesn't actually work because it
>> doesn't protect against the DKG attack. We're walking into rakes here.
>>
>
> Of course it doesn't protect against the DKG attack, but nothing at that
> layer
> actually does.
>
> This sounds like an issue with the current wording of the draft.  As I
> mentioned, I believe we should be very clear on what the developers should
> and
> should not expect from TLS.
>

Big +1 :)


>>> So, in other words, since we're now just bargaining about the value of N,
>>> operational concerns are a fair game.
>>>
>>
>> They're still not fair game imo, because there's a big difference between
>> permitting exactly
>> one duplicate, associated with a client-driven retry, and permitting huge
>> volumes of replays. They enable different kinds of attacks.
>>
>>
> Sure, but there's a space between "one" and "huge amount".
>

It's not just quantitative, it's qualitative too. But now I'm duplicating
myself more than once ;-)


>> Well in the real world, I think it'll be pervasive, and I even think it
>> /should/ be. We should make 0-RTT that safe and remove the sharp edges.
>>
>
> Are you arguing that non-safe requests should be allowed to be sent via
> 0-RTT?
> Because that actually violates reasonable expectations of security
> guarantees
> for TLS, and I do not believe that is acceptable.
>

I'm just saying that it absolutely will happen, and I don't think any kind
of lawyering about the HTTP spec and REST will change that. Folks use GETs
for non-idempotent side-effect-bearing APIs a lot. And those folks don't
generally understand TLS or have anything to do with it. I see no real
chance of that changing and it's a bit of a deceit for us to think that
it's realistic that there will be these super careful 0-RTT deployments
where everyone from the Webserver administrator to the high-level
application designer is coordinating and fully aware of all of the
implications. It crosses layers that are traditionally quite far apart.

So with that in mind, I argue that we have to make TLS transport as secure
as possible by default, while still delivering 0-RTT because that's such a
beneficial improvement.


>>> I do not believe that this to be the case.  The DKG attack is an attack
>>> that allows
>>> for a replay.
>>>
>>
>> It's not. It permits a retry. The difference here is that the client is
>> in full control. It can decide to delay, to change a unique request ID, or
>> even not to retry at all. But the legitimate client generated the first
>> attempt, it can be signaled that it wasn't accepted, and then it generates
>> the second attempt. If it really really needs to it can even reason about
>> the complicated semantics of the earlier request being possibly
>> re-submitted later by an attacker.
>>
>
> That's already not acceptable for a lot of applications -- and by enabling
> 0-RTT for non-safe HTTP requests, we would be pulling the rug from under
> them.
>

Yep; but I think /this/ risk is manageable and tolerable. Careful 

Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-31 Thread Colm MacCárthaigh
On Wed, May 31, 2017 at 12:49 PM, Victor Vasiliev 
wrote:

> I think I am not getting my key point across here clearly.  I am not
> arguing
> that they are inconvenient, I am arguing that the guarantee you are trying
> to
> provide is impossible.
>
> I wholeheartedly agree that if it is possible to provide guarantee B (zero
> replays), we really should provide it.  The problem is, we cannot.  Since
> 0-RTT
> is by its very design declinable, there will be always a possibility for at
> least one retry.
>

This is the part we agree on; but as I've said before, the kinds of attacks
enabled by these kinds of client-initiated retries, and by
attacker-initiated replays, are fundamentally different.


> Once you concede you can have at least one replay, the difference between
> one
> replay and N replays (for all N > 0) is not that large, which is why I
> refer to
> this as "nebulous bound" (guarantee C).
>

There's a very big difference and it leads to real-world attacks. With many
replays, all sorts of side-channel analyses become possible, and I've
provided examples. It's particularly nasty that the replays can be
out-of-band and to arbitrary nodes of the attacker's choosing, which makes
the attacks even more effective.


> Your applications already have to
> contend with replays, it's now just the matter of preventing side-channel
> amplification.
>

Yes; applications using 0-RTT do have to make themselves retry-tolerant,
as they already must in many scenarios (e.g. a browser-driven
application). What concerns me most here is that people are clearly being
confused by the TLS 1.3 draft into mis-understanding how this interacts
with 0-RTT. For example the X-header trick, to derive an idempotency token
from the binder, that one experimental deployment innovated doesn't
actually work because it doesn't protect against the DKG attack. We're
walking into rakes here.
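
To make the failure concrete, here's a minimal sketch in Go of a
binder-derived idempotency token of that kind - the construction and names
are my own assumptions for illustration, not the deployment's actual code -
and of why the DKG retry defeats it:

    // Sketch of an "X-header" style idempotency token: hash the 0-RTT
    // binder and attach it to the request so the application can
    // de-duplicate a literal replay of the same early data.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    // deriveIdempotencyKey is hypothetical: binder is the PSK binder
    // from the ClientHello that carried the 0-RTT data.
    func deriveIdempotencyKey(binder []byte) string {
        sum := sha256.Sum256(binder)
        return hex.EncodeToString(sum[:8])
    }

    func main() {
        original := []byte("binder-from-0rtt-client-hello")

        // A literal replay carries the same binder, so it maps to the
        // same key and can be de-duplicated server-side.
        fmt.Println(deriveIdempotencyKey(original) == deriveIdempotencyKey(original)) // true

        // In the DKG attack the client falls back and retries over 1-RTT
        // (or 0-RTT with a fresh ticket). That handshake has a different
        // binder, or none, so the retry can never be correlated with the
        // buffered original - the attacker's replayed copy goes through.
        retry := []byte("binder-from-fresh-handshake")
        fmt.Println(deriveIdempotencyKey(original) == deriveIdempotencyKey(retry)) // false
    }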

> So, in other words, since we're now just bargaining about the value of N,
> operational concerns are a fair game.
>

They're still not fair game imo, because there's a big difference between
permitting exactly
one duplicate, associated with a client-driven retry, and permitting huge
volumes of replays. They enable different kinds of attacks.

I keep forgetting about forward secrecy in all of this, but it's worth
noting that your argument for globally resumable sessions also implies that
we shouldn't really shoot for forward secrecy for some of the most critical
user data. I also find that bizarre.

To clarify, I am not suggesting that two streams would help.  I completely
> agree with you that two streams is not going to mitigate the DKG attack or
> others.  What I meant is that 0-RTT inherently has slightly different
> properties from 1-RTT and must be used with that in mind.  Specifically, I
> meant that it will not be enabled for applications by default, and HTTP
> clients
> would only allow it for methods that RFC 7231 defines as safe.
>

Well in the real world, I think it'll be pervasive, and I even think it
/should/ be. We should make 0-RTT that safe and remove the sharp edges.

>
> I do not believe that this to be the case.  The DKG attack is an attack
> that allows
> for a replay.
>

It's not. It permits a retry. The difference here is that the client is in
full control. It can decide to delay, to change a unique request ID, or
even not to retry at all. But the legitimate client generated the first
attempt, it can be signaled that it wasn't accepted, and then it generates
the second attempt. If it really really needs to it can even reason about
the complicated semantics of the earlier request being possibly
re-submitted later by an attacker.

Replays are different though; the attacker just literally copies the data
and can resend as often as desired. As I've shown, this breaks real-world
things like authenticated throttling systems, and leads to side-channel and
cache analysis, where only very rarely is one attempt sufficient to
exploit.


> There is an enormous difference between a protocol that does not
> allow replays and the protocol that allows one.  For many applications, it
> only
> takes one replay for things to go terribly wrong.
>
>
>> The only place that the DKG attack can be mitigated is at application
>> layer.
>>
>
> Indeed.
>
>
>> The TLS APIs and different streams are pointless here.
>>
>
> I agree that different streams are pointless.  The only ways APIs can help
> is
> to not send replay-sensitive requests via 0-RTT, and to not accept those
> requests until peer liveness is confirmed.
>

This puts a massive smile on my face :)

>
> Indeed, but I am not suggesting we ignore those attacks.  The way I see
> this is
> that, originally when we did 0-RTT in QUIC, we had strike registers which
> are
> approximately what you are advocating.
>

Well I'd really advocate for single-use caches, because I think Forward
Secrecy is important :)


> We thought this provided property B
> (at-most-once semantics for 0-RTT), but at 

Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-30 Thread Colm MacCárthaigh
On Tue, May 30, 2017 at 2:38 PM, Victor Vasiliev  wrote:

> Thank you for your analysis!  I appreciate the attention to the security
> properties of the 0-RTT requests, as it is the more delicate part of the
> protocol.  It took me a while to get through the entire review, and there
> are
> many things on which I would like to comment, but if I do that for every
> one of
> them, this reply would be too long.  Hence, I’ll try to concentrate on the
> key
> points.
>

Thanks too for reading and the response!


> TLS 1.3 as-is does not remove any of the replay protection guarantees
> provided
> by TLS 1.2.  However, if the user chooses to waive said protection in
> order to
> do 0-RTT, they can do that with an API explicitly designed for that
> purpose.
>

I don't think this holds true; the signs are that browsers and servers
will enable it by default, with well-meaning limits on the kinds of requests.
But in the real world it'll absolutely be enabled a lot; that's the point,
after all. I hope we can have pervasive 0-RTT. It'll be awesome, because
the speed of light sucks.


>
> 3) Full replay protection for 0-RTT is not realistically feasible at TLS
> layer,
>because TLS layer is not the right place for that.
>
> You’ve already pointed out the existence of the DKG attack, where the
> attacker
> forces the client to retry the request with 1-RTT.  In this scenario, the
> attacker has a buffered 0-RTT request and is ready to insert it at any
> convenient point in time, violating the initial order assumptions.
>

Yep, though as I point out too, I think the DKG attack is different from
other replay attacks in ways that matter. The only place that the DKG
attack can be mitigated is at application layer. The TLS APIs and different
streams are pointless here. And I think that's actually an ok trade-off for
some kinds of applications - I don't think it should be a blocker for
0-RTT.  I think we agree on all this.


> With respect to the non-browser HTTP applications, I am not sure I am sold
> on
> the notion of “careful clients”. My understanding is that you suggest to
> treat
> 0-RTT failure as semantically equivalent to a regular connection failure,
> under
> the assumption that all of your clients already have to handle that case
> properly.  I see how this is supposed to work out, but most systems of this
> nature work well because connections succeed most of the time -- and as
> you go
> more and more towards the path of promising full replay protection, you
> will
> break more and more 0-RTT handshakes.  Both session resumption in general
> and
> 0-RTT in particular are designed with assumption that they can be declined
> at
> any point, that this is a normal and expected event, and at worst it would
> result in a minor performance degradation; “careful clients” violate that
> assumption.
>

The careful client is just to illustrate what it takes to make a
replay-safe application on top of an eventually consistent data store model
(which isn't uncommon) ... I don't think it's that realistic either, but it
was to point out that separate streams don't help. Only timings do.


> 4) Operational experience on datacenter-local strike registers is negative.
>
> We deployed strike registers in the initial QUIC deployment, and it was an
> operational hassle, so once we discovered that they do not provide full
> replay
> protection (due to all issues outlined above), the cost/benefit analysis
> became
> decidedly not in their favor.
>

The argument about operational inconvenience is irrelevant and dangerous.
It doesn't matter that it's hard. It doesn't matter how much it costs. It's
the cost of doing 0-RTT securely. Doing it insecurely should not be an
option, and shouldn't be something specified in an RFC.


> Our deployment experience also suggests that the negative impact from
> limiting
> 0-RTT to the same datacenter is not negligible.
>

This is the terribly false premise that gets to the heart of things. You
acknowledge yourself that there are attacks. Here you argue, essentially,
that it is too inconvenient to mitigate those attacks for users. I don't
think we can seriously take that approach.

If the methods are too inconvenient, the secure alternative is to not use
0-RTT at all.

The key phrase here, "the negative impact from limiting 0-RTT to the same
datacenter is not negligible", is simply the wrong way around. Yes,
it's true that fewer 0-RTT sections are accepted in a datacenter-local
system than if they are globally valid; but globally valid ones are not
secure, so what's the benefit of that comparison? It's not an alternative
we can responsibly consider. The only secure comparison is to not having
0-RTT at all. At least with datacenter-local 0-RTT we do get /some/ 0-RTT.

So the statement should really be "Datacenter-local 0-RTT resumption allows
us to use 0-RTT at all, which is great, considering that it would otherwise
lead to side-channel and privacy-defeating attacks. What a 

Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-24 Thread Colm MacCárthaigh
On Wed, May 24, 2017 at 7:30 AM, Ilari Liusvaara 
wrote:
>
> > Right, this would bring replays down from the millions hypothesized for
> > the weak time-based thing to more like tens, which is kind of in the
> > regime that we are currently in with (at least some) application
> behavior.
>
> Actually, even tens of replays at TLS level is quite dangerous,
> especially if to different servers (bad information leaks via cache
> attacks).
>

It hard-stops replays, but does nothing about retries, and I think that's
ok. The client is in much more control of retries, and it's similar (if not
identical) to what can happen today with an interrupted TCP or TLS
connection. If a client permits hundreds, or millions, of forced retries, I
think that's more appropriately handled at the client/application level;
but keep in mind there are typically seconds between the retries, so the
rate of attack is slow.

Replays are when an attacker can send the same data at will and it's
completely unknown to the client; they can happen out of band over different
network connectivity, in a very short and sharp interval of time, possibly
even to different server endpoints, and can be used by attackers to gather
information. These, I think, need to be mitigated at the TLS level, and
should be hard-stopped.

> > Another crazy idea would be to just say that servers MUST limit the use
> > of a single binder to at most 100 times, with the usual case being just
> > once, to allow for alternative designs that have weaker distributed
> > consensus requirements (but still describe these current two methods as
> > examples of ways to do so).
>
> You actually need strong distributed consensus about all accepted
> 0-RTT here.
>

This pattern doesn't need strong consensus. If you have 10 servers, you
could give each 10 goes at the ticket, and let each exhaust its 10 attempts
without any coordination or consensus. You likely won't get to "exactly
100", but you will get to "at most 100 times".

But the inner critical section would be inherently more complicated. For
the at-most-once case we need a critical section that performs an
exclusive read-and-delete atomically; it's a mutex lock or a clever
construction of atomic instructions. For "at most N" we now need to
perform a decrement and write in the critical section, and it becomes more
like a semaphore.
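
A rough sketch of the two critical sections, in Go, with an in-memory map
standing in for what would really be a shared single-use cache or strike
register:

    package main

    import (
        "fmt"
        "sync"
    )

    // SingleUseCache is the at-most-once case: an exclusive
    // read-and-delete performed atomically under one lock.
    type SingleUseCache struct {
        mu      sync.Mutex
        tickets map[string]bool
    }

    // ConsumeOnce returns true exactly once per ticket; the read and the
    // delete sit inside the same critical section.
    func (c *SingleUseCache) ConsumeOnce(ticket string) bool {
        c.mu.Lock()
        defer c.mu.Unlock()
        if !c.tickets[ticket] {
            return false // unknown or already used: decline 0-RTT
        }
        delete(c.tickets, ticket)
        return true
    }

    // BudgetCache is the "at most N" case: a decrement-and-write in the
    // critical section, behaving more like a semaphore.
    type BudgetCache struct {
        mu     sync.Mutex
        budget map[string]int
    }

    func (c *BudgetCache) ConsumeOneOfN(ticket string) bool {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.budget[ticket] <= 0 {
            return false
        }
        c.budget[ticket]--
        return true
    }

    func main() {
        c := &SingleUseCache{tickets: map[string]bool{"ticket-A": true}}
        fmt.Println(c.ConsumeOnce("ticket-A")) // true: first use accepted
        fmt.Println(c.ConsumeOnce("ticket-A")) // false: replay declined

        // Seeding each of 10 servers with a budget of 10, with no
        // coordination at all, bounds total acceptances at 100.
        b := &BudgetCache{budget: map[string]int{"ticket-B": 10}}
        fmt.Println(b.ConsumeOneOfN("ticket-B")) // true: nine goes left here
    }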

It's probably not a pattern that's worth the trade-offs.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-23 Thread Colm MacCárthaigh
On Tue, May 23, 2017 at 11:27 AM, Viktor Dukhovni 
wrote:

> Actually, nonces in DNScurve protect clients from replayed server
> responses (clients
> are stateful).  I see no explicit guidance to detect or refuse replays of
> client
> queries in DNScurve.  While servers could keep a nonce cache, in practice
> there
> are multiple servers and they don't share state (no "strike registers").
>

My apologies, you're right! I'll make sure to tease djb now. That's still
an insecure design (or at least a privacy-defeating design) for the same
reasons as earlier. Though tinydns doesn't do RRL or cyclic answers, so in
that coupled implementation it may be ok.

At one time we didn't think the kinds of side-channels present in TLS were
a big deal; the "it is not believed to be large enough to be exploitable" note
in section 6.2.3.2 of RFC5246 comes to mind. Here we risk repeating history.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-23 Thread Colm MacCárthaigh
On Tue, May 23, 2017 at 10:50 AM, Viktor Dukhovni 
wrote:

> The fix is to amend DNSpriv to require stateless (random rather
> than say round-robin) RRset rotation.  With random rotation, the
> next RRset order is independent of previous queries.
>

That's a good fix for that specific local problem. But next, consider a
different one: what if a DNS provider has q-tuple rate-limiting for DoS
attacks? That's not an unusual measure for large providers - even BIND 9 has
support for it. Well, with stateless 0-RTT I can replay the client's query
over and over until the rate-limiting trips; now I have a DoS attack *and*
a privacy-defeating attack, because the rate limit exposes what the query
was for.


> Secondly, even with the 0-RTT leak, while privacy against an active
> attacker might not be assured for all users, there is fact privacy
> for most users, especially against a purely passive adversary.
>

My reference here isn't really meant as a criticism of DNSPriv - we should
make DNS private and secure, that's awesome, and it's a small attack in the
overall context of DNS. It's meant as Christian said: it is really, really
hard to make an application protocol idempotent and side-effect free, and
very smart people are often wrong about assuming that they are. I see
at-most-once 0-RTT mitigation as essential to avoiding a lot of real-world
security issues here, because of that difficulty.


> To the extent that DNSpriv over TLS happens at all, 0-RTT
> will be used for DNS, and will be used statelessly (allowing
> replays).


That's not good for users, and seems like another very strong reason to
make it clear in the TLS draft that it is not secure. FWIW, DNSCurve
includes nonces to avoid attacks like this:
https://dnscurve.org/replays.html (which means keeping state).

Stateless mechanisms simply aren't secure. We wish they were, because
statelessness is so attractive operationally - just as it would be nice if
my MD5 accelerators were still useful. But they don't hold up. We've even
seen this before with DTLS, where replay tolerance opened up the window to
several cryptographic attacks. It's an all-round bad idea.

I've seen a number of arguments here that essentially boil down to "We'd
like to keep it anyway, because it is so operationally convenient". Is that
really how this process works? Don't demonstrable real-world attacks
deserve deference?

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-22 Thread Colm MacCárthaigh
On Mon, May 22, 2017 at 10:46 AM, Christian Huitema 
wrote
>
> Check DKG's analysis of 0-RTT for DNS over TLS: https://www.ietf.org/mail-
> archive/web/dns-privacy/current/msg01276.html. There is only one point of
> concern, a minor privacy leak if the DNS queries in the 0-RTT data can be
> replayed at intervals chosen by the attacker. The idea is to replay the
> data to a resolver, and then observe the queries going out to authoritative
> servers in clear text. The correlation can be used to find out what domain
> the client was attempting to resolve. The attack requires "chosen time" by
> the attacker, and thus will probably be mitigated by a caching system that
> prevents replays after a short interval.
>


I have a reply to that too, linked at the bottom: there's actually a more
trivial side-channel (due to non-idempotence) that hadn't been considered
in the original analysis.

I've yet to find /any/ example application where 0-RTT replay would
actually be side-channel free.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-22 Thread Colm MacCárthaigh
On Mon, May 22, 2017 at 10:12 AM, Kyle Nekritz  wrote:

> The stateless technique certainly doesn’t solve all issues with replay.
> Neither do the other techniques, unless a fairly unrealistic (imo, in most
> use cases) retry strategy is used.
>


The stateless technique is insecure, plain and simple, and it is far more
easily exploited than issues that have caused us to deprecate cipher
suites. It should come out. It enables attacks that are materially
different from forced retries, such as cache probing and statistical
side-channel analysis.


> But the stateless technique is definitely an improvement over no
> anti-replay mechanism at all (for instance it reduces the possible number
> of rounds of a cache probing attack, assuming the cache TTL > replay
> window).
>

Today TLS has robust anti-replay; the comparison to "no anti-replay
mechanism at all" isn't relevant.



> Which mechanisms to use, and whether to enable 0-RTT in the first place
> (or PSK mode at all), should be decided considering the tradeoff between
> security/performance/implementation constraints, etc. In the case of DNS,
> most DNS security protocols (dnssec, etc.) do allow this kind of replay so
> I think it is a pretty reasonable tradeoff to consider.
>


This same argument could be made for keeping MD5, or RC4.  DNSSEC is not
concerned with secrecy. TLS is. This exact kind of replay would compromise
the secrecy of the data being transported.


> Additionally, I think the stateless technique is quite useful as a
> defense-in-depth mechanism.
>

It's tempting because it lowers the costs for implementors, but it's
absolutely not secure.  I seriously doubt that any application can be made
side-channel free in the manner that would be required to preserve secrecy.


> I highly doubt all deployments will end up correctly implementing a
> thorough anti-replay mechanism (whether accidentally or willfully).
>

This is why I think we should GREASE this and report (to users) any sites
that show any signs of replay tolerance.


> The stateless method is very cheap, and can be implemented entirely within
> a TLS library even in a distributed setup, only requiring access to an
> accurate clock. I’d much rather deployments without a robust and correct
> anti-replay mechanism break down to allowing replay over a number of
> seconds, rather than days (or longer).
>

I'd prefer if that were possible too, but it's not possible - it's
insecure.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-21 Thread Colm MacCárthaigh
On Sun, May 21, 2017 at 3:47 PM, Eric Rescorla  wrote:
>
>> - Clients MUST NOT use the same ticket multiple times for 0-RTT.
>
> I don't understand the purpose of this requirement. As you note below,
> servers are ultimately responsible for enforcing it, and it's not clear to
> me why clients obeying it makes life easier for the server.
>

I think clients should duplicate them sometimes, just to keep servers on
their toes ;-) This is what we talked about maybe being a GREASE thing ...
if at all.

>> - Servers MUST NOT accept the same ticket with the same binder multiple
>>   times for 0-RTT (if any part of ClientHello covered by binder is
>>   different, one can assume binders are different). This holds even
>>   across servers (i.e., if server1 accepts 0-RTT with ticket X and
>>   binder Y, then server2 can not accept 0-RTT with ticket X and binder
>>   Y).
>>
>
> I assume that what you have in mind here is that the server would know
> which tickets it was authoritative for anti-replay and would simply reject
> 0-RTT if it wasn't authoritative? This seems like it would significantly
> cut
> down on mass replays, though it would of course still make
> application-level
> replay a problem.
>
> I'm happy to write this up as part of the first two techniques. I'd be
> interested in hearing from others in the WG what they think about:
>
> 1. Requiring it.
> 2. Whether they still want to retain the stateless technique.
>

I'm for requiring it, and for removing the stateless technique ... because
that prevents the side-channel and DoS attacks, and those seem like the most
serious ones (and also, the new ones).

So far each case where we've thought "actually, stateless might be ok in
this case" ... like the example of DNS ... turns out not to be safe when
examined more closely (in DNS's case it would compromise privacy because
caches could be probed).

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-19 Thread Colm MacCárthaigh
On Fri, May 19, 2017 at 1:58 PM, Viktor Dukhovni 
wrote:
>
> +1.  The additive obfuscation leaks nothing that is not already leaked
> just by sending the tickets.
>

You're both right, that does work out. I was balancing my equations
carelessly, solving for x while forgetting that t1' is secret.
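
For the archive, the balanced equations, as a sketch (writing $t_0$ for the
secret time the ticket was issued, $a$ for the secret ticket_age_add value
delivered in the encrypted NewSessionTicket, and $t_1$, $t_2$ for the
observation times of the two uses):

    \mathrm{obf}_i = (t_i - t_0 + a) \bmod 2^{32}, \quad i = 1, 2
    \mathrm{obf}_2 - \mathrm{obf}_1 \equiv t_2 - t_1 \pmod{2^{32}}

The difference reveals only $t_2 - t_1$, which a passive observer already
knows from the timestamps of the two ClientHellos; with $a$ unknown there's
no solving for $t_0$, so the parent session stays hidden.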

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-19 Thread Colm MacCárthaigh
On Fri, May 19, 2017 at 11:44 AM, Eric Rescorla  wrote:

>> Yup. There are no known reasons that prevent at-most-once 0-RTT delivery,
>
>> even with distributed servers for the origin.
>>
>
> I don't disagree with that necessarily, but if the client responds by
> retransmitting
> in 1-RTT, then you don't have overall at-most-once.
>

Obviously this is fine for browsers; retries make sense there anyway, and
so if we prevent mass replay then there are no new attacks like the
side-channels and DoSes.

If a client needs to be more careful, then with a hard time limit on ticket
use it can actually reason its way to at-most-once. It needs to wait out the
time limit, then do a read to see if the original attempt succeeded or
not, and only then retry. That's a fairly common mode in eventually
consistent systems, loss-tolerant protocols and distributed consensus
protocols. For example, some S3 clients and PAXOS systems work like this.

Of course it's very inconvenient to have to sometimes block for 10 seconds
(or whatever we pick), in return for a speed-up of maybe as much as ~200ms
in the "ordinary" case; but it's the kind of trade-off that an
instrumentation system might make - like an industrial controller, where
day-to-day system liveness is a massive optimization benefit and the
occasional interruption is no big deal.

Super esoteric, and maybe we shouldn't even think too much about it, but I
bring it up because that construction is the only one I've found that gets
to at-most-once delivery; and it highlights that the existing system
explicitly doesn't, and might just be needless complexity (the
multiple-streams signaling).

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-19 Thread Colm MacCárthaigh
On Fri, May 19, 2017 at 11:40 AM, Ilari Liusvaara 
wrote:

> > * In order to fully reason about when that message may later get received,
> > there needs to be an agreed upon time-cap for 0-RTT receipt. Agreed by all
> > potential middle-boxes in the pipe that may be using 0-RTT.
>
> Isn't that potentially multi-party problem if middleboxes are involved?
>

Yes; but if we can agree on a hard maximum time-window for the 0-RTT
section, and all of the parties honor it, it's possible for a careful client
to negotiate its way around it. Even if it's 10 seconds, this still has
some value I think.


> > And then separate to all of the above, and lower priority:
> >
> > * There's a contradiction between the obfuscated ticket age add parameter
> > and the desire to use tickets multiple times in other (non-0RTT) cases. We
> > can't do one without defeating the point of the other. Either remove the
> > obfuscation because it is misleading, or move it into an encrypted message
> > so that it is robust.
>
> The purpose of obfuscation is not to hide sibling sessions. The
> client already blows its cover by using the same session ID twice. The
> purpose of obfuscation is to hide the parent session.


> Are you talking about attackers being able to determine the rate of
> client clock?
>

Right now if a ticket is used multiple times, then the ticket age can be
derived (trivial cryptanalysis due to re-using the same obfuscated offset,
and because the progression of time between the ticket uses is public);
that means the parent session can be identified. So the point is defeated.

Either the one-time-pad can be used just one time (which means the ticket
can be used just once) or we should move it to an encrypted message. Or
just get rid of it and not be so misleading. But the current state is
weird, to say the least.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-19 Thread Colm MacCárthaigh
On Fri, May 19, 2017 at 2:53 AM, Ilari Liusvaara 
wrote:
>
> To me, that reads as gross understatement about the dangers involved in
> 0-RTT:
>
> - The side channel attacks with millions or billions of replays are hard
>   to protect against. This is if the side channels are in TLS library.
>   If not, protecting against that sort of side channels becomes
>   virtually impossible.

- Furthermore, with that kind of replay volume, protecting against DoS
>   attacks is virtually impossible.


> - Even if once-per-server or once-per-cluster replay detection limits
>   the number of replays to few hundred to few thoursand at maximum,
>   where the low-level crypto side channels are much less of a threat,
>   cache attacks can be used to break security (in fact, not sending a
>   mad burst of data to any one server is useful for carrying out these).
>

I wouldn't be too fatalistic about it. The speed of light is too slow for
human interaction, and 0-RTT is an important and awesome feature that we
should make safe and near universal.

Some protection is necessary, but it isn't too hard: a single-use session
cache, or a strike register, do protect against the side-channel and DoS
problems. Combined with a "fail closed" strategy and tickets that are
scoped to clusters or servers, these techniques do hard-stop the literal
0-RTT replays, and they are practical. Many of us run systems like that
already.


> Also, the kind of thing going on here seems exactly how I would imagine
> the past very bad decisions from TLS WG, that were known to be insecure
> at the time of specification and where then successfully attacked later,
> played out. However, I have not read those discussions from the ML
> archives.
>

In this case, I think people see the trade-offs differently and that's ok.
There's a sense that the risks or cost are worth it. After all, you can
mitigate a lot of the risk if you have a team of experts on standby who
manually mitigate these kinds of attacks, or more advanced automated
response systems. And many big providers do have both of these.

What concerns me most is that the 0-RTT interactions here are formalizable
and that the messaging interactions can be modeled in TLA+, F*, etc ... but
that hasn't been done as it has with the rest of the TLS1.3 state machine.

My simple TLA+ model convinced me that what's in the draft doesn't actually
work; there's nothing in the messages that allows a server to de-dupe. I
wrote up a simple 3-message example of this earlier in the thread, and it
seems to hold up, breaking the "X-" header trick that one provider came
up with. That, already in this draft deployment phase, an advanced,
knowledgeable provider's attempt at a mitigation can be shown to be broken
should be cause for alarm. It's not a safe set-up.

Here's all I think we need to fix all of this though, in order of priority:

For relatively "Normal" clients (e.g. Browsers):

* Servers supporting 0-RTT need to robustly prevent replay of literal 0-RTT
sections. No time-based mitigation, which simply doesn't work. This is the
"cost" of doing 0-RTT.
* Clients should be the real arbiter of what to use 0-RTT for; e.g. never
use it for POST, etc. This could bear some emphasis. It's important because
middle-boxes exist.

For careful clients, think about something implementing a transaction over
TLS:

* If a 0-RTT section is sent but does not result in a successful receipt,
that failure needs to be signaled to the client.
* In order to fully reason about when that message may later get received,
there needs to be an agreed upon time-cap for 0-RTT receipt. Agreed by all
potential middle-boxes in the pipe that may be using 0-RTT.

And then separate to all of the above, and lower priority:

* There's a contradiction between the obfuscated ticket age add parameter
and the desire to use tickets multiple times in other (non-0RTT) cases. We
can't do one without defeating the point of the other. Either remove the
obfuscation because it is misleading, or move it into an encrypted message
so that it is robust.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] The case for a single stream of data

2017-05-11 Thread Colm MacCárthaigh
On Wed, May 10, 2017 at 10:00 PM, Benjamin Kaduk <bka...@akamai.com> wrote:
>
> However, I'm still not convinced that requiring strong 0-RTT non-replay is
> feasible/the right thing to do.
>

[ I've cut a lot for brevity, but hopefully what I've replied with can
address this, and the rest ]

The case for requiring servers, and especially middle-boxes, to provide
strong 0-RTT non-replay is really the resource-exhaustion,
throttle-exhaustion and cache-timing attacks. I think that's where the
really dangerous bugs are, the ones that will be exploited - maybe even
inadvertently - and that are a show-stopper for the stateless mitigation.
It just doesn't work.

Thankfully it's within our capabilities to fix this and there's nothing
logically impossible about providing non-replay for the 0-RTT sections.


> On 05/05/2017 11:28 AM, Colm MacCárthaigh wrote:
>
> "Client sends a request with a 0-RTT section. The attacker lets the server
> receive it, but suppresses the server responses, so the client downgrades
> and retries as a 1-RTT request over a new connection. Repeating the
> request".
>
> Does the client always downgrade?  We don't require it to do so, in TLS
> 1.3, though of course browsers would.
>

The current draft says that clients should only use tickets once; if they
do that, then the repeat attempt (even if it's still 0-RTT, with a
different ticket) is un-correlatable.

> I feel like many applications that use delay-and-retry will see this and
> conclude that they should just not attempt to use 0-RTT.
>

Per the section of the original review about violation of layers and
separation of actors, this breaks the well-established relationship between
the layers. It's predictable that a site administrator will enable 0-RTT
without any appreciation for the application-level impact. Vendors are
already providing 0-RTT in backwards-compatible ways.

> What is actually needed here, I think, is client-side signaling. Careful
> clients need to be made aware of the original 0-RTT failure.
>
> So for example, an SDK that writes to an eventually consistent data store
> may treat any 0-RTT failure as a hard failure, and *not* proceed to sending
> the request over 1-RTT. Instead it
>
>
> Right; nothing *requires* the client to retry failed 0-RTT as 1-RTT; the
> application can decide it's willing to take the risk or use some other
> strategy.  But I'm not convinced that the TLS stack needs to decide on its
> own, without application input.
>

I'm just describing the only backwards-compatible, safe-by-default scheme I
can concoct; it's definitely kludgey and awkward. For this thread, my main
take-away from it is that separating streams doesn't actually help the
application; instead, it's a timing thing.


> Does a careful client really need the TLS stack to signal connection
> error?  Surely the TLS stack will provide a mechanism to inquire as to the
> connection state, and whether 0-RTT was rejected.  The application could
> implement its own delay.
>

Yep, absolutely; but it'd be client-side signaling, and it's not
backwards-compatible or safe-by-default to rely on.


> And I read what you wrote in the github issue about how saying "don't use
> 0-RTT" is not practical, but I don't believe that it is universally true.
> Some (careful) applications will rightly consider the tradeoffs and decide
> to not use it.  On the web ... there are different forces at play, and it
> may well take a (security bug) name and website to cause webapp creators
> and framework authors to take proper notice.  But I am having a hard time
> squaring your point about careful clients with the point about
> non-practicality.  If there are careful clients, they will heed warnings;
> if just using warnings is not practical, then what are the careful clients
> doing?
>

I actually share the optimism here. I think it's workable for careful
clients, and I wouldn't hold 0-RTT hostage to the DKG corner case. I think
we should ship it. All I'm trying to get at is that separate streams are
needless complexity, and they seem to be getting ignored by implementors
anyway.

Right now there's a very weird take in the draft: let the server figure it
out by marking the data as replayable. Well, actually, that doesn't work; on
the server side, the sections can't always be correlated, and even if they
were, an application would still need its own anti-retry strategy. It's
well-meaning, but it doesn't work. We should take it out.

Here's an example. Client has two tickets: AA and BB.

T1. Client makes 0-RTT request for /foo/ with ticket AA. Server receives
the request and marks it as replayable data, maybe uses AA to derive a
convenient idempotency/anti-replay key (like the X- header trick).

T2. Client was blocked. Tries again, doesn't re-use the ticket, per the
draft. So repeats with 0-RTT again, with ticket BB

Re: [TLS] The case for a single stream of data

2017-05-09 Thread Colm MacCárthaigh
On Tue, May 9, 2017 at 11:12 AM, Ilari Liusvaara 
wrote:
>
> Doesn't this imply that clients or CDN are using unsafe HTTP methods in
> 0-RTT data? Which is of course _seriously_ broken.
>

It doesn't really. First, the CDN may be acting as a Layer 4 TLS proxy.
Some CDNs sell these as SSL accelerators that work by moving the handshake
closer to clients and do almost nothing else. For example, S3 Transfer
Acceleration works like that:
http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html .
In that case, the TLS proxy isn't necessarily even aware of the
underlying protocol. Will they be able to resist the easy-to-enable 0-RTT
time savings?

But secondly, in the real world, these can be GETs, and that's how many APIs
are constructed. Why? Because even though it ignores correct REST
principles, people can test and compose these operations more easily from
the command line (e.g. curl). We've all seen many APIs like this.

Because HTTP specification expressly forbids any and all updates and
> writes using safe methods. Ignoring that causes very severe security
> vulernabilities even today (e.g., causes essentially undefendable CSRF
> attacks).
>

These aren't browser requests; they can be SDKs and other clients making
HTTP API requests. That's much more common too, by volume. CSRF isn't an
issue in cases like that.

I think everything I'm writing applies even for a careful REST-compliant
case too. Even if the client is using PUT or POST or DELETE or whatever,
the same kind of transactional semantics apply. All it takes is a
disconnect between the settings in any of the middle transport layers and
the expectations of the underlying protocol. I just think that disconnect
is very predictable.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] The case for a single stream of data

2017-05-09 Thread Colm MacCárthaigh
On Tue, May 9, 2017 at 9:41 AM, Salz, Rich  wrote:

> > It's actually impossible because a 0RTT request may be retried as a 1RTT
> > request and there's no way to correlate the two. So when the 1RTT request
> > shows up, we can't go "This might be a repeat" - for example the X- header
> > trick doesn't work in this case. It's subtle, which makes it even more
> > likely to trip on it.
>
> That also assumes that the 0RTT data will be a completely self-contained
> request.  I share the concern that Kyle and others have pointed out, that a
> single request will span the boundaries.
>

True; though I think in the partial-request case it's ok ... I think it's
just orphaned data. The mechanisms that prevent 0-RTT insertion across
different connections do seem robust.


> > I think the only approach that actually rigorously works is the
> > client-side one
>
> Attackers will not use well-behaved clients; does your approach still work?
>

Yes; it's about signaling to the client that "Your data may or may not have
been received". This is an ordinary failure mode for TCP and TLS and
something many clients have careful reasoning around. The difference with
0-RTT is now there's a bigger window of time during which that request may
get received, but that's it really. If the attacker replays, then it can be
received; and the attacker can use any client, but it doesn't change the
original well-behaved client's reasoning.

> > The second problem is that middle-boxes can break any signaling. For
> > example a CDN or TLS accelerator may enable 0-RTT towards the back-end
> > origin without enabling it to the original client. In this model, the
> > client has *no* way to reason about retries or replay.
>
> A CDN is not a middlebox.  As far as the client is concerned a CDN *is*
> the origin.


No, I don't think this works in transactional systems. For example, suppose
the client performs an update or write "through" the CDN, and 0-RTT is
being used on both sides. In the 0-RTT world, the CDN might be subject to
replay between the CDN and the origin. But as defined, the actual client
gets no visibility of that. That breaks careful clients. For example, they
may get a 500 back and assume that the request failed, without knowing that
the request may be replayed any time in the next 10 seconds and therefore
succeed.

This can all be made workable though; with a careful consideration of how
the signaling propagates; that's not in the draft at this time though.


> > That's really very broken and a serious violation of the transport layer
> > contract.
>
> Only if you believe CDN is a middlebox.  The transport layer contract is
> overridden by legal contracts or EULA :)
>

Of course, but I think this is a predictable security problem. The
implications are very subtle and may be exploited by attackers, while
people unknowingly enable the behavior. I realize I'm in dark corner cases
here; but I think we should approach all of this with the same rigor we
treat the TLS state machine and crypto proofs.
-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] The case for a single stream of data

2017-05-09 Thread Colm MacCárthaigh
On Tue, May 9, 2017 at 9:00 AM, Salz, Rich  wrote:

> To me, the argument against this comes down to this:  The 0RTT data has
> different security properties than the post-handshake data, and TLS
> implementations should not hide that difference from applications.
>

I absolutely agree with the sentiment, but the first problem is that we've
been concentrating on the server side, where it's actually impossible.

It's actually impossible because a 0-RTT request may be retried as a 1-RTT
request and there's no way to correlate the two. So when the 1-RTT request
shows up, we can't go "this might be a repeat" - for example, the X-header
trick doesn't work in this case. It's subtle, which makes it even more
likely to trip on it.

I think the only approach that actually rigorously works is the client-side
one that I outlined. That approach isn't very relevant to browsers, who
plan to retry aggressively anyway, but I think it's the only approach that
would work for a careful client. Interestingly, it doesn't really need
separate streams for the signaling to work. Though it'd be great to see a
formal proof in something like F*, Coq or TLA+.

The second problem is that middle-boxes can break any signaling. For
example a CDN or TLS accelerator may enable 0-RTT towards the back-end
origin without enabling it to the original client. In this model, the
client has *no* way to reason about retries or replay. That's really very
broken and a serious violation of the transport layer contract. I think we
should be more prescriptive here and say that if there are to be these
kinds of middle-boxes, then they need to honor maximum replay windows, and
they need to only enable 0-RTT from the client back. (e.g. client enabled,
but origin disabled is actually ok, but not the other way around).

In going through all of this, I'm assuming that the server at least has a
robust mitigation against 0-RTT replay, because not doing so is clearly
broken, and I'm not even entertaining how that could be supported. That
stateless form of mitigation is insecure and should die. If there's a
notion that server-side signaling should be kept to facilitate stateless
mitigation (which permits millions of replays) ... we shouldn't have any
time for that, because no one has a proposal for how the secrecy attacks
can be mitigated.


-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] The case for a single stream of data

2017-05-05 Thread Colm MacCárthaigh
I wanted to start a separate thread on this, just to make some small
aspects of replay mitigation clear, because I'd like to make a case for TLS
providing a single stream, which is what people seem to be doing anyway.

Let's look at the DKG attack. There are two forms of the attack, one is as
follows:

"Client sends a request with a 0-RTT section. The attacker lets the server
receive it, but suppresses the server responses, so the client downgrades
and retries as a 1-RTT request over a new connection. Repeating the
request".

In this case, server-side signaling such as the X-header trick doesn't work
at all. But thankfully this attack is equivalent to an ordinary
socket-interference attack. E.g. if an attacker today suppresses a server
response to an HTTP request, then the client will do its retry logic. It's
the same, and I think everything is compatible with today's behavior.

Next is the more interesting form of the attack:

"Client sends a request with a 0-RTT section. For some reason the server
can't reach the strike register or single use cache, and falls back to
1-RTT. Server accepts the request over 1-RTT.  Then a short time later, the
attacker replays the original 0-RTT section."

In this case, server-side signaling to the application (such as the neat
X-header trick) also doesn't work, and is not backwards compatible or secure
by default. It doesn't work because the server application can't be made
idempotent from "outside" the application, so any signaling is
insufficient; it's equivalent to the Exactly-Once message delivery
problem in distributed systems. Since a request might be retried as in case
1, it needs an application-level idempotency key, or a delay-and-retry
strategy (but replay will break this). There's some detail on all this in
the review. The end result is that a server-side application that was never
designed to receive duplicates may suddenly be getting exactly one duplicate
(that's all the attack allows, if servers reject duplicate 0-RTT).

What is actually needed here, I think, is client-side signaling. Careful
clients need to be made aware of the original 0-RTT failure.

So for example, an SDK that writes to an eventually consistent data store
may treat any 0-RTT failure as a hard failure, and *not* proceed to sending
the request over 1-RTT. Instead it might wait its retry period, do a poll,
and only then retry the request. If the TLS implementation signals the
original 0-RTT failure to the client, as if it were a connection error,
everything is backwards compatible again. Well, mostly; to be properly
defensive, the client's retry time or polling interval needs to be greater
than the potential replay window, because only then can it reason about
whether the original request succeeded or not. If there is a strict maximum
replay window, then this behavior is enforceable in a TLS implementation:
by delaying the original failure notification to the client application by
that amount.
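
Here's a rough sketch, in Go, of what that careful client looks like; the
hook names and the 10-second window are illustrative assumptions, not
anything in the draft:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // maxReplayWindow is an assumed hard cap on how long a rejected
    // 0-RTT section can still be replayed by an attacker.
    const maxReplayWindow = 10 * time.Second

    var errEarlyDataRejected = errors.New("0-RTT early data was rejected")

    // carefulSend treats a 0-RTT rejection as a hard failure: wait out
    // the replay window, poll whether the original write took effect,
    // and only retry if it did not. send and didApply are hypothetical
    // application hooks.
    func carefulSend(send func(early bool) error, didApply func() bool) error {
        err := send(true) // first attempt, over 0-RTT
        if err == nil {
            return nil
        }
        if !errors.Is(err, errEarlyDataRejected) {
            return err // some other failure: existing retry logic applies
        }
        // The rejected section may still be delivered by a replay at any
        // point inside the window, so block until it has provably expired
        // (a TLS implementation could insert this delay on our behalf)...
        time.Sleep(maxReplayWindow)
        // ...then check whether the original request landed after all.
        if didApply() {
            return nil
        }
        return send(false) // now it's safe to retry over 1-RTT
    }

    func main() {
        err := carefulSend(
            func(early bool) error {
                if early {
                    return errEarlyDataRejected // simulate a declined 0-RTT section
                }
                fmt.Println("retried safely over 1-RTT")
                return nil
            },
            func() bool { return false }, // poll: the original never landed
        )
        fmt.Println("result:", err)
    }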

Of course browsers won't do this, and that's ok. Browsers have decided that
aggressive retries is best for their application space. But careful clients
/need/ this; and it's not just about backwards compatibility. It is a
fundamental first-principles requirement for something that uses an
eventually consistent data store. We can say don't use 0-RTT, but that's
not practical, for reasons also in the review.

So if we want to fully mitigate DKG attacks, I think it is useful to
hard-cap the replay window, say that it MUST be at most 10 seconds. And
then, worst case, a client that needs to be careful can wait 10 seconds.
Note that the TLS implementation can do this on the client's behalf, by
inserting a delay. For these kinds of applications, this means that 0-RTT
delivers speed most of the time, but may occasionally slow things down by
10 seconds. I think that's an ok trade-off to make for backwards
compatibility.

But it also has implications for middle-boxes: a TLS reverse proxy needs to
either not use 0-RTT on the "backend" side, or it needs to use it in a very
careful way: accepting 0-RTT from the original client only if the backend
also accepts a 0-RTT section from the proxy. This is to avoid the case
where the client can't reason about the potential for a replay between the
proxy and the backend. It's doable, but gnarly, and slows 0-RTT acceptance
down to the round trip between the client and the backend, via the proxy.

That's one reason why the review suggests something else too:  just lock
careful applications out, but in a mechanistic way rather than a "good
intentions" way, by having TLS implementations *intentionally* duplicate
0-RTT sections.

O.k., so all of the above might be a bit hairy; but what I want to take away
from it at this stage is that splitting the early_data and application_data
at application level isn't particularly helpful; the server side can't
really use this information anyway, because of the Exactly-Once problem.
Client-side signaling does help though, and 

Re: [TLS] Idempotency and the application developer

2017-05-04 Thread Colm MacCárthaigh
On Thu, May 4, 2017 at 4:35 PM, Watson Ladd  wrote:

> Dear all,
>
> Applications have always had to deal with the occasional replay,
> whether from an impatient user or a broken connection at exactly the
> wrong time.


Unfortunately this isn't the case :( Not all applications are user-driven
in that way, and some are very careful about how they retry. In the review
I go through the example of an update to an eventually consistent data
store. In that case, the application will typically try to update, and if
that update times out, it will wait a time period, check for a conflict or
a success, and only then retry. At no point is a replay normal or
expected.

One could argue that Zero-RTT is not for applications like this, but why
not? They certainly do benefit from the speed-up. And so in the review I go
through reasons why it is likely that they will use it anyway, ranging from
how subtle and hard the review is, to inadvertent enabling of the feature
on something like an L4 TLS proxy which has no awareness of the underlying
protocol.

But they've generally been rare, so human-in-the-loop
> responses work. Order the same book twice? Just return one of them,
> and if you get an overdraft fee, ouch, we're sorry, but nothing we can
> do.
>
> 0-RTT is opt-in per protocol, and what we think of per application.
> But it isn't opt-in for web application developers. Once browsers and
> servers start shipping, 0-RTT will turn on by accident or deliberately
> at various places in the stack.
>

+1!


> At the same time idempotency patterns using unique IDs might require
> nonidempotent backend requests. But this is an easier problem than if
> we had nonidempotent brower requests: backends are much more
> controlled.
>
> If you are willing to buffer 0-RTT until completion before going to
> the thing that makes the response, you can handle this problem for the
> responsemaker. This will work for most applications I can think of,
> and you need to handle large, drawn out requests anyway. This sounds
> like it would work as a solution, but I am sure there are details I
> haven't discovered yet.
>

I think you're right; and we could enforce this in TLS by encrypting 0-RTT
under a key that isn't transmitted until 1-RTT. But this would also defeat
the point of the speed-up, especially for CDNs. It really does speed things
up for them to be able to send the request to the origin right away.


> In conclusion I think there is some thought that needs to go into
> handling 0-RTT on the web, but it is manageable. I don't know about
> other protocols, but they don't have the same kinds of problem as the
> web does with lots of request passing around. Given the benefits here,
> I think people will really want 0-RTT, and we are going to have to
> figure out how to live with it.
>

+1 I think it's manageable too. Just have servers check for dupes :)

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-04 Thread Colm MacCárthaigh
On Thu, May 4, 2017 at 12:12 PM, Erik Nygren  wrote:

> On Wed, May 3, 2017 at 11:13 PM, Eric Rescorla  wrote:
>
>>
>> 1. A SHOULD-level requirement for server-side 0-RTT defense, explaining
>> both session-cache and strike register styles and the merits of each.
>>
>
> I don't believe this is technically viable for the large-scale server
> operators most interested in 0-RTT.
>

I think it is (and I work at one of the biggest) ... but even if it weren't,
that would just imply that we can't have 0-RTT at all, not that it's ok to
ship an insecure version.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-04 Thread Colm MacCárthaigh
On Thu, May 4, 2017 at 12:39 PM, Nico Williams 
wrote:

> The SHOULD should say that the server-side needs to apply a replay cache
> OR fallback onto a full exchange when the 0-rtt data payload involves a
> non-idempotent operation.
>

I don't mean to be dismissive with this, but TLS stands for "Transport Layer
Security". The transport layer just isn't aware of what the operations are,
or whether they can be idempotent (99% of the time, the answer is "no").
Only the application can tell, but this violation of layers is what leads
to so many problems. I don't think it's workable.


-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-04 Thread Colm MacCárthaigh
On Thu, May 4, 2017 at 11:29 AM, Andrei Popov 
wrote:

>
> - Providers already work hard to maximize user affinity to a data
> center for other operational reasons; re-routing is relatively rare and
> quickly repaired by issuing a new ticket.
>
> Understood, but isn’t an attacker going to be able to re-route at will?
>

Yes, but I don't see the significance.  If the attacker reroutes the user,
or replays a ticket, to a different data center - the ticket won't work,
it'll degrade gracefully to a regular connection.  Of course the attacker
succeeded in slowing the user down, but that's possible anyway.

Maybe you're thinking of a strike register that shares a global namespace?
That would be an implementation error; tickets should be scoped to the
location they are issued from, and checked against its strike register (or
not used at all).
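
A quick sketch in Go of that scoping; the site identifier embedded in the
ticket is an illustrative assumption about the ticket format:

    package main

    import "fmt"

    // Ticket carries the identifier of the data center that issued it,
    // so a receiving server can tell whether its local strike register
    // is authoritative for it.
    type Ticket struct {
        SiteID string
        Opaque []byte
    }

    // accept0RTT declines early data unless this site issued the ticket
    // and the local strike register confirms first use. A ticket replayed
    // or re-routed to another data center degrades gracefully to a full
    // handshake instead of being accepted unchecked.
    func accept0RTT(localSite string, t Ticket, strikeFirstUse func(Ticket) bool) bool {
        if t.SiteID != localSite {
            return false // not authoritative here: fall back to 1-RTT
        }
        return strikeFirstUse(t)
    }

    func main() {
        seen := map[string]bool{}
        firstUse := func(t Ticket) bool {
            key := string(t.Opaque)
            if seen[key] {
                return false
            }
            seen[key] = true
            return true
        }

        t := Ticket{SiteID: "us-east-1", Opaque: []byte("ticket-123")}
        fmt.Println(accept0RTT("us-east-1", t, firstUse)) // true: first use at home site
        fmt.Println(accept0RTT("us-east-1", t, firstUse)) // false: replay caught locally
        fmt.Println(accept0RTT("eu-west-1", t, firstUse)) // false: wrong site, degrade to 1-RTT
    }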

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-04 Thread Colm MacCárthaigh
On Thu, May 4, 2017 at 11:22 AM, Andrei Popov 
wrote:

>
> - I don't think we'll have a problem implementing a single use cache,
> strike register, we have similar systems for other services, at higher
> volumes.
>
> … and these things work across geographically distributed datacenters,
> without negating the latency benefits of 0-RTT?
>

They won't work across geographically distributed data centers, no, but I
don't think that's a significant problem. Providers already work hard to
maximize user affinity to a data center for other operational reasons;
re-routing is relatively rare and quickly repaired by issuing a new ticket.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-04 Thread Colm MacCárthaigh
On Thu, May 4, 2017 at 10:07 AM, Andrei Popov 
wrote:

> IMHO what we have is a facility in TLS 1.3 that:
> 1. Requires extraordinary effort on the server side to mitigate replay
> (for all but the smallest deployments);
> 2. Offers no way for the client to determine whether the server is
> mitigating replay (before replay becomes possible);
>

I'm less worried about these problems. There's a nice property that the
operators of large deployments tend to also be technically sophisticated. I
don't think we'll have a problem implementing a single-use cache or strike
register; we have similar systems for other services, at higher volumes.
It's a cost of being big, and that's ok.

It will be possible to tell whether a service permits replays, by trying
them. If the service permits them, that's a pretty clear CVE, and the usual
incentives work as well as they usually do.


> 3. Is trivial to enable on the client and improves connection latency;
> 4. Eliminates a nonce that other protocols (used to) rely on.
>
> While it is true that there are cases where this facility is beneficial,
> there is no doubt that it will be widely misused, in both applications and
> protocols.
>

This doesn't need to be an anchor around the neck of the whole feature.
0-RTT is still an awesome speed benefit - if servers prevent replays, I
think we have a very good balance.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 8:13 PM, Eric Rescorla  wrote:

> I made some proposals yesterday
> (https://www.ietf.org/mail-archive/web/tls/current/msg23088.html).
>
> Specifically:
> 1. A SHOULD-level requirement for server-side 0-RTT defense, explaining
> both session-cache and strike register styles and the merits of each.
>
> 2. Document 0-RTT greasing in draft-ietf-tls-grease
>
> 3. Adopt PR#448 (or some variant) so that session-id style implementations
> provide PFS.
>
> 4. I would add to this that we recommend that proxy/CDN implementations
> signal which data is 0-RTT and which is 1-RTT to the back-end (this was in
> Colm's original message).
>

This all sounds great to me. I'm not sure that we need (4.) if we have
(1.). I think with (1.), recombobulating to a single stream might even be
best overall, to reduce application complexity, and it seems to be what
most implementors are actually doing.

I know that leaves the DKG attack, but from a client and server
perspective that attack is basically identical to a server timeout, and
it's something that systems likely have some fault tolerance around. It's
not /new/ broken-ness.


> Based on Colm's response, I think these largely hits the points he made
> in his original message.
>
> There's already a PR for #3 and I'll have PRs for #1 and #4 tomorrow.
> What would be most helpful to me as Editor would be if people could review
> these PRs and/or suggest other specific changes that we should make
> to the document.
>

Will do! Many thanks.

-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 6:59 PM, Viktor Dukhovni <ietf-d...@dukhovni.org>
wrote:

>
> > On May 3, 2017, at 9:39 PM, Colm MacCárthaigh <c...@allcosts.net> wrote:
> >
> > As it happens, DNS queries are not idempotent.  Queries have
> > side-effects,
>
> This is sufficiently misleading to be false.


What I'm trying to get at is that idempotency is hard. Even the simplest
things that seem idempotent often are not. It's really really hard to do a
deep review. And that's if people even know to perform the review.


> > Many providers throttle DNS queries (and TLS is intended as a mechanism
> > to help prevent the ordinary spoofability of DNS queries).
>
> Again the client is unauthenticated, throttling is by IP address, there's
> no need to repeat the same payload, indeed that's less effective since
> throttling is biased towards queries for non-existent names, ...
>

It's not always by IP address. Anti-DDOS is much more nuanced in my
experience, often taking the QNAME into account.
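
A rough sketch of the kind of throttle I mean, keyed on the client IP and
QNAME together (the rate and burst numbers are illustrative):

    import time
    from collections import defaultdict

    class QnameThrottle:
        # Token bucket keyed on (client IP, QNAME), not on IP alone.
        def __init__(self, rate=10.0, burst=20.0):
            self.rate = rate    # tokens refilled per second
            self.burst = burst  # bucket capacity
            self.buckets = defaultdict(lambda: (burst, time.monotonic()))

        def allow(self, client_ip, qname):
            key = (client_ip, qname.lower())
            tokens, last = self.buckets[key]
            now = time.monotonic()
            tokens = min(self.burst, tokens + (now - last) * self.rate)
            allowed = tokens >= 1.0
            self.buckets[key] = (tokens - 1.0 if allowed else tokens, now)
            return allowed

A replayed 0-RTT query against a throttle like this burns the budget for
that exact name, which is what makes the throttle-exhaustion attack work.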

>
> Throttling is mostly for UDP, for lack of BCP-38 implementation.  DNS
> over TLS *is* a good candidate for 0-RTT.  [ I would have chosen a more
> simple protocol for DNS security than TLS, but given that DNS over TLS
> seems to be moving forward, 0-RTT makes sense. ]
>

+1 to that too!

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 6:19 PM, Viktor Dukhovni 
wrote:

> One obvious use case for 0-RTT is DNS queries.  The query protocol is
> idempotent, and latency matters.  So for DNS over TLS, 0-RTT would be
> a good fit.   TLS session caches are not attractive on the DNS server
> given the enormous query volumes, but STEKs would be a good fit.
>

As it happens, DNS queries are not idempotent.  Queries have side-effects;
for example, Bind9 will rotate an RRset by one increment on each query.
Many providers charge by the DNS query. Many providers throttle DNS queries
(and TLS is intended as a mechanism to help prevent the ordinary
spoofability of DNS queries).
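
A toy model of that round-robin behaviour (the addresses are documentation
values), just to make the point that even a "read" mutates state:

    from collections import deque

    class RotatingRRset:
        # Round-robin RRset: answering a query rotates the records.
        def __init__(self, records):
            self.records = deque(records)

        def query(self):
            answer = list(self.records)
            self.records.rotate(-1)  # the side-effect of a "read"
            return answer

    rrset = RotatingRRset(["192.0.2.1", "192.0.2.2", "192.0.2.3"])
    print(rrset.query())  # ['192.0.2.1', '192.0.2.2', '192.0.2.3']
    print(rrset.query())  # ['192.0.2.2', '192.0.2.3', '192.0.2.1']

A replayed query skews that rotation, so even the simplest "read" isn't
idempotent here.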


-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 5:51 PM, Viktor Dukhovni <ietf-d...@dukhovni.org>
wrote:
>
> > On May 3, 2017, at 6:29 PM, Colm MacCárthaigh <c...@allcosts.net> wrote:
> >
> > Pervasive forward secrecy is one of the central security goals of modern
> > cryptography,
>
> Sure, given a sensible definition of forward-secrecy.  And near
> instantaneous deletion of session keys is not part of such a definition.
>

I want to re-run DH for every bit transferred! Of course I'm kidding, and I
take your point. There's a balance to strike.


> > STEKs aren't compatible with this goal. They may be acceptable
> operationally
> > today, but I think they must be borrowed time if the central goal is
> meaningful.
>
> We need to recall what thread-model is behind the desire for
> forward-secrecy.
> It is I think not controversial to say that forward is concerned with
> limiting
> the damage from end-point (especially server end-point) compromise, in
> which
> confidentiality of long-term keys (still held on the endpoint) is lost.
>

In the case of STEKs, the key may also be compromised without compromising
the endpoint. With multiple endpoints, there is usually some kind of
central STEK distribution system to synchronize the keys. And the key may
also be compromised due to weak crypto on the tickets themselves. So we
have to add those too to the threat model. But caches have similar
challenges, which might even be harder.


> Consider the fact that such compromise is unlikely to be detected and
> remediated
> instantaneously.  Indeed not only will the adversary have access for some
> time
> to sessions initiated post-compromise, he will also be able to impersonate
> the
> real server after the server is "secured" because certificate revocation
> will
> take days (or until the certificate expires) to take effect and more
> sessions
> can be compromised via MiTM attacks after the keys stolen.
>
> The incremental exposure of some fraction of a hour of sessions shortly
> before
> compromise pales in comparison with the much longer post-compromise
> exposure
> before the service is again secure.
>
> STEKs are simpler to design and secure than a complex session cache. This
> simplicity has security benefits.  Properly implemented STEKS have a
> lifetime
> that is much shorter than any plausible compromise exposure time window.
> Given
> this, it makes sense to not consider potential STEK compromise as loss of
> forward
> secrecy.
>
> Forward secrecy guards against disclosure of truly long-term key material,
> with
> a lifetime of weeks to years.  Once we get down to hours or minutes, it is
> quite
> sensible to take a coarser view of time, with similar exposure for sessions
> "just before" and "just after" compromise.
>
> Maximally sensitive traffic can take the penalty of avoiding session
> caching
> of any kind.
>

That does all make sense; it just hasn't really been how TLS is deployed.

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 5:25 PM, Martin Thomson 
wrote:

> I was responding to an overly broad statement you made.  In the
> discussion you also talk about timing side-channels and other ways in
> which information can leak.  Nothing we do at the TLS layer will
> prevent those from being created in applications.
>

Sure, but things we're doing at the TLS layer can make it much worse, as
in this case. I don't think we should be making attacks easier.


> Also, it might pay to remember that this is part of a larger context.
> Applications routinely retry and replay; if they didn't, users would.
>

In the larger context, not all HTTP calls are coming from user actions, or
from clients that retry in that way. Some clients need to be careful,
precisely to achieve idempotency or safety. The review details the reasons
why, and also why it is impractical for actors to separate out these cases
and simply "not use" 0-RTT, due to how layers work and systems
interoperate.

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 3:56 PM, Martin Thomson <martin.thom...@gmail.com>
wrote:

> On 3 May 2017 at 22:45, Colm MacCárthaigh <c...@allcosts.net> wrote:
> > This is easy to say; the TLS layer is the right place. It is not
> practical
> > for applications to defend themselves, especially from timing attacks.
>
> If you care about these attacks as much as it appears, then you can't
> reasonably take this position.


An analogy may help here. TLS has always offered anti-replay, it is one of
our core security guarantees, and the assumption that it is there is subtly
baked in to many applications.

Suppose we wanted to remove a different security guarantee: integrity
protection/tamper-proofing. Suppose we decided to offer a fast "No MAC"
mode for streamed uploads: it is faster, more stream-y (no need to wait for
each record to be complete), and TLS is really mostly about secrecy. But to
be "safe" we decide that it's only enabled optionally and sometimes, and we
always have to tell the application which data may have been tampered with.

The application can use its own MACing, or FECing, or whatever, and really
it should have been doing this from the days of plaintext HTTP, right!?
It's all the application's responsibility - we say, they can deal with it
[*]

Except MACing is hard, and they run into Lucky13 all over again, or just
use MD5 because they don't know any better, and so on.  Or worst of all: a
middle box decides to enable it on the front-end, with the backend having
no knowledge of what's happening.

In my view, this arrangement would plainly be crazy. We wouldn't consider
it. Instead, we provide protection for them using layering principles, and
TLS provides protections that TLS experts are good at.

And yet we do consider all of this for replay. I think it's because
collectively we haven't seen just how hard a problem idempotency and
application-level side-effects are, and we are less familiar with the
risks. In the review, I've included 5 real attacks to illustrate that the
problems are real. Several of them I can't see how to mitigate at the
application level, and I've been checking with experts on distributed
systems. How would you mitigate the throttling problem (attack 3) or the
timing problem (attack 4) on say a JVM object cache?


> We've historically done a lot to
> secure applications at a single point, and we're almost at the end of
> what we can reasonably do for them at this layer.  We need to think
> more hollistically and acknowledge that applications need to take some
> responsibility for their own security.
>

No we don't. Servers can prevent replay.

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 3:04 PM, Viktor Dukhovni <ietf-d...@dukhovni.org>
wrote:

>
> On May 3, 2017, at 4:27 PM, Colm MacCárthaigh <c...@allcosts.net> wrote:
>
> > On Wed, May 3, 2017 at 1:20 PM, Viktor Dukhovni <ietf-d...@dukhovni.org>
> wrote:
> >
> >> The kind of application whose security requirements preclude use of RFC
> 5077
> >> session tickets can and should likely also avoid both 0-RTT and session
> >> resumption entirely.
> >
> > I don't agree with this. Why should users of mobile devices, to pick one
> example,
> > have to choose between speed and the extra risk of data disclosure for
> their requests
> > and passwords?
>
> Their security requirements don't preclude use of RFC 5077 session tickets,
> they've been using them all along, and will continue to do so for the
> majority
> of destinations that will take quite some time to upgrade to TLS 1.3.
>
> As you might recall, I don't agree that STEKs are fundamentally less secure
> than remote session caches, I rather suspect that sensibly implemented
> STEKs
> are safer.
>

I'm not sure how any of that is relevant. Mobile users, like everyone else,
appreciate speed; they'll want to use 0-RTT. I'm sure they also appreciate
their data being less at risk, so forward secrecy is good for them too. I'm
saying they should be able to get both. Again, you can combine the security
properties of a STEK with a cache if you want, by encrypting the cache
entries the same way you would use a STEK.
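
To sketch what I mean (using an AES-GCM AEAD from the Python cryptography
package; the structure is illustrative, not any particular stack's
implementation) - the cache stores only sealed entries, deletes on read
for single use, and the sealing key can be rotated exactly like a STEK:

    import os
    import threading
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class EncryptedSessionCache:
        # Cache-style forward secrecy (delete on read) combined with
        # STEK-style sealing of the stored session state.
        def __init__(self):
            self.key = AESGCM.generate_key(bit_length=128)  # rotatable
            self.entries = {}
            self.lock = threading.Lock()

        def put(self, session_id, state):
            nonce = os.urandom(12)
            sealed = AESGCM(self.key).encrypt(nonce, state, session_id)
            with self.lock:
                self.entries[session_id] = (nonce, sealed)

        def take(self, session_id):
            with self.lock:
                entry = self.entries.pop(session_id, None)  # single use
            if entry is None:
                return None
            nonce, sealed = entry
            return AESGCM(self.key).decrypt(nonce, sealed, session_id)

Rotating self.key renders any entries sealed under the old key unreadable,
which is the same forward-secrecy lever a short STEK lifetime gives you.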


>
> [ I do concede that the current OpenSSL implementation leaves too much of
> the
> responsibility of doing STEKs right to applications, and I will endeavour
> to
> fix that.  It is not especially difficult to implement a default-on key
> rotation
> mechanism. ]
>

Just an aside related to that: it can be useful to fuzz ticket lifetimes a
bit so all of the tickets from a STEK don't expire at exactly the same
time. Simultaneous expiry can lead to a lot of painful renegotiations
happening at once.
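Something as simple as this avoids the synchronized expiry (the 24-hour
base and the 10% fuzz are made-up numbers):

    import random

    def fuzzed_lifetime(base_s=86_400, jitter=0.10):
        # Fuzz the advertised lifetime so tickets minted under one STEK
        # don't all expire, and renegotiate, at the same instant.
        return int(base_s * (1.0 + random.uniform(-jitter, jitter)))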

> >> Second-guessing the server's design by looking at ticket sizes seems
> >> rather contrived.
> >
> > It's not a second guess. If the ticket size is smaller than the RPSK,
> > then it provably can not have been self-encrypted. But I agree that it
> > says nothing about the server-side security. They might be posting the
> > keys to twitter.
>
> As you yourself concede it is not the size that matters. It is a very
> marginal indication of the security of the remote session. The server may
> wish to decorate the session id with additional data (say which
> data-centre issued the session) and fail your test, and yet be doing the
> type of session caching you're asking for.  The client should either
> trust the server to cache sessions correctly (this would be the default
> client behaviour in most cases), or it can choose to forgo session
> re-use.
>

True.


> Feel free to malign bad implementations of STEKs, but the basic mechanism
> is
> sound.
>

"sound" goes too far. Pervasive forward secrecy is one of the central
security goals of modern cryptography, some schemes go so far as to bake in
very frequent key agreement and ratcheting. STEKs aren't compatible with
this goal. They may be acceptable operationally today, but I think they
must be borrowed time if the central goal is meaningful.

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 2:20 PM, Nico Williams  wrote:

> It's what Kerberos has been doing for decades.  RFC4120 (before that
> RFC1510).
>

I'll take your word for it!


> > > > Type 2.1 - Ticket intended for 0-RTT, does include the ticket age
> > > > (maybe not in the ticket itself, but somewhere in the handshake),
> > > > can only be used once.
> > >
> > > No.  Give advice.  Do not remove these features.
> >
> > I think the "can only be used once" for 0-RTT needs to be firm.
> > Otherwise 0-RTT mode is insecure.
>
> I don't agree: the application may not care.
>

No, it's still insecure ... because it may matter to the application, and
worse still the application owner may not even realize that. The existence
of some rare environments where one can truly, deeply, understand the
idempotency and side-effect problems and fully reason about their
implications does not invalidate that. For security, we must assume the
worst, not hope for the best.

Also: rejecting duplicates is safe in both environments. The main downside
is the cost to operators, but I'm not sympathetic to an argument that costs
should be cut by pushing significant risk downstream.


> > > > Type 2.2 - Same as 2.1, but required to be smaller than RPSK in
> > > > size, to prevent self-encryption.
> > >
> > > I don't grok this.
> >
> > Self-encrypting tickets require STEKs and all of their problems. [...]
>
> Can you elaborate?  (I don't follow TLS WG that closely.  I'm from
> KITTEN WG.)
>

Sure ... https://www.ietf.org/mail-archive/web/tls/current/msg23100.html

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 1:20 PM, Viktor Dukhovni 
wrote:

> The kind of application whose security requirements preclude use of RFC
> 5077
> session tickets can and should likely also avoid both 0-RTT and session
> resumption entirely.


I don't agree with this. Why should users of mobile devices, to pick one
example, have to choose between speed and the extra risk of data disclosure
for their requests and passwords?

> Second-guessing the server's design by looking at ticket sizes seems
> rather contrived.
>

It's not a second guess. If the ticket size is smaller than the RPSK, then
it provably can not have been self-encrypted. But I agree that it says
nothing about the server-side security. They might be posting the keys to
twitter.

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 12:35 PM, Nico Williams 
wrote:

> No, please don't remove the obfuscated ticket age.  Either make it
> encrypted or leave it as-is.
>

If it is to be encrypted, say AEAD'd, it needs to be encrypted under a key
that the client and server share, because the client needs to modify the
age before sending. Moving the age to its own message, post Client-Hello,
could solve that - so it comes out of the ticket.

That scheme is a bit weird and circular: the server would have to "use" the
ticket to derive a key that decrypts the age, to then decide if it should
"really use" the ticket for 0-RTT data. That seems pretty convoluted to me
- could be a fairly complicated security proof.

> > Type 2.1 - Ticket intended for 0-RTT, does include the ticket age
> > (maybe not in the ticket itself, but somewhere in the handshake), can
> > only be used once.
>
> No.  Give advice.  Do not remove these features.
>

I think the "can only be used once" for 0-RTT needs to be firm. Otherwise
0-RTT mode is insecure.


> > Type 2.2 - Same as 2.1, but required to be smaller than RPSK in size, to
> > prevent self-encryption.
>
> I don't grok this.
>

Self-encrypting tickets require STEKs and all of their problems. A client
could say "Don't send me a STEK-based ticket", and this can be enforced by
requiring those tickets to be smaller than the RPSK in size.  The client at
least knows that the ticket couldn't possibly have been self-encrypted.
Though obviously it says nothing about how good a job the server is doing
of keeping the keys secret.
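
The enforcement is a one-line comparison; a sketch, assuming the client
holds the resumption PSK (RPSK) it was provisioned:

    def plausibly_not_self_encrypted(ticket, rpsk):
        # A self-encrypted ticket must wrap at least the PSK itself, so
        # anything shorter than the RPSK provably does not embed it. The
        # converse does not hold: a large ticket may still be a cache
        # handle, and says nothing about server-side key handling.
        return len(ticket) < len(rpsk)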

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 11:29 AM, Nico Williams 
wrote:

> On Wed, May 03, 2017 at 12:10:12PM -0400, Viktor Dukhovni wrote:
> > > On May 3, 2017, at 12:01 PM, Salz, Rich  wrote:
> > > The protocol design should avoid setting traps for the unwary.
> >
> > No, that responsibility falls on libraries.  STEKs are not a trap for the
> > unwary.  Libraries that support static session tickets by default can be
> > viewed as such a trap.  So the onus to fix this is on us (OpenSSL team)
> > not the TLS protocol.
>
> A big +1 to this.
>
> I think it would terrible if we couldn't have resumption at all because
> one common implementation mishandles old key deletion.
>

With the improvements in 1.3, all of this FS concern only pertains to 0-RTT
data, not resumption in general. One solution would be to have two or three
sub-types of ticket exchanges:

Type 1 - same as now, except remove the ticket age, generally intended for
resumption. Can be used multiple times.

Type 2.1 - Ticket intended for 0-RTT, does include the ticket age (maybe
not in the ticket itself, but somewhere in the handshake), can only be used
once.

Type 2.2 - Same as 2.1, but required to be smaller than RPSK in size, to
prevent self-encryption.

Though honestly, I'm not even sure 2.2 is a great idea any more, maybe too
much complexity, and we can just measure the size and enforce things that
way.

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-03 Thread Colm MacCárthaigh
On Wed, May 3, 2017 at 2:15 AM, Hannes Tschofenig <hannes.tschofe...@gmx.net
> wrote:
>
> First, talk about tickets and point out that distributing the session
> key for encrypting the ticket content (Session Ticket Encryption Keys
> (STEKs)) is a challenge and raises security concerns. You point to
> Akamai CDN and their huge number of hosts.
>

I used Akamai as a reference as they are probably the world's largest
deployment of TLS, measured by hosts. But to be clear; Akamai has always
done great work on securing keys and I know it's something they take very
seriously.


> Which approach is best depends on your infrastructure.
>

Here I want to be more opinionated than that. I think that forward secrecy
is important and is more secure, and should always be provided. Its absence
is a security issue, so it's appropriate to highlight in a security review.
It's possible in all cases, as shown, and the trade-off is unnecessary from
first principles.

The fundamental problem with STEKs is that there is no forward secrecy, and
in the case of TLS1.3, no FS for critical data. It's an unfortunate
alignment, as it really undoes a lot of the collective good work to deploy
PFS in practice.

Deploying the PFS cipher suites took nudging providers along, and was not
without cost. I think the same thing may happen with STEKs in the
not-too-distant future. People seem to care about getting forward secrecy.
Smart customers with particularly sensitive workloads will insist.

> What I could, however, imagine is to add either in the TLS handshake or
> at the application layer something like a timestamp or a sequence number
> so that the server-side can check for replays. This is what has been
> done with other security protocols in the past. Is the TLS layer the
> right place or the application layer? Hard to say.
>

This is easy to say; the TLS layer is the right place. It is not practical
for applications to defend themselves, especially from timing attacks.

> You also suggest that 0-RTT supports PFS. The definition of PFS
> unfortunately is not super helpful here when there are so many keys in
> play. The PFS definition talks about the loss of the long-term key. What
> you care about is the potential loss of the STEKs, which is not a
> long-term key.
>

As the crypto-shortcuts paper showed, in some cases servers had STEKs in
place for the duration of the experiment (months) and perhaps years.


> You write: "Any client that attempts to use a ticket multiple times will
> also likely leak the obfuscated_ticket_age value, which is intended to
> be secret." The obfuscated_ticket_age is not secret - it is sent in the
> clear. What is supposed to be kept confidential is the ticket age. I
> have been wondering myself what the privacy value of the
> obfuscated_ticket_age and the ticket age is, and I am still not
> convinced that it provides much benefit.
>

Sorry, you're right, it's the ticket_age_add which should remain secret. My
understanding is that as NewSessionTicket messages occur post-handshake,
they are encrypted under the session keys, and so that value is secret.
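
For reference, the obfuscation itself is just modular addition of that
secret value. A sketch (the ticket_age_add constant below is a made-up
stand-in for the random value carried in NewSessionTicket):

    MOD = 2**32

    def obfuscate(age_ms, ticket_age_add):
        # Client: add the server-chosen secret to the age, mod 2^32.
        return (age_ms + ticket_age_add) % MOD

    def deobfuscate(obfuscated_age, ticket_age_add):
        # Server: only a holder of ticket_age_add recovers the real age.
        return (obfuscated_age - ticket_age_add) % MOD

    assert deobfuscate(obfuscate(12_345, 0x5EEDF00D), 0x5EEDF00D) == 12_345

A client that re-uses a ticket sends two obfuscated ages whose difference
is just the real elapsed time between the connections, which is exactly
the correlation material the obfuscation is meant to deny.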



>
> Ciao
> Hannes
>
> PS: You talk about OATH in this paragraph:
>
> "
> To avoid simple spoofing risks, many such systems perform throttling
> post-authentication. For example the request may be signed
> cryptographically (see the AWS SIGv4 signing protocol or the OATH
> signing process), that signature is verified prior to throttling. This
> post-authentication property is one reason why such protocols are
> designed to be extremely fast to verify, which often means as much
> cryptography as possible must be pre-computed, making random nonces
> infeasible in many cases.
> "
>
> What you actually mean is OAuth (OATH is a different effort also related
> to earlier IETF work on one-time-passwords). Your reference points to an
> old OAuth specification that has been replaced by OAuth 2.0, which does
> not use the same application layer signing anymore.
>

Thanks for letting me know!


>
> On 05/02/2017 04:44 PM, Colm MacCárthaigh wrote:
> > On Sunday at the TLS:DIV workshop I presented a summary of findings of a
> > security review we did on TLS1.3 0-RTT, as part of implementing 1.3 in
> > s2n. Thanks to feedback in the room I've now tightened up the findings
> > from the review and posted them as an issue on the draft GitHub repo:
> >
> > https://github.com/tlswg/tls13-spec/issues/1001
> >
> > I'll summarize the summary: Naturally the focus was on forward secrecy
> > and replay. On forward secrecy the main finding was that it's not
> > necessary to trade off Forward Secrecy and 0-RTT. A single-use session
> > cache can provide it, and with the modification that ekr has created
> > in https://github.com/tlswg/tls13

Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-02 Thread Colm MacCárthaigh
On Tue, May 2, 2017 at 8:17 PM, Eric Rescorla  wrote:

> Colm,
>
> Thanks for your review. Interesting stuff.
>

Thanks too for taking the time to read it.


> Scrolling ahead to the recommendations, I see you have:
>
> * Require implementations to robustly prevent Ticket re-use for 0-RTT.
>
> This seems like a good idea. I think it's arguable that we got a bit
> nihilistic about this, and as you say, you can do a pretty good job of
> reducing replay within a given data center using some local
> anti-replay mechanism. I would tend to think that a strong SHOULD is
> what you want here, as a MUST is going to be a lot more like a 6919
> MUST (BUT WE KNOW YOU WON'T).
>

A provider choosing to allow replays to their own applications is on them,
they can own the stack top to bottom, take the risk, have a response team
handle attack events, etc ... That's a legitimate case for "SHOULD", there
are exceptions to everything.

But at the same time, what I think we should care most about is the
interoperability case. So for example, if a TLS library, a web server, or
an upstream CDN, can break the assumptions of a down-stream service or
application, that's where we'll see CVEs cropping up. In those cases, I
think the problem is the upstream component, breaking long-held and fair
assumptions, and it feels like a "MUST".

I'm not sure how to strike a balance, so chose the more secure option.


> I understand from your document and our previous conversation that you
> believe single-use tokens are easier to implement on the server side,
> but I'm not sure I really follow your argument. If you want to provide
> some short text on that that we can all agree on, then I think we
> could incorporate this as well.
>

It's probably not important to get into a single-use token cache in the
draft itself. That was just intended for implementors. One thing that makes
a single-use cache slightly easier to implement is that reads and writes to
keys need not be sequenced relative to any kind of checkpoint.

With a strike register, some kind of checkpoint process is needed. A simple
one is to start with a clean slate, accept only writes for a time period
equivalent to the replay window, and then go read-write. But that kind of
modality is, I think, required. With a single-use cache, it need only
sequence the reads and writes to individual keys. Though it must store much
much more state, so there are different trade-offs to consider.
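
A minimal sketch of the contrast, with in-process structures standing in
for whatever distributed store a real deployment would use:

    import threading
    import time

    class SingleUseCache:
        # Delete-on-read: per-key sequencing is the only ordering needed.
        def __init__(self):
            self.entries = {}
            self.lock = threading.Lock()

        def put(self, ticket_id, state):
            with self.lock:
                self.entries[ticket_id] = state

        def take(self, ticket_id):
            with self.lock:
                return self.entries.pop(ticket_id, None)  # second take: None

    class StrikeRegister:
        # Write-only during a warm-up equal to the replay window, so a
        # fresh (empty) register can't be used to slip a replay through.
        def __init__(self, replay_window_s):
            self.seen = set()
            self.lock = threading.Lock()
            self.ready_at = time.monotonic() + replay_window_s

        def first_use(self, ticket_id):
            with self.lock:
                duplicate = ticket_id in self.seen
                self.seen.add(ticket_id)
            if time.monotonic() < self.ready_at:
                return False  # warm-up: record it, but force 1-RTT anyway
            return not duplicate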


> * Partial mitigation for Gilmor attacks: deliberately duplicate 0-RTT
>   data
>
> This seems like some sort of extra-grease, but I'm kinda skeptical
> that anyone is going to do it. Perhaps it would be useful to add it to
> draft-ietf-tls-grease?
>

Grease is a good place for it for sure.


> * Require TLS proxies to operate 0-RTT transitively
>
> I've read this several times but I have to admit I don't understand
> it. Do you think you could rephrase?
>

What's in the draft is that there should be an application profile and that
TLS implementations should provide applications with special functions to
tell apart the early and regular data. The problem is that doesn't work
with TLS proxies (like CDNs) ... because the proxies are recombobulating
the data as a single stream to the origin. So the origin can't tell which
data was replayable.

If 0-RTT replay is tolerated, I think that's a big gnarly problem leading
to CVEs. If we prevent replay by other means, like the strike register,
it's not so big a deal.

What I was suggesting was that such proxies always send an outgoing 0-RTT
section to the origin that exactly lines up with the 0-RTT section that
came in on the front end. It's quite hard to make that all work; the CDN
would have to accept 0-RTT only if the origin also does. But I can't see
another way that really works, especially for Layer 4 accelerators. Again,
if 0-RTT replay is forbidden, not nearly as big a deal.


> The following doesn't appear in your recommendations, but I think you
> also think we should:
>
> * Adopt something like PR#998 to make each PSK_id correspond to a
>   different PSK even when they are issued on the same connection.
>

Big +1 to this.


> For the reasons Viktor indicated, I tend to think this is inadvisable.
>

I've been thinking more about this. What if tickets are single-use when
used with 0-RTT, but otherwise allowed for multiple uses? If that's
acceptable, then could we drop the age from tickets purely used for
resumption? That way the age would only appear on tickets that are used
once.

I think that way we get everything.

> * Remove ticket_age
>
> This doesn't seem like a good idea: if you are implementing a strike
> register, you want to use ticket_age to help distinguish "fresh" from
> "unknown state". I.e., the ticket_age + the ticket issue time allows
> you to determine approximately when a CH was sent, and any CH that
> is significantly older or newer than now is forced back into 1-RTT.
>

Yep, I was wrong about this, I get it.
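
For implementors, a sketch of that freshness check (the 10-second window
is an assumed tolerance, not a number from the draft):

    def accept_for_0rtt(ticket_issue_time, claimed_age_ms, now, window_s=10.0):
        # Approximate when the ClientHello was sent: the ticket issue
        # time plus the age the client claims for the ticket.
        claimed_send_time = ticket_issue_time + claimed_age_ms / 1000.0
        # Anything significantly older or newer than "now" gets forced
        # back into a full 1-RTT handshake.
        return abs(now - claimed_send_time) <= window_s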

Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-02 Thread Colm MacCárthaigh
On Tue, May 2, 2017 at 5:51 PM, Viktor Dukhovni 
wrote:

> Some choices are driven by practical engineering considerations.


> The ticket lifetime is likely to be considerably longer than the
> replay clock window.  The server should indeed reject replays,
> but if it is able to flush the replay cache faster, it may be able
> to handle that job a lot more efficiently under load.
>

Good point! It's also common for caches to implement LRU, where a shorter
lifetime is better.

> What is a likely ticket lifetime for a server that supports 0-RTT
> (let's assume an HTTP server)?


I'm going to guess that providers will set it as high as they can ... every
little helps. So 7 days.


> How wide a replay clock window is likely reasonable (to allow for RTT and
> TCP retransmission delays)?
>

I think we have to assume that a replay attempt could come at any time. A
time-based mitigation doesn't work.

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-02 Thread Colm MacCárthaigh
On Tue, May 2, 2017 at 4:49 PM, Peter Gutmann 
wrote:

> Benjamin Kaduk  writes:
>
> >I thought TLS clients were supposed to have even worse clocks (in terms of
> >absolute time) than Kerberos clients.
>
> Many of the devices I work with don't have clocks (at best they have non-
> persistent monotonic counters), so I guess that's true in some sense...
>

This whole problem of needing client-side clocks, and having to obfuscate
an age, goes away if we remove the ticket age entirely.

Hopefully the security review makes a strong case that the age is fairly
useless from a security point of view. Even with the age, an attacker can
still generate millions to billions of replays. Even with very conservative
numbers, e.g. to just one host, the attacker can still certainly generate
tens of thousands of replays within the permitted window.  Better to
require servers to reject duplicates (when used with Zero-RTT), and leave
it at that.
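
The back-of-the-envelope arithmetic (all numbers assumed for illustration,
not measurements):

    replay_window_s = 10    # assumed tolerance in the clock-based check
    requests_per_s = 5_000  # a modest sustained rate against a single host
    hosts = 1

    print(replay_window_s * requests_per_s * hosts)  # 50,000 replays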

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-02 Thread Colm MacCárthaigh
On Tue, May 2, 2017 at 11:39 AM, Benjamin Kaduk  wrote:

> I thought TLS clients were supposed to have even worse clocks (in terms of
> absolute time) than Kerberos clients.  The current ticket_age scheme only
> requires the client's clock *rate* to be reasonable, not its absolute time.
>

Here I have some data. Over 7 days of examining requests from low power
devices, 1 in 100 devices had a clock drift of at least 2 seconds. One in
1,000 had a drift of at least 43 seconds, and the worst offender had
drifted by years.


-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-02 Thread Colm MacCárthaigh
On Tue, May 2, 2017 at 11:31 AM, Viktor Dukhovni <ietf-d...@dukhovni.org>
wrote:

>
> > On May 2, 2017, at 2:15 PM, Colm MacCárthaigh <c...@allcosts.net> wrote:
> >
> > In that case, the only reason I see to stop using tickets multiple
> > times is to protect the obfuscated age. It reads to me like its purpose
> > would just be defeated. Is it really that hard for clients to use a
> > 1-for-1 use-a-ticket-get-a-ticket approach?
>
> Yes, it is difficult to do 1-for-1.  In postfix there are parallel client
> processes
> reading a shared session cache, and parallel writers updating that cache,
> and without
> major changes to the code, when two writers update the cache back to back
> only one
> ticket (really SSL_SESSION object) is saved.  Under load, many clients
> would not
> find a ticket at all.
>

That makes sense to me. Thanks for the detail.

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-02 Thread Colm MacCárthaigh
On Tue, May 2, 2017 at 11:00 AM, Nico Williams 
wrote:

> I would think that the ticket itself is enough for that when using
> 0-rtt.  I.e., if you don't want connection correlation to be possible,
> you can't use 0-rtt.


I don't think so. If the ticket is encrypted when it is issued, I don't
follow how it could be used to correlate the original connection with the
0-RTT connection.

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-02 Thread Colm MacCárthaigh
On Tue, May 2, 2017 at 11:08 AM, Viktor Dukhovni 
wrote:

> Yes, if the change is narrowly tailored to 0-RTT, *and* if server TLS
> stacks
> don't stop supporting ticket reuse for "normal" (not 0-RTT) sessions, then
> I have no direct concerns with changes that affect 0-RTT alone.
>

Great - I added a small errata comment on the github issue just recording
that too.

In that case, the only reason I see to stop using tickets multiple times is
to protect the obfuscated age. It reads to me like its purpose would just
be defeated. Is it really that hard for clients to use a 1-for-1
use-a-ticket-get-a-ticket approach?
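
The bookkeeping I have in mind is roughly this shape (a sketch; the hard
part in practice is coordinating parallel readers and writers on a shared
cache):

    import threading
    from collections import deque

    class TicketPool:
        # Use-a-ticket-get-a-ticket: pop one ticket per connection, and
        # deposit whatever new tickets the server issues in its place.
        def __init__(self):
            self.pool = deque()
            self.lock = threading.Lock()

        def take(self):
            with self.lock:
                # Empty pool means: fall back to a full handshake.
                return self.pool.popleft() if self.pool else None

        def deposit(self, tickets):
            with self.lock:
                self.pool.extend(tickets)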

-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-02 Thread Colm MacCárthaigh
On Tue, May 2, 2017 at 10:33 AM, Viktor Dukhovni 
wrote:
>
> I believe that the proposed change is well intentioned but
> counter-productive.
>

Note that the recommendation in the review is:

 'TLS1.3 should require that TLS implementations handling 0-RTT "MUST"
provide a mechanism to prevent duplicate tickets from being used for 0-RTT
data'

it is not quite about the general use of tickets - only as they pertain to
0-RTT data.  My understanding is that 0-RTT is not particularly interesting
for SMTP, so would that be ok?


-- 
Colm


Re: [TLS] Security review of TLS1.3 0-RTT

2017-05-02 Thread Colm MacCárthaigh
On Tue, May 2, 2017 at 10:39 AM, Nico Williams 
wrote:

> With existing APIs, dealing with "pools of meaningfully distinct
> tickets" seems meaningfully non-trivial.
>

I would actually prefer if the client could request N tickets, but was
advised that this was too large a change to the protocol.

> > There's also an observation there that it should really be that
> > clients "MUST" use tickets only once. Any re-use likely discloses
> > the obfuscated ticket age, which is intended to be secret. Right now
> > it's a "SHOULD".
>
> Why should ticket age disclosure be a problem?  How does ticket one-time
> use not do the same?
>

The draft says that it is to prevent connection correlation attacks.

-- 
Colm


Re: [TLS] Current TLS 1.3 state?

2017-04-05 Thread Colm MacCárthaigh
On Wed, Apr 5, 2017 at 2:03 AM, Karthikeyan Bhargavan <
karthik.bharga...@gmail.com> wrote:

>
> What I am less confident about is the secure usage of features like 0-RTT,
> 0.5 RTT, and post-handshake authentication.
> Many researchers have looked at these aspects (and they can correct me if
> I am wrong) but the security guarantees we can prove for these modes is
> much more limited than for the regular 1-RTT handshake. My concern is that
> these features will inspire new usage patterns to emerge for TLS 1.3 that
> have not been adequately studied. I am not sure what we can do about that
> except maybe work harder on the security considerations.
>

I'll be at TLS:DIV and presenting on 0RTT, which is forcing me to finally
nail down all of my concerns in a more concrete and data-driven form. I do
think it's fair to say that 0RTT is known to be insecure, in the sense that
there are real-world attacks it will enable, and I haven't seen any of
those disputed. Whether we should accept those is a matter of trade-offs;
I don't think we should, as they seem quite serious and really point the
gun at the feet of the user.  A warning saying 'You shouldn't use these
without an application profile' is already failing in the market, with
vendors deploying 0RTT anyway in unsafe cases. It goes against all of the
neat encapsulation we've been trying to build for secure primitives.

So for now, here's my high level summary:

Replayability issues:

0RTT data is replayable, we know this. It includes a weak clock-based
protection against replay. If we take the CDN use-case, and realistic
values for latency and clock drift, the mitigation ends up permitting many
billions of replays. This is much much more than similar issues that have
been brought up before (e.g. browser retries).

1. Applications must be coded defensively against replay - e.g. they must
be idempotent. But this is hard - and for mutating actions can only truly
be achieved with tight-bound transactions at the application layer.
Enclosing transactions don't work. But 0RTT is being incorporated "below"
the application layer in a way that is very very likely to lead to a
mismatch of expectations.

2. We can say that 0RTT should only be used for non-mutating actions. But
this is bad for two reasons.

Reason "A" is that many services have throttles, even for reads, to prevent
overload and abuse. If an attacker can replay 0RTT data and exhaust a
throttle, that completes a denial-of-service attack.

Reason "B" is that 0RTT is a very effective way to improve the performance
of uploads. For architectural reasons, it's common for users to send photos
over messaging, and for these photos to be uploaded to storage engines over
TLS, while the URL is shared over the messaging protocol. This should be a
good and valid use of 0RTT, but it certainly mutating and requires
idempotency.

Suggested mitigation for issues 1 and 2:
The 0RTT section should be unacknowledged at the TLS protocol layer. The
client should send it, but the TLS layer should have no knowledge of
whether or not it was decrypted/accepted. Instead the application should
either tolerate it being re-sent as part of the regular application data,
or should acknowledge it explicitly at the application layer.


Side-channel defensiveness issues:

3. Replaying 0RTT data makes statistical analysis of side-channels far
easier, at the application level and at the crypto level.

At the crypto level, 0RTT opens us up to implementation side-channels that
merely decloak the plaintext (rather than the key) in a way we haven't
quite had before. The attacker can feed in the ciphertext billions of times
and learn things about it. The previous mitigation would help with this.

At the application level, 0RTT makes it much easier to use
application-level side channels. Here's a simple example: User requests
some content from a CDN - what was it? Suppose we have a billion candidate
URLs we want to probe and see what it was. We can make trial downloads of
our candidates from the CDN and see what's warm in the cache; that helps us
narrow it down. But with 0RTT we can also replay the request against
other/cold CDN nodes, and cause them to fetch the target resource, which
makes this kind of attack exponentially easier - to the point that it is
trivial. There are neat combinatorial techniques that make finding the
needle in the haystack easy. That's a pretty big dent in secrecy.

4. 0RTT data loses Forward Secrecy.

This is a big deal because FS is one of our goals for TLS1.3. And it's a
really big deal because 0RTT is where the most valuable parts of a session
are likely to be. It's the URLs you're accessing, it's your authentication
tokens, and your credit card number. These all go in the early phase of
real-world requests precisely because the server needs them to process the
request. If we don't provide FS for these, but do for the HTML you are
downloading, is that really a wise combination of trade-offs? It seems very
strange.

Suggested mitigation 

Re: [TLS] Re: Requiring that (EC)DHE public values be fresh

2017-01-02 Thread Colm MacCárthaigh
On Mon, Jan 2, 2017 at 11:43 AM, Yoav Nir  wrote:

> I’m assuming that the server generates private keys and saves them to a
> file along with the time period that they were used, and another machine in
> a different part of the network records traffic. It’s not so much that the
> clocks need to be accurate, as that they need to be synchronized, and there
> will still be some misalignment because of (variable) latency.
>
> I guess we are making guesses about systems that haven’t been written yet.
>

Logging the actual session keys is a feature that some Enterprise
appliances have today, and that would continue to work in all scenarios
(sadly). It's not that much data to log (far less than a request log for
example). In that case it's often left unindexed and simple tools like grep
are used for ad-hoc decryption requests, which are typically rare enough not
to merit anything better.

For simplicity's sake, I'd prefer single-use ECDHE, rather than
time-delimited. Mostly because it's simpler to implement. The current
generation of IOT and other small embedded systems are already at the point
where they can do this kind of thing.

-- 
Colm


Re: [TLS] Additional warnings on 0-RTT data

2016-11-24 Thread Colm MacCárthaigh
On Wed, Nov 23, 2016 at 10:44 PM, Christian Huitema <huit...@huitema.net>
wrote:

> On Wednesday, November 23, 2016 7:20 PM, Colm MacCárthaigh wrote:
> >
> > Prior to TLS1.3, replay is not possible, so the risks are new, but the
> > end-to-end designers may not realize they need to update their threat
> > model and just what is required. I'd like to spell that out more than
> > what's there at present.
>
> Uh? Replay was always possible, at the application level. Someone might
> for example click twice on the same URL, opening two tabs, closing one at
> random. And that's without counting on deliberate mischief.
>

Much more than browsers use TLS, and also more than HTTP. There are many
web service APIs that rely on TLS for anti-replay, and do not simply retry
requests. Transaction and commit protocols for example will usually have
unique IDs for each attempt.

But even if this were not the case, there are other material differences
that are still relevant even to browsers. Firstly, an attacker can replay
0-RTT data at a vastly higher rate than they could ever cause a browser to
do anything. Second, they can replay 0-RTT data to arbitrary nodes beyond
what the browser may select. Together these open new attacks, like the
third example I provided.

-- 
Colm


Re: [TLS] Additional warnings on 0-RTT data

2016-11-23 Thread Colm MacCárthaigh
On Wed, Nov 23, 2016 at 8:40 PM, Martin Thomson <martin.thom...@gmail.com>
wrote:

> On 24 November 2016 at 15:11, Colm MacCárthaigh <c...@allcosts.net> wrote:
> > Do you disagree that the three specific example security issues
> > provided are realistic, representative and impactful? If so, what
> > would persuade you to change your mind?
>
> These are simply variants on "if someone hits you with a stick, they
> might hurt you", all flow fairly logically from the premise, namely
> that replay is possible (i.e., someone can hit you with a stick).
>

Prior to TLS1.3, replay is not possible, so the risks are new, but the
end-to-end designers may not realize they need to update their threat model
and just what is required. I'd like to spell that out more than what's
there at present.

> The third is interesting, but it's also the most far-fetched of the
> lot (a server might read some bytes, which it later won't read,
> exposing a timing attack).


I need to work on the wording, because the nature of the attack must not
have come across clearly. It's really simple. If the 0-RTT data primes the
cache, then a subsequent request will be fast. If not, it will be slow.

If implemented on a CDN for example, the effort required would be nearly
trivial for an attacker. Basically: I replay the 0-RTT data, then probe a
bunch of candidate resources with regular requests. If one of them loads in
20ms and the others load in 100ms, well now I know which resource the 0-RTT
data was for. I can perform this attack against CDN nodes that are quiet,
or remote, and very unlikely to have the resources cached to begin with.
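
The probe itself is nothing more than this (a sketch; the node and
candidate URLs are hypothetical placeholders):

    import time
    import urllib.request

    CANDIDATES = [
        "https://cdn-node.example/alpha",
        "https://cdn-node.example/bravo",
        "https://cdn-node.example/charlie",
    ]

    def load_time(url):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        return time.perf_counter() - start

    # After replaying the captured 0-RTT data at this node, the warmed
    # (cached) candidate stands out as the fastest fetch.
    timings = {url: load_time(url) for url in CANDIDATES}
    print(min(timings, key=timings.get))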


> But that's also corollary material; albeit
> less obvious.  Like I said, I've no objection to expanding a little
> bit on what is possible: all non-idempotent activity, which might be
> logging, load, and some things that are potentially observable on
> subsequent requests, like IO/CPU cache state that might be affected by
> a request.
>

ok, cool :)


> >> I'm of the belief that end-to-end
> >> replay is a property we should be building in to protocols, not just
> >> something a transport layer does for you.  On the web, that's what
> >> happens, and it contributes greatly to overall reliability.
> >
> > The proposal here I think promotes that view; if anything, it nudges
> > protocols/applications to actually support end-to-end replay.
>
> You are encouraging the TLS stack to do this, if not the immediate
> thing that drives it (in our case, that would be the HTTP stack).  If
> the point is to make a statement about the importance of the
> end-to-end principle with respect to application reliability, the TLS
> spec isn't where I'd go for that.
>

I'm not sure where the folks designing the HTTP and other protocols would
get the information from if not the TLS spec. It is TLS that's changing
too. Hardly any harm in duplicating the advice anyway.


> > I think there is a far worse externalization if we don't do this.
> > Consider the operations who choose not (or don't know) to add good
> > replay protection. They will iterate more quickly and more cheaply
> > than the diligent providers who are cautious to add the effective
> > measures, which are expensive and difficult to get right.
>
> OK let's ask a different question: who is going to do this?
>

I am, for one. I don't see 0-RTT as a "cheap" feature. It's a very very
expensive one. To mitigate the kind of issues it brings really requires
atomic transactions. Either the application needs them, or something below
it does. So far I see fundamentally no "out" of that, and once we have
atomic transactions then we either have some kind of multi-phase commit
protocol, distributed consensus, routing of data to master nodes, or some
combination thereof.

The smartest people I've ever met work on these kinds of systems and they
all say it's really really hard and subtle. So when I imagine Zero-RTT
being done correctly, I imagine organizations signing up for all of that,
and it being worth it, because latency matters that much. That's a totally
valid decision.

And in that context, the additional expense of intentionally replaying
0-RTT seems minor and modest. My own tentative plan is to do it at the
implementation level; to have the TLS library occasionally spoof 0-RTT data
sections towards the application. This is the same technique used for
validating non-replayable request-level auth.
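
Concretely, the wrapper can be as small as this (a sketch; the spoof rate
is an arbitrary example value):

    import random

    def deliver_early_data(app_handler, early_data, spoof_rate=0.001):
        # Hand the application its 0-RTT section as usual, but
        # occasionally deliver a deliberate duplicate, so that
        # replay-intolerant code fails fast in testing rather than
        # quietly in a real attack.
        app_handler(early_data)
        if random.random() < spoof_rate:
            app_handler(early_data)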

> I don't see browsers doing anything like what you request; nor do I
> see tools/libs like curl or wget doing it either.  If I'm wrong and
> they do, they believe in predictability so won't add line noise
> without an application asking for it.
>

I hope this isn't the case, but if it is and browsers generally agree that
it would be unimplementable 

Re: [TLS] Additional warnings on 0-RTT data

2016-11-23 Thread Colm MacCárthaigh
On Wed, Nov 23, 2016 at 3:03 PM, Martin Thomson <martin.thom...@gmail.com>
wrote:

> This seems like too much text to me.  Maybe some people would
> appreciate the reminder that replay might cause side-effects to be
> triggered multiple times, or that side effects are just effects and
> those might also be observable.  But I think that those reminders
> could be provided far more succinctly.
>

My main goal is clarity, especially to application builders. In my
experience the implications of replay-ability are frequently overlooked and
it's well worth being abundantly clear.  For most builders, those full
implications have never occurred to them (why would they?), so it's often
not a reminder, but new knowledge. It warrants a big warning label, not
FUD, but honestly scary because it is a very hard problem.

But still, the text can most likely be shortened some, edits welcome.


> The bit that concerns me most is the recommendation to intentionally
> replay early data.  Given that I expect that a deployment that enables
> 0-RTT will tolerate some amount of side-effects globally, and that
> excessive 0-RTT will trigger DoS measures, all you are doing is
> removing some of the safety margin those services operate with


Can I break this into two parts then? First, do you agree that it would be
legitimate for a client, or an implementation (library), to deliberately
replay 0-RTT data? E.g. browsers and TLS libraries MAY implement this as a
safety mechanism, to enforce and audit the server's and application's
ability to handle the challenges of replay-ability correctly.

At a bare minimum I want to be free to include this in our implementation
of TLS, and in the clients that we use. I think there's value in explicitly
documenting this, and that the problem is squarely on the server side if it
is not replay-even-within-time-window-tolerant. Frankly: I want to be able
to point to the spec and it be clear whose fault it is.

Now second, consider a web service API that exists today and is using TLS.
Many such APIs depend entirely on TLS for all anti-replay protection
(though for the record, the SigV4 authentication mechanism we use at AWS
includes its own anti-replay measure). When TLS1.3 comes along it is *very*
tempting for providers to turn it on and use it for those APIs. Everyone
always wants everything to be faster, and it won't be a huge code change.
The problems of replay attacks could be dismissed as unlikely and left
unaddressed, as security issues often are.

So through a very predictable set of circumstances, I think those calls
will be left vulnerable to replay issues and TLS1.3 will absolutely degrade
security in these cases. What I'm suggesting is that we RECOMMEND a
systematic defense: clients and/or implementations should intentionally
generate duplicate data, so that the problem *has* to be addressed at least
in some measure by providers. I far prefer that they be forced to encounter
a low grade of errors early in their testing than to leave a large hole
looming for users to fall into.

This style of "Always always test the attack case in production" is also
just a good practice that keeps antibodies strong.

Thanks for reading and the feedback.


> On 24 November 2016 at 04:47, Colm MacCárthaigh <c...@allcosts.net> wrote:
> >
> > I've submitted a PR with an expanded warning on the dangers of 0-RTT data
> > for implementors:
> >
> > https://github.com/tlswg/tls13-spec/pull/776/files
> >
> > The text is there, and the TLDR summary is basically that time-based
> > defenses aren't sufficient to mitigate idempotency bugs, so applications
> > need to be aware of the sharp edges and have a plan.  For clarity, I've
> > outlined three example security issues that could arise due to realistic
> > and straightforward, but naive, use of 0-RTT. There's been some light
> > discussion of each in person and on the list.
> >
> > In the PR I'm "MUST"ing that applications need to include /some/
> > mitigation for issues of this class (or else they are obviously
> > insecure). In my experience this class of issue is so pernicious, and
> > easy to overlook, that I'm also "RECOMMEND"ing that applications using
> > Zero-RTT data *intentionally* replay 0-RTT data non-deterministically,
> > so as to keep themselves honest.
> >
> > At a bare minimum I think it would be good to make clear that clients
> > and implementations MAY intentionally replay 0-RTT data; to keep
> > servers on their toes. For example a browser could infrequently tack
> > on a dummy connection with repeated 0-RTT data, or an implementation
> > could periodically spoof a 0-RTT section to the application. That
> > should never be considered a bug. but a featu

[TLS] Additional warnings on 0-RTT data

2016-11-23 Thread Colm MacCárthaigh
I've submitted a PR with an expanded warning on the dangers of 0-RTT data
for implementors:

https://github.com/tlswg/tls13-spec/pull/776/files

The text is there, and the TLDR summary is basically that time-based
defenses aren't sufficient to mitigate idempotency bugs, so applications
need to be aware of the sharp edges and have a plan.  For clarity, I've
outlined three example security issues that could arise due to realistic
and straightforward, but naive, use of 0-RTT. There's been some light
discussion of each in person and on the list.

In the PR I'm "MUST"ing that applications need to include /some/ mitigation
for issues of this class (or else they are obviously insecure). In my
experience this class of issue is so pernicious, and easy to overlook, that
I'm also "RECOMMEND"ing that applications using Zero-RTT data
*intentionally* replay 0-RTT data non-deterministically, so as to keep
themselves honest.

At a bare minimum I think it would be good to make clear that clients and
implementations MAY intentionally replay 0-RTT data; to keep servers on
their toes. For example a browser could infrequently tack on a dummy
connection with repeated 0-RTT data, or an implementation could
periodically spoof a 0-RTT section to the application. That should never be
considered a bug, but a feature. (And to be clear: I want to do this in our
implementation).


-- 
Colm


Re: [TLS] Industry Concerns about TLS 1.3

2016-09-22 Thread Colm MacCárthaigh
On Thu, Sep 22, 2016 at 4:59 PM, Hugo Krawczyk <h...@ee.technion.ac.il>
wrote:

> On Thu, Sep 22, 2016 at 7:50 PM, Colm MacCárthaigh <c...@allcosts.net>
> wrote:
>
>> On Thu, Sep 22, 2016 at 4:41 PM, Hugo Krawczyk <h...@ee.technion.ac.il>
>> wrote:
>>
>>> If the problem is the use of forward secrecy then there is a simple
>>> solution, don't use it.
>>> That is, you can, as a server, have a fixed key_share for which the
>>> secret exponent becomes the private key exactly as in the RSA case. It does
>>> require some careful analysis, though.
>>>
>>
>> I think that this may be possible for TLS1.3 0-RTT data, but not for
>> other data where an ephemeral key will be generated based also on a
>> parameter that the client chooses.
>>
>
> The key_share contributed by the client is indeed ephemeral and it
> replaces the random key chosen by the client in the RSA-based scheme.
>

Yep, you're right, now I get it. I also now wonder if clients should make a
best effort of detecting duplicate parameters and rejecting them.

-- 
Colm


  1   2   >