Re: [TLS] Possible timing attack on TLS 1.3 padding mechanism

2018-03-02 Thread Paterson, Kenny
Hi,

> On 2 Mar 2018, at 08:32, Nikos Mavrogiannopoulos <n...@redhat.com> wrote:
> 
>> On Thu, 2018-03-01 at 21:52 +0000, Paterson, Kenny wrote:
>> Hi,
>> 
>> I've been analysing the record protocol spec for TLS 1.3 a bit,
>> specifically the new padding mechanism. I think there's a possible
>> timing attack on a naïve implementation of de-padding. Maybe this is
>> already known to people who've been paying more attention than me!
>> 
>> Recall that the padding mechanism permits an arbitrary number of 00
>> bytes to be added after the plaintext and content type byte, up to
>> the max record size. This data is then encrypted using whichever AEAD
>> scheme is specified in the cipher suite. This padding scheme is quite
>> important for TLS 1.3 because the current AEAD schemes do leak the
>> length of record plaintexts. There should be no padding oracle style
>> attack possible because of the integrity guarantees of the AEAD
>> schemes in use. 
>> 
>> The idea for the timing attack is as follows. 
>> 
>> The natural way to depad (after AEAD decryption) is to remove the 00
>> bytes at the end of the plaintext structure one by one, until a non-
>> 00 byte is encountered. This is then the content type byte. Notice
>> that the amount of time needed to execute this depadding routine
>> would be proportional to the number of padding bytes. If there's some
>> kind of response record for this record, then measuring the time
>> taken from reception of the target record to the appearance of the
>> response record can be used to infer information about the amount of
>> padding, and thereby, the true length of the plaintext (since the
>> length of the padded plaintext is known from the ciphertext length).
>> 
>> The timing differences here would be small. But they could be
>> amplified by various techniques. For example, the cumulative timing
>> difference over many records could allow leakage of the sum of the
>> true plaintext lengths. Think of a client browser fetching a simple
>> webpage from a server. The page is split over many TLS records, each
>> of which is individually padded, with the next GET request from the
>> client being the "response record". (This is a pretty simplistic view
>> of how a web browser works, I know!). The total timing difference
>> might then be sufficient for webpage fingerprinting, for example. 
>> 
>> I'm not claiming this is a big issue, but maybe something worth
>> thinking about and addressing in the TLS 1.3 spec.
>> 
>> There are at least a couple of ways to avoid the problem:
>> 
>> 1. Do constant-time depadding - by examining every byte in the
>> plaintext structure even after the first non-00 byte is encountered. 
>> 2. Add an explicit padding length field at the end of the plaintext
>> structure, and remove padding without checking its contents. (This
>> should be safe because of the AEAD integrity guarantees.) 
>> 
>> Option 2 is probably a bit invasive at this late stage in the
>> specification process. Maybe a sentence or two on option 1 could be
>> added to the spec.
> 
> Hi,
> It was brought previously to the WG [0], and the bottom line was to leave
> any solution to implementations.
> 

Thanks Nikos - sorry for missing your post from last August. At least I'm now 
only six months behind the curve :-)

> As for the "naïve implementation of de-padding", I wouldn't put it like that.
> It is a straightforward method of de-padding after reading the draft, and
> I believe all implementations out there use that method.

Agreed. "Natural" would have been a better choice here. 

Cheers,

Kenny

> 
> regards,
> Nikos
> 
> 
> [0].
> https://www.ietf.org/mail-archive/web/tls/current/msg24365.html
> 


Re: [TLS] Possible timing attack on TLS 1.3 padding mechanism

2018-03-01 Thread Paterson, Kenny
Hi Ekr.

Ah that's great, thanks - and I think the text in the Appendix already 
addresses the issues very well.

Sorry for the bandwidth consumption.

Cheers,

Kenny 

-----Original Message-----
From: Eric Rescorla <e...@rtfm.com>
Date: Thursday, 1 March 2018 at 22:27
To: "Paterson, Kenny" <kenny.pater...@rhul.ac.uk>
Cc: "tls@ietf.org" <tls@ietf.org>
Subject: Re: [TLS] Possible timing attack on TLS 1.3 padding mechanism

Hi Kenny,


Yes, this is something we are aware of. Here's the relevant text from the 
document:

https://tlswg.github.io/tls13-spec/draft-ietf-tls-tls13.html#rfc.appendix.E.3



I don't think we're likely to change the protocol, but if you have some 
proposed text
that you think would be informative above and beyond what we already have, 
please
send it along.


Best,
-Ekr

On Thu, Mar 1, 2018 at 1:52 PM, Paterson, Kenny 
<kenny.pater...@rhul.ac.uk> wrote:

Hi,

I've been analysing the record protocol spec for TLS 1.3 a bit, 
specifically the new padding mechanism. I think there's a possible timing 
attack on a naïve implementation of de-padding. Maybe this is already known to 
people who've been paying more attention
 than me!

Recall that the padding mechanism permits an arbitrary number of 00 bytes 
to be added after the plaintext and content type byte, up to the max record 
size. This data is then encrypted using whichever AEAD scheme is specified in 
the cipher suite. This padding
 scheme is quite important for TLS 1.3 because the current AEAD schemes do 
leak the length of record plaintexts. There should be no padding oracle style 
attack possible because of the integrity guarantees of the AEAD schemes in use.

The idea for the timing attack is as follows.

The natural way to depad (after AEAD decryption) is to remove the 00 bytes 
at the end of the plaintext structure one by one, until a non-00 byte is 
encountered. This is then the content type byte. Notice that the amount of time 
needed to execute this depadding
 routine would be proportional to the number of padding bytes. If there's 
some kind of response record for this record, then measuring the time taken 
from reception of the target record to the appearance of the response record 
can be used to infer information
 about the amount of padding, and thereby, the true length of the plaintext 
(since the length of the padded plaintext is known from the ciphertext length).

The timing differences here would be small. But they could be amplified by 
various techniques. For example, the cumulative timing difference over many 
records could allow leakage of the sum of the true plaintext lengths. Think of 
a client browser fetching a
 simple webpage from a server. The page is split over many TLS records, 
each of which is individually padded, with the next GET request from the client 
being the "response record". (This is a pretty simplistic view of how a web 
browser works, I know!). The
 total timing difference might then be sufficient for webpage 
fingerprinting, for example.

I'm not claiming this is a big issue, but maybe something worth thinking 
about and addressing in the TLS 1.3 spec.

There are at least a couple of ways to avoid the problem:

1. Do constant-time depadding - by examining every byte in the plaintext 
structure even after the first non-00 byte is encountered.
2. Add an explicit padding length field at the end of the plaintext 
structure, and remove padding without checking its contents. (This should be 
safe because of the AEAD integrity guarantees.)

Option 2 is probably a bit invasive at this late stage in the specification 
process. Maybe a sentence or two on option 1 could be added to the spec.

Thoughts?

Cheers,

Kenny






Re: [TLS] Possible timing attack on TLS 1.3 padding mechanism

2018-03-01 Thread Paterson, Kenny
Hi,

I've been analysing the record protocol spec for TLS 1.3 a bit, specifically 
the new padding mechanism. I think there's a possible timing attack on a naïve 
implementation of de-padding. Maybe this is already known to people who've been 
paying more attention than me!

Recall that the padding mechanism permits an arbitrary number of 00 bytes to be 
added after the plaintext and content type byte, up to the max record size. 
This data is then encrypted using whichever AEAD scheme is specified in the 
cipher suite. This padding scheme is quite important for TLS 1.3 because the 
current AEAD schemes do leak the length of record plaintexts. There should be 
no padding oracle style attack possible because of the integrity guarantees of 
the AEAD schemes in use. 

The idea for the timing attack is as follows. 

The natural way to depad (after AEAD decryption) is to remove the 00 bytes at 
the end of the plaintext structure one by one, until a non-00 byte is 
encountered. This is then the content type byte. Notice that the amount of time 
needed to execute this depadding routine would be proportional to the number of 
padding bytes. If there's some kind of response record for this record, then 
measuring the time taken from reception of the target record to the appearance 
of the response record can be used to infer information about the amount of 
padding, and thereby, the true length of the plaintext (since the length of the 
padded plaintext is known from the ciphertext length).
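
To make the leak concrete, here is a minimal sketch of that depadding loop 
(Python used purely for illustration; the function name and error handling 
are assumptions, not taken from any implementation):

```
def depad_naive(inner_plaintext: bytes):
    """Variable-time depadding of a TLS 1.3 inner plaintext
    (content || content_type || zero padding)."""
    i = len(inner_plaintext) - 1
    while i >= 0 and inner_plaintext[i] == 0x00:
        i -= 1                    # one iteration per padding byte: the leak
    if i < 0:
        raise ValueError("all-zero record: no content type byte")
    return inner_plaintext[:i], inner_plaintext[i]  # (content, content_type)
```

The loop exits as soon as it sees a non-00 byte, so its running time is a 
direct function of the padding length.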

The timing differences here would be small. But they could be amplified by 
various techniques. For example, the cumulative timing difference over many 
records could allow leakage of the sum of the true plaintext lengths. Think of 
a client browser fetching a simple webpage from a server. The page is split 
over many TLS records, each of which is individually padded, with the next GET 
request from the client being the "response record". (This is a pretty 
simplistic view of how a web browser works, I know!). The total timing 
difference might then be sufficient for webpage fingerprinting, for example. 

I'm not claiming this is a big issue, but maybe something worth thinking about 
and addressing in the TLS 1.3 spec.

There are at least a couple of ways to avoid the problem:

1. Do constant-time depadding - by examining every byte in the plaintext 
structure even after the first non-00 byte is encountered (see the sketch 
below). 
2. Add an explicit padding length field at the end of the plaintext structure, 
and remove padding without checking its contents. (This should be safe 
because of the AEAD integrity guarantees.) 

Option 2 is probably a bit invasive at this late stage in the specification 
process. Maybe a sentence or two on option 1 could be added to the spec.
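
For option 1, the scan must touch every byte whether or not it is padding. 
A sketch of the idea (again illustrative Python; a real implementation would 
do this branch-free in C, since Python itself makes no constant-time 
guarantees):

```
def depad_constant_time(inner_plaintext: bytes):
    """Examine every byte; track the last non-zero index without an
    early exit, so the work done is independent of the padding length."""
    last_nonzero = -1
    for i, b in enumerate(inner_plaintext):
        mask = -int(b != 0)       # -1 (all ones) if b != 0, else 0
        last_nonzero = (mask & i) | (~mask & last_nonzero)
    if last_nonzero < 0:
        raise ValueError("all-zero record: no content type byte")
    return inner_plaintext[:last_nonzero], inner_plaintext[last_nonzero]
```

The mask trick replaces the data-dependent branch of the naive loop with 
arithmetic that is executed identically for every byte.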

Thoughts?

Cheers,

Kenny






Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769).

2017-03-01 Thread Paterson, Kenny
Hi,

On 01/03/2017 14:31, "TLS on behalf of Dang, Quynh (Fed)"
 wrote:
>From: Aaron Zauner 
>Date: Wednesday, March 1, 2017 at 9:24 AM
>To: 'Quynh' 
>Cc: Sean Turner , "" , IRTF
>CFRG 
>Subject: Re: [Cfrg] Closing out tls1.3 "Limits on key usage" PRs
>(#765/#769).
>
>
>
>>
>>
>>>On 01 Mar 2017, at 13:18, Dang, Quynh (Fed)  wrote:
>>>From: Aaron Zauner 
>>>Date: Wednesday, March 1, 2017 at 8:11 AM
>>>To: 'Quynh' 
>>>Cc: Sean Turner , "" , IRTF
>>>CFRG 
>>>Subject: Re: [Cfrg] Closing out tls1.3 "Limits on key usage" PRs
>>>(#765/#769).
>On 25 Feb 2017, at 14:28, Dang, Quynh (Fed) 
>wrote:
>Hi Sean, Joe, Eric and all,
>I would like to address my thoughts/suggestions on 2 issues in option
>a.
>1) The data limit should be addressed in term of blocks, not records.
>When the record size is not the full size, some user might not know
>what to do. When the record size is 1 block, the limit of 2^24.5
>blocks (records) is way too low unnecessarily for
> the margin of 2^-60.  In that case, 2^34.5 1-block records is the
>limit which still achieves the margin of 2^-60.
I respectfully disagree. TLS deals in records not in blocks, so in the
end any semantic change here will just confuse implementors, which
isn't a good idea in my opinion.
>>>Over the discussion of the PRs, the preference was blocks.
>>
>>
>>I don't see a clear preference. I see Brian Smith suggested switching to
>>blocks to be more precise in a PR. But in general it seems to me that
>>"Option A" was preferred in this thread anyhow - so these PRs aren't
>>relevant? I'm not sure that text on key-usage
>> limits in blocks in a spec that fundamentally deals in records is less
>>confusing, quite the opposite (at least to me). As I pointed out
>>earlier: I strongly recommend that any changes to the spec are as clear
>>as possible to engineers (non-crypto/math people)
>> -- e.g. why the spec is suddenly dealing in blocks instead of records
>>et cetera. Again; I really don't see any reason to change text here - to
>>me all suggested changes are even more confusing.
>>
>>
>
>
>Hi Aaron,
>
>
>The  technical reasons I explained are reasons for using records. I don’t
>see how that is confusing.
>
>
>If you like records, then the record number = the total blocks / the
>record size in blocks: this is simplest already.
>

That formula does not correctly compute how many records have been sent on
a connection, because the record size in blocks is variable, not constant.
You can modify it to get bounds on the total number of records sent, but
the bounds are sloppy because some records only consume 2 blocks (one for
encryption, one for masking in GHASH) while some consume far more.

It's simpler for an implementation to count how many records have been
sent on a connection by using the connection's sequence number. This
puts less burden on the implementation/implementer.
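
A toy illustration of that sloppiness, assuming (purely for illustration) 
that a record of L bytes costs ceil(L/16) + 1 blocks, the extra block being 
the GHASH mask mentioned above:

```
import math

def blocks_consumed(plaintext_len: int) -> int:
    # Illustrative cost model: CTR keystream blocks plus one block
    # for the GHASH mask, so even a 1-byte record costs 2 blocks.
    return math.ceil(plaintext_len / 16) + 1

full_size = [2**14] * 4        # 4 records of 2^14 bytes
tiny      = [1] * 2050         # 2050 one-byte records
print(sum(blocks_consumed(n) for n in full_size))  # 4100 blocks
print(sum(blocks_consumed(n) for n in tiny))       # 4100 blocks
```

No fixed divisor turns 4100 blocks back into the right record count for both 
connections, whereas the sequence number gives it exactly.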

Cheers

Kenny 




Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-15 Thread Paterson, Kenny
Hi Quynh,

I'm meant to be on vacation, but I'm finding this on-going discussion 
fascinating, so I'm chipping in again.

On 15 Feb 2017, at 21:12, Dang, Quynh (Fed) 
> wrote:

Hi Atul,

I hope you had a happy Valentine!

From: Atul Luykx 
>
Date: Tuesday, February 14, 2017 at 4:52 PM
To: Yoav Nir >
Cc: 'Quynh' >, IRTF CFRG 
>, "tls@ietf.org" 
>
Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs 
(#765/#769)

Why is that 2^48 input blocks rather than 2^34.5 input blocks?
Because he wants to lower the security level.

I respectfully disagree. 2^-32, 2^-33, 2^-57, 2^-60, 2^-112 are practically the 
same: they are practically zero.

I'm not clear what you mean by "practically" here. They're clearly not the same 
as real numbers. And if we are being conservative about security, then the 
extremes in your list are a long way apart.

And, 2^-32 is an absolute chance in this case meaning that all attackers can’t 
improve their chance: no matter how much computational power the attacker has.

A sufficiently powerful adversary could carry out an exhaustive key search for 
GCM's underlying AES key. So I'm not sure what you're claiming here when you 
speak of "absolute chance".

I don’t understand why the number 2^-60 is your special chosen number for this ?

This is a bit subtle, but I'll try to explain in simple terms.

We can conveniently prove a bound of about this size (actually 2^-57) for 
INT-CTXT for a wide range of parameters covering both TLS and DTLS (where many 
verification failures may be permitted). Then, since we're ultimately 
interested in AE security, we would like to (roughly) match this for IND-CPA 
security, to get as good a bound as we can for AE security (the security bounds 
for the two notions sum to give an AE security bound - see page 2 of the "AE 
bounds" note).

In view of the INT-CTXT bound there's no point pushing the IND-CPA bound much 
lower than 2^-60 if the ultimate target is AE security. It just hurts the data 
limits more without significantly improving AE security.
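
Schematically, the composition referred to above is

```
\mathbf{Adv}^{\mathrm{AE}} \;\le\;
\mathbf{Adv}^{\mathrm{IND\text{-}CPA}} \;+\;
\mathbf{Adv}^{\mathrm{INT\text{-}CTXT}}
```

so with the INT-CTXT term sitting at roughly 2^-57, pushing the IND-CPA term 
far below 2^-60 leaves the sum - and hence the AE guarantee - essentially 
unchanged.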

Finally, 2^-60 is not *our* special chosen number. We wrote a note that 
contained a table of values, and it's worth noting that we did not make a 
specific recommendation in our note for which row of the table to select.

(Naturally, though, we'd like security to be as high as possible without making 
rekeying a frequent event. It's a continuing surprise to me that you are 
pushing for an option that actually reduces security when achieving higher 
security does not seem to cause any problems for implementors.)

In your “theory”, 2^-112 would be in “higher” security than 2^-60.

It certainly would, if it were achievable (which it is not for GCM without 
putting some quite extreme limits on data per key).

Cheers,

Kenny

Quynh.


The original text
recommends switching at 2^{34.5} input blocks, corresponding to a
success probability of 2^{-60}, whereas his text recommends switching at
2^{48} blocks, corresponding to a success probability of 2^{-32}.

Atul

On 2017-02-14 11:45, Yoav Nir wrote:
Hi, Quynh
On 14 Feb 2017, at 20:45, Dang, Quynh (Fed) 
>
wrote:
Hi Sean and all,
Beside my suggestion at
https://www.ietf.org/mail-archive/web/tls/current/msg22381.html [1],
I have a second suggestion below.
Just replacing this sentence: "
For AES-GCM, up to 2^24.5 full-size records (about 24 million) may
be
encrypted on a given connection while keeping a safety margin of
approximately 2^-57 for Authenticated Encryption (AE) security.
" in Section 5.5 by this sentence: " For AES-GCM, up to 2^48
(partial or full) input blocks may be encrypted with one key. For
other suggestions and analysis, see the referred paper above."
Regards,
Quynh.
I like the suggestion, but I’m probably missing something pretty
basic about it.
2^24.5 full-size records is 2^24.5 records of 2^14 bytes each, or
(since an AES block is 16 bytes or 2^4 bytes) 2^24.5 records of 2^10
blocks.
Why is that 2^48 input blocks rather than 2^34.5 input blocks?
Thanks
Yoav
Links:
--
[1] https://www.ietf.org/mail-archive/web/tls/current/msg22381.html


Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-10 Thread Paterson, Kenny
Hi,

On 10/02/2017 18:56, "Dang, Quynh (Fed)" <quynh.d...@nist.gov> wrote:

>Dear Kenny, 
>
>From: "Paterson, Kenny" <kenny.pater...@rhul.ac.uk>
>Date: Friday, February 10, 2017 at 12:22 PM
>To: 'Quynh' <quynh.d...@nist.gov>, Sean Turner <s...@sn3rd.com>
>Cc: IRTF CFRG <c...@irtf.org>, "<tls@ietf.org>" <tls@ietf.org>
>Subject: Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs
>(#765/#769)
>
>
>
>>Dear Quynh,
>>
>>
>>On 10/02/2017 12:48, "Dang, Quynh (Fed)" <quynh.d...@nist.gov> wrote:
>>
>>
>>>Hi Kenny, 
>>>
>>>
>>>>Hi,
>>>>
>>>>
>>>>
>>>>
>>>>My preference is to go with the existing text, option a).
>>>>
>>>>
>>>>
>>>>
>>>>From the github discussion, I think option c) involves a less
>>>>conservative
>>>>security bound (success probability for IND-CPA attacker bounded by
>>>>2^{-32} instead of 2^{-60}). I can live with that, but the WG should be
>>>>aware of the weaker security guarantees it provides.
>>>>
>>>>
>>>>
>>>>
>>>>I do not understand option b). It seems to rely on an analysis of
>>>>collisions of ciphertext blocks rather than the established security
>>>>proof
>>>>for AES-GCM.
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>My suggestion was based on counting.  I analyzed AES-GCM in TLS 1.3  as
>>>being a counter-mode encryption and each counter is a 96-bit nonce ||
>>>32-bit counter. I don’t know if there is another kind of proof that is
>>>more precise than that.
>>
>>
>>Thanks for explaining. I think, then, that what you are doing is (in
>>effect) accounting for the PRP/PRF switching lemma that is used (in a
>>standard way) as part of the IND-CPA security proof of AES-GCM. One can
>>obtain a greater degree of precision by using the proven bounds for
>>IND-CPA security of AES-GCM. These incorporate the "security loss" coming
>>from the PRP/PRF switching lemma. The current best form of these bounds
>>is
>>due to Iwata et al.. This is precisely what we analyse in the note at
>>http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf - specifically, see
>>equations (5) - (7) on page 6 of that note.
>>
>
>I reviewed the paper more than once. I highly value the work. I suggested
>to reference  your paper in the text.  I think the result in your paper
>is the same with what is being suggested when the collision probability
>allowed is 2^(-32).

Thanks for this feedback. I guess my confusion arises from wondering what
you mean by collision probability and why you care about it. There are no
collisions in the block cipher's outputs per se, because AES is a
permutation for each choice of key. And collisions in the ciphertext
blocks output by AES-GCM are irrelevant to its formal security analysis.

On the other hand, when in the proof of IND-CPA security of AES-GCM one
switches from a random permutation (which is how we model AES) to a random
function (which is what we need to argue in the end that the plaintext is
masked by a one-time pad, giving indistinguishability), then one needs to
deal with the probability that collisions occur in the function's outputs
but not in the permutation's. This ends up being the main contribution to
the security bound in the proof for IND-CPA security.
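
That contribution is exactly what the standard PRP/PRF switching lemma 
quantifies: for any adversary A making q queries to either a random n-bit 
permutation \pi or a random function \rho,

```
\left| \Pr[A^{\pi} \Rightarrow 1] - \Pr[A^{\rho} \Rightarrow 1] \right|
\;\le\; \frac{q(q-1)}{2^{n+1}}
```

For AES-GCM, n = 128 and q is essentially the total number of blocks 
encrypted, which is where the birthday-type term in the IND-CPA bound comes 
from.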

Is that what you are getting at?

If so, then we are on the same page, and what remains is to decide whether
a 2^{-32} bound is a good enough security margin.

Regards,

Kenny




Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-10 Thread Paterson, Kenny
Dear Quynh,

On 10/02/2017 12:48, "Dang, Quynh (Fed)"  wrote:

>Hi Kenny, 
>
>>Hi,
>>
>>
>>My preference is to go with the existing text, option a).
>>
>>
>>From the github discussion, I think option c) involves a less
>>conservative
>>security bound (success probability for IND-CPA attacker bounded by
>>2^{-32} instead of 2^{-60}). I can live with that, but the WG should be
>>aware of the weaker security guarantees it provides.
>>
>>
>>I do not understand option b). It seems to rely on an analysis of
>>collisions of ciphertext blocks rather than the established security
>>proof
>>for AES-GCM.
>>
>>
>
>
>My suggestion was based on counting.  I analyzed AES-GCM in TLS 1.3  as
>being a counter-mode encryption and each counter is a 96-bit nonce ||
>32-bit counter. I don’t know if there is another kind of proof that is
>more precise than that.

Thanks for explaining. I think, then, that what you are doing is (in
effect) accounting for the PRP/PRF switching lemma that is used (in a
standard way) as part of the IND-CPA security proof of AES-GCM. One can
obtain a greater degree of precision by using the proven bounds for
IND-CPA security of AES-GCM. These incorporate the "security loss" coming
from the PRP/PRF switching lemma. The current best form of these bounds is
due to Iwata et al.. This is precisely what we analyse in the note at
http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf - specifically, see
equations (5) - (7) on page 6 of that note.

Regards,

Kenny 

>
>
>Regards,
>Quynh. 



Re: [TLS] [Cfrg] Closing out tls1.3 "Limits on key usage" PRs (#765/#769)

2017-02-10 Thread Paterson, Kenny
Hi,

My preference is to go with the existing text, option a).

From the github discussion, I think option c) involves a less conservative
security bound (success probability for IND-CPA attacker bounded by
2^{-32} instead of 2^{-60}). I can live with that, but the WG should be
aware of the weaker security guarantees it provides.

I do not understand option b). It seems to rely on an analysis of
collisions of ciphertext blocks rather than the established security proof
for AES-GCM.

Regards,

Kenny

On 10/02/2017 05:44, "Cfrg on behalf of Martin Thomson"
 wrote:

>On 10 February 2017 at 16:07, Sean Turner  wrote:
>> a) Close these two PRs and go with the existing text [0]
>> b) Adopt PR#765 [1]
>> c) Adopt PR#769 [2]
>
>
>a) I'm happy enough with the current text (I've implemented that and
>it's relatively easy).
>
>I could live with c, but I'm opposed to b. It just doesn't make sense.
>It's not obviously wrong any more, but the way it is written it is
>very confusing and easily open to misinterpretation.
>


Re: [TLS] Industry Concerns about TLS 1.3

2016-09-22 Thread Paterson, Kenny
Hi Andrew,

My view concerning your request: no. 

Rationale: We're trying to build a more secure internet.

Meta-level comment:

You're a bit late to the party. We're metaphorically speaking at the stage of 
emptying the ash trays and hunting for the not quite empty beer cans. 

More exactly, we are at draft 15 and RSA key transport disappeared from the 
spec about a dozen drafts ago. I know the banking industry is usually a bit 
slow off the mark, but this takes the biscuit. 

Cheers,

Kenny 

> On 22 Sep 2016, at 20:27, BITS Security  wrote:
> 
> To:  IETF TLS 1.3 Working Group Members
> 
> My name is Andrew Kennedy and I work at BITS, the technology policy division 
> of the Financial Services Roundtable (http://www.fsroundtable.org/bits).  My 
> organization represents approximately 100 of the top 150 US-based financial 
> services companies including banks, insurance, consumer finance, and asset 
> management firms.  
> 
> I manage the Technology Cybersecurity Program, a CISO-driven forum to 
> investigate emerging technologies; integrate capabilities into member 
> operations; and advocate member, sector, cross-sector, and private-public 
> collaboration.
> 
> While I am aware and on the whole supportive of the significant contributions 
> to internet security this important working group has made in the last few 
> years I recently learned of a proposed change that would affect many of my 
> organization's member institutions:  the deprecation of RSA key exchange.
> 
> Deprecation of the RSA key exchange in TLS 1.3 will cause significant 
> problems for financial institutions, almost all of whom are running TLS 
> internally and have significant, security-critical investments in out-of-band 
> TLS decryption. 
> 
> Like many enterprises, financial institutions depend upon the ability to 
> decrypt TLS traffic to implement data loss protection, intrusion detection 
> and prevention, malware detection, packet capture and analysis, and DDoS 
> mitigation.  Unlike some other businesses, financial institutions also rely 
> upon TLS traffic decryption to implement fraud monitoring and surveillance of 
> supervised employees.  The products which support these capabilities will 
> need to be replaced or substantially redesigned at significant cost and loss 
> of scalability to continue to support the functionality financial 
> institutions and their regulators require.
> 
> The impact on supervision will be particularly severe.  Financial 
> institutions are required by law to store communications of certain employees 
> (including broker/dealers) in a form that ensures that they can be retrieved 
> and read in case an investigation into improper behavior is initiated.  The 
> regulations which require retention of supervised employee communications 
> initially focused on physical and electronic mail, but now extend to many 
> other forms of communication including instant message, social media, and 
> collaboration applications.  All of these communications channels are 
> protected using TLS.
> 
> The impact on network diagnostics and troubleshooting will also be serious.  
> TLS decryption of network packet traces is required when troubleshooting 
> difficult problems in order to follow a transaction through multiple layers 
> of infrastructure and isolate the fault domain.   The pervasive visibility 
> offered by out-of-band TLS decryption can't be replaced by MITM 
> infrastructure or by endpoint diagnostics.  The result of losing this TLS 
> visibility will be unacceptable outage times as support groups resort to 
> guesswork on difficult problems.
> 
> Although TLS 1.3 has been designed to meet the evolving security needs of the 
> Internet, it is vital to recognize that TLS is also being run extensively 
> inside the firewall by private enterprises, particularly those that are 
> heavily regulated.  Furthermore, as more applications move off of the desktop 
> and into web browsers and mobile applications, dependence on TLS is 
> increasing. 
> 
> Eventually, either security vulnerabilities in TLS 1.2, deprecation of TLS 
> 1.2 by major browser vendors, or changes to regulatory standards will force 
> these enterprises - including financial institutions - to upgrade to TLS 1.3. 
>  It is vital to financial institutions and to their customers and regulators 
> that these institutions be able to maintain both security and regulatory 
> compliance during and after the transition from TLS 1.2 to TLS 1.3.
> 
> At the current time viable TLS 1.3-compliant solutions to problems like DLP, 
> NIDS/NIPS, PCAP, DDoS mitigation, malware detection, and monitoring of 
> regulated employee communications appear to be immature or nonexistent.  
> There are serious cost, scalability, and security concerns with all of the 
> currently proposed alternatives to the existing out-of-band TLS decryption 
> architecture: 
> 
> -  End point monitoring: This technique does not replace the 

Re: [TLS] Randomization of nonces

2016-08-15 Thread Paterson, Kenny
Sadly, you can't implement XGCM using an existing AES-GCM API, because of the 
way the MAC (which is keyed) is computed over the ciphertext in the standard 
GCM scheme.

This does not contradict what you wrote, but may be a barrier to adoption.

Cheers

Kenny

On 15 Aug 2016, at 16:40, Watson Ladd 
> wrote:


Dear TLS list,
Sitting in Santa Barbara I have just learned that our nonce randomization does 
slightly better than GCM in the multiuser setting. However, XGCM would produce 
even better security.

XGCM is GCM with masking applied to blocks before and after each encryption. It 
can be implemented on top of counter mode and GHASH easily.

As an alternative we could use 256 bit keys.

Sincerely,
Watson Ladd



Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-13 Thread Paterson, Kenny
Hi

On 13/07/2016 11:55, "Dang, Quynh (Fed)" <quynh.d...@nist.gov> wrote:

>Good morning Kenny,
>
>On 7/12/16, 3:03 PM, "Paterson, Kenny" <kenny.pater...@rhul.ac.uk> wrote:
>
>>Hi,



>>Could you define "safe", please? Safe for what? For whom?
>>
>>Again, why are you choosing 2^-32 for your security bound? Why not 2^-40
>>or even 2^-24? What's your rationale? Is it just finger in the air, or do
>>you have a threat analysis, or ...?
>
>I said it is safe because the chance of 1 in 4,294,967,296 practically
>does not happen. I am not interested in talking about other numbers and
>other questions.

OK, then I think we're done here.

>>> I don't
>>> recommend to run another function/protocol when there are no needs for
>>>it.
>>> I don't see any particular reasons for mentioning single key in the
>>> indistinguishability attack here.
>>> 
>>
>>Then please read a little further into the note that presents the
>>analysis: a conservative but generic approach dictates that, when the
>>attacker has multiple keys to attack, we should multiply the security
>>bounds by the number of target keys.
>>
>>A better analysis for AES-GCM may eventually be forthcoming but we don't
>>have it yet. 
>>
>>>> Then do you have a
>>>> specific concern about the security of rekeying? I could see various
>>>>ways
>>>> in which it might go wrong if not designed carefully.
>>>> 
>>>> Or are you directly linking a fundamental security question to an
>>>> operational one, by which I mean: are you saying we should trade
>>>>security
>>>> for avoiding the "cost" of rekeying for some notion of "cost"? If so,
>>>>can
>>>> you quantify the cost for the use cases that matter to you?
>>
>>I'd love to have your answer to these questions. I didn't see one yet.
>>What is the cost metric you're using and how does it quantity for your
>>use cases?
>
>Again, I am not interested in other questions. I suggested the number
>about 2^38 records because it is a safe data bound because Eric put in his
>tls 1.3 draft the number 2^24.5 which is unnecessarily small.

Again, look like we're done here.


>Your paper is a nice one which gives users good information about choices.

Thanks, I'm glad you found it useful.

Cheers

Kenny 

>
>>
>>Cheers,
>>
>>Kenny
>>
>>>> 
>>>> Cheers,
>>>> 
>>>> Kenny
>>> 
>>> Regards,
>>> Quynh.
>
>Regards,
>Quynh. 
>>> 
>>> 
>>> 
>>> 
>



Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Paterson, Kenny
Yup, that's crypto, folks. 

These are the kinds of numbers we should be worrying about for a protocol that 
will be deployed for decades to billions of people and devices. 

> On 12 Jul 2016, at 19:06, Scott Fluhrer (sfluhrer) <sfluh...@cisco.com> wrote:
> 
> 
>> -----Original Message-----
>> From: Paterson, Kenny [mailto:kenny.pater...@rhul.ac.uk]
>> Sent: Tuesday, July 12, 2016 1:17 PM
>> To: Dang, Quynh (Fed); Scott Fluhrer (sfluhrer); Eric Rescorla; tls@ietf.org
>> Subject: Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt
>> 
>> Hi
>> 
>>> On 12/07/2016 18:04, "Dang, Quynh (Fed)" <quynh.d...@nist.gov> wrote:
>>> 
>>> Hi Kenny,
>>> 
>>>> On 7/12/16, 12:33 PM, "Paterson, Kenny" <kenny.pater...@rhul.ac.uk>
>>> wrote:
>>> 
>>>> Finally, you write "to come to the 2^38 record limit, they assume that
>>>> each record is the maximum 2^14 bytes". For clarity, we did not
>>>> recommend a limit of 2^38 records. That's Quynh's preferred number,
>>>> and is unsupported by our analysis.
>>> 
>>> What is problem with my suggestion even with the record size being the
>>> maximum value?
>> 
>> There may be no problem with your suggestion. I was simply trying to make it
>> clear that 2^38 records was your suggestion for the record limit and not 
>> ours.
>> Indeed, if one reads our note carefully, one will find that we do not make 
>> any
>> specific recommendations. We consider the decision to be one for the WG;
>> our preferred role is to supply the analysis and help interpret it if people
>> want that. Part of that involves correcting possible misconceptions and
>> misinterpretations before they get out of hand.
>> 
>> Now 2^38 does come out of our analysis if you are willing to accept single 
>> key
>> attack security (in the indistinguishability sense) of 2^{-32}. So in that 
>> limited
>> sense, 2^38 is supported by our analysis. But it is not our recommendation.
>> 
>> But, speaking now in a personal capacity, I consider that security margin to 
>> be
>> too small (i.e. I think that 2^{-32} is too big a success probability).
> 
> To be clear, this probability is that an attacker would be able to take a 
> huge (4+ Petabyte) ciphertext, and a compatibly sized potential (but 
> incorrect) plaintext, and with probability 2^{-32}, be able to determine that 
> this plaintext was not the one used for the ciphertext (and with probability 
> 0.999999999767..., know nothing about whether his guessed plaintext was 
> correct or not).
> 
> I'm just trying to get people to understand what we're talking about.  This 
> is not "with probability 2^{-32}, he can recover the plaintext"
> 
> 
>> 
>> Regards,
>> 
>> Kenny
> 



Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Paterson, Kenny
Hi

On 12/07/2016 18:12, "Dang, Quynh (Fed)" <quynh.d...@nist.gov> wrote:

>Hi Kenny, 
>
>On 7/12/16, 1:05 PM, "Paterson, Kenny" <kenny.pater...@rhul.ac.uk> wrote:
>
>>Hi
>>
>>On 12/07/2016 16:12, "Dang, Quynh (Fed)" <quynh.d...@nist.gov> wrote:
>>
>>>Hi Kenny,
>>>
>>>I support the strongest indistinguishability notion mentioned in (*)
>>>above, but in my opinion we should provide good description to the
>>>users.
>>
>>OK, I think now we are at the heart of your argument. You support our
>>choice of security definition and method of analysis after all.
>>
>>And we can agree that good descriptions can only help.
>>
>>>That is why I support the limit around 2^38 records.
>>
>>I don't see how changing 2^24.5 (which is in the current draft) to 2^38
>>provides a better description to users.
>>
>>Are you worried they won't know what a decimal in the exponent means?
>>
>>Or, more seriously, are you saying that 2^{-32} for single key attacks is
>>a big enough security margin? If so, can you say what that's based on?
>
>It would not make sense to ask people to rekey unnecessarily. 1 in 2^32 is
>1 in 4,294,967,296 for the indistinguishability attack.

I would agree that it does not make sense to ask TLS peers to rekey
unnecessarily. I also agree that 1 in 2^32 is
1 in 4,294,967,296. Sure looks like a big, scary number, don't it?

Are you then arguing that 2^{-32} for single key attacks is a big enough
security margin because we want to avoid rekeying? Then do you have a
specific concern about the security of rekeying? I could see various ways
in which it might go wrong if not designed carefully.

Or are you directly linking a fundamental security question to an
operational one, by which I mean: are you saying we should trade security
for avoiding the "cost" of rekeying for some notion of "cost"? If so, can
you quantify the cost for the use cases that matter to you?

Cheers,

Kenny 



Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Paterson, Kenny
Hi

On 12/07/2016 18:04, "Dang, Quynh (Fed)" <quynh.d...@nist.gov> wrote:

>Hi Kenny, 
>
>On 7/12/16, 12:33 PM, "Paterson, Kenny" <kenny.pater...@rhul.ac.uk> wrote:
>
>>Finally, you write "to come to the 2^38 record limit, they assume that
>>each record is the maximum 2^14 bytes". For clarity, we did not recommend
>>a limit of 2^38 records. That's Quynh's preferred number, and is
>>unsupported by our analysis.
>
>What is problem with my suggestion even with the record size being the
>maximum value?

There may be no problem with your suggestion. I was simply trying to make
it clear that 2^38 records was your suggestion for the record limit and
not ours. Indeed, if one reads our note carefully, one will find that we
do not make any specific recommendations. We consider the decision to be
one for the WG; our preferred role is to supply the analysis and help
interpret it if people want that. Part of that involves correcting
possible misconceptions and misinterpretations before they get out of hand.

Now 2^38 does come out of our analysis if you are willing to accept single
key attack security (in the indistinguishability sense) of 2^{-32}. So in
that limited sense, 2^38 is supported by our analysis. But it is not our
recommendation.

But, speaking now in a personal capacity, I consider that security margin
to be too small (i.e. I think that 2^{-32} is too big a success
probability).

Regards,

Kenny 



Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Paterson, Kenny
Unfortunately, that's not quite the right interpretation. The bounds one
obtains depend on both the total amount of data encrypted AND the number
of encryption queries the adversary is allowed to make to AES-GCM under
the (single) target key.

We assumed each record was 2^14 bytes in size to simplify the ensuing
analysis, and to enable us to focus on how the security bounds then depend
on the number of records encrypted. See equation (5) and Table 2 in the
note at 

http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf.

In short, the security bound does not necessarily hold for ANY 2^52
encrypted data bytes. For example, if the attacker encrypted 2^52 records
of size 1 (!) then equation (5) would tell us almost nothing useful at all
about security.
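
To see the effect numerically, here is a crude birthday-style stand-in for 
equation (5) (an assumption for illustration only; the real bound has 
additional terms and constants):

```
import math

def rough_margin_log2(records: float, blocks_per_record: float) -> float:
    """log2 of (sigma + q)^2 / 2^128, with sigma = total blocks
    encrypted and q = number of records (encryption queries).
    Crude stand-in for equation (5) of the AE-bounds note."""
    sigma = records * blocks_per_record
    return 2 * math.log2(sigma + records) - 128

print(rough_margin_log2(2**24.5, 2**10))  # ~ -59: full-size records
print(rough_margin_log2(2**52, 1))        # ~ -22: one-block records
```

Even in this simplified form, the margin collapses by dozens of powers of two 
as the per-record size shrinks, which is why a record-count limit only makes 
sense relative to an assumed record size.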

Finally, you write "to come to the 2^38 record limit, they assume that
each record is the maximum 2^14 bytes". For clarity, we did not recommend
a limit of 2^38 records. That's Quynh's preferred number, and is
unsupported by our analysis.

Cheers,

Kenny 


On 12/07/2016 16:45, "Scott Fluhrer (sfluhrer)" <sfluh...@cisco.com> wrote:

>Actually, a more correct way of viewing the limit would be 2^52 encrypted
>data bytes. To come to the 2^38 record limit, they assume that each
>record is the maximum 2^14 bytes.  Of course, at a 1Gbps rate, it'd take
>over a year to encrypt that much data...
>
>> -----Original Message-----
>> From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Dang, Quynh (Fed)
>> Sent: Tuesday, July 12, 2016 11:12 AM
>> To: Paterson, Kenny; Dang, Quynh (Fed); Eric Rescorla; tls@ietf.org
>> Subject: Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt
>> 
>> Hi Kenny,
>> 
>> The indistinguishability-based security notion in the paper is a
>>stronger
>> security notion than the (old) traditional confidentiality notion.
>> 
>> 
>> (*) Indistinguishability notion (framework) guarantees no other attacks
>>can
>> be better than the indistinguishability bound. Intuitively, you can't
>>attack if
>> you can't even tell two things are different or not. So, being able to
>>say two
>> things are different or not is the minimal condition to lead to any
>>attack.
>> 
>> The traditional confidentiality definition is that knowing only the
>>ciphertexts,
>> the attacker can't know any content of the corresponding plaintexts
>>with a
>> greater probability than some value and this value depends on the
>>particular
>> cipher. Of course, the maximum amount of data must not be more than
>> some limit under a given key which also depends on the cipher.
>> 
>> For example, with counter mode AES_128, let's say encrypting 2^70 input
>> blocks with a single key. With the 2^70 ciphertext blocks alone (each
>>block is
>> 128 bits), I don¹t think one can find out any content of any of the
>>plaintexts.
>> The chance for knowing any block of the plaintexts is
>> 1/(2^128) in this case.
>> 
>> I support the strongest indistinguishability notion mentioned in (*)
>>above,
>> but in my opinion we should provide good description to the users.
>> That is why I support the limit around 2^38 records.
>> 
>> Regards,
>> Quynh.
>> 
>> On 7/12/16, 10:03 AM, "Paterson, Kenny" <kenny.pater...@rhul.ac.uk>
>> wrote:
>> 
>> >Hi Quynh,
>> >
>> >This indistinguishability-based security notion is the confidentiality
>> >notion that is by now generally accepted in the crypto community.
>> >Meeting it is sufficient to guarantee security against many other forms
>> >of attack on confidentiality, which is one of the main reasons we use
>>it.
>> >
>> >You say that an attack in the sense implied by breaking this notion
>> >does not break confidentiality. Can you explain what you mean by
>> >"confidentiality", in a precise way? I can then try to tell you whether
>> >this notion will imply yours.
>> >
>> >Regards
>> >
>> >Kenny
>> >
>> >On 12/07/2016 14:04, "TLS on behalf of Dang, Quynh (Fed)"
>> ><tls-boun...@ietf.org on behalf of quynh.d...@nist.gov> wrote:
>> >
>> >>Hi Eric and all,
>> >>
>> >>
>> >>In my opinion, we should give better information about data limit for
>> >>AES_GCM in TLS 1.3 instead of what is current in the draft 14.
>> >>
>> >>
>> >>In this paper: http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf,  what
>> >>is called confidentiality attack is the known plaintext
>> >>differentiality attack 

Re: [TLS] New draft: draft-ietf-tls-tls13-14.txt

2016-07-12 Thread Paterson, Kenny
Hi Quynh,

This indistinguishability-based security notion is the confidentiality
notion that is by now generally accepted in the crypto community. Meeting
it is sufficient to guarantee security against many other forms of attack
on confidentiality, which is one of the main reasons we use it.

You say that an attack in the sense implied by breaking this notion does
not break confidentiality. Can you explain what you mean by
"confidentiality", in a precise way? I can then try to tell you whether
this notion will imply yours.

Regards

Kenny 

On 12/07/2016 14:04, "TLS on behalf of Dang, Quynh (Fed)"
 wrote:

>Hi Eric and all, 
>
>
>In my opinion, we should give better information about data limit for
>AES_GCM in TLS 1.3 instead of what is current in the draft 14.
>
>
>In this paper: http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf,  what is
>called confidentiality attack is the known plaintext differentiality
>attack where
> the attacker has/chooses two plaintexts, send them to the AES-encryption
>oracle.  The oracle encrypts one of them, then sends the ciphertext to
>the attacker.  After seeing the ciphertext, the attacker has some success
>probability of telling which plaintext
> was encrypted and this success probability is in the column called
>“Attack Success Probability” in Table 1.  This attack does not break
>confidentiality. 
>
>
>If the attack above breaks one of security goal(s) of your individual
>system, then making success probability of that attack at 2^(-32) max is
>enough. In that case, the Max number of records is around 2^38.
>
>
>
>
>Regards,
>Quynh. 
>
>
>
>
>
>
>Date: Monday, July 11, 2016 at 3:08 PM
>To: "tls@ietf.org" 
>Subject: [TLS] New draft: draft-ietf-tls-tls13-14.txt
>
>
>
>Folks,
>
>
>I've just submitted draft-ietf-tls-tls13-14.txt and it should
>show up on the draft repository shortly. In the meantime you
>can find the editor's copy in the usual location at:
>
>
>  http://tlswg.github.io/tls13-spec/
>
>
>The major changes in this document are:
>
>
>* A big restructure to make it read better. I moved the Overview
>  to the beginning and then put the document in a more logical
>  order starting with the handshake and then the record and
>  alerts.
>
>
>* Totally rewrote the section which used to be called "Security
>  Analysis" and is now called "Overview of Security Properties".
>  This section is still kind of a hard hat area, so PRs welcome.
>  In particular, I know I need to beef up the citations for the
>  record layer section.
>
>
>* Removed the 0-RTT EncryptedExtensions and moved ticket_age
>  into the ClientHello. This quasi-reverts a change in -13 that
>  made implementation of 0-RTT kind of a pain.
>
>
>As usual, comments welcome.
>-Ekr
>
>
>
>
>
>
>* Allow cookies to be longer (*)
>
>
>* Remove the "context" from EarlyDataIndication as it was undefined
>  and nobody used it (*)
>
>
>* Remove 0-RTT EncryptedExtensions and replace the ticket_age extension
>  with an obfuscated version. Also necessitates a change to
>  NewSessionTicket (*).
>
>
>* Move the downgrade sentinel to the end of ServerHello.Random
>  to accomodate tlsdate (*).
>
>
>* Define ecdsa_sha1 (*).
>
>
>* Allow resumption even after fatal alerts. This matches current
>  practice.
>
>
>* Remove non-closure warning alerts. Require treating unknown alerts as
>  fatal.
>
>
>* Make the rules for accepting 0-RTT less restrictive.
>
>
>* Clarify 0-RTT backward-compatibility rules.
>
>
>* Clarify how 0-RTT and PSK identities interact.
>
>
>* Add a section describing the data limits for each cipher.
>
>
>* Major editorial restructuring.
>
>
>* Replace the Security Analysis section with a WIP draft.
>
>
>(*) indicates changes to the wire protocol which may require
>implementations
>to update.
>
>
>
>
>



Re: [TLS] Consensus call for keys used in handshake and data messages

2016-06-17 Thread Paterson, Kenny
Hi Ilari,

On 14/06/2016 20:01, "TLS on behalf of Ilari Liusvaara"
 wrote:

>I too haven't seen an argument (nor am I able to construct one
>myself) on why using the same key causes more issues than
>"more difficult for cryptographers" (without assumptions known
>to be false or cause severe problems no matter what).
>
>
>Such arguments could include e.g. crypto screw (no proof of
>exploitability needed), implementability, narrowing works-vs-
>correct gap, etc...
>
>
>About every other issue I could come up with, it seems to be just
>as bad with separate keys and public content types (except those
>ones that are just worse with public content types of course).
>

Since no-one else replied: it's a detailed technical issue about
constructing proofs of security. At a very high level, and at the risk of
over-simplifying, the more "key separation" you have, the easier it is to
get them to go through.

Maybe someone else who is more into the details than me can chime in with
the next-level explanation.

Cheers

Kenny 

>
>
>-Ilari
>


Re: [TLS] Consensus call for keys used in handshake and data messages

2016-06-17 Thread Paterson, Kenny
Hi Ilari,

On 15/06/2016 17:23, "TLS on behalf of Ilari Liusvaara"
 wrote:

>On Wed, Jun 15, 2016 at 09:44:18AM -0400, Daniel Kahn Gillmor wrote:
>> On Wed 2016-06-15 04:44:59 -0400, Yoav Nir wrote:
>> 
>> To be clear, we're being asked to trade these things off against each
>> other here, but there are other options which were ruled out in the
>> prior framing of the question which don't rule either of them out.
>> 
>> In particular, if we're willing to pay the cost of a slightly more
>> complex key schedule (and an increased TLS record size), we could have
>> "packet header" keys which protect the content-type itself for all
>> non-cleartext TLS records.  If we do that, these keys might as well also
>> be used to protect the TLS record size itself.  This would result in an
>> opaque data stream (though obviously record size would still leak in
>> DTLS, and timing and framing is still likely to leak the record size in
>> the lowest-latency TLS applications).
>
>Does this need to enlarge TLS record size? Why doesn't encrypting the
>content-type/length and then authenticating those off main MAC work
>(that's how SSH with CHACHA20-POLY1305 does things)? I presume
>problems from header-flipping (tho in TLS that will kill the
>connection if you try...)

Yes, this can be made to work in the style adopted for ChaCha20-Poly1305
in SSH. 

However, because the record length is now determined by data that is
encrypted, and you need to know its value in order to "receive" enough
bytes to have obtained the record MAC which comes at the end of the
record, and because the record MAC can't now be checked before you make
use of the length field, you need to be a bit careful.

But it can be proved secure when using certain AEAD schemes as the basis,
and in a suitable security model that allows for decryption in part
depending on data that was acted on before it was unauthenticated and for
delivery of records in a fragmented fashion. Just don't use CBC mode for
the encryption :-)  (A more serious point: this kind of thing would not be
secure using a generic EtM-style AEAD scheme as the building block.)

In fact, if you're careful enough with the analysis, you can improve a bit
on the ChaCha20-Poly1305 construction in SSH: it currently uses a 64-byte
key, with 32 bytes being used to create one ChaCha20 context for
encrypting the length field and another 32 bytes being used to create a
second ChaCha20 context for encrypting the rest. This is not necessary if
you construct the ChaCha20 nonces/IVs in a slightly different way - a
single ChaCha20 context suffices. The same ought to be true in the
slightly different TLS setting, and also for an AES-GCM-based construction.
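
For concreteness, a toy sketch of the single-context idea (using 
PyCryptodome's ChaCha20; the nonce layout and names are assumptions for 
illustration, not the SSH construction or a concrete TLS proposal, and the 
Poly1305 MAC over the record is omitted):

```
from Crypto.Cipher import ChaCha20   # PyCryptodome

def keystream(key: bytes, nonce12: bytes, n: int) -> bytes:
    """n bytes of ChaCha20 keystream for a given 12-byte nonce."""
    return ChaCha20.new(key=key, nonce=nonce12).encrypt(bytes(n))

def seal_record(key: bytes, seq: int, payload: bytes) -> bytes:
    """Encrypt the 2-byte length and the payload under ONE 32-byte key,
    separating the two uses by a 4-byte domain tag inside the nonce
    (tag 0 = length, tag 1 = payload)."""
    seq8 = seq.to_bytes(8, "big")
    length = len(payload).to_bytes(2, "big")
    enc_len = bytes(a ^ b for a, b in
                    zip(length, keystream(key, b"\x00\x00\x00\x00" + seq8, 2)))
    enc_payload = ChaCha20.new(
        key=key, nonce=b"\x00\x00\x00\x01" + seq8).encrypt(payload)
    return enc_len + enc_payload
```

The domain tag keeps the length-field keystream and the payload keystream 
disjoint, which is what removes the need for the second 32-byte key used in 
the SSH construction.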

Happy to follow-up with discussion of more details if people seriously
want to consider this kind of construction for TLS 1.3. It's not what's
currently on the table, but maybe it should be...

Cheers

Kenny 

>
>Also, in DTLS, there could be issues switching the encryption on
>(but then, looks like DTLS 1.3 has other unsolved problems
>currently..)
>
>
>-Ilari
>
>


Re: [TLS] [Technical Errata Reported] RFC5288 (4694)

2016-05-16 Thread Paterson, Kenny
Hi

On 16/05/2016 11:15, "Aaron Zauner" <a...@azet.org> wrote:

>Hi Kenny,
>
>> On 16 May 2016, at 16:48, Paterson, Kenny <kenny.pater...@rhul.ac.uk>
>>wrote:
>> 
>> Maybe the confusion is this: in your authenticity attack, you do recover
>> the GHASH key, and the effect is catastrophic. In the confidentiality
>> attack, one can recover plaintexts for the records with repeated nonces,
>> but not the encryption key. The effect may be bad - but it's perhaps not
>> as catastrophic in practice as the authenticity attack.
>
>Ah, I see. Yes and we do not consider this in our paper at all. Maybe we
>should? Not sure how practical this is.

Good to get this cleared up. Yes, it's eminently practical to recover the
two plaintexts from their XOR assuming you have a good language model
(e.g. one can use a Markov model with a suitable memory length; this would
work for HTTP records, natural language, etc). To code it all up is not
trivial - I currently set it as a final year project for our undergrad
students, for example. The paper by Mason et al from CCS 2006 gives a nice
account of the whole business.

>
>> Think about it this way: for your injection attack, you need to recover
>> the CTR keystream - otherwise you couldn't properly AES-GCM-encrypt your
>> chosen plaintext record for the injection. But if you recovered the
>> keystream as part of your attack, then you've also recovered the
>>plaintext
>> for the original record.
>> 
>> Or maybe in your injection attack you were assuming you already *knew*
>>the
>> plaintext? That would make sense, I guess - a lot easier then to recover
>> the keystream than doing the "undoing the XOR" attack needed to recover
>> P_1 and P_2 from P_1 XOR P_2.
>
>The first step of our attack involves attacker controlled content. So yes
>(phishing, unauthenticated HTTP, selective company DPI etc.). In our
>example we use a local proxy to carry out the attack. I hope I can post a
>full version of the actual paper and PoC to this thread soon.

OK, makes sense now.

Cheers

Kenny 

>
>Aaron



Re: [TLS] [Technical Errata Reported] RFC5288 (4694)

2016-05-16 Thread Paterson, Kenny
Hi

On 16/05/2016 10:37, "Aaron Zauner" <a...@azet.org> wrote:

>Hi Kenny,
>
>> On 16 May 2016, at 16:18, Paterson, Kenny <kenny.pater...@rhul.ac.uk>
>>wrote:
>> 
>> Hi Aaron,
>> 
>> If AES-GCM ever generates two ciphertexts using the same key and the
>>same
>> 96-bit nonce, then the underlying CTR-mode keystreams will be the same.
>> XORing the ciphertexts together then produces the XOR of the plaintexts,
>> from which the two individual plaintexts can be recovered (usually) with
>> high probability using standard techniques (see the paper by Mason et al
>> at CCS 2006 for a full account of this step).
>> 
>> In the TLS context, this means using the same 64-bit nonce_explicit in a
>> given connection - because then opaque salt will be the same 32-bit
>>value.
>> 
>> This condition is detectable by an adversary because the nonce_explicit
>> part is sent on the wire (the clue is in the name!).
>> 
>> You don't need to know the full 96-bit nonce to carry out the attack.
>
>Yes, I understood that, of course. But:
>
>> Once you've recovered a plaintext, you can also recover the
>>corresponding
>> CTR-mode keystream. Together with the integrity key, this now enables
>> packet forgery attacks for arbitrary plaintexts (of length limited by
>>that
>> of the known keystream).
>
>Right. Joux's attack doesn't recover a plaintext of the actual TLS
>session, we attack GHASH in this case and factor possible candidate
>polynomials of the /authentication key/. In this context I assume
>'confidentiality compromise' with: somebody can recover plaintext from
>captured TLS records. At least in our attack this isn't the case. We're
>merely able to inject malicious content. Am I amiss? Or am I just
>confused about nomenclature?

I think you are amiss.

Maybe the confusion is this: in your authenticity attack, you do recover
the GHASH key, and the effect is catastrophic. In the confidentiality
attack, one can recover plaintexts for the records with repeated nonces,
but not the encryption key. The effect may be bad - but it's perhaps not
as catastrophic in practice as the authenticity attack.

Think about it this way: for your injection attack, you need to recover
the CTR keystream - otherwise you couldn't properly AES-GCM-encrypt your
chosen plaintext record for the injection. But if you recovered the
keystream as part of your attack, then you've also recovered the plaintext
for the original record.

Or maybe in your injection attack you were assuming you already *knew* the
plaintext? That would make sense, I guess - a lot easier then to recover
the keystream than doing the "undoing the XOR" attack needed to recover
P_1 and P_2 from P_1 XOR P_2.
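
To make that last point concrete, here's a minimal Python sketch of the
known-plaintext case (the keystream is just a random stand-in - this is
not Aaron's PoC):

```
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-in for the CTR keystream AES-GCM would emit for one
# (key, nonce) pair; it repeats because the nonce repeated.
keystream = os.urandom(64)

known_p1 = b"attacker-supplied content, e.g. served over plain HTTP"
c1 = xor(known_p1, keystream)        # what's observed on the wire

# Knowing P_1, the attacker peels the keystream straight off C_1...
recovered_ks = xor(c1, known_p1)

# ...and can CTR-"encrypt" any chosen record body up to that length.
chosen = b"injected malicious record body"
forged_body = xor(chosen, recovered_ks)
assert xor(forged_body, keystream) == chosen

# NB: a complete AES-GCM forgery also needs a valid tag, i.e. the
# GHASH authentication key recovered in the Joux-style step.
```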

Cheers

Kenny  


>
>Thank you,
>Aaron

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Technical Errata Reported] RFC5288 (4694)

2016-05-16 Thread Paterson, Kenny
Hi Aaron,

If AES-GCM ever generates two ciphertexts using the same key and the same
96-bit nonce, then the underlying CTR-mode keystreams will be the same.
XORing the ciphertexts together then produces the XOR of the plaintexts,
from which the two individual plaintexts can be recovered (usually) with
high probability using standard techniques (see the paper by Mason et al
at CCS 2006 for a full account of this step).

In the TLS context, this means using the same 64-bit nonce_explicit in a
given connection - because then opaque salt will be the same 32-bit value.

This condition is detectable by an adversary because the nonce_explicit
part is sent on the wire (the clue is in the name!).

You don't need to know the full 96-bit nonce to carry out the attack.

Once you've recovered a plaintext, you can also recover the corresponding
CTR-mode keystream. Together with the integrity key, this now enables
packet forgery attacks for arbitrary plaintexts (of length limited by that
of the known keystream).
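
For anyone who wants to see the nonce-reuse failure end to end, here's a
short demonstration in Python using the "cryptography" package (the
plaintexts and the repeated nonce are of course illustrative):

```
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

# 96-bit GCM nonce = 4-byte salt || 8-byte nonce_explicit; here a
# buggy sender repeats the whole thing.
nonce = b"\x00" * 12

p1 = b"GET /account HTTP/1.1\r\n"
p2 = b"Cookie: secret=12345678"
c1 = aesgcm.encrypt(nonce, p1, None)  # returns ciphertext || 16-byte tag
c2 = aesgcm.encrypt(nonce, p2, None)

# Same (key, nonce) => same CTR keystream => it cancels under XOR:
xor_ct = bytes(a ^ b for a, b in zip(c1[:-16], c2[:-16]))
xor_pt = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_ct == xor_pt  # P_1 XOR P_2, recovered purely from the wire
```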

IIRC, we discussed this by e-mail some months back...

Regards,

Kenny



On 16/05/2016 10:04, "TLS on behalf of Aaron Zauner"  wrote:

>Hi,
>
>In the TLS case, RFC5288 defines the following IV construction (Section
>3):
>
>```
> struct {
>opaque salt[4];
>opaque nonce_explicit[8];
> } GCMNonce;
>
>
>   The salt is the "implicit" part of the nonce and is not sent in the
>   packet.  Instead, the salt is generated as part of the handshake
>   process: it is either the client_write_IV (when the client is
>   sending) or the server_write_IV (when the server is sending).  The
>   salt length (SecurityParameters.fixed_iv_length) is 4 octets.
>```
>
>As you can see, the salt is implicitly derived from the *_write_IV. We
>have no influence on this part of the IV construction, whereas the
>`nonce_explicit` is generated by the implementer. I don't see how
>we could XOR some records and compromise confidentiality - we've checked,
>believe me. If somebody can come up with an attack though, that'd be nice.
>
>On the catastrophic part: I'd like to keep it around. I don't think it
>deserves a name like a hurricane, but catastrophic is pretty spot on in
>this regard.
>
>w.r.t. nonce/n-nonce: either we keep the parentheses with "number used
>once" around or we change it to n-once as suggested by Tony and
>beautifully pronounced by Adam Langley :)
>
>Aaron

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Include Speck block cipher?

2016-03-21 Thread Paterson, Kenny
Hi

I think Rich Salz already said exactly what CFRG would say:

> If someone wants to see SPECK adopted by IETF protocols, the first thing
>that will have to happen is papers analyzing it.

There's some analysis already, but not that much.

Regards,

Kenny 




On 21/03/2016 14:27, "TLS on behalf of Sean Turner"  wrote:

>If we’re going to get into the cryptanalysis of SPECK then this thread
>should move off the TLS list and possibly to the CFRG list.
>
>spt
>
>> On Mar 21, 2016, at 10:07, Efthymios Iosifides 
>>wrote:
>> 
>> >I don't see any compelling argument for the inclusion of SPECK? Not
>>only would the affiliation with NSA give the >TLS-WG a bad rep. in the
>>public, more importantly, it makes one of our main problems worse:
>>combinatorial explosion >of possible cipher-suites in TLS. This problem
>>is so bad that it needs multiple blog posts, an effort by Mozilla and
>>>bettercrypto.org to get sys-admins to configure their services.
>> 
>> 
>> Hi all.
>> 
>> The reputation aspect is not necessarily and strictly correlated with
>>its provenance, but with its actual security and performance. And SPECK,
>>we should note, performs quite well. Also, we should not forget that even
>>the famous AES was approved by the NSA before its widespread use. In any
>>case I wouldn't like us to rely on the popular press. On the other hand,
>>we should evaluate whether SPECK could actually be used. For example, the
>>fact that it lacks extensive cryptanalysis is a serious argument against
>>using it today, but what about future specifications? On top of that,
>>what if we could prove that SPECK can have better performance than other
>>algorithms without sacrificing security?
>> 
>> 
>> BRs,
>> Efthimios Iosifides
>> 
>> 2016-03-18 19:49 GMT+02:00 Aaron Zauner :
>> Hi,
>> 
>> > On 17 Mar 2016, at 07:35, Efthymios Iosifides 
>>wrote:
>> >
>> > Hello all.
>> >
>> > I have just found on the ietf archives an email discussion about the
>>inclusion of the SPECK Cipher
>> > in the tls standards.
>> > It's reference is below
>>:https://www.ietf.org/mail-archive/web/tls/current/msg13824.html
>> >
>> > Even though this cipher originates from the NSA, one cannot find
>>a whitepaper that describes its full cryptanalysis. In the above
>>discussion Mr. Strömbergson somewhat perfunctorily presents two
>>whitepapers that describe SPECK's cryptanalysis, although we should
>>keep in mind that these papers describe a limited-round cryptanalysis.
>>Also, we should not forget that a similar cryptanalysis has taken place
>>for the famous AES. Therefore I personally do not see any actual
>>arguments, apart from those concerning the algorithm's provenance,
>>for not including it in a future TLS specification. In conclusion, even
>>to this day the SPECK cipher has not yet been fully and successfully
>>cryptanalyzed.
>> 
>> I don't see any compelling argument for the inclusion of SPECK? Not
>>only would the affiliation with NSA give the TLS-WG a bad rep. in the
>>public, more importantly, it makes one of our main problems worse:
>>combinatorial explosion of possible cipher-suites in TLS. This problem
>>is so bad that it needs multiple blog posts, an effort by Mozilla and
>>bettercrypto.org to get sys-admins to configure their services.
>> 
>> Aaron
>> 
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>
>___
>TLS mailing list
>TLS@ietf.org
>https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLS 1.2 Long-term Support Profile draft posted

2016-03-19 Thread Paterson, Kenny
Hi

On 16/03/2016 15:02, "TLS on behalf of Watson Ladd"  wrote:

>On Wed, Mar 16, 2016 at 5:36 AM, Peter Gutmann
> wrote:
>> After a number of, uh, gentle reminders from people who have been
>>waiting for
>> this, I've finally got around to posting the TLS-LTS draft I mentioned
>>a while
>> back.  It's now available as:
>>
>> http://www.ietf.org/id/draft-gutmann-tls-lts-00.txt
>>
>> Abstract:
>>
>>This document specifies a profile of TLS 1.2 for long-term support,
>>one that represents what's already deployed for TLS 1.2 but with the
>>security holes and bugs fixed.  This represents a stable, known-good
>>profile that can be deployed now to systems that can't roll out
>>patches every month or two when the next attack on TLS is published.
>>
>> Several people have already commented on it off-list while it was being
>> written, it's now open for general comments...
>
>Several comments:



>The analysis of TLS 1.3 is just wrong. TLS 1.3 has been far more
>extensively analyzed than TLS 1.2. It's almost like you don't believe
>cryptography exists: that is a body of knowledge that can demonstrate
>that protocols are secure, and which has been applied to the draft.

This is patently untrue. There is a vast body of research analysing TLS
1.2 and earlier. A good survey article is here:

https://eprint.iacr.org/2013/049


(but even this is quite out of date in several respects). The literature
for TLS 1.3 is growing, but is an order of magnitude smaller in size. It
is pretty much represented in its entirety by the list of presentations at
the recent TRON workshop:

http://www.internetsociety.org/events/ndss-symposium-2016/tls-13-ready-or-not-tron-workshop-programme


As far as I know, the only complete analysis so far is this one:

http://tls13tamarin.github.io/TLS13Tamarin/


(full disclosure: two of my PhD students are involved). However, even
there, the analysis is symbolic and does not include 0-RTT (IIRC).

Maybe you'd care to revise your bold statement above?

Cheers

Kenny 

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLS 1.2 Long-term Support Profile draft posted

2016-03-19 Thread Paterson, Kenny
Hi

On 16/03/2016 18:44, "Watson Ladd" <watsonbl...@gmail.com> wrote:

>On Wed, Mar 16, 2016 at 11:22 AM, Paterson, Kenny
><kenny.pater...@rhul.ac.uk> wrote:
>> Hi
>>
>> On 16/03/2016 15:02, "TLS on behalf of Watson Ladd"
>><tls-boun...@ietf.org
>> on behalf of watsonbl...@gmail.com> wrote:
>>
>> 
>>
>>>The analysis of TLS 1.3 is just wrong. TLS 1.3 has been far more
>>>extensively analyzed than TLS 1.2. It's almost like you don't believe
>>>cryptography exists: that is a body of knowledge that can demonstrate
>>>that protocols are secure, and which has been applied to the draft.
>>
>> This is patently untrue. There is a vast body of research analysing TLS
>> 1.2 and earlier. A good survey article is here:
>>
>> https://eprint.iacr.org/2013/049
>
>There's a vast literature, but much of it makes simplifying
>assumptions or doesn't address the complete protocol.

Correct, but that does not make it irrelevant or valueless. Or are you
actually saying that it does? Quite a sweeping presumption; see
immediately below.

>The first really
>complete analysis was miTLS AFAIK.

Yes, and even there the analysis was done step by step, spread out over a
series of papers which gradually built up the complexity of the code-base
being handled. And, in parallel, various other groups were doing hand
proofs of abstractions of the core protocol. And I believe it's fair to
say - from having discussed it extensively with the people involved - that
the miTLS final analysis benefitted a lot from the experience gained by
the teams doing the hand proofs, going right back to a paper in 2002 by
Jonsson and Kaliski Jr.

My point is that the TLS 1.2 "final" analysis represented by the miTLS
work was the culmination of a long line of research involving many people
and influenced by many sources.

> Furthermore, a lot of the barriers
>to analysis in TLS 1.2 got removed in TLS 1.3.

Unfortunately, some of them may be coming back again. But again, this has
nothing to do with the argument you were making.

>The question is not how
>many papers are written, but how much the papers can say about the
>protocol as implemented. And from that perspective TLS 1.3's Tamarin
>model is a fairly important step, where the equivalent steps in TLS
>1.2 got reached only much later.

The timing is entirely irrelevant to the argument you were making.

I agree though that it's about the depth and reach of the analysis. And
from this perspective, I'd say that TLS 1.3 is still way behind TLS 1.2,
despite the very nice analyses done by Sam and Thyla (and their
collaborators), by Hugo & Hoeteck, and by Felix & co.

I could go further, but I expect that, by now, only you and I are actually
reading this.

>It's true 0-RTT isn't included: so don't do it. But I think if we
>subset (not add additional implementation requirements) TLS 1.3
>appropriately we end up with a long-term profile that's more useable
>than if we subset TLS 1.2, and definitely more than adding to the set
>of mechanisms. I think claims that TLS 1.3 outside of 0-RTT is likely
>to have crypto weaknesses due to newness are vastly overstated.

I didn't make that claim.

Cheers

Kenny

>-- 
>"Man is born free, but everywhere he is in chains".
>--Rousseau.

"If I have seen further it is by standing on the shoulders of Giants"
-- Newton, in a letter to Robert Hooke


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Proposal: don't change keys between handshake and application layer

2016-02-20 Thread Paterson, Kenny
Hi

My 2c below...

On 20/02/2016 18:53, "TLS on behalf of Cedric Fournet"
 wrote:
>
> 
>Besides, in our analysis of the handshake, we get precisely the same
>“fresh, never-used secret” property you are advocating, with or
> without the simplification, each time the handshake provides keys to the
>record layer. These points are well-specified and delimited. They just do
>not coincide with the end of the handshake. So I don’t see any
>fundamental difference in term of generic-security.
> 
>We are left with scenarios whereby the record layer, given both the
>handshake and application-data traffic keys, somehow leaks only
> one of the two. I don’t mind keeping the key change if someone is
>concerned about it.

Like Hugo, I am also concerned about removing the key change. Let me try
to explain why.

I would emphasise the point that while we *can* provide formal security
analyses in the situation where the application key is used during the
handshake itself, the analysis becomes significantly more complex.

In particular, it means we cannot simply combine security results arising
from separate analyses of the Handshake Protocol and of the Record
Protocol to obtain security guarantees for the composition of these two
protocols. So for example, if we chose to refine our modelling of the
Record Protocol in some way, then we would need to re-do the analysis of
TLS (Handshake + Record Protocols) as a monolithic effort. That's hard
work and error prone. This situation would not arise if the application
keys were NOT used in the handshake.

This is not merely a conjectured issue. As one example, several papers
(including [1,2]) in the "provable security" paradigm have used the ACCE
framework as a security model within which to conduct analysis of
fragments of previous versions of TLS. ACCE was specifically designed to
deal with keys that are used across the Handshake and the Record
protocols. But ACCE views the Record Protocol in a particular way -
essentially as a stateful encryption scheme that processes "atomic"
messages. This modelling does not take into full account the streaming
nature of the TLS Record Protocol - in fact, what the Record Protocol
actually guarantees is significantly different from what is implied by the
ACCE modelling of it - see [3] for details, and also the cookie cutter
attack from the Triple Handshakes paper for an example of what can go
wrong.

One implication of this is that the results of [1,2] for existing versions
of TLS really need to be reworked in a modified version of the ACCE
framework that better reflects the streaming nature of the Record
Protocol. That need for rework would have been avoided had TLS not used
application keys in the Handshake Protocol, because the compositional
guarantees would have enabled us to "plug and play" with our results.

The same pertains in TLS 1.3 going forward: keeping the strict key
separation - no use of the application keys in the Handshake - will make
future - and on-going - analyses of TLS 1.3 easier. This is
notwithstanding the fact that several research teams, including Cedric's
and [1,2] below, have managed to produce analyses that do handle the use
of the application key in the Handshake.
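
To illustrate operationally what "strict key separation" means, here's a
simplified Python sketch (the labels and the derivation are illustrative,
not the actual TLS 1.3 key schedule):

```
import hashlib
import hmac

def expand_label(secret, label, length=32):
    # Simplified HKDF-Expand-style derivation (a single block of
    # HMAC-SHA256 output), purely to make the separation visible.
    info = b"sketch tls13 " + label + b"\x01"
    return hmac.new(secret, info, hashlib.sha256).digest()[:length]

master_secret = b"\x0b" * 32  # placeholder for the negotiated secret

handshake_key = expand_label(master_secret, b"hs traffic")
application_key = expand_label(master_secret, b"ap traffic")
assert handshake_key != application_key

# The compositional argument then goes through because the handshake
# can be modelled as outputting application_key as a fresh, never-used
# secret: no handshake message is ever protected under it.
```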

Regards,

Kenny 

[1] Tibor Jager, Florian Kohlar, Sven Schäge, Jörg Schwenk. On the
Security of TLS-DHE in the Standard Model. CRYPTO 2012.
[2] Hugo Krawczyk, Kenneth G. Paterson, Hoeteck Wee. On the Security of
the TLS Protocol: A Systematic Analysis. CRYPTO (1) 2013.
[3] Marc Fischlin, Felix Günther, Giorgia Azzurra Marson, Kenneth G.
Paterson: Data Is a Stream: Security of Stream-Based Channels. CRYPTO (2)
2015.


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Data volume limits

2015-12-15 Thread Paterson, Kenny
RC4 does not rekey per application-layer fragment in TLS. The same key is
used for the duration of a connection.

Other protocols using RC4 do rekey per packet, e.g. WEP and WPA/TKIP.

Cheers

Kenny

> On 16 Dec 2015, at 16:37, Ryan Carboni  wrote:
> 
> How often does TLS rekey anyway? I know RC4 rekeys per packet, but I've read 
> and searched a fair amount of documentation, and haven't found anything on 
> the subject. Perhaps I'm looking for the wrong terms or through the wrong 
> documents.
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls