Re: [TLS] Treatment of (legacy_record_)version field [was Re: (strict) decoding of legacy_record_version?]

2016-11-23 Thread Andreas Walz
>>> Eric Rescorla  11/23/16 2:18 PM >>>
> In general, it should ignore it. It's going to become increasingly common to
> have this be a version you don't support given the recommendation to use
> 0301 and the ongoing deprecation of TLS 1.0. I think it would be fine to
> sanity check the major version, but I'm not sure what would be gained by
> requiring this.

The one benefit I see in checking at least the record's major version is that
it preserves a means of signalling, in the future, a record format incompatible
with the current one. Otherwise, this field is given away for arbitrary and
non-standardized (ab)use ...
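
As an illustration of the check being discussed, here is a minimal sketch of a
record-header parse that rejects only an incompatible major version; the length
limit and the error handling are assumptions, not anything the draft mandates:

```python
# Sketch: accept any 0x03xx record version, reject other major versions as an
# incompatible record format.  Length limit and error handling are assumptions.
import struct

RECORD_HEADER_LEN = 5
MAX_CIPHERTEXT_LEN = 2**14 + 2048   # TLS 1.2 ciphertext bound, used as a sanity limit

def check_record_header(header: bytes):
    if len(header) < RECORD_HEADER_LEN:
        raise ValueError("short record header")
    content_type, major, minor, length = struct.unpack(
        "!BBBH", header[:RECORD_HEADER_LEN])
    # Only the major version is checked: 0x03 covers SSLv3 through TLS 1.2 and
    # TLS 1.3's legacy_record_version; anything else signals a record format
    # this parser does not understand, instead of being silently ignored.
    if major != 0x03:
        raise ValueError("unsupported record-layer major version 0x%02x" % major)
    if length > MAX_CIPHERTEXT_LEN:
        raise ValueError("record overflow")
    return content_type, (major, minor), length

# A handshake record with legacy_record_version 0x0301 passes the check:
check_record_header(bytes([0x16, 0x03, 0x01, 0x00, 0x2a]))
```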

 
Thanks and Cheers,
Andi

___

Andreas Walz
Research Engineer
Institute of reliable Embedded Systems and Communication Electronics (ivESK)
Offenburg University of Applied Sciences, 77652 Offenburg, Germany





>>> Benjamin Kaduk  11/10/16 5:22 PM >>>
 On 11/08/2016 06:25 PM, Martin Thomson wrote:
On 9 November 2016 at 05:59, Brian Smith  
wrote:
This isn't a pervasively shared goal, though. It's good to let 
the browsers
police things if they want, but I think a lot of implementations would
prefer to avoid doing work that isn't necessary for interop or security.
  If you permit someone to enforce it, then that is sufficient.  I don't
think that we should ever force someone to enforce these sorts of
things (as you say, sometimes strict enforcement isn't cheap or even
desirable).
  
 Agreed.  We should probably change the text a bit, though, as right 
now readers can get two different readings depending on whether they go for 
a strict decode_error (or illegal_parameter?) since the struct doesn't 
match the definition, or follow the "MUST be ignored for all purposes".
 
 -Ben
 

 
___
 TLS mailing list
 TLS@ietf.org
 https://www.ietf.org/mailman/listinfo/tls
 




 

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Additional warnings on 0-RTT data

2016-11-23 Thread Christian Huitema
On Wednesday, November 23, 2016 7:20 PM, Colm MacCárthaigh wrote:
>
> Prior to TLS1.3, replay was not possible, so the risks are new, but the
> end-to-end designers may not realize they need to update their threat model
> and just what is required. I'd like to spell that out more than what's there
> at present.

Uh? Replay was always possible, at the application level. Someone might for 
example click twice on the same URL, opening two tabs, closing one at random. 
And that's without counting on deliberate mischief.

-- Christian Huitema



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Additional warnings on 0-RTT data

2016-11-23 Thread Colm MacCárthaigh
On Wed, Nov 23, 2016 at 8:40 PM, Martin Thomson 
wrote:

> On 24 November 2016 at 15:11, Colm MacCárthaigh  wrote:
> > Do you disagree that the three specific example security issues provided
> are
> > realistic, representative and impactful? If so, what would persuade you
> to
> > change your mind?
>
> These are simply variants on "if someone hits you with a stick, they
> might hurt you", all flow fairly logically from the premise, namely
> that replay is possible (i.e., someone can hit you with a stick).
>

Prior to TLS1.3, replay was not possible, so the risks are new, but the
end-to-end designers may not realize they need to update their threat model and
just what is required. I'd like to spell that out more than what's there at
present.

> The third is interesting, but it's also the most far-fetched of the
> lot (a server might read some bytes, which it later won't read,
> exposing a timing attack).


I need to work on the wording because the nature of the attack must not be
clear. It's really simple. If the 0-RTT data primes the cache, then a
subsequent request will be fast. If not, it will be slow.

If implemented on a CDN for example, the effort required would be nearly
trivial for an attacker. Basically: I replay the 0-RTT data, then probe a
bunch of candidate resources with regular requests. If one of them loads in
20ms and the others load in 100ms, well now I know which resource the 0-RTT
data was for. I can perform this attack against CDN nodes that are quiet,
or remote, and very unlikely to have the resources cached to begin with.
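
To make the probe concrete, here is a rough sketch of the measurement step
(illustration only, not a working exploit; the host, candidate URLs and the
20ms/100ms threshold are invented, and the 0-RTT replay itself is left out):

```python
# Sketch of the timing probe described above: after replaying the captured
# 0-RTT flight at a quiet CDN node, time a few candidate resources and pick
# the outlier that now comes back from cache.  Everything here is illustrative.
import time
import urllib.request

CANDIDATES = [
    "https://cdn.example.net/reports/2016-q3.pdf",
    "https://cdn.example.net/reports/2016-q4.pdf",
    "https://cdn.example.net/board-minutes.pdf",
]

def fetch_seconds(url: str) -> float:
    start = time.monotonic()
    urllib.request.urlopen(url).read()
    return time.monotonic() - start

def guess_replayed_resource(candidates=CANDIDATES) -> str:
    # ... replay the captured early data against the target node first ...
    timings = {url: fetch_seconds(url) for url in candidates}
    # The resource referenced by the victim's 0-RTT data is the one that loads
    # in ~20ms instead of ~100ms, because the replay primed the cache.
    return min(timings, key=timings.get)
```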


> But that's also corollary material; albeit
> less obvious.  Like I said, I've no objection to expanding a little
> bit on what is possible: all non-idempotent activity, which might be
> logging, load, and some things that are potentially observable on
> subsequent requests, like IO/CPU cache state that might be affected by
> a request.
>

ok, cool :)


> >> I'm of the belief that end-to-end
> >> replay is a property we should be building in to protocols, not just
> >> something a transport layer does for you.  On the web, that's what
> >> happens, and it contributes greatly to overall reliability.
> >
> > The proposal here I think promotes that view; if anything, it nudges
> > protocols/applications to actually support end-to-end replay.
>
> You are encouraging the TLS stack to do this, if not the immediate
> thing that drives it (in our case, that would be the HTTP stack).  If
> the point is to make a statement about the importance of the
> end-to-end principle with respect to application reliability, the TLS
> spec isn't where I'd go for that.
>

I'm not sure where the folks designing the HTTP and other protocols would
get the information from if not the TLS spec. It is TLS that's changing
too. Hardly any harm in duplicating the advice anyway.


> > I think there is a far worse externalization if we don't do this.
> Consider
> > the operations who choose not (or don't know) to add good replay
> protection.
> > They will iterate more quickly and more cheaply than the diligent
> providers
> > who are cautious to add the effective measures, which are expensive and
> > difficult to get right.
>
> OK let's ask a different question: who is going to do this?
>

I am, for one. I don't see 0-RTT as a "cheap" feature. It's a very very
expensive one. To mitigate the kind of issues it brings really requires
atomic transactions. Either the application needs them, or something below
it does. So far I see fundamentally no "out" of that, and once we have
atomic transactions then we either have some kind of multi-phase commit
protocol, distributed consensus, routing of data to master nodes, or some
combination thereof.

The smartest people I've ever met work on these kinds of systems and they
all say it's really really hard and subtle. So when I imagine Zero-RTT
being done correctly, I imagine organizations signing up for all of that,
and it being worth it, because latency matters that much. That's a totally
valid decision.

And in that context, the additional expense of intentionally replaying
0-RTT seems minor and modest. My own tentative plan is to do it at the
implementation level; to have the TLS library occasionally spoof 0-RTT data
sections towards the application. This is the same technique used for
validating non-replayable request-level auth.
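
As a rough sketch of that library-level idea (the callback name and the
probability are assumptions, not any existing TLS API):

```python
# Sketch: with small probability, hand the application the same early-data
# payload twice, so a server that silently relies on TLS for anti-replay fails
# loudly in testing rather than quietly in production.
import random

REPLAY_PROBABILITY = 0.01   # e.g. 1% of 0-RTT connections see a duplicate

def handle_early_data(payload: bytes, deliver_early_data) -> None:
    deliver_early_data(payload)
    if random.random() < REPLAY_PROBABILITY:
        # Deliberate, self-inflicted replay of the identical 0-RTT section.
        deliver_early_data(payload)
```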

> I don't see browsers doing anything like what you request; nor do I
> see tools/libs like curl or wget doing it either.  If I'm wrong and
> they do, they believe in predictability so won't add line noise
> without an application asking for it.
>

I hope this isn't the case, but if it is and browsers generally agree that
it would be unimplementable and impractical, then I think 0-RTT should be
removed unless we can come up with other effective mitigations. Otherwise
it's predictable that we'll see deployments that don't bother to solve the
hard problems and are vulnerable to the issues I've 

Re: [TLS] Additional warnings on 0-RTT data

2016-11-23 Thread Martin Thomson
On 24 November 2016 at 15:11, Colm MacCárthaigh  wrote:
> Do you disagree that the three specific example security issues provided are
> realistic, representative and impactful? If so, what would persuade you to
> change your mind?

These are simply variants on "if someone hits you with a stick, they
might hurt you", all flow fairly logically from the premise, namely
that replay is possible (i.e., someone can hit you with a stick).

The third is interesting, but it's also the most far-fetched of the
lot (a server might read some bytes, which it later won't read,
exposing a timing attack).  But that's also corollary material; albeit
less obvious.  Like I said, I've no objection to expanding a little
bit on what is possible: all non-idempotent activity, which might be
logging, load, and some things that are potentially observable on
subsequent requests, like IO/CPU cache state that might be affected by
a request.

>> I'm of the belief that end-to-end
>> replay is a property we should be building in to protocols, not just
>> something a transport layer does for you.  On the web, that's what
>> happens, and it contributes greatly to overall reliability.
>
> The proposal here I think promotes that view; if anything, it nudges
> protocols/applications to actually support end-to-end replay.

You are encouraging the TLS stack to do this, if not the immediate
thing that drives it (in our case, that would be the HTTP stack).  If
the point is to make a statement about the importance of the
end-to-end principle with respect to application reliability, the TLS
spec isn't where I'd go for that.

> The problems of 0-RTT are disproportionately under-estimated. I've provided
> what I think are three concrete and realistic security issues. If we
> disagree on those, let's draw that out, because my motivation is to mitigate
> those new issues that are introduced by TLS1.3.
>
>> What I object to here is the externalizing that this represents.  Now if I
>> have the audacity to
>> deploy 0-RTT, I have to tolerate some amount of extra trash traffic
>> from legitimate clients?
>
>
> I think there is a far worse externalization if we don't do this. Consider
> the operations who choose not (or don't know) to add good replay protection.
> They will iterate more quickly and more cheaply than the diligent providers
> who are cautious to add the effective measures, which are expensive and
> difficult to get right.

OK let's ask a different question: who is going to do this?

It's a non-trivial thing you ask for.  This involves a new connection
setup just to send a few packets, and probably a timer so that you can
wait long enough for the server to explode.  Do you expect to replay
before the real attempt?  Because that's even less likely to happen.

Connections aren't cheap, and neither is bandwidth.  And the time required
to build such a feature would be better spent elsewhere, not to
mention the ongoing maintenance.

I don't see browsers doing anything like what you request; nor do I
see tools/libs like curl or wget doing it either.  If I'm wrong and
they do, they believe in predictability so won't add line noise
without an application asking for it.

If few enough people do this, what makes you think that such a tiny
amount of replay would make any difference?  Unless you replay with
high probability, then the odd error will be dismissed as a transient.
This is especially true because most of these sorts of exploitable
errors will happen adjacent to some sort of network glitch (the
requests at the start of most connections - on the web at least - are
pretty lame).

Then all you have done is increase the global rate of "have you
turned it off and on again?", which is - after all - yet another
opportunity for replay.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Additional warnings on 0-RTT data

2016-11-23 Thread Martin Thomson
On 24 November 2016 at 13:18, Colm MacCárthaigh  wrote:
> Can I break this into two parts then? First, do you agree that it would be
> legitimate for a client, or an implementation (library), to deliberately
> replay 0-RTT data? E.g. browsers and TLS libraries MAY implement this as a
> safety mechanism, to enforce and audit the server's and application's
> ability to handle the challenges of replay-ability correctly.

OK, let's be clear: I don't agree that the level of paranoia
surrounding 0-RTT is warranted.  I'm of the belief that end-to-end
replay is a property we should be building in to protocols, not just
something a transport layer does for you.  On the web, that's what
happens, and it contributes greatly to overall reliability.

The reaction to perceived problems in 0-RTT is disproportionate.  You
are asking for a license to replay here at some arbitrary layer of the
stack.  That's not principled; it's just on the basis that you don't
like 0-RTT and want to inoculate other people's software against the
ill effects it might create.  What I object to here is the
externalizing that this represents.  Now if I have the audacity to
deploy 0-RTT, I have to tolerate some amount of extra trash traffic
from legitimate clients?

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Additional warnings on 0-RTT data

2016-11-23 Thread Colm MacCárthaigh
On Wed, Nov 23, 2016 at 3:03 PM, Martin Thomson 
wrote:

> This seems like too much text to me.  Maybe some people would
> appreciate the reminder that replay might cause side-effects to be
> triggered multiple times, or that side effects are just effects and
> those might also be observable.  But I think that those reminders
> could be provided far more succinctly.
>

My main goal is clarity, especially to application builders. In my
experience the implications of replay-ability are frequently overlooked and
it's well worth being abundantly clear.  For most builders, those full
implications have never occurred to them (why would they?), so it's often
not a reminder, but new knowledge. It warrants a big warning label, not
FUD, but honestly scary because it is a very hard problem.

But still, the text can most likely be shortened some, edits welcome.


> The bit that concerns me most is the recommendation to intentionally
> replay early data.  Given that I expect that a deployment that enables
> 0-RTT will tolerate some amount of side-effects globally, and that
> excessive 0-RTT will trigger DoS measures, all you are doing is
> removing some of the safety margin those services operate with


Can I break this into two parts then? First, do you agree that it would be
legitimate for a client, or an implementation (library), to deliberately
replay 0-RTT data? E.g. browsers and TLS libraries MAY implement this as a
safety mechanism, to enforce and audit the server's and application's
ability to handle the challenges of replay-ability correctly.

At a bare minimum I want to be free to include this in our implementation
of TLS, and in the clients that we use. I think there's value in explicitly
documenting this, and that the problem is squarely on the server side if it
is not replay-even-within-time-window-tolerant. Frankly: I want to be able
to point to the spec and have it be clear whose fault it is.

Now second, consider a web service API that exists today and is using TLS.
Many such APIs depend entirely on TLS for all anti-replay protection
(though for the record, the SigV4 authentication mechanism we use at AWS
includes its own anti-replay measure). When TLS1.3 comes along it is *very*
tempting for providers to turn it on and use it for those APIs. Everyone
always wants everything to be faster, and it won't be a huge code change.
The problems of replay attacks could be dismissed as unlikely and left
unaddressed, as security issues often are.

So through a very predictable set of circumstances, I think those calls
will be left vulnerable to replay issues and TLS1.3 will absolutely degrade
security in these cases. What I'm suggesting is that we RECOMMEND a
systematic defense: clients and/or implementations should intentionally
generate duplicate data, so that the problem *has* to be addressed at least
in some measure by providers. I far prefer that they be forced to encounter
a low grade of errors early in their testing than to leave a large hole
looming for users to fall into.

This style of "always, always test the attack case in production" is also
just a good practice that keeps antibodies strong.

Thanks for reading and the feedback.


On 24 November 2016 at 04:47, Colm MacCárthaigh  wrote:
> >
> > I've submitted a PR with an expanded warning on the dangers of 0-RTT data
> > for implementors:
> >
> > https://github.com/tlswg/tls13-spec/pull/776/files
> >
> > The text is there, and the TLDR summary is basically that time-based
> > defenses aren't sufficient to mitigate idempotency bugs, so applications
> > need to be aware of the sharp edges and have a plan.  For clarity, I've
> > outlined three example security issues that could arise due to realistic
> and
> > straightforward, but naive, use of 0-RTT. There's been some light
> discussion
> > of each in person and on the list.
> >
> > In the PR I'm "MUST"ing that applications need to include /some/
> mitigation
> > for issues of this class (or else they are obviously insecure). In my
> > experience this class of issue is so pernicious, and easy to overlook,
> that
> > I'm also "RECOMMEND"ing that applications using Zero-RTT data
> > *intentionally* replay 0-RTT data non-deterministically, so as to keep
> > themselves honest.
> >
> > At a bare minimum I think it would be good to make clear that clients and
> > implementations MAY intentionally replay 0-RTT data; to keep servers on
> > their toes. For example a browser could infrequently tack on a dummy
> > connection with repeated 0-RTT data, or an implementation could
> periodically
> > spoof a 0-RTT section to the application. That should never be
> considered a
> > bug, but a feature. (And to be clear: I want to do this in our
> > implementation).
> >
> >
> > --
> > Colm
> >
> > ___
> > TLS mailing list
> > TLS@ietf.org
> > https://www.ietf.org/mailman/listinfo/tls
> >
>



-- 
Colm

Re: [TLS] SNI and Resumption/0-RTT

2016-11-23 Thread Martin Thomson
On 24 November 2016 at 09:46, Victor Vasiliev  wrote:
> For concreteness, I wrote a pull request that describes (2):
> 

At this point, I think that it would be more sensible to write a
separate extension draft.  There are a bunch of additional
considerations that would be easier to write down in a dedicated
document.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Additional warnings on 0-RTT data

2016-11-23 Thread Martin Thomson
This seems like too much text to me.  Maybe some people would
appreciate the reminder that replay might cause side-effects to be
triggered multiple times, or that side effects are just effects and
those might also be observable.  But I think that those reminders
could be provided far more succinctly.

The bit that concerns me most is the recommendation to intentionally
replay early data.  Given that I expect that a deployment that enables
0-RTT will tolerate some amount of side-effects globally, and that
excessive 0-RTT will trigger DoS measures, all you are doing is
removing some of the safety margin those services operate with.

On 24 November 2016 at 04:47, Colm MacCárthaigh  wrote:
>
> I've submitted a PR with an expanded warning on the dangers of 0-RTT data
> for implementors:
>
> https://github.com/tlswg/tls13-spec/pull/776/files
>
> The text is there, and the TLDR summary is basically that time-based
> defenses aren't sufficient to mitigate idempotency bugs, so applications
> need to be aware of the sharp edges and have a plan.  For clarity, I've
> outlined three example security issues that could arise due to realistic and
> straightforward, but naive, use of 0-RTT. There's been some light discussion
> of each in person and on the list.
>
> In the PR I'm "MUST"ing that applications need to include /some/ mitigation
> for issues of this class (or else they are obviously insecure). In my
> experience this class of issue is so pernicious, and easy to overlook, that
> I'm also "RECOMMEND"ing that applications using Zero-RTT data
> *intentionally* replay 0-RTT data non-deterministically, so as to keep
> themselves honest.
>
> At a bare minimum I think it would be good to make clear that clients and
> implementations MAY intentionally replay 0-RTT data; to keep servers on
> their toes. For example a browser could infrequently tack on a dummy
> connection with repeated 0-RTT data, or an implementation could periodically
> spoof a 0-RTT section to the application. That should never be considered a
> bug, but a feature. (And to be clear: I want to do this in our
> implementation).
>
>
> --
> Colm
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] SNI and Resumption/0-RTT

2016-11-23 Thread Victor Vasiliev
It seems that this issue has not so far been successfully resolved, and to the
best of my knowledge it has not been discussed during the meeting in Seoul. I
still believe that this is a valuable feature, and our experience with 0-RTT
handshake deployment in QUIC has indicated that it's basically required to make
0-RTT work for YouTube videos.

So far, we have considered three options in this thread:
  (1) Retain the RFC 6066 restriction on SNI,
  (2) Allow resumption for a different SNI provided it was negotiated in the
      ticket,
  (3) Always allow resumption for a different SNI.
In both (2) and (3), the certificate for the original ticket issuer would have
to be valid for the new server name, of course.

I believe that (2) will result in a higher resumption success rate than (3),
since for servers that do not support a shared resumption key, (3) would
result in losing session tickets to unsuccessful resumption.

For concreteness, I wrote a pull request that describes (2):

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] record layer limits of TLS1.3

2016-11-23 Thread Watson Ladd
On Nov 23, 2016 10:22 AM, "Jeremy Harris"  wrote:
>
> On 23/11/16 08:50, Yoav Nir wrote:
> > As long as you run over a network that has a smallish MTU, you’re going
to incur the packetization costs anyway, either in your code or in
operating system code. If you have a 1.44 GB file you want to send, it’s
going to take a million IP packets either way and 100 million AES block
operations.
>
> Actually, no.  Everybody offloads ether-frame packetization and TCP
> re-segmentation to the NIC, talking 64kB TCP segments across the NIC/OS
> boundary.

Who is 'everybody'?

Let's look at the cost more exactly. We always have to copy from storage to
the network, and packetization copies only a tiny bit more data with each
packet. That might change if you have special PCI DMA between devices, but
that is rare.
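
A back-of-the-envelope check of the figures quoted above (the 1.44 GB file,
~1500-byte packets, and the AES block count), with 1 GB taken loosely as 10^9
bytes and the payload-per-packet figure as a simplifying assumption:

```python
# Rough arithmetic only; payload-per-packet and the 10**9 convention are
# simplifying assumptions.
FILE_BYTES = 1.44e9
PAYLOAD_PER_PACKET = 1440    # ~1500-byte Ethernet MTU minus IP/TCP headers
AES_BLOCK_BYTES = 16

ip_packets = FILE_BYTES / PAYLOAD_PER_PACKET   # ~1.0 million packets
aes_blocks = FILE_BYTES / AES_BLOCK_BYTES      # ~90 million block operations

print(f"{ip_packets:,.0f} packets, {aes_blocks:,.0f} AES block operations")
# -> 1,000,000 packets, 90,000,000 AES block operations, i.e. the "million IP
#    packets ... 100 million AES block operations" quoted above, regardless of
#    how the bytes are grouped into records.
```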

>
> Precisely because of the packetization cost.
> --
> Jeremy
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] record layer limits of TLS1.3

2016-11-23 Thread Jeremy Harris
On 23/11/16 08:50, Yoav Nir wrote:
> As long as you run over a network that has a smallish MTU, you’re going to 
> incur the packetization costs anyway, either in your code or in operating 
> system code. If you have a 1.44 GB file you want to send, it’s going to take 
> a million IP packets either way and 100 million AES block operations. 

Actually, no.  Everybody offloads ether-frame packetization and TCP
re-segmentation to the NIC, talking 64kB TCP segments across the NIC/OS
boundary.

Precisely because of the packetization cost.
-- 
Jeremy

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-rfc4492bis

2016-11-23 Thread Sean Turner

>> - Section 1
>> "This is illustrated in the following table, based on [Lenstra_Verheul],
>> which gives approximate comparable key sizes for symmetric- and
>> asymmetric-key cryptosystems based on the best-known algorithms for
>> attacking them."
>> 
>> The key sizes for DH/DSA/RSA does not seem to be based on the
>> Lenstra-Verheuls equations which gives much higher key sizes for
>> DH/DSA/RSA.
>> 
>> The DH/DSA/RSA key sizes seem to be based on NIST recommendations. I
>> suggest either:
>> 
>> A) Fully based the table on NIST recommendation, which means keeping
>> DH/DSA/RSA as is but simplifying ECC to 2 * Symmetric.
>> B) Update the DH/DSA/RSA key sizes based on state-of-the-art. But then I
>> would say that this is not [Lenstra_Verheul], but rather [RFC3766],
>> [Lenstra 2004], [ECRYPT 2012]. I think these three all use the same
>> equation.
>> C) Just remove DH/DSA/RSA as the draft is about ECC.
> 
> I’m inclined to get rid of this table and all the text from “This is 
> illustrated…” entirely. ECC is by now in wide use. We don’t need to “sell” it 
> any more, so unless someone would like to make a PR with better text, I will 
> just get rid of it.

You could be more draconian and start the draft with the paragraph:

  This document describes additions to TLS to support ECC ….

Because you’re right we don’t really need to do much selling here.

spt
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Additional warnings on 0-RTT data

2016-11-23 Thread Colm MacCárthaigh
I've submitted a PR with an expanded warning on the dangers of 0-RTT data
for implementors:

https://github.com/tlswg/tls13-spec/pull/776/files

The text is there, and the TLDR summary is basically that time-based
defenses aren't sufficient to mitigate idempotency bugs, so applications
need to be aware of the sharp edges and have a plan.  For clarity, I've
outlined three example security issues that could arise due to realistic
and straightforward, but naive, use of 0-RTT. There's been some light
discussion of each in person and on the list.

In the PR I'm "MUST"ing that applications need to include /some/ mitigation
for issues of this class (or else they are obviously insecure). In my
experience this class of issue is so pernicious, and easy to overlook, that
I'm also "RECOMMEND"ing that applications using Zero-RTT data
*intentionally* replay 0-RTT data non-deterministically, so as to keep
themselves honest.

At a bare minimum I think it would be good to make clear that clients and
implementations MAY intentionally replay 0-RTT data; to keep servers on
their toes. For example a browser could infrequently tack on a dummy
connection with repeated 0-RTT data, or an implementation could
periodically spoof a 0-RTT section to the application. That should never be
considered a bug, but a feature. (And to be clear: I want to do this in our
implementation).


-- 
Colm
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Draft 18 review : Message order

2016-11-23 Thread Olivier Levillain
Hi,

> On Tue, Nov 22, 2016 at 11:08 AM, Olivier Levillain <
> olivier.levill...@ssi.gouv.fr> wrote:
>
>> = Message order =
>>
>> I believe the message P.27 section 4 is important, but not
>> sufficient. As already expressed on the list, a formal automaton
>> should be provided in the spec.
>>
>> I think Ekr said there was some work in progress in this area.  Is
>> this a goal for the final specification?
>>
> Yes, I will put some sort of more complete state machine in the final spec
> (probably in -19)

I have been working on my blackboard on the simplest case (pure TLS, no
PSK support), and I stumbled upon the late client authentication issue
that was presented in the thread starting with
https://www.ietf.org/mail-archive/web/tls/current/msg21500.html. I agree
with the thread that NST and KU are easy to deal with.

As stated in the thread, as it is currently specified, the mechanism is
kind of a Pandora's box:
 - the client must keep the handshake transcript (or its running /
forkable hash) because of the Finished message that must eventually be
sent (see the sketch after this list)
 - the server can send multiple CertificateRequests, and the client must,
for each, keep the content of the message (for the label, and also for
the hash context required for the Finished message) => the unboundedness
problem
 - there is no way to reject the request gracefully.
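
A minimal sketch of the "running / forkable hash" mentioned in the first point
above; it illustrates the data-structure concern (one saved hash state per
outstanding CertificateRequest), not the exact TLS 1.3 transcript rules, and
the class and method names are invented:

```python
# Sketch: keep one running transcript hash and snapshot it (hashlib .copy())
# for each post-handshake CertificateRequest, so the eventual CertificateVerify
# and Finished can be computed without retaining the full message history.
# The per-request snapshots are exactly where the unboundedness shows up.
import hashlib

class Transcript:
    def __init__(self):
        self._running = hashlib.sha256()
        self.pending_cert_requests = {}   # certificate_request_context -> hash state

    def add_message(self, msg: bytes) -> None:
        self._running.update(msg)

    def fork_for_certificate_request(self, context: bytes, request_msg: bytes) -> None:
        forked = self._running.copy()
        forked.update(request_msg)
        # One stored state per outstanding request; nothing bounds this.
        self.pending_cert_requests[context] = forked
```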

As I understand the thread, one idea was to say that the mechanism is
forbidden by default, unless it is activated by some profile. However,
as stated in
https://www.ietf.org/mail-archive/web/tls/current/msg21561.html, since
there are voices to allow it in the HTTP profile, this is problematic
because HTTPS can be used in constrained environments where we would
like to avoid late client auth. I believe the late client auth mechanism
should explicitly be negotiated, either at the TLS level with an
extension, or at the application level via an API callback (not a
profile) and an upper-layer negotiation.


Sorry to dig this up (and sorry for the re-discovery explained at
length), but where does the WG stand on this issue? PR 680 is currently
parked, but I believe the current situation is not acceptable from the
client's point of view, and I fail to see whether unbounded parallel
certificate requests would be used in practice or not.

Best regards,
olivier

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-rfc4492bis

2016-11-23 Thread Yaron Sheffer


I’m not even sure what my position is on this. Specifying the use of a
context here goes against the recommendation in the CFRG draft:

  Contexts SHOULD NOT be used opportunistically, as that kind of use
  is very error-prone.  If contexts are used, one SHOULD require all
  signature schemes available for use in that purpose support
  contexts.

If someone knows why this recommendation was made, that would be great.


Basically, then those other methods would be a weak point for attack.



But we are trying to migrate away from the old methods, into the new 
methods. The fewer servers use the old methods, the better off we are, 
right? So we expect the attack surface to gradually be reduced over the 
coming years.





Then there's the serious deployment problems with contexts, as those
don't fit into any standard notion of signature various libraries have.



These are new algorithms that are still not widely deployed. The fact 
that several open source libraries (still) don't support a certain 
function parameter is not a very strong reason not to require it.


Thanks,
Yaron

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-rfc4492bis

2016-11-23 Thread Ilari Liusvaara
On Wed, Nov 23, 2016 at 03:39:38PM +0200, Yoav Nir wrote:
> 
> > On 23 Nov 2016, at 12:22, John Mattsson  wrote:
> > 
> > On 2016-11-21, 06:31, "TLS on behalf of Yaron Sheffer"
> >  on behalf of 
> > yaronf.i...@gmail.com > wrote:
> > 
> >> So the key schedule changed and therefore we think cross-version attacks
> >> are impossible. Have we also analyzed other protocols to ensure that
> >> cross protocol attacks, e.g. with SSH or IPsec, are out of the question?
> >> 
> >> Put differently, algorithm designers gave us a cheap, easy to use tool
> >> to avoid a class of potential attacks. Why are we insisting on not using
> >> it?
> > 
> > Unless someone points out any major disadvantages with using a context, I
> > agree with Yaron.
> 
> I’m not even sure what my position is on this. Specifying the use of a
> context here goes against the recommendation in the CFRG draft:
> 
>   Contexts SHOULD NOT be used opportunistically, as that kind of use
>   is very error-prone.  If contexts are used, one SHOULD require all
>   signature schemes available for use in that purpose support
>   contexts.
> 
> If someone knows why this recommendation was made, that would be great.

Basically, then those other methods would be a weak point for attack.



Then there's the serious deployment problems with contexts, as those
don't fit into any standard notion of signature various libraries have.


-Ilari

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] record layer limits of TLS1.3

2016-11-23 Thread Michael Tuexen

> On 23 Nov 2016, at 09:50, Yoav Nir  wrote:
> 
> 
> On 23 Nov 2016, at 10:30, Nikos Mavrogiannopoulos  wrote:
> 
>> On Wed, 2016-11-23 at 10:05 +0200, Yoav Nir wrote:
>>> Hi, Nikos
>>> 
>>> On 23 Nov 2016, at 9:06, Nikos Mavrogiannopoulos 
>>> wrote:
>>> 
 
 Hi,
  Up to the current draft of TLS1.3 the record layer is restricted
 to
 sending 2^14 or less. Is the 2^14 number something we want to
 preserve?
 16kb used to be a lot, but today if one wants to do fast data
 transfers
 most likely he would prefer to use larger blocks. Given that the
 length
 field allows for sizes up to 2^16, shouldn't the draft allow for
 2^16-
 1024 as maximum?
>>> 
>>> I am not opposed to this, but looking at real browsers and servers, 
>>> we see that they tend to set the size of records to fit IP packets. 
>> 
>> IP packets can carry up to 64kb of data. I believe you may be referring
>> to ethernet MTU sizes.
> 
> I’m referring to the IP packets that they actually use, and that is set by 
> TCP to fit the PMTU, which is <= the ethernet MTU.  In practice it is 1500 
> bytes for most network paths.
> 
>> That to my understanding is a way to reduce
>> latency in contrast to cpu costs. An increase to packet size targets
>> bandwidth rather than latency (speed).
> 
> Sure, but running ‘openssl speed’ on either aes-128-cbc or hmac or sha256 
> (there’s no test for AES-GCM or ChaCha-poly) you get smallish differences in 
> terms of kilobytes per second between 1024-byte buffers and 8192-byte 
> buffers. And the difference going to be even smaller going to 16KB buffers, 
> let alone 64KB buffers.
> 
>>> The gains from increasing the size of records from the ~1460 bytes
>>> that fit in a packet to nearly 64KB are not all that great, and the
>>> gains from increasing records from 16 KB to 64KB are almost
>>> negligible. At that size the block encryption dominates the CPU time.
>> 
>> Do you have measurements to support that? I'm quite surprized by such a
>> general statement because packetization itself is a non-negligible cost
>> especially when encryption is fast (i.e., in most modern CPUs with
>> dedicated instructions).
> 
> As long as you run over a network that has a smallish MTU, you’re going to 
> incur the packetization costs anyway, either in your code or in operating 
> system code. If you have a 1.44 GB file you want to send, it’s going to take 
> a million IP packets either way and 100 million AES block operations. 
> 
> Whether you do the encryption is a million 1440-byte records or 100,000 
> 14,400-byte records makes a difference of only a few percent.
> 
> Measurement that we’ve made bear this out. They’re with IPsec, so it’s 
> fragmentation rather than packetization, but I don’t think that should make 
> much of a difference.
> 
> Again, I’m not opposed to this. A few percent is a worthy gain.
OK. However, it would be useful to have the possibility to use a record larger
than 16 KB. This is not relevant when using TLS over TCP, for which the
application does not expect record boundary preservation. For DTLS the story is
different: the application expects record boundary preservation, so the maximum
DTLS record size does limit the maximum message size the application can
transfer. In the case of DTLS over UDP, this is not so critical as long as you
avoid IP fragmentation and have MTUs smaller than 16 KB. However, in the case
of DTLS over SCTP this imposes a limitation: SCTP does segmentation and
reassembly, and therefore the MTU limit does not apply. Diameter is a protocol
possibly using DTLS/SCTP as its transport, and I was told that the 16 KB limit
is a real restriction in some cases; relaxing it to 64 KB would be fine for
them.

So I'm not saying we need to change the default to 64 KB; we can put it
wherever you find appropriate, as long as both sides can negotiate a larger
limit. For example, one could allow Maximum Fragment Length Negotiation to
negotiate a value larger than 16 KB. You would only need to extend the enum in
https://tools.ietf.org/html/rfc6066#section-4 to allow values larger than 2^14.
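
As a sketch of that last idea, here is roughly what offering such an extended
code point could look like on the wire; only code points 1-4 (2^9..2^12) are
defined in RFC 6066, so the value 5 and its mapping to a larger limit below are
purely hypothetical:

```python
# Sketch: encode an RFC 6066 max_fragment_length extension (extension type 1,
# one-byte body) carrying a hypothetical new code point for a larger limit.
import struct

MAX_FRAGMENT_LENGTH_EXT = 1   # RFC 6066 extension type
HYPOTHETICAL_64K_CODE = 5     # NOT a registered code point; illustration only

def encode_max_fragment_length(code: int) -> bytes:
    body = bytes([code])
    return struct.pack("!HH", MAX_FRAGMENT_LENGTH_EXT, len(body)) + body

# A client offering the (hypothetical) larger limit in its ClientHello:
assert encode_max_fragment_length(HYPOTHETICAL_64K_CODE) == b"\x00\x01\x00\x01\x05"
```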

Best regards
Michael
> 
> Yoav
> 
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-rfc4492bis

2016-11-23 Thread Yoav Nir

> On 23 Nov 2016, at 12:22, John Mattsson  wrote:
> 
> On 2016-11-21, 06:31, "TLS on behalf of Yaron Sheffer"
>  on behalf of 
> yaronf.i...@gmail.com > wrote:
> 
>> So the key schedule changed and therefore we think cross-version attacks
>> are impossible. Have we also analyzed other protocols to ensure that
>> cross protocol attacks, e.g. with SSH or IPsec, are out of the question?
>> 
>> Put differently, algorithm designers gave us a cheap, easy to use tool
>> to avoid a class of potential attacks. Why are we insisting on not using
>> it?
> 
> Unless someone points out any major disadvantages with using a context, I
> agree with Yaron.

I’m not even sure what my position is on this. Specifying the use of a context 
here goes against the recommendation in the CFRG draft:

  Contexts SHOULD NOT be used opportunistically, as that kind of use
  is very error-prone.  If contexts are used, one SHOULD require all
  signature schemes available for use in that purpose support
  contexts.

If someone knows why this recommendation was made, that would be great.

However, three working groups are currently faced with this same decision: TLS, 
IPsecME and Curdle. I think it would be weird if these three groups came up 
with different answers to what is essentially the same question. At least for 
TLS and IKE there are no operational differences either. 

So Curdle, I’ve been told, is leaning towards empty context for Ed448 and no 
OID for Ed25519ctx. IPsecME has a thread similar to this one (with similar 
participants…)

Yoav

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Draft 18 review : Hello Retry Request and supported groups cache

2016-11-23 Thread Eric Rescorla
On Wed, Nov 23, 2016 at 5:25 AM, Olivier Levillain <
olivier.levill...@ssi.gouv.fr> wrote:

> >> There were actually two points in my message:
> >>  - I was not convinced by this way of signalling a preference without
> >> enforcing it, but I understand that, if we keep supported_groups, it
> >> does not cost much and the client can safely ignore the server sent
> >> extension;
> >>  - however, I found strange that the specification stated that the
> >> client could update its view when seeing this extension, but that it was
> >> not stated in the case of an HRR where updating its views of the
> >> servers' preference would clearly be useful for the future. I only
> >> proposed to add the same text "The client MAY update its view of the
> >> server's preference when receiving an HRR, to avoid the extra round trip
> >> in future encounters".
> >>
> > This is unsafe, because the HRR is unauthenticated. We could update it
> > after the handshake completes, but I think this is obvious enough that it
> > doesn't
> > need to be stated.
>
> Unless I am mistaken, EncryptedExtensions is not authenticated either
> (even if it is sent in the same flight as the authentication messages),
> so updating the client cache can not be done immediately after
> interpreting the supported_groups extension.
>

That is intended to be covered by the following text:

   by the client.  Clients MUST NOT act upon any information found in
   "supported_groups" prior to successful completion of the handshake,
   but MAY use the information learned from a successfully completed
   handshake to change what groups they use in their "key_share"
   extension in subsequent connections.

My point about HRR is that the client can generally update its opinion
based on what it successfully completed the handshake with. This can
happen either with HRR or when it offered >1 shares and the server
took one. The special case is when supported_groups indicates a
key type you didn't use. So I think if we wanted text around this it would
go somewhere that covered the general case. I'm not sure it's needed
but if the WG feels like we should add some text, I can...
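
A tiny sketch of the client-side behaviour the quoted text allows (the cache
shape and names are illustrative only): record the group a server actually
negotiated, but only once the handshake has completed, never on the basis of
an unauthenticated HRR alone.

```python
# Sketch: per-server cache of which group to generate a key_share for next time.
key_share_cache = {}          # server name -> group name
DEFAULT_GROUP = "x25519"

def group_for_next_connection(server_name: str) -> str:
    return key_share_cache.get(server_name, DEFAULT_GROUP)

def on_handshake_complete(server_name: str, negotiated_group: str) -> None:
    # The safe point to act on what was learned via HRR or supported_groups.
    key_share_cache[server_name] = negotiated_group
```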

-Ekr


> However, if you believe this does not need to be stated, I am fine with
> that.
>
> olivier
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-rfc4492bis

2016-11-23 Thread Yoav Nir
Hi, John

Thanks for the review.  See my responses below:

> On 23 Nov 2016, at 12:15, John Mattsson  wrote:
> 
> I have not read the processing parts in detail. Here are comments on the
> first and last sections of the document.
> 
> Cheers,
> John
> 
> - Somewhere
> I suggest adding the following sentence from TLS 1.3:
> "Applications SHOULD also enforce minimum and maximum key sizes.  For
> example, certification paths containing keys or signatures weaker than
> 2048-bit RSA or 224-bit ECC are not appropriate for secure applications."
> 
> This is as true for TLS 1.0, 1.1, and 1.2 as for TLS 1.3

I can add a sentence like that, but the draft does even better: It deprecates 
all curves smaller than 256-bit ECC. Now it’s true that this draft is about 
TLS, not PKIX, so this only forces the public key in the EE certificate to be 
at least 256-bit, but in general CA keys are no weaker than EE keys. I will add 
such a sentence, though.

> - Section 1
> "Compared to currently prevalent cryptosystems such as RSA, ECC offers"
> 
> I think this should be updated, ECDHE is already very well-spread.

Yes, this is copied from 4492.  How about: “Compared to previous cryptosystems 
such as RSA and finite-field Diffie-Hellman, ECC offers…"

> - Section 1
> "This is illustrated in the following table, based on [Lenstra_Verheul],
> which gives approximate comparable key sizes for symmetric- and
> asymmetric-key cryptosystems based on the best-known algorithms for
> attacking them."
> 
> The key sizes for DH/DSA/RSA does not seem to be based on the
> Lenstra-Verheuls equations which gives much higher key sizes for
> DH/DSA/RSA.
> 
> The DH/DSA/RSA key sizes seem to be based on NIST recommendations. I
> suggest either:
> 
> A) Fully based the table on NIST recommendation, which means keeping
> DH/DSA/RSA as is but simplifying ECC to 2 * Symmetric.
> B) Update the DH/DSA/RSA key sizes based on state-of-the-art. But then I
> would say that this is not [Lenstra_Verheul], but rather [RFC3766],
> [Lenstra 2004], [ECRYPT 2012]. I think these three all use the same
> equation.
> C) Just remove DH/DSA/RSA as the draft is about ECC.

I’m inclined to get rid of this table and all the text from “This is 
illustrated…” entirely. ECC is by now in wide use. We don’t need to “sell” it 
any more, so unless someone would like to make a PR with better text, I will 
just get rid of it.

> 
> - Section 2
> "However, the computational cost incurred by a server is higher for
> ECDHE_RSA than for the traditional RSA key exchange, which does not
> provide forward secrecy."
> 
> True, but I think it gives the wrong impression. I think the additional
> cost is negligible. I don't have any TLS benchmarks but for i5-6600,
> bench.cr.yp.to gives:
> 
> ronald3072 decrypt   8776864 cycles
> ronald3072 sign      8781842 cycles
> curve25519 keygen     163200 cycles
> curve25519 compute    165816 cycles
> 
> That's only 3.8% overhead for crypto, and much less if calculated for the
> whole handshake. I suggest removing the sentence, or writing that the
> additional server cost for forward secrecy is negligible and well worth it.

Makes sense. How about:
OLD:
   However, the computational cost incurred by a server is higher
NEW:
   The computational cost incurred by a server is marginally higher
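
For reference, the 3.8% figure above can be reproduced from the quoted cycle
counts; a rough sketch, assuming ECDHE_RSA costs the server an RSA-3072
signature plus an X25519 keygen and shared-secret computation, versus one
RSA-3072 private-key operation for static RSA:

```python
# Cycle counts as quoted from bench.cr.yp.to above; rough comparison only.
RSA3072_DECRYPT = 8_776_864   # static RSA key exchange
RSA3072_SIGN    = 8_781_842   # ECDHE_RSA: server signature
X25519_KEYGEN   = 163_200
X25519_COMPUTE  = 165_816

ecdhe_rsa = RSA3072_SIGN + X25519_KEYGEN + X25519_COMPUTE
overhead = (ecdhe_rsa - RSA3072_DECRYPT) / RSA3072_DECRYPT
print(f"{overhead:.1%}")   # -> 3.8%
```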
> 
> - Section 6
> "TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256"
> 
> Should this be "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"?
> I don't think the document should recommend support of cipher suites
> without forward secrecy.

Definitely. It would be a very poor show to deprecate this ciphersuite (along 
with all other ECDH_RSA and ECDH_ECDSA ciphersuites), declare that all 
ciphersuites have forward secrecy, and then make this (sort of) MTI.  Will fix.

> 
> - Section 7
> "documemts" -> “documents"

OK

> 
> - Section 7
> "NEED TO ADD A PARAGRAPH HERE ABOUT WHY X25519/X448 ARE PREFERRABLE TO
> NIST CURVES."
> 
> I suggest using the excellent explanation in RFC7748 as a basis.
> Suggestion:
> 
> "Modern curves such X25519 and X448 [RFC7748] are preferable as they lend
> themselves to constant-time implementation and an exception-free scalar
> multiplication that is resistant to a wide range of side-channel attacks,
> including timing and cache attacks. For more information see [SafeCurves]."
> 
> [SafeCurves]  https://safecurves.cr.yp.to/

Cool.  Will add.

> - Section 7
> "Substantial additional information on elliptic curve choice can be found
> in [IEEE.P1363.1998], [ANSI.X9-62.2005], and [FIPS.186-4]".
> 
> I think it is better to refer people to [SafeCurves].

In addition, OK. I don’t think we should remove everything else, especially 
when unlike 7748 this draft is specifying NIST curves, and the choice for them 
was not in [SafeCurves]

> - Section 7
> The paragraph about structure gives the impression that "F_p with p
> random" is the optimal choice. I think this should be expanded with more
> information explaining why we do not 

Re: [TLS] Draft 18 review : Hello Retry Request and supported groups cache

2016-11-23 Thread Olivier Levillain
>> There were actually two points in my message:
>>  - I was not convinced by this way of signalling a preference without
>> enforcing it, but I understand that, if we keep supported_groups, it
>> does not cost much and the client can safely ignore the server sent
>> extension;
>>  - however, I found strange that the specification stated that the
>> client could update its view when seeing this extension, but that it was
>> not stated in the case of an HRR where updating its views of the
>> servers' preference would clearly be useful for the future. I only
>> proposed to add the same text "The client MAY update its view of the
>> server's preference when receiving an HRR, to avoid the extra round trip
>> in future encounters".
>>
> This is unsafe, because the HRR is unauthenticated. We could update it
> after the handshake completes, but I think this is obvious enough that it
> doesn't
> need to be stated.

Unless I am mistaken, EncryptedExtensions is not authenticated either
(even if it is sent in the same flight as the authentication messages),
so updating the client cache can not be done immediately after
interpreting the supported_groups extension.

However, if you believe this does not need to be stated, I am fine with
that.

olivier

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Treatment of (legacy_record_)version field [was Re: (strict) decoding of legacy_record_version?]

2016-11-23 Thread Eric Rescorla
On Wed, Nov 23, 2016 at 3:39 AM, Andreas Walz 
wrote:

> Dear all,
>
> bringing up this thread again 
>
> In the course of studying the way TLS implementations treat the "version"
> (or "legacy_record_version") field in the record header, we were wondering
> (please excuse if we missed some arguments here from past discussions):
>
> (1) What is an implementation (in particular when receiving the first
> bytes over a new connection) supposed to do if the record's version field
> signals a protocol version the implementation does not support? I
> understand that, at this stage, enforcing a specific value (e.g. 0x0301
> according to the TLSv1.3 draft) is detrimental to interoperability.
> However, if that field bears any meaning (in either TLSv1.3 or previous
> versions), what is it? I would expect this field is supposed to allow
> signaling a potentially non-backward compatible record format
> (inauspiciously interfering with a receiver disregarding the record
> version). Provided this field isn't treated as an enum, what about
> checking/enforcing at least the major version as BoringSSL does (as far as
> I know)? In any case, I would propose to be very clear about this in the
> text (my sense was that there is some work in progress, but I couldn't find
> anything). In implementations ( interpretations.
>

In general, it should ignore it. It's going to become increasingly common
to have this be a version you don't support given the recommendation to use
0301 and the ongoing deprecation of TLS 1.0. I think it would be fine to
sanity check the major version, but I'm not sure what would be gained by
requiring this.



> (2) What is an implementation (up to TLSv1.2, as the TLSv1.3 spec is
> rather clear about that) supposed to use for the record's protocol version
> field before a version has been agreed upon (e.g. when sending an alert
> after receiving an unparsable ClientHello)? My best guess would be to set
> it to the lowest (TLS) protocol version that uses the same record format
> (probably 0x0301). However, we observe several servers which, in such
> cases, answer with an alert with weird record protocol version values, e.g.
> 0x.]
>

Yes, this seems like a reasonable procedure. Not sure how to tell TLS 1.2
impls what to do at this point, though.

-Ekr


>
> Thanks and Cheers,
> Andi
>
> ___
>
> Andreas Walz
> Research Engineer
> Institute of reliable Embedded Systems and Communication Electronics
> (ivESK)
> Offenburg University of Applied Sciences, 77652 Offenburg, Germany
>
>
>
> >>> Benjamin Kaduk  11/10/16 5:22 PM >>>
> On 11/08/2016 06:25 PM, Martin Thomson wrote:
>
> On 9 November 2016 at 05:59, Brian Smith  
>  wrote:
>
> This isn't a pervasively shared goal, though. It's good to let the browsers
> police things if they want, but I think a lot of implementations would
> prefer to avoid doing work that isn't necessary for interop or security.
>
> If you permit someone to enforce it, then that is sufficient.  I don't
> think that we should ever force someone to enforce these sorts of
> things (as you say, sometimes strict enforcement isn't cheap or even
> desirable).
>
>
> Agreed.  We should probably change the text a bit, though, as right now
> readers can get two different readings depending on whether they go for a
> strict decode_error (or illegal_parameter?) since the struct doesn't match
> the definition, or follow the "MUST be ignored for all purposes".
>
> -Ben
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Draft 18 review : Hello Retry Request and supported groups cache

2016-11-23 Thread Eric Rescorla
On Wed, Nov 23, 2016 at 12:19 AM, Olivier Levillain <
olivier.levill...@ssi.gouv.fr> wrote:

> Hi,
>
> >> Being able to send supported_groups does allow a server to choose to
> make
> >> a tradeoff between an extra round trip on the current connection and its
> >> own group preferences. One example where a server might want to do this
> is
> >> where it believes that X25519 is likely a more future-proof group and
> would
> >> prefer that, but still believes the client's choice of P256 is suitable
> for
> >> this connection. Always requiring an extra round trip might end up being
> >> expensive depending on the situation so some servers might prefer to
> avoid
> >> sending an HRR for a slightly more preferred group.
> >>
> >> I think that requiring the client to maintain state from the
> >> supported_groups puts undue requirements on the client, since not all
> >> clients keep state between connections, and those that do usually only
> keep
> >> session state for resumption.
> >>
> > This matches my view as well.
> >
> > I agree that the client should not be require to keep state. I do not
> > believe that the draft requires this, but if someone thinks otherwise,
> > please send a PR to fix.
> >
>
> There were actually two points in my message:
>  - I was not convinced by this way of signalling a preference without
> enforcing it, but I understand that, if we keep supported_groups, it
> does not cost much and the client can safely ignore the server sent
> extension;
>  - however, I found strange that the specification stated that the
> client could update its view when seeing this extension, but that it was
> not stated in the case of an HRR where updating its views of the
> servers' preference would clearly be useful for the future. I only
> proposed to add the same text "The client MAY update its view of the
> server's preference when receiving an HRR, to avoid the extra round trip
> in future encounters".
>

This is unsafe, because the HRR is unauthenticated. We could update it
after the handshake completes, but I think this is obvious enough that it
doesn't
need to be stated.

-Ekr
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Treatment of (legacy_record_)version field [was Re: (strict) decoding of legacy_record_version?]

2016-11-23 Thread Andreas Walz
Dear all,

bringing up this thread again 

In the course of studying the way TLS implementations treat the "version" (or
"legacy_record_version") field in the record header, we were wondering (please
excuse if we missed some arguments here from past discussions):

(1) What is an implementation (in particular when receiving the first bytes
over a new connection) supposed to do if the record's version field signals a
protocol version the implementation does not support? I understand that, at
this stage, enforcing a specific value (e.g. 0x0301 according to the TLSv1.3
draft) is detrimental to interoperability. However, if that field bears any
meaning (in either TLSv1.3 or previous versions), what is it? I would expect
this field is supposed to allow signaling a potentially non-backward compatible
record format (inauspiciously interfering with a receiver disregarding the
record version). Provided this field isn't treated as an enum, what about
checking/enforcing at least the major version as BoringSSL does (as far as I
know)? In any case, I would propose to be very clear about this in the text (my
sense was that there is some work in progress, but I couldn't find anything).
In implementations ( interpretations.

(2) What is an implementation (up to TLSv1.2, as the TLSv1.3 spec is rather
clear about that) supposed to use for the record's protocol version field
before a version has been agreed upon (e.g. when sending an alert after
receiving an unparsable ClientHello)? My best guess would be to set it to the
lowest (TLS) protocol version that uses the same record format (probably
0x0301). However, we observe several servers which, in such cases, answer with
an alert with weird record protocol version values, e.g. 0x.

Thanks and Cheers,
Andi

___

Andreas Walz
Research Engineer
Institute of reliable Embedded Systems and Communication Electronics (ivESK)
Offenburg University of Applied Sciences, 77652 Offenburg, Germany


>>> Benjamin Kaduk  11/10/16 5:22 PM >>>
 On 11/08/2016 06:25 PM, Martin Thomson wrote:
On 9 November 2016 at 05:59, Brian Smith  
wrote:
This isn't a pervasively shared goal, though. It's good to let 
the browsers
police things if they want, but I think a lot of implementations would
prefer to avoid doing work that isn't necessary for interop or security.
  If you permit someone to enforce it, then that is sufficient.  I don't
think that we should ever force someone to enforce these sorts of
things (as you say, sometimes strict enforcement isn't cheap or even
desirable).
  
 Agreed.  We should probably change the text a bit, though, as right 
now readers can get two different readings depending on whether they go for 
a strict decode_error (or illegal_parameter?) since the struct doesn't 
match the definition, or follow the "MUST be ignored for all purposes".
 
 -Ben
 

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] WGLC for draft-ietf-tls-rfc4492bis

2016-11-23 Thread John Mattsson
On 2016-11-21, 06:31, "TLS on behalf of Yaron Sheffer"
 wrote:

>So the key schedule changed and therefore we think cross-version attacks
>are impossible. Have we also analyzed other protocols to ensure that
>cross protocol attacks, e.g. with SSH or IPsec, are out of the question?
>
>Put differently, algorithm designers gave us a cheap, easy to use tool
>to avoid a class of potential attacks. Why are we insisting on not using
>it?

Unless someone points out any major disadvantages with using a context, I
agree with Yaron.


>
>Thanks,
>   Yaron
>
>On 20/11/16 17:33, Salz, Rich wrote:
>>> For those who missed CURDLE, could you please briefly explain why we
>>>don't
>>> need signature context in non-TLS areas.
>>
>> The one place we were concerned about attacks was in pre-hash
>>signatures, and we made those a MUST NOT.  And yes, your'e right, it's
>>not relevant to TLS.
>>
>>> So why are we now saying that contexts are not needed even for TLS?
>>
>> I think because the key schedule changed.
>>
>> --
>> Senior Architect, Akamai Technologies
>> Member, OpenSSL Dev Team
>> IM: richs...@jabber.at Twitter: RichSalz
>>
>>
>
>___
>TLS mailing list
>TLS@ietf.org
>https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Draft 18 review : Message splitting and interleaving

2016-11-23 Thread Olivier Levillain
Le 23/11/2016 00:24, Eric Rescorla a écrit :
> On Tue, Nov 22, 2016 at 2:18 PM, David Benjamin 
> wrote:
>
>> On Tue, Nov 22, 2016 at 2:05 PM Olivier Levillain <
>> olivier.levill...@ssi.gouv.fr> wrote:
>>
>>> Hi list,
>>>
>>> I am sorry for the very late answer concerning draft 18, but we
>>> (ANSSI) have several remarks after proof-reading the current
>>> specification.
>>>
>>> We are sorry for the multiple long messages.
>>>
>>> If the WG is interested by some of our concerns/proposals, we would be
>>> glad to propose some PRs.
>>>
>>>
>>> = Message splitting and interleaving =
>>>
>>> How to split and interleave subprotocols in TLS has not been clearly
>>> specified in the past, and it would be useful to be crystal clear on
>>> this point.
>>>
>>> In the specification, the subject is first presented in 4.5 (P.61):
>>>Handshake messages sent after the handshake MUST NOT be interleaved
>>>with other record types.  That is, if a message is split over two or
>>>more handshake records, there MUST NOT be any other records between
>>>them.
>>> But I am wondering why this text only concern "messages after the
>>> handshake".  It should always be the case!

I don't know what your take is on this first point, but I think I will
propose something in the PR.

I will try to add/move some text so the information is all present in
section 5.
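
(A rough sketch, with hypothetical names, of what the receiver-side check
could look like: track how many bytes of the current handshake message are
still missing and refuse any other content type while that count is
non-zero.)

    #include <stddef.h>
    #include <stdint.h>

    enum {
        CT_CHANGE_CIPHER_SPEC = 20,
        CT_ALERT              = 21,
        CT_HANDSHAKE          = 22,
        CT_APPLICATION_DATA   = 23
    };

    /* Bytes of the in-flight handshake message not yet received;
     * zero means no handshake message is currently split across records. */
    static size_t hs_bytes_pending;

    /* Returns 0 if the record must be rejected (e.g. with an
     * unexpected_message alert) because it interleaves with a fragmented
     * handshake message. Illustrative only. */
    static int record_type_allowed(uint8_t content_type)
    {
        if (hs_bytes_pending != 0 && content_type != CT_HANDSHAKE)
            return 0;
        return 1;
    }

    int main(void)
    {
        hs_bytes_pending = 100;                       /* pretend a message is split */
        return record_type_allowed(CT_ALERT) ? 1 : 0; /* 0: alert correctly refused */
    }
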

>> +1. We (BoringSSL) and I believe NSS already forbid this for alerts at all
>> versions.
>>
>> This rule is actually implied by TLS 1.3 already, because we've gotten rid
>> of warning alerts. All alerts terminate the connection, except for
>> end_of_early_data, and end_of_early_data immediately signals a key change.
>> So it is not legal to send two alerts in the same epoch, much less in the
>> same record. (Being explicit about this is good, of course.)
>>
> This seems fine. I would take a PR for this.

Will do.


>  - incidentally, the default behaviour would apply to Heartbeat, as
>>>was the intent of the specification;
>>>  - ApplicationData should be considered as a stream with possibly
>>>0-length records
>>>  - Handshake messages should either come as a sequence of multiple
>>>*entire* messages, or as a fraction of *one* message.  That is,
>>>the number of HS messages inside one record should either be a
>>>round number or strictly less than 1.
>>>
>> What simplifications were you expecting out of this? It seems to me this
>> would be a nuisance to both enforce as a receiver and honor as a sender.
>>
>> Our implementation doesn't try to pack handshake messages into records,
>> but I believe NSS does. NSS folks should confirm, but I expect such
>> implementations just buffer the messages up and flush when the buffer
>> exceeds the records size. That means all kinds of splits are plausible:
>>
>>   [EncryptedExtensions Certifi]
>>   [cateRequest Certificate Cer]
>>   [tificateVerify Finished]
>>
> Yeah, that's how this works in NSS.
>
> I'm not seeing a real benefit in prohibiting this behavior.

The expected benefit was that it would provide a way to enforce the rule
from section 4.6 (P.64), stating that a Handshake message should not span
key changes. Because of that rule, the receiver already has to do some
work, and my proposal was to restrict it even more. Yet I see your point
that, from the sender's point of view, this is more complex.
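
(For what it's worth, a sketch of the buffer-and-flush sender described
above, with made-up names; it shows why record boundaries end up unrelated
to message boundaries, as in the split quoted earlier.)

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define MAX_RECORD (1 << 14)   /* 2^14-byte record payload limit */

    static uint8_t hs_buf[4 * MAX_RECORD];
    static size_t  hs_buf_len;

    /* Append a whole handshake message to the pending buffer
     * (bounds checks omitted for brevity). */
    static void queue_handshake_message(const uint8_t *msg, size_t len)
    {
        memcpy(hs_buf + hs_buf_len, msg, len);
        hs_buf_len += len;
    }

    /* Flush the buffer as a sequence of handshake records of at most
     * MAX_RECORD bytes each; split points fall wherever the limit says. */
    static void flush_handshake_records(void (*write_record)(const uint8_t *, size_t))
    {
        size_t off = 0;
        while (off < hs_buf_len) {
            size_t chunk = hs_buf_len - off;
            if (chunk > MAX_RECORD)
                chunk = MAX_RECORD;
            write_record(hs_buf + off, chunk);
            off += chunk;
        }
        hs_buf_len = 0;
    }

    static void stub_writer(const uint8_t *rec, size_t len) { (void)rec; (void)len; }

    int main(void)
    {
        uint8_t msg[10000] = { 0 };
        queue_handshake_message(msg, sizeof msg);   /* e.g. EncryptedExtensions */
        queue_handshake_message(msg, sizeof msg);   /* e.g. Certificate */
        flush_handshake_records(stub_writer);       /* yields 16384- and 3616-byte records */
        return 0;
    }
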

After re-reading the 4.6 part, I find it sufficient. Sorry for the noise
about this.


olivier

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] record layer limits of TLS1.3

2016-11-23 Thread Nikos Mavrogiannopoulos
Maybe a solution would be a better maximum fragment length extension
which allows the size to be negotiated in a more fine-grained way, as
pointed out in:
https://www.ietf.org/mail-archive/web/tls/current/msg12472.html

I also found these requests asking for larger packet sizes.
https://www.ietf.org/mail-archive/web/tls/current/msg13569.html
https://mailarchive.ietf.org/arch/msg/tls/K_6Ug4GtAxbrQJy2CFFFoxxtJ60
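
(If the limit did become negotiable, the natural semantics would presumably
be that each side announces what it is willing to receive and the sender
uses the minimum. A trivial sketch with made-up names, using the 2^16-1024
ceiling discussed in this thread:)

    #include <stdint.h>

    #define DEFAULT_LIMIT   (1u << 14)            /* today's 2^14 */
    #define MAX_NEGOTIABLE  ((1u << 16) - 1024u)  /* upper bound suggested above */

    /* Record size this endpoint may send: never more than the peer said
     * it can buffer, and never more than the protocol ceiling. */
    static uint32_t effective_send_limit(uint32_t peer_announced)
    {
        uint32_t limit = peer_announced ? peer_announced : DEFAULT_LIMIT;
        return limit < MAX_NEGOTIABLE ? limit : MAX_NEGOTIABLE;
    }

    int main(void) { return effective_send_limit(0) == DEFAULT_LIMIT ? 0 : 1; }
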

On Wed, 2016-11-23 at 00:46 -0800, Judson Wilson wrote:
> I worry about the buffer sizes required on embedded devices.
> Hopefully the other endpoint would be programmed to limit record
> sizes, but is that something we want to rely on?  This could be a
> parameter agreed upon during the handshake, but that seems bad.
> 
> 
> > On Wed, Nov 23, 2016 at 12:41 AM, Nikos Mavrogiannopoulos wrote:
> > On Wed, 2016-11-23 at 00:39 -0800, Judson Wilson wrote:
> > > Can you send multiple records in one data transfer to achieve
> > > whatever gains are desired? 
> > 
> > The packetization cost still remains even if you do that. However,
> > the
> > question is how does the 2^14 limit comes from, and why TLS 1.3
> > should
> > keep it?
> > 
> > regards,
> > Nikos

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Re-use and export of DH shares

2016-11-23 Thread Yoav Nir
And if it wasn’t clear, this *is* a WGLC comment on the TLS 1.3 draft based on 
discussion in Tuesday’s session in Seoul.

Yoav

> On 20 Nov 2016, at 12:21, Yoav Nir  wrote:
> 
> Hi.
> 
> I’ve created a PR for TLS 1.3
> https://github.com/tlswg/tls13-spec/pull/768
> 
> It adds a subsection to the Security Considerations section. It discusses key 
> reuse (do it carefully or do it not).
> It has the "don't do this or this grape juice might ferment" weasel words 
> suggested by someone at the meeting.
> 
> Yoav
> 

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Draft 18 review : 0-RTT

2016-11-23 Thread Olivier Levillain
Le 23/11/2016 01:28, Martin Thomson a écrit :
> On 23 November 2016 at 06:07, Olivier Levillain
>  wrote:
>> In 4.2.8 (P.47), the server receiving early_data "can behave in one of
>> two ways"... followed by three cases.  Beside the typo, the first case
>> could be phrased differently.  Actually, it reads
>>
>>-  Ignore the extension and return no response.  This indicates that
>>   the server has ignored any early data and an ordinary 1-RTT
>>   handshake is required.
>>
>> Since an ordinary 1-RTT handshake will require the server to actually
>> send a response (the ServerHello), it might be better to put it this
>> way:
>>
>>-  Ignore the extension and return a standard 1-RTT ServerHello.
>>   This indicates that the server has ignored any early data and
>>   an ordinary 1-RTT handshake is required.
> Here's a PR: https://github.com/tlswg/tls13-spec/pull/773
>
> I've gone a little bit further than what Olivier suggests and pointed
> out in each of these that the server is required to ignore early data.

Thank you for the rewrite.

olivier

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Draft 18 review : Extensions and message reuse

2016-11-23 Thread Olivier Levillain
Le 22/11/2016 22:19, Eric Rescorla a écrit :
> On Tue, Nov 22, 2016 at 11:02 AM, Olivier Levillain <
> olivier.levill...@ssi.gouv.fr> wrote:
>> The first example of such a problem is the signature_scheme and
>> signature_algorithms enums.  In practice, the signature_algorithm must
>> contain information valid for TLS 1.2 and TLS 1.3, but the exact
>> meaning is not the same in both cases.  It would be cleaner to have a
>> new, separate extension called signature_scheme, defining the new
>> enum.  This way, a TLS 1.2/1.3 client implementations would send both
>> extensions.  We would spend 10 bytes on the wire, but it is not that
>> important for the first message which can be kept reasonably long
>> (and which is sometimes padded via the corresponding extension...)
>>
> We certainly could do this, but I don't think it's the right answer, for
> two reasons:
>
> 1. From a protocol perspective, some of the things you could say with
> signature_algorithms weren't coherent. For instance: I support ECDSA_SHA512
> but only with P256 (via supported_curves). So it's actually a retroactive
> improvement to TLS 1.2 to use the code points this way. NSS, at least
> interprets these the same way in TLS 1.2 and TLS 1.3. I believe BoringSSL
> does as well.
> 2. From an implementation perspective, it's easier to just have a single
> external facing surface, which should be the more sensible one, namely
> signature schemes.

I understand your point, but there were different curves with the same
security parameter in TLS 1.2, and advertising some combinations in a
combined TLS 1.2/1.3 implementation (e.g. one supporting Brainpool curves
but not their NIST equivalents) might be a pain. Considering the current
deployment, we can certainly live with that.

We were also wondering why the existing hash-algorithm values are not
reused for PSS: if 0x04XX were used for PSS-SHA256, 0x05XX for PSS-SHA384,
etc., it would align with existing TLS 1.2 behaviour and make it easier for
those implementations to add PSS support, as the specification recommends.
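
(To make the proposal concrete: TLS 1.2 encodes the two bytes as
hash || signature, e.g. 0x0401 is SHA-256 with RSA and 0x0403 is SHA-256
with ECDSA, so the idea is to keep PSS in the same 0x04XX/0x05XX/0x06XX
rows. Draft-18 instead gives the PSS schemes fresh code points,
0x0804..0x0806. The "proposed" values below are purely illustrative:)

    /* TLS 1.2 signature_algorithms layout: high byte = hash, low byte = signature. */
    enum tls12_style {
        RSA_PKCS1_SHA256 = 0x0401,   /* sha256(4) || rsa(1) */
        ECDSA_SHA256     = 0x0403,   /* sha256(4) || ecdsa(3) */
        RSA_PKCS1_SHA384 = 0x0501,
        ECDSA_SHA384     = 0x0503
    };

    /* draft-18 SignatureScheme: PSS gets a new 0x08XX range. */
    enum draft18_scheme {
        RSA_PSS_SHA256 = 0x0804,
        RSA_PSS_SHA384 = 0x0805,
        RSA_PSS_SHA512 = 0x0806
    };

    /* The proposal above: keep PSS aligned with the existing hash rows
     * (exact low byte hypothetical, e.g. a currently unused signature value). */
    enum proposed_scheme {
        PSS_SHA256_ALIGNED = 0x0407,
        PSS_SHA384_ALIGNED = 0x0507,
        PSS_SHA512_ALIGNED = 0x0607
    };

    int main(void) { return 0; }
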

>> Another example is the key_share extension: it is in fact made of
>> three different extensions, depending on the message where it is
>> included.  It would be simpler for a (almost-)stateless parser* to
>> either use three different extensions or to use the same content each
>> time.  In the second case, the extension would each time consist of a
>> list of (group + optional keyshare).  The advantages of such a unified
>> keyshare extension would be:
>>  - stateless parsing*
>>  - no reuse of supported_groups which can be merged with key_share
>>  - no more validation complexity since higher-level checks were
>>already required (the groups sent by HRR did not came with a key
>>share for example).
>> The major con one might see is that the server can not advertise its
>> supported groups in an encrypted manner anymore (since
>> supported_groups is not used anymore).  However, we do believe
>> simplifying the implementation is worth it.
>>
>> * Of course the parser still has to use the extension id to know how
>>   to parse the value, but it would not need to have broader context
>>   (the message where the extension is included).  In other words, we
>>   would like select syntax in structs to only use local information.
>>
>> I can try and propose a PR for each extension if there is an interest
>> in the WG.
>>
> It seems to me you are making two points here:
>
> 1. That it would be good to unify supported_groups and key_share
> 2. That we should use separate code points wherever extensions differ
> between the messages they appear (this is far from the only one).
>
> We've had some conversation about the first approach, which seems like kind
> of a stylistic/taste issue whether to have the "I can do" and "here is" in
> one
> place or two, and on balance I think it's better to not change this at this
> point
> without a really compelling reason.

I have no compelling reason, but I don't remember why the WG chose to
stick with two extensions where there was a working design with only one.

> I can see the appeal of having one parser per extension ID, but it isn't
> actually that much easier from an implementation perspective. You already
> need a mapping table from extension ID to handler and that table
> also needs to be able to detect common problems (e.g., extension reuse
> or extensions appearing in the wrong place) and it's not hard at that
> point to also have it point to message-specific parsers (this is how NSS
> mostly works). ISTM that the alternative you propose would either result
> in a proliferation of extensions (which confuses the matching rules)
> or having extensions which are syntactically valid but not legal in
> their context (e.g., >1 key share from the server). This pushes these
> checks further away from the decoder, and makes things which would
> otherwise be syntax errors (e.g., a "pre_shared_key" extension from
> the server which contains keys) into semantic errors 

Re: [TLS] record layer limits of TLS1.3

2016-11-23 Thread Yoav Nir

On 23 Nov 2016, at 10:30, Nikos Mavrogiannopoulos  wrote:

> On Wed, 2016-11-23 at 10:05 +0200, Yoav Nir wrote:
>> Hi, Nikos
>> 
>> On 23 Nov 2016, at 9:06, Nikos Mavrogiannopoulos 
>> wrote:
>> 
>>> 
>>> Hi,
>>>  Up to the current draft of TLS1.3 the record layer is restricted
>>> to
>>> sending 2^14 or less. Is the 2^14 number something we want to
>>> preserve?
>>> 16kb used to be a lot, but today if one wants to do fast data
>>> transfers
>>> most likely he would prefer to use larger blocks. Given that the
>>> length
>>> field allows for sizes up to 2^16, shouldn't the draft allow for
>>> 2^16-
>>> 1024 as maximum?
>> 
>> I am not opposed to this, but looking at real browsers and servers, 
>> we see that they tend to set the size of records to fit IP packets. 
> 
> IP packets can carry up to 64kb of data. I believe you may be referring
> to ethernet MTU sizes.

I’m referring to the IP packets that they actually use, and that is set by TCP 
to fit the PMTU, which is <= the ethernet MTU.  In practice it is 1500 bytes 
for most network paths.

> That to my understanding is a way to reduce
> latency in contrast to cpu costs. An increase to packet size targets
> bandwidth rather than latency (speed).

Sure, but running ‘openssl speed’ on aes-128-cbc, hmac or sha256 (there’s
no test for AES-GCM or ChaCha-Poly), you get smallish differences in terms
of kilobytes per second between 1024-byte buffers and 8192-byte buffers.
And the difference is going to be even smaller going to 16KB buffers, let
alone 64KB buffers.

>> The gains from increasing the size of records from the ~1460 bytes
>> that fit in a packet to nearly 64KB are not all that great, and the
>> gains from increasing records from 16 KB to 64KB are almost
>> negligible. At that size the block encryption dominates the CPU time.
> 
> Do you have measurements to support that? I'm quite surprized by such a
> general statement because packetization itself is a non-negligible cost
> especially when encryption is fast (i.e., in most modern CPUs with
> dedicated instructions).

As long as you run over a network that has a smallish MTU, you’re going to 
incur the packetization costs anyway, either in your code or in operating 
system code. If you have a 1.44 GB file you want to send, it’s going to take a 
million IP packets either way and 100 million AES block operations. 

Whether you do the encryption in a million 1440-byte records or in 100,000
14,400-byte records makes a difference of only a few percent.
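
(A quick back-of-the-envelope check on the framing side of this, assuming a
fixed per-record cost of a 5-byte header plus a 16-byte AEAD tag; the
numbers are illustrative, not measurements:)

    #include <stdio.h>

    int main(void)
    {
        /* Assumed fixed cost per record: 5-byte header + 16-byte AEAD tag. */
        const double per_record = 5.0 + 16.0;
        const double sizes[] = { 1440.0, 14400.0, 16384.0, 65536.0 - 1024.0 };
        const int n = (int)(sizeof sizes / sizeof sizes[0]);

        for (int i = 0; i < n; i++)
            printf("%7.0f-byte records: %.2f%% framing overhead\n",
                   sizes[i], 100.0 * per_record / sizes[i]);
        return 0;
    }

Either way the fixed per-record cost stays in the low single digits of a
percent, which matches the "few percent" above.
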

Measurements that we’ve made bear this out. They’re with IPsec, so it’s
fragmentation rather than packetization, but I don’t think that should make
much of a difference.

Again, I’m not opposed to this. A few percent is a worthy gain.

Yoav

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] record layer limits of TLS1.3

2016-11-23 Thread Judson Wilson
I worry about the buffer sizes required on embedded devices. Hopefully the
other endpoint would be programmed to limit record sizes, but is that
something we want to rely on?  This could be a parameter agreed upon during
the handshake, but that seems bad.


On Wed, Nov 23, 2016 at 12:41 AM, Nikos Mavrogiannopoulos 
wrote:

> On Wed, 2016-11-23 at 00:39 -0800, Judson Wilson wrote:
> > Can you send multiple records in one data transfer to achieve
> > whatever gains are desired?
>
> The packetization cost still remains even if you do that. However, the
> question is how does the 2^14 limit comes from, and why TLS 1.3 should
> keep it?
>
> regards,
> Nikos
>
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] record layer limits of TLS1.3

2016-11-23 Thread Judson Wilson
Can you send multiple records in one data transfer to achieve whatever
gains are desired?

On Wed, Nov 23, 2016 at 12:30 AM, Nikos Mavrogiannopoulos 
wrote:

> On Wed, 2016-11-23 at 10:05 +0200, Yoav Nir wrote:
> > Hi, Nikos
> >
> > On 23 Nov 2016, at 9:06, Nikos Mavrogiannopoulos 
> > wrote:
> >
> > >
> > > Hi,
> > >  Up to the current draft of TLS1.3 the record layer is restricted
> > > to
> > > sending 2^14 or less. Is the 2^14 number something we want to
> > > preserve?
> > > 16kb used to be a lot, but today if one wants to do fast data
> > > transfers
> > > most likely he would prefer to use larger blocks. Given that the
> > > length
> > > field allows for sizes up to 2^16, shouldn't the draft allow for
> > > 2^16-
> > > 1024 as maximum?
> >
> > I am not opposed to this, but looking at real browsers and servers,
> > we see that they tend to set the size of records to fit IP packets.
>
> IP packets can carry up to 64kb of data. I believe you may be referring
> to ethernet MTU sizes. That to my understanding is a way to reduce
> latency in contrast to cpu costs. An increase to packet size targets
> bandwidth rather than latency (speed).
>
> > The gains from increasing the size of records from the ~1460 bytes
> > that fit in a packet to nearly 64KB are not all that great, and the
> > gains from increasing records from 16 KB to 64KB are almost
> > negligible. At that size the block encryption dominates the CPU time.
>
> Do you have measurements to support that? I'm quite surprized by such a
> general statement because packetization itself is a non-negligible cost
> especially when encryption is fast (i.e., in most modern CPUs with
> dedicated instructions).
>
> regards,
> Nikos
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Draft 18 review: Downgrade protection

2016-11-23 Thread Olivier Levillain
Hi,

> Thanks for your comments. I do want this section to be clear.
>
> It would be very helpful if you formatted this as a PR. That would make it
> easier to understand the changes in this text.

The text has been proposed as PR 775
(https://github.com/tlswg/tls13-spec/pull/775).

olivier

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] record layer limits of TLS1.3

2016-11-23 Thread Nikos Mavrogiannopoulos
On Wed, 2016-11-23 at 10:05 +0200, Yoav Nir wrote:
> Hi, Nikos
> 
> On 23 Nov 2016, at 9:06, Nikos Mavrogiannopoulos 
> wrote:
> 
> > 
> > Hi,
> >  Up to the current draft of TLS1.3 the record layer is restricted
> > to
> > sending 2^14 or less. Is the 2^14 number something we want to
> > preserve?
> > 16kb used to be a lot, but today if one wants to do fast data
> > transfers
> > most likely he would prefer to use larger blocks. Given that the
> > length
> > field allows for sizes up to 2^16, shouldn't the draft allow for
> > 2^16-
> > 1024 as maximum?
> 
> I am not opposed to this, but looking at real browsers and servers, 
> we see that they tend to set the size of records to fit IP packets. 

IP packets can carry up to 64kb of data. I believe you may be referring
to ethernet MTU sizes. That, to my understanding, is a way to reduce
latency rather than CPU cost. An increase in packet size targets
bandwidth rather than latency (speed).

> The gains from increasing the size of records from the ~1460 bytes
> that fit in a packet to nearly 64KB are not all that great, and the
> gains from increasing records from 16 KB to 64KB are almost
> negligible. At that size the block encryption dominates the CPU time.

Do you have measurements to support that? I'm quite surprised by such a
general statement, because packetization itself is a non-negligible cost,
especially when encryption is fast (i.e., in most modern CPUs with
dedicated instructions).

regards,
Nikos

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Draft 18 review : Hello Retry Request and supported groups cache

2016-11-23 Thread Olivier Levillain
Hi,

>> Being able to send supported_groups does allow a server to choose to make
>> a tradeoff between an extra round trip on the current connection and its
>> own group preferences. One example where a server might want to do this is
>> where it believes that X25519 is likely a more future-proof group and would
>> prefer that, but still believes the client's choice of P256 is suitable for
>> this connection. Always requiring an extra round trip might end up being
>> expensive depending on the situation so some servers might prefer to avoid
>> sending an HRR for a slightly more preferred group.
>>
>> I think that requiring the client to maintain state from the
>> supported_groups puts undue requirements on the client, since not all
>> clients keep state between connections, and those that do usually only keep
>> session state for resumption.
>>
> This matches my view as well.
>
> I agree that the client should not be require to keep state. I do not
> believe that the draft requires this, but if someone thinks otherwise,
> please send a PR to fix.
>

There were actually two points in my message:
 - I was not convinced by this way of signalling a preference without
enforcing it, but I understand that, if we keep supported_groups, it
does not cost much and the client can safely ignore the server-sent
extension;
 - however, I found it strange that the specification says the client
may update its view when seeing this extension, but says nothing for
the HRR case, where updating its view of the server's preferences would
clearly be useful for the future. I only proposed to add the same text:
"The client MAY update its view of the server's preference when
receiving an HRR, to avoid the extra round trip in future encounters".

olivier

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] record layer limits of TLS1.3

2016-11-23 Thread Yoav Nir
Hi, Nikos

On 23 Nov 2016, at 9:06, Nikos Mavrogiannopoulos  wrote:

> Hi,
>  Up to the current draft of TLS1.3 the record layer is restricted to
> sending 2^14 or less. Is the 2^14 number something we want to preserve?
> 16kb used to be a lot, but today if one wants to do fast data transfers
> most likely he would prefer to use larger blocks. Given that the length
> field allows for sizes up to 2^16, shouldn't the draft allow for 2^16-
> 1024 as maximum?

I am not opposed to this, but looking at real browsers and servers, we see that 
they tend to set the size of records to fit IP packets. The gains from 
increasing the size of records from the ~1460 bytes that fit in a packet to 
nearly 64KB are not all that great, and the gains from increasing records from 
16 KB to 64KB are almost negligible. At that size the block encryption 
dominates the CPU time.

Yoav

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls