Re: [TLS] Getting started, clock not set yet

2022-08-17 Thread Kyle Rose
On Wed, Aug 17, 2022 at 11:34 AM Peter Gutmann 
wrote:

> Kyle Rose  writes:
>
> >IMO, the two requirements "Prohibit upgrades" and "Leverage general-purpose
> >network protocols with large attack surfaces" are in direct conflict.
>
> Only if you implement them with large attack surfaces, for which again see my
> earlier comments.
>

A large attack surface can't be avoided with the MTI for these protocols.
And if you don't implement what's required, don't complain when it doesn't
interop. 🤷‍♂️

Kyle


Re: [TLS] Getting started, clock not set yet

2022-08-17 Thread Kyle Rose
On Wed, Aug 17, 2022 at 11:10 AM Peter Gutmann 
wrote:

> See my earlier comments on this.
>

Honestly, it sounds like these devices maybe shouldn't be using internet
technologies that were designed with certain assumptions about
extensibility in mind. With such strong constraints not only on behavior
but on implementation, it really seems like the right thing to do is to
shrink-wrap every interface around exactly what you need and avoid all
unnecessary complexity. That means no TLS, no X.509, no IP, etc. IMO, the
two requirements "Prohibit upgrades" and "Leverage general-purpose network
protocols with large attack surfaces" are in direct conflict.

Kyle


Re: [TLS] Getting started, clock not set yet

2022-08-15 Thread Kyle Rose
On Sun, Aug 14, 2022 at 5:25 PM Hal Murray  wrote:

> Thanks.
>
> > It's been a few years, but IIRC my thinking was that the degree of trust
> > required in the Roughtime servers' long-term public keys is very low: you're
> > trusting them only for one server's assertion of the current time, not for
> > general web traffic; and if you ask enough servers, the likelihood of being
> > tricked into trusting a bad timestamp is very low even over long time
> > periods.
>
> I've been assuming self-signed certificates with long lifetimes -- one per
> server.
>

It's a testament to how certificate infrastructure has evolved that one
year is now considered "long lived". :-) I was responding to an earlier
hypothetical about a device sitting on a shelf for ten years, and trying to
figure out how one could bootstrap the PKI after that time without explicit
intervention. I'm starting to think I was trying too hard to address a
(possibly contrived) edge case that really deserves a simpler solution:
manual re-bootstrapping.

> > In other words, much of the security of the scheme is in the practical
> > difficulty of mounting a successful attack even if the keys have been
> > compromised. NTS doesn't even attempt to address this kind of attack vector.
>
> Is there a first order difference between NTS using self signed certificates
> and Roughtime?
>

Miroslav's answer is probably the right one: Roughtime gives you the
ability not only to detect bad timestamps but to prove it to others. Upon
reflection, that doesn't seem especially useful in this context because
you're already talking about devices that have appeared out of nowhere
after a long period of inactivity.

> There have been semi-endless debates about how many NTP servers to use.  (I
> haven't seen one recently.)  With 3 servers, 2 can outvote 1 bad guy. With 4
> servers, you still have 3 if one is down.  ...  Adding security complicates
> that discussion.  You have to add deliberate malfeasance to the list of
> things that can go wrong.  And things can change over 10 years.
>
> Are there any good papers or web pages discussing the security of TLS?
>

Did you mean NTP here? The security of TLS has been discussed far and wide.
:-)

> One quirk on my 10 year problem.  If the boxes are sitting on a shelf, it's at
> least possible to open them up and update firmware.  It would be expensive,
> but it is another branch of the cost-benefit tree.
>

And this was my first bit of advice to Peter: if it's out of service that
long, it probably has known vulnerabilities, so you should probably upgrade
the firmware before reattaching it to a network or it's likely to get
pwn3d. That firmware update would come with updated CAs, and a notion of
the current time (to bootstrap the first TLS handshakes, after which
trusted sources can provide updated timestamps) if the device has no RTC of
its own.

I wish I had some more context for this area of embedded devices. For
example:

 * Why is an RTC more expensive (along whatever axis you choose) than a NIC
(wifi or ethernet)?
 * What classes of devices would reasonably sit on a shelf for ten years
and subsequently prove useful without being updated?
 * If it's been sitting on a shelf for ten years, why is reattaching it to
the network easy, while plugging it into an upgrade kiosk first and *then*
reattaching it to the network is hard?

After this thread, I'm now trying to figure out why this whole endeavor
isn't contrived.

Kyle


Re: [TLS] Getting started, clock not set yet

2022-08-14 Thread Kyle Rose
On Sat, Aug 13, 2022 at 11:16 PM Hal Murray  wrote:

> > IIRC, this is one of the main arguments for advancing Roughtime:
>
> I took a look at draft 06.  I don't see how it helps.  Am I missing
> something?
>
> Here is the key section:
>
> 6.4 Validity of Response
>   A client MUST check the following properties when it receives a
>   response. We assume the long-term server public key is known to the
>   client through other means.
>
> If I can distribute valid long-term keys, I can use them to sign the
> certificates for NTS-KE servers and don't need Roughtime to get started.
>

It's been a few years, but IIRC my thinking was that the degree of trust
required in the Roughtime servers' long-term public keys is very low:
you're trusting them only for one server's assertion of the current time,
not for general web traffic; and if you ask enough servers, the likelihood
of being tricked into trusting a bad timestamp is very low even over long
time periods.
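
As a rough sketch of the "ask enough servers" idea (illustrative Python, not
the Roughtime wire protocol; query_roughtime() is a hypothetical stand-in for
a real client, and the quorum and tolerance values are arbitrary):

from statistics import median

def bootstrap_time(servers, query_roughtime, quorum=3, tolerance_s=10.0):
    """Return a timestamp only if enough independently-keyed servers agree."""
    times = []
    for srv in servers:
        try:
            # Assume query_roughtime() verifies the response against that
            # server's baked-in long-term public key before returning a time.
            times.append(query_roughtime(srv))
        except Exception:
            continue  # unreachable server or bad signature: skip it
    if len(times) < quorum:
        raise RuntimeError("too few Roughtime responses to establish time")
    mid = median(times)
    if sum(abs(t - mid) <= tolerance_s for t in times) < quorum:
        raise RuntimeError("servers disagree; refusing to trust any timestamp")
    return mid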

Such an attack would require both access to a large number of long-term
private keys whose public keys are embedded in the client attack target, as
well as the ability to intercept traffic intended for each of these servers
at whatever moment the client initiates the Roughtime protocol (which
probably implies a long-term undetected network presence). This is clearly
a higher bar than simply trusting a web PKI certificate signed some
indeterminate time ago without respecting the expiration date and without
being able to update CRLs on startup (which also poses trust anchor turtles
all the way down).

In other words, much of the security of the scheme is in the practical
difficulty of mounting a successful attack even if the keys have been
compromised. NTS doesn't even attempt to address this kind of attack vector.

Kyle


Re: [TLS] Getting started, clock not set yet

2022-08-11 Thread Kyle Rose
On Wed, Aug 10, 2022 at 10:13 AM Peter Gutmann 
wrote:

> So we're down to mostly non-web-PKI devices and/or the ten year problem, of
> which I've encountered the latter several times with gear that sits on a shelf
> for years and then when it's time to provision it all the certificates have
> long since expired, which is another reason why you ignore expiry dates (or at
> least you ignore them after you get hit by the first major outage caused by
> this because until then no-one realised that it was an issue, a ticking time-
> bomb that may take years to detonate).
>

Expired CAs are definitely a problem for PKI participation after such a
delay, but probably one that is dwarfed by the near certain existence of
known vulnerabilities in firmware that hasn't been updated in 10 years. So
it's probably best they remain air-gapped and don't participate in active
networked systems until they've been updated, which would then include new
CA certificates.

> That leaves revocation, which alongside ignoring expiry dates is another thing
> that's commonly ignored in SCADA, both for the same reason as expiry dates are
> ignored, you don't want to DoS yourself, and because in many cases there's
> neither a logical nor a practical basis for revocation or revocation checks.
> For example in typical SCADA networks a device is removed by shutting off its
> access, not by adding an entry to a CRL somewhere and hoping someone notices.
> In fact it's not even clear what certificate would be revoked in order to
> achieve some effect, or why.


Revocation of device certs to remove their access isn't what I'm thinking
of. I'm talking about devices that need to establish trust in remote
services via networks of questionable integrity (the public internet, but
maybe moreso private networks with lots of hiding places and little active
monitoring). Revocation/expiration are complementary tools: periodic
expiration means you don't need to maintain revocations forever, and
revocation means you don't need expirations to be *too* short and still
strictly limit the hazard period resulting from private key compromise. You
want to respect both, but for better or for worse the web's threat model
has seen revocation as really-nice-to-have and expiration as
mandatory-for-interoperation.

Consequently, I would not recommend any device interact with the web
without being able to establish that server certificates have not expired.
That means there is a requirement to somehow bootstrap the current time
when needed, whether via device RTC or via some network-connected entity in
which trust may be established in some way that is far less general than
web PKI. This is a rule-of-thumb: clearly there are ways to safely interact
with the web even via cleartext transports, assuming some other kind of
out-of-band mechanism for establishing trust in any transferred content.
But that goes beyond the scope of the web security model.
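
As a minimal sketch of that rule of thumb (illustrative only; BUILD_TIME and
the helper functions are assumptions, not any particular stack's API):

import time

BUILD_TIME = 1660000000  # epoch seconds baked into the firmware at build time

def clock_is_plausible() -> bool:
    # A clock earlier than the firmware build time has clearly not been set.
    return time.time() > BUILD_TIME

def web_connect(host, bootstrap_time, tls_connect):
    if not clock_is_plausible():
        bootstrap_time()  # RTC, Roughtime/NTS, or manual provisioning
    if not clock_is_plausible():
        raise RuntimeError("no trustworthy time; refusing web-PKI validation")
    return tls_connect(host, check_expiry=True, check_revocation=True)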

> Ignoring CA billing-cycle^H^H^Hexpiry dates

You repeatedly bring up this point, but you do realize that one of the
people you're arguing with was instrumental in the establishment of a
mechanism for provisioning *free* web PKI certificates, right? The cost of
procuring signed certificates is no longer an obstacle to participating in
the web PKI.

Kyle


Re: [TLS] Getting started, clock not set yet

2022-08-09 Thread Kyle Rose
On Tue, Aug 9, 2022 at 12:40 AM Hal Murray  wrote:

> I work on NTP software.  NTS (Network Time Security) uses TLS.
>
> Many security schemes get tangled up with time.  TLS has time limits on
> certificates.  That presents a chicken-egg problem for NTP when getting
> started.
>

IIRC, this is one of the main arguments for advancing Roughtime:

https://datatracker.ietf.org/doc/draft-ietf-ntp-roughtime/

Assuming Roughtime is 'close enough', you can bootstrap NTP and then do
whatever else requires an accurate notion of the current time.
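
Sketched as code, the bootstrapping order is something like this (the helpers
are hypothetical placeholders, not real NTP or TLS APIs):

def cold_start(roughtime_time, set_clock, ntp_sync_over_nts, tls_ready_work):
    coarse = roughtime_time()   # authenticated but rough: good to seconds
    set_clock(coarse)           # now certificate validity periods can be judged
    ntp_sync_over_nts()         # NTS-KE over TLS works once expiry checks pass
    return tls_ready_work()     # ...and so does everything else that needs TLS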

What Peter said isn't quite right, since (for example) you wouldn't want to
be obliged to distribute revocations for compromised but long-expired
certificates under the assumption that a properly-functioning client
wouldn't accept them anyway, but relying on Roughtime as a bootstrapping
mechanism limits the risk of trusting an expired cert.

Kyle


Re: [TLS] Does TLS support ECDHE based SEED cipher suites?

2021-12-31 Thread Kyle Rose
On Fri, Dec 31, 2021 at 11:24 AM tom.ripe  wrote:

>
> > I'd oppose any specification of new cipher suites without a good
> > justification, and I think this is an opinion many here share.
>
> And I just see an I-D for AEGIS-128L and AEGIS-256, albeit not for TLS.
>   There seems to be no limit to new algorithms!
>

IIRC, this was intentional: make it easy to get a code point so people
don't squat on them, but have IANA maintain a list of "recommended"
ciphers, as shown in the catalog here:

https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml

Kyle


Re: [TLS] SNI from CDN to Origin (was I-D Action: draft-ietf-tls-sni-encryption-08.txt)

2019-10-09 Thread Kyle Rose
>
> I'm wondering what the backhaul traffic from CDN to Origin looks like,
> even if a user-agent request to the CDN used ESNI. I noticed that many CDNs
> provide client certificates.
>

Some origins do require client certificates, but not all. This is up to the
customer.

> In TLS handshakes that use a client certificate, it seems like the SNI
> might be able to be sent with the second message from the client (alongside
> the client certificate).
>

As I alluded to in the footnote from my last reply, I'm not sure how much
value this would have since the identity of the origin is typically evident
from the destination IP.

Kyle



Re: [TLS] SNI from CDN to Origin (was I-D Action: draft-ietf-tls-sni-encryption-08.txt)

2019-10-09 Thread Kyle Rose
I'm struggling to understand the issue you're raising.

There's typically nothing special about the CDN to origin TLS interaction.
If the origin supports ESNI, why couldn't it advertise that? The CDN node
could just pick it up like any other TLS client would.*

The one way in which CDN to origin interactions may differ greatly from
client to origin interactions is the aggregation of many sessions to the
same origin inside the same connection, but this is at the application
layer. This aggregation may improve privacy by decoupling incoming traffic
from upstream requests through persistent connections and/or pre-cached
content.

Can you more precisely define your concern?

Kyle

*Not clear that it would be helpful, since the origin is probably obvious
from the destination IP, but I think the whole ESNI discussion presumes
traffic analysis is either hopelessly naïve or impossible, so I'll just
stipulate that and proceed from there.

On Wed, Oct 9, 2019, 7:55 AM Rob Sayre  wrote:

> On Wed, Oct 9, 2019 at 6:51 PM Salz, Rich  wrote:
>
>>
>>- A link from CDN to Origin is just a particularly easy-to-deploy use
>>case, since client certificates are already in wide use and IPv6 tends to
>>work flawlessly.
>>
>>
>>
>> It does?  Gee, cool.
>>
>
> This response sounds like anger. I'm sorry I've caused you to feel angry.
>
> It might be best to discuss technical concerns. Do you think an SNI field
> sent with a client certificate is a bad idea? I'm not a cryptographer, so I
> thought I would suggest the approach and see what people thought.
>
> thanks,
> Rob


Re: [TLS] On the difficulty of technical Mandarin (SM3 related)

2019-08-19 Thread Kyle Rose
Moving tls to bcc, and adding rfc-interest. (This is the kind of discussion
that is likely to ignite a dumpster fire, and it's not specific to TLS
work.)

On Mon, Aug 19, 2019 at 11:05 AM Watson Ladd  wrote:

> I see no reason why English alone should be accepted for standards
> documents we reference. French and German pose few difficulties, and one
> can always learn Russian.
>
> What I don't know is how difficult Mandarin is at a level to read a
> standards document. I expect the mechanics of using the dictionary to
> dominate.
>
> I'm concerned about the traceability of unofficial English PDFs on some
> website: could the Chinese body responsible host them instead?
>
> I fully expect this to be a more general IETF problem.
>

For purely practical reasons, within a knowledge domain it makes sense to
have a single language in which normative documents are written, with
fluency in that language an implicit requirement of direct participation.
Otherwise, the number of people who will be able to contribute to IETF work
(writing or reviewing) will be very small, limiting the throughput, shared
knowledge base, and overall utility of the SDO.

For historical reasons, that single language is English. This isn't
something unique to the IETF, either: for a variety of reasons too numerous
to cover here, English has become the default shared language in a
multilingual world when actual work needs to get done between people with
different native tongues. The IETF exists in this reality: fair or not,
there's really no other practical choice.

Informational documents and upstream references are maybe a different story
as they are not required to implement IETF protocols, but I suspect similar
issues would crop up even there, given how important many informational
documents are.

Kyle


Re: [TLS] Draft for SM cipher suites used in TLS1.3

2019-08-15 Thread Kyle Rose
On Thu, Aug 15, 2019 at 10:17 AM Paul Yang  wrote:

> Hi all,
>
> I have submitted a new internet draft to introduce the SM cipher suites
> into TLS 1.3 protocol.
>
> https://tools.ietf.org/html/draft-yang-tls-tls13-sm-suites-00
>

Per the changes to the IANA TLS Cipher Suites registry specified by RFC 8447
(see Section 8 of https://tools.ietf.org/html/rfc8447), the registry now has a
"Recommended" column; the cipher suite entries you are requesting should be
registered with a value of "N" there.

Additionally, the SignatureAlgorithms registry has been deprecated: its
contents apply only to versions of TLS prior to 1.3.

Kyle


Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-24 Thread Kyle Rose
On Mon, Jul 24, 2017 at 10:33 AM, Paul Turner  wrote:

>
>
> Of course, this is precisely the point. All your proposal does is
> complicate the process of sharing sessions with a third-party: it doesn't
> stop an endpoint from surreptitiously doing evil.
>
>
>
> Is the objective to have the protocol prevent an endpoint “surreptitiously
> doing evil”?
>

To the extent it can, it should (within bounds of performance,
deployability, etc.). Many of us have been pointing out that there are
limits to what's possible, and tradeoffs involved in other facets.

Also, can you define what you mean by evil?
>

I am using it as shorthand in this conversation for the general notion of
actively enabling pervasive surveillance, which might be logging keys to a
government server or using a government-generated DH share, among other
possibilities. I am happy to use a different phrasing, but this one is
useful because it's pithy: it invokes intent, which separates it
conceptually from other classes of peer trust violations.

Kyle


Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-24 Thread Kyle Rose
On Mon, Jul 24, 2017 at 10:03 AM, Brian Sniffen  wrote:

> Ted Lemon  writes:
>
> > On Jul 23, 2017, at 9:01 PM, Blumenthal, Uri - 0553 - MITLL <
> u...@ll.mit.edu> wrote:
> >> What I am trying to avoid is the ability to *surreptitiously* subvert a
> protocol that’s assumed to be secure.
> >
> > You don't seem to be hearing what I'm trying to say.   What you are
> > proposing is physically impossible.
>
> Is it?  I could imagine making the server ECDH share dependent on the
> client ECDH share, plus a local random value.  At the end of the
> session, the server discloses that random value, showing that it
> properly constructed a fresh ECDH share.
>
> Then the client has an opportunity to notice---this session didn't have
> a (retrospective) proof of proper generation of the server ECDH share,
> and so may involve key reuse in a dangerous way.
>
> This doesn't stop the server from logging the session key, of course.
>

Of course, this is precisely the point. All your proposal does is
complicate the process of sharing sessions with a third-party: it doesn't
stop an endpoint from surreptitiously doing evil. (Your proposal is
interesting, but because it neatly solves the problem of DH share reuse
without requiring some kind of CT-like infrastructure, not because it
somehow addresses the evil endpoint problem. On the downside, it also may
have implications for amplification DoS.)
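
For concreteness, here is my reading of that commit-and-reveal idea as a
sketch using X25519 from the Python "cryptography" package (illustrative only,
not a worked-out protocol): the server derives its ephemeral private key from
the client's share plus a local random value, and reveals that value at
session end so the client can check that the share was freshly generated.

import os, hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def server_share_for(client_share: bytes, r: bytes) -> bytes:
    # Deterministically derive the server's ephemeral key from the client's
    # share and the fresh random value r.
    seed = hashlib.sha256(client_share + r).digest()
    priv = X25519PrivateKey.from_private_bytes(seed)
    return priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# During the handshake:
client_share = X25519PrivateKey.generate().public_key().public_bytes(
    Encoding.Raw, PublicFormat.Raw)
r = os.urandom(32)
server_share = server_share_for(client_share, r)

# At session end the server reveals r; the client re-derives and compares,
# gaining a retrospective proof that the share was not a reused static one.
assert server_share_for(client_share, r) == server_share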

Kyle


Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-19 Thread Kyle Rose
On Wed, Jul 19, 2017 at 4:02 PM, Ted Lemon  wrote:

> Bear in mind that what we (or at least I) want out of not publishing a
> spec is that this technology not be present by default in TLS
> implementations.   If someone wants to maintain a set of patches, there's
> not much we can do about it, and I don't honestly care *because* there
> isn't much that we can do about it.   What I do not want to see is *the
> IETF* recommending this solution.
>
> It would be very nice if the people who are hot on a solution to this
> problem were willing to do the work to do the three-way protocol.   But the
> purpose of pointing out that that is the right solution is to say "if you
> want to solve this problem in the IETF, here is what we could do that might
> get IETF consensus."
>

Agreed. 


Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-19 Thread Kyle Rose
On Wed, Jul 19, 2017 at 3:43 PM, Ted Lemon  wrote:

> This is exactly right.   We have a *real* problem here.   We should
> *really* solve it.   We should do the math.   :)
>

Is there appetite to do this work? If we restrict this to two paths, one of
which is spending years designing and implementing a new multi-party
security protocol, the other of which is silently and undetectably (at
least on private networks) modifying the standardized protocol for which
lots of well-tested code already exists... my money is on the latter
happening.

In every decision we make with respect to the static DH approach, we have
to keep in mind that this change can be implemented unilaterally, i.e.,
without any modifications for interop. Consequently, I think the work we
really need to do is to design and implement a FS-breakage detector so we
can at least tell when this is happening on the public internet. Beyond
that, the best we can really do is ask implementors to be polite and
intentionally make their implementations not interoperate silently with TLS
1.3.
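
In sketch form, such a detector could be as simple as remembering the server
key shares seen for each endpoint and flagging reuse (illustrative Python; it
assumes something else is already extracting key_share values from observed
ServerHello messages):

from collections import defaultdict

class KeyShareMonitor:
    """Flag servers that reuse (EC)DHE shares, i.e. are not forward secret."""

    def __init__(self):
        self._seen = defaultdict(set)  # (host, port) -> shares observed so far

    def observe(self, host: str, port: int, server_key_share: bytes) -> bool:
        """Record a ServerHello key_share; return True if it was seen before."""
        reused = server_key_share in self._seen[(host, port)]
        self._seen[(host, port)].add(server_key_share)
        return reused

monitor = KeyShareMonitor()
if monitor.observe("example.com", 443, bytes.fromhex("ab" * 32)):
    print("server reused a DH share: forward secrecy is likely broken")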

Kyle


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-15 Thread Kyle Rose
I might want to try responding to the right thread. Apologies for the
noise. ;-)

Kyle

On Sat, Jul 15, 2017 at 9:08 AM, Kyle Rose <kr...@krose.org> wrote:

> I've rebased from the kernel master HEAD (4.12.0+), tested, and
> force-pushed the repository.
>
> Conveniently, it looks like, since the last time I searched for one,
> someone added an ECDH implementation to the kernel. That makes this a lot
> easier.
>
> Kyle
>
> On Sat, Jul 15, 2017 at 8:18 AM, Kyle Rose <kr...@krose.org> wrote:
>
>> On Sat, Jul 15, 2017 at 7:59 AM, Roland Dobbins <rdobb...@arbor.net>
>> wrote:
>>
>>> On 15 Jul 2017, at 18:23, Daniel Kahn Gillmor wrote:
>>>
>>> Whether it justifies a loss of security is a separate question.
>>>>
>>>
>>> It isn't a loss of security - it's actually a net gain for security.
>>
>>
>> Security isn't a scalar quantity, so there's no way you can credibly
>> assert this. OTOH, it's easy to point to the individual security properties
>> lost by expanding the attack surface for a particular session key or by
>> mandating key-reuse.
>>
>> Analyzing the impact of any particular mechanism for middlebox inspection
>> is a question of tradeoffs: what are you giving up, what are you gaining,
>> and is the trade worth it? The first two are questions of fact (though I'm
>> under no illusion that there would even be broad agreement on those). The
>> last is not: it's inherently subjective and among other things it depends
>> on the threats, the alternative mechanisms available, and the value placed
>> on the properties TLS provides to end users in its unadulterated form.
>>
>> Every one of your emails seems to boil down to an argument of the form
>> "Organizations have infrastructure and operations set up to do inspection
>> this way, so we need some way to apply that to TLS 1.3." I am unpersuaded
>> by such arguments as a reason for standardizing a weakening of TLS. Given
>> that, I would like to understand the origins of this approach to
>> monitoring, as that may shed light on the hidden or unspecified constraints
>> other than those imposed by TLS. (For example, if this approach is deemed
>> to be less costly than doing endpoint monitoring, or if there are
>> insufficient access controls for entry to the privileged network, or if the
>> privileged network has systems that are too difficult to secure.)
>>
>> Kyle
>>
>>
>


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-15 Thread Kyle Rose
I've rebased from the kernel master HEAD (4.12.0+), tested, and
force-pushed the repository.

Conveniently, it looks like, since the last time I searched for one,
someone added an ECDH implementation to the kernel. That makes this a lot
easier.

Kyle

On Sat, Jul 15, 2017 at 8:18 AM, Kyle Rose <kr...@krose.org> wrote:

> On Sat, Jul 15, 2017 at 7:59 AM, Roland Dobbins <rdobb...@arbor.net>
> wrote:
>
>> On 15 Jul 2017, at 18:23, Daniel Kahn Gillmor wrote:
>>
>> Whether it justifies a loss of security is a separate question.
>>>
>>
>> It isn't a loss of security - it's actually a net gain for security.
>
>
> Security isn't a scalar quantity, so there's no way you can credibly
> assert this. OTOH, it's easy to point to the individual security properties
> lost by expanding the attack surface for a particular session key or by
> mandating key-reuse.
>
> Analyzing the impact of any particular mechanism for middlebox inspection
> is a question of tradeoffs: what are you giving up, what are you gaining,
> and is the trade worth it? The first two are questions of fact (though I'm
> under no illusion that there would even be broad agreement on those). The
> last is not: it's inherently subjective and among other things it depends
> on the threats, the alternative mechanisms available, and the value placed
> on the properties TLS provides to end users in its unadulterated form.
>
> Every one of your emails seems to boil down to an argument of the form
> "Organizations have infrastructure and operations set up to do inspection
> this way, so we need some way to apply that to TLS 1.3." I am unpersuaded
> by such arguments as a reason for standardizing a weakening of TLS. Given
> that, I would like to understand the origins of this approach to
> monitoring, as that may shed light on the hidden or unspecified constraints
> other than those imposed by TLS. (For example, if this approach is deemed
> to be less costly than doing endpoint monitoring, or if there are
> insufficient access controls for entry to the privileged network, or if the
> privileged network has systems that are too difficult to secure.)
>
> Kyle
>
>


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-15 Thread Kyle Rose
On Sat, Jul 15, 2017 at 7:59 AM, Roland Dobbins  wrote:

> On 15 Jul 2017, at 18:23, Daniel Kahn Gillmor wrote:
>
> Whether it justifies a loss of security is a separate question.
>>
>
> It isn't a loss of security - it's actually a net gain for security.


Security isn't a scalar quantity, so there's no way you can credibly assert
this. OTOH, it's easy to point to the individual security properties lost
by expanding the attack surface for a particular session key or by
mandating key-reuse.

Analyzing the impact of any particular mechanism for middlebox inspection
is a question of tradeoffs: what are you giving up, what are you gaining,
and is the trade worth it? The first two are questions of fact (though I'm
under no illusion that there would even be broad agreement on those). The
last is not: it's inherently subjective and among other things it depends
on the threats, the alternative mechanisms available, and the value placed
on the properties TLS provides to end users in its unadulterated form.

Every one of your emails seems to boil down to an argument of the form
"Organizations have infrastructure and operations set up to do inspection
this way, so we need some way to apply that to TLS 1.3." I am unpersuaded
by such arguments as a reason for standardizing a weakening of TLS. Given
that, I would like to understand the origins of this approach to
monitoring, as that may shed light on the hidden or unspecified constraints
other than those imposed by TLS. (For example, if this approach is deemed
to be less costly than doing endpoint monitoring, or if there are
insufficient access controls for entry to the privileged network, or if the
privileged network has systems that are too difficult to secure.)

Kyle


Re: [TLS] chairs - please shutdown wiretapping discussion...

2017-07-12 Thread Kyle Rose
On Wed, Jul 12, 2017 at 11:28 AM, Stephen Farrell <stephen.farr...@cs.tcd.ie> wrote:

>
>
> On 12/07/17 16:27, Kyle Rose wrote:
> > The telco in the POTS case isn't either endpoint. The third-party
> > surveillance is unknown to those endpoints. Therefore: wiretapping.
>
> Same in the wordpress.com or smtp/tls cases already
> described on list. Therefore: wiretapping.
>
> My point was that "collaborating" does not mean not
> wiretapping. Saying otherwise is what'd be silly.
>

And yet that's what 2804, which you have repeatedly cited, explicitly
states. I'm going to go with the definition given there, "silly" or not.
This isn't wiretapping: it's *something else* potentially bad, but not all
surveillance is wiretapping.

Kyle


Re: [TLS] chairs - please shutdown wiretapping discussion...

2017-07-12 Thread Kyle Rose
On Wed, Jul 12, 2017 at 11:18 AM, Stephen Farrell  wrote:

> > If one endpoint is feeding
> > cryptographic material to a third party (the only way that information gets
> > out to the third party, vulnerabilities notwithstanding), they are
> > collaborating, not enabling wiretapping.
>
> That's nonsense. In the POTS case, telcos are collaborating
> with their local LEAs and that is wiretapping.


The telco in the POTS case isn't either endpoint. The third-party
surveillance is unknown to those endpoints. Therefore: wiretapping.

Kyle


Re: [TLS] chairs - please shutdown wiretapping discussion...

2017-07-12 Thread Kyle Rose
On Wed, Jul 12, 2017 at 10:38 AM, Ted Lemon  wrote:

> On Jul 12, 2017, at 10:32 AM, Richard Barnes  wrote:
>
> Oh, come on.  You've never seen code in a library that implements
> something that's not in an IETF RFC?
>
>
> Of course I have.   I think that putting a warning in the TLS 1.3 spec as
> Christian suggested will mean that the code won't appear in places where
> there isn't a strong use case for it.   It may well appear in places where
> there is a strong use case, but anything open source is going to face a
> stiff headwind in terms of implementing this, and that's what I'm
> suggesting we encourage.   If it doesn't show up in openssl, gnutls or
> boringssl, it's a much smaller problem.   We can't actually stop it
> happening—I'm just arguing for not making it convenient.
>

Knowing the people involved in at least some of those projects, there is
very little chance of that happening. Beyond that lies political action,
which is definitely not what the TLS WG mailing list should be used for.

To your last email, I agree that we've mostly beaten this to death. I'm
happy to let the conversation move elsewhere.

Kyle


Re: [TLS] chairs - please shutdown wiretapping discussion...

2017-07-12 Thread Kyle Rose
On Wed, Jul 12, 2017 at 10:22 AM, Ted Lemon <mel...@fugue.com> wrote:

> On Jul 12, 2017, at 10:18 AM, Kyle Rose <kr...@krose.org> wrote:
>
> We need to dispel the myth that mere inaction on our part will on its own
> prevent implementation of these mechanisms, if for no other reason but to
> redirect energy to the political arena where the pervasive monitoring
> battles *are* actually fought.
>
>
> Inaction on our part will prevent the code from going into the common
> distributions.   That's not worthless.
>

Which will have zero impact on pervasive surveillance until some government
decides they want to use this mechanism or something like it and mandates
that it be implemented universally within their borders. Then it will
appear in short order, even if the government has to hire their own code
monkeys to do it, at which point it will continue to have zero impact on
pervasive surveillance.

Again, I'm not recommending any TLS distribution implement this, only that
we stop fooling ourselves into believing that refusing to standardize a
mechanism like this will prevent one from being implemented when someone
decides they want it.

This is fundamentally different from the question of standardizing
potentially privacy-violating protocol extensions that need to survive
end-to-end on the internet to be useful to the third party: this entire
functionality can be implemented at a single endpoint without anyone else's
permission or custom interop.

Kyle


Re: [TLS] chairs - please shutdown wiretapping discussion...

2017-07-12 Thread Kyle Rose
On Wed, Jul 12, 2017 at 8:57 AM, Ted Lemon  wrote:

> The problem is that in modern times we can't assume that collaboration is
> consensual, so the rules in RFC2804 aren't as applicable as they were.
>

Until someone comes up with a technical countermeasure for involuntary
collusion, the solution space is entirely political. This isn't nuclear
science, in which it was conceivable in the 1940's that consensus among the
entire community of nuclear physicists could have crippled the development
of the bomb: the IETF neither discussing nor publishing a document
describing a mechanism for session key sharing does not imply that a
government hell-bent on pervasive surveillance will be unable to force
something down the throats of site admins, because even the
not-entirely-broken ways of doing this are pretty obvious extensions to the
protocol.

We need to dispel the myth that mere inaction on our part will on its own
prevent implementation of these mechanisms, if for no other reason but to
redirect energy to the political arena where the pervasive monitoring
battles *are* actually fought.


> There's no way to have the consensual collaboration use case without
> enabling the non-consensual use case.   Anything we do that makes it easier
> to enable the non-consensual use case is a bad idea.   So in my mind RFC
> 7258 is more applicable here than RFC 2804.
>

No contest.


> The problem with arguing this on the basis of whether or not there is a
> non-wiretapping operational use case for this is that there *is* a
> legitimate non-wiretapping operational use case here.   As I understand it,
> the motivation for doing this is to be able to avoid deploying different
> pieces of DPI hardware differently in data centers.   That's a legitimate
> motivation.   The problem is that (IMHO) it's not a good enough reason to
> standardize this.
>
> I would much rather see people who have this operational use case continue
> to use TLS 1.2 until the custom DPI hardware that they are depending on is
> sufficiently obsolete that they are going to remove it anyway; at that
> point they can retool and switch to TLS 1.3 without needing support for
> static keys.   The advantage of this is that simply using TLS 1.2 signals
> to the client that the privacy protections of TLS 1.3 are not available, so
> you get the consensual aspect that Tim was arguing for without having to
> modify TLS 1.3.
>

Absolutely. Your recommendation (among others') is precisely why I am
opposed to censoring this (or any) discussion. We're not the protocol
police: while there is simply no way for us to prevent implementors from
doing misguided things, we can provide alternatives and recommendations
along with justifications for those judgments. But I see this discussion as
mostly limited to improving the practical security of actual users, not as
part of some larger war against wiretapping or pervasive surveillance. This
isn't that battle, and this is not where that battle will be fought if
governments decide they want those things.

Kyle


Re: [TLS] chairs - please shutdown wiretapping discussion...

2017-07-12 Thread Kyle Rose
On Tue, Jul 11, 2017 at 9:11 AM, Ted Lemon  wrote:

> It’s also true that you can just exfiltrate every key as it’s generated,
> but that’s not what’s being proposed and would not, I think, suit the needs
> of the operators who are making this proposal.
>
> I don’t see how you could mitigate against deliberate key exfiltration.
>  At some point you really are relying on the security of the endpoint.
>  But being able to detect repeated keys is useful for preventing pervasive
> monitoring: it requires the monitored either to have access to the key
> generation stream in realtime, or to request the key for a particular
> conversation.
>

Much of this conversation seems to conflate wiretapping with collaboration.
2804 has a clear definition of wiretapping:

q( Wiretapping is what occurs when information passed across the
   Internet from one party to one or more other parties is delivered to
   a third party:

   1. Without the sending party knowing about the third party

   2. Without any of the recipient parties knowing about the delivery to
      the third party

   3. When the normal expectation of the sender is that the transmitted
      information will only be seen by the recipient parties or parties
      obliged to keep the information in confidence

   4. When the third party acts deliberately to target the transmission
      of the first party, either because he is of interest, or because
      the second party's reception is of interest. )

This proposal (and related proposals involving encrypting session keys to
inspection boxes, either in-band or OOB) violates condition 2 because one
endpoint would have to intentionally take action to deliver the session key
or private DH share to the third party. If one endpoint is feeding
cryptographic material to a third party (the only way that information gets
out to the third party, vulnerabilities notwithstanding), they are
collaborating, not enabling wiretapping.

(I'd argue the inspection box also fails to be a third party, as it is part
of the infrastructure of one endpoint, but that's largely irrelevant to the
question of whether this is wiretapping once we've determined the delivery
of keys is not a secret.)

I think this issue of static DH being an attack (maybe; not taking a
position) is something of a red herring, because shipping individual
session keys to a third party like a government doesn't add any substantive
hurdle beyond shipping them a single static DH share. That said, I agree
that an infrastructure for detecting the loss of forward secrecy, perhaps
in a CT-like manner, may make sense to protect against unintentional key
compromise or compromise of one endpoint: the problems that forward secrecy
is intended to address, which specifically do *not* include collaboration.

Kyle


Re: [TLS] draft-green-tls-static-dh-in-tls13-01

2017-07-07 Thread Kyle Rose
On Fri, Jul 7, 2017 at 2:21 PM, Stephen Farrell 
wrote:

> I find it really hard to believe anyone is convinced of that.
>
> Yes, one could chose to use this proposed wiretapping scheme
> like that but figure 3 in the draft makes it fully clear that
> this colluding or coerced wiretapping device can be anywhere
> on the Internet.
>
> 2804 says "no" here - are you proposing to obsolete that?


I don't think 2804 says any such thing. In fact, it explicitly states that:

q( On the other hand, the IETF believes that mechanisms designed to
   facilitate or enable wiretapping, or methods of using other
   facilities for such purposes, should be openly described, so as to
   ensure the maximum review of the mechanisms and ensure that they
   adhere as closely as possible to their design constraints. The IETF
   believes that the publication of such mechanisms, and the
   publication of known weaknesses in such mechanisms, is a Good
   Thing. )

My reading of 2804 is that the IETF takes no moral position on wiretapping;
recommends against it on technical grounds; and encourages documentation of
proposed or in-use mechanisms for wiretapping for the express purpose of
publicizing the flaws inherent in any such approach.

IMO, an informational draft submitted via the ISE seems completely
appropriate for something like this. I'll add that we've already gotten
good input toward better alternatives on this very thread, which suggests
that having these discussions out in the open is likely to result in better
practical outcomes for user populations that are, one way or the other,
going to be subject to systems like this. Discussing something does not
presuppose or imply agreement on the objectives.

Kyle


Re: [TLS] The case for a single stream of data

2017-05-06 Thread Kyle Rose
On Sat, May 6, 2017 at 11:12 AM, Ilari Liusvaara <ilariliusva...@welho.com>
wrote:

> On Sat, May 06, 2017 at 09:43:55AM -0400, Kyle Rose wrote:
> > I asked this question a while back, and didn't get a satisfying answer: if
> > an on-path attacker replaces the early data with a replay from an earlier
> > connection, does the server eventually figure this out once the handshake
> > is complete, or is this mix-and-match impossible for the server to detect?
> > It would be nice if a security property of early data is that a replay
> > attack is eventually detected, because at least then you'll know you're
> > under attack.
>
> Trying to replace the early data leads to fatal handshake error if the
> server accepts 0-RTT (since actual deprotection failure from 0-RTT data
> is fatal). If server rejects, then the substitution is silently ignored.
>

I'm not sure this completely answers my question, so let me propose where I
think protection lies.

If the on-path attacker replaces only the early data bytes, deprotection of
early data will fail since the early traffic secret incorporates the
ClientHello in its derivation, which includes a (presumably) fresh client
random.

If the on-path attacker replaces the entire first flight (or at least
ClientHello and the early data), the early data may be accepted but the
subsequent handshake will fail because the client and server will derive
different handshake traffic keys.

If this is accurate, then replays of partial requests don't really pose a
problem (at least for HTTP) because the remainder of the request will fail
deprotection and so the request won't actually be delivered to the
application.
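
To make the dependence concrete, here is a toy sketch (simplified labels and
structure, not the full RFC 8446 key schedule) showing that both the early
traffic secret and the handshake traffic secrets hash in the ClientHello, so a
spliced-in replay diverges from the rest of the connection:

import hashlib, hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def derive_secret(secret: bytes, label: bytes, transcript: bytes) -> bytes:
    # Simplified stand-in for Derive-Secret(secret, label, transcript).
    return hmac.new(secret, label + hashlib.sha256(transcript).digest(),
                    hashlib.sha256).digest()

psk = b"resumption PSK"
client_hello = b"ClientHello with a fresh client random"
server_hello = b"ServerHello with a fresh server random"
ecdhe = b"ECDHE shared secret"

early_secret = hkdf_extract(b"\x00" * 32, psk)
c_early_traffic = derive_secret(early_secret, b"c e traffic", client_hello)

handshake_secret = hkdf_extract(derive_secret(early_secret, b"derived", b""), ecdhe)
c_hs_traffic = derive_secret(handshake_secret, b"c hs traffic",
                             client_hello + server_hello)

# Splice in a ClientHello (plus early data) replayed from another connection
# and the keys no longer line up on either side of the handshake:
replayed = b"ClientHello replayed from an earlier connection"
assert derive_secret(early_secret, b"c e traffic", replayed) != c_early_traffic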

Kyle


Re: [TLS] The case for a single stream of data

2017-05-06 Thread Kyle Rose
On Sat, May 6, 2017 at 8:22 AM, Salz, Rich  wrote:

>
> What about when **part** of a request is in the 0RTT part, and the rest
> of it isn’t?  I believe this will happen often for H2 initial setup.
> Imagine the “fun” when initial connection data, such as login cookies, is
> replayed in other contexts and eventually decrypted?
>

I asked this question a while back, and didn't get a satisfying answer: if
an on-path attacker replaces the early data with a replay from an earlier
connection, does the server eventually figure this out once the handshake
is complete, or is this mix-and-match impossible for the server to detect?
It would be nice if a security property of early data is that a replay
attack is eventually detected, because at least then you'll know you're
under attack.

Kyle


Re: [TLS] Definition of cipher suites for TLS 1.2 still possible?

2017-05-02 Thread Kyle Rose
On Tue, May 2, 2017 at 10:24 AM, Salz, Rich  wrote:

> > it may be a naïve question, but is it still possible to define and
> > standardize new cipher suites for TLS 1.2 as an RFC, when TLS 1.3 is almost
> > finished?
>
> Yes it is.  It might be "informational" not "standards-track" but it's
> certainly possible/allowed/etc.
>

Whether it's worth doing or not depends on the objective.

If the desire is to get support for a new TLS 1.2 cipher suite into
browsers or open source TLS stacks, well... good luck with that. If the
desire is to get something working for their own internal use,
standardization is not really necessary, though I would certainly advise
doing whatever is required to get a code point from IANA.

If the desire is somewhere in the middle, such as internal use plus interop
with other organizations within an industry or consortium, then publication
of an informational RFC might make sense. I'm skeptical, however, that they
will get a lot of attention from folks on this list as there seems to be
little interest in spending time on a legacy protocol; and pursuing
something standards track will probably go nowhere.

Kyle


Re: [TLS] Support of integrity only cipher suites in TLS 1.3

2017-04-06 Thread Kyle Rose
On Apr 6, 2017 4:08 AM, "Fries, Steffen"  wrote:

You are right, I did not take that option into account. But as you mentioned,
it is non-standard, and with the desire to be as interoperable as possible,
proprietary enhancements are likely not to be favored.

From a security standards perspective, interoperability by-default is
expressly *undesirable* for this mode of operation. We want this to break
for anyone who hasn't gone through the trouble of explicitly opting-in.

This seems to be a perfect case for the "allow registration of code points,
but do not standardize or recommend" approach: the mechanism is possible to
implement in a way that respects IANA namespacing by those parties with
special needs requiring it, but the risk that someone will accidentally
implement it in a stack used in end-user software is minimal.

Kyle


Re: [TLS] Application layer interactions and API guidance

2016-10-12 Thread Kyle Rose
On Wed, Oct 12, 2016 at 2:03 PM, Ilari Liusvaara 
wrote:

> > There's my confusion. I misinterpreted both the Zero-RTT diagram and the
> > table of handshake contexts under "Authentication Messages", specifically
> > "ClientHello ... later of EncryptedExtensions/CertificateRequest". I'm
> > guessing I should be looking at the 0-RTT row only? I.e., if 0-RTT is
> > accepted, is the second Finished message from the client ("{Finished}") the
> > same message encrypted differently (using the handshake traffic secret)?
>
> No, there is no difference in ClientFinished in case of 0-RTT accept or
> reject (other than the contents of the CH and EE hashed in).
>

Still confused. :-)

In the message flow for 0-RTT, there are two Finished messages sent from
client to server. One is sent right after CH, and is protected by the
client_early_traffic_secret: (Finished). The other is sent after the server
sends its Finished, and this is protected by the handshake_traffic_secret:
{Finished}.

In the table under "Authentication Messages", there are four rows, one for
each Mode: 0-RTT, 1-RTT (Server), 1-RTT (Client), and Post-Handshake.

Which handshake context is used for the (Finished) message and which is
used for the {Finished} message?

> The thing that protects the 0-RTT data from substitution is the record
> protection MACs that are made using key derived from the PSK secret. So
> if the PSK secret is unknown, the key for 0-RTT can't be derived, and
> as consequence, 0-RTT data can't be altered (there's end-of-data marker
> too, preventing truncation).
>

Altered is one thing, and I agree that is prevented; I'm talking about
substitution.


> And basically, ServerFinished MAC covers everything up to that point,
> and ClientFinished MAC covers the entire handshake (0-RTT data not
> included).
>

So client Finished doesn't protect 0-RTT data, but...


> You can't swap out 0-RTT data (without PSK keys). One can only create
> new connection attempts (that fail!) with the same 0-RTT data (and the
> same ClientHello) before or after the real connection (if any, it
> could be supressed, in which case you would get only failed handshakes
> with 0-RTT data).
>

This is exactly what I'm trying to understand. What specifically prevents
this swapping? I.e., what ties the 0-RTT data sent on a particular
connection to the rest of that connection, such that replacing that 0-RTT
data with 0-RTT data from a previous successful connection will cause a
failure?

Kyle


Re: [TLS] Application layer interactions and API guidance

2016-10-12 Thread Kyle Rose
On Wed, Oct 12, 2016 at 1:02 PM, Ilari Liusvaara 
wrote:

> > By this point in the connection, there is proof that early_data has not
> > been replayed. The application doesn't necessarily know this when the early
> > data is first delivered, but it can find out later, which may be all that
> > some applications want. Clearly not all, as you point out:
>
> This is actually only useful if the application can cancel out effects
> of 0-RTT if handshake fails... Which tends to be fraught with peril to
> implement.
>

Absolutely, but it doesn't seem like it would be any more perilous than the
danger of accepting 0-RTT data in the first place: at worst you process the
same replayed data, and at best you process less replayed data. (Unless
there's a perverse incentive problem created by providing a half-measure.)


> The 0-RTT data is not part of ClientHello. It is sent in streaming
> manner (with handshake blocking if it hasn't been completely sent by
> the time ServerFinished is received).
>
> ClientFinished does _not_ MAC 0-RTT data, even in case of successful
> transport.
>

There's my confusion. I misinterpreted both the Zero-RTT diagram and the
table of handshake contexts under "Authentication Messages", specifically
"ClientHello ... later of EncryptedExtensions/CertificateRequest". I'm
guessing I should be looking at the 0-RTT row only? I.e., if 0-RTT is
accepted, is the second Finished message from the client ("{Finished}") the
same message encrypted differently (using the handshake traffic secret)?

Is there a succinct explanation for the design choices around what is and
is not included in the handshake context? Being spread out over a year and
a half of mailing list messages makes it hard to track. :-) I'm concerned
that an on-path adversary that can slice-and-dice connections along MAC
context lines will be able to create mischief, so I'd like to be able to
convince myself that this isn't the case.

> And also, receiving 1-RTT data does not imply that the 0-RTT data
> itself was not replayed (just that any replay it is of didn't
> complete, assuming PSK keys are secret).
>

Yeah, I get that now. It seems like a missed opportunity to detect mischief
after the fact, and could make for some interesting vulnerabilities for
stateful protocols. E.g., if your early data is "cd /tmp" and your 1-RTT
data is "rm -rf *", but the adversary is able to swap out the early data
for a replayed "cd ~". That one is probably too obvious of an example to
happen in real life, but imagine some developer who maintains his or her
own tlstunnel hearing about 0-RTT and implementing early data for arbitrary
applications using that tunnel wrapper because "reduced latency!": if early
data were later authenticated, it would limit the scope of vulnerability to
only those things that could fit in that first flight. But because it can't
catch every possible replay-based attack, maybe such a measure would
provide only a false sense of security. Sigh. I have no desire to re-ignite
arguments from a year ago.

Kyle


Re: [TLS] Application layer interactions and API guidance

2016-10-12 Thread Kyle Rose
On Wed, Oct 12, 2016 at 4:11 AM, Ilari Liusvaara 
wrote:

> For when 0-RTT has become boring enough for me to implement, I would
> think the server-side interface I put in would be something like the
> following:
>
> - ServerSession::getReplayable0RttReader(alp_list) -> ZRttReader
> - ZRttReader::getAlpn() -> String
> - ZRttReader::dataAvailable() -> size
> - ZRttReader implements Read
>
> If there is no replayable 0RTT reader given when ClientHello is
> received, or if the ALP of the 0-RTT does not match any in alp_list,
> the 0-RTT is rejected. Otherwise it is accepted and the data stream is
> received via the ZRttReader. Then regular 1-RTT data will be returned by
> the usual Read interface of ServerSession.
>

Ok, I see where you're going with this. I'm not sure whether I would put
the ALP filtering logic in the API or do something more like:

early_data = get_early_data()
is_early_data_good = <application-specific check of early_data>
start_server_handshake(reject_early_data = !is_early_data_good)

This allows the server to decide whether or not to reject early data on any
basis, not just ALP. But maybe a shortcut is good for the common case.
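
Fleshed out slightly (still a sketch: the helper names and the "hello.alpn"
attribute are invented for illustration, not any real library's API):

ACCEPTABLE_ALPS = {"h2", "http/1.1"}

def looks_replay_safe(data: bytes) -> bool:
    # Hypothetical application-specific policy, e.g. only idempotent requests.
    return data.startswith(b"GET ")

def handle_client_hello(hello, get_early_data, start_server_handshake):
    early_data = get_early_data(hello)
    accept = (
        early_data is not None
        and hello.alpn in ACCEPTABLE_ALPS
        and looks_replay_safe(early_data)
    )
    start_server_handshake(reject_early_data=not accept)
    return early_data if accept else None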

> > On the client side, one option is for the early data to be specified prior
> > to or alongside handshake initiation, with some indication by the stack of
> > whether it was written or not. (I suggest all-or-none.) This precludes
> > question on the part of the client as to which data might have been sent
> > 0-RTT and which must have been sent 1-RTT.
>
> The interface I envision has client write all the 0-RTT data and
> explicitly signal the end of data, with a callback if server rejects,
> so the application can abort the flight early (the stack just black-
> holes the data after error).
>

Agreed on that, with s/a callback/some client application signaling
appropriate to the language or development model involved/.

The reason for waiting for application to ACK the 0-RTT error is that
> the thing is MT-safe, so one has to prepare for races..
>

I'm frankly skeptical of the utility of (most) thread safety at the socket
level. (Safety against crashes, yes; safety against arbitrary interleaving
of stream data, no.) IMO, semantic guarantees around multiplexing belong in
a higher layer. Otherwise, the API must impose additional constraints
(e.g., max write size) that apply across the board, even to applications
that will never write from multiple threads, and must require the stack
provide additional guarantees (e.g., all-or-none writes) that the developer
has to know about.

Given how much TLS libraries try to look like Berkeley sockets to
applications, you'll probably see fewer usage errors if that multiplexing
logic is implemented at the application layer with an upward-facing
interface that either looks nothing like sockets or that presents distinct
logical sockets for distinct streams, intended for use by one thread at a
time, multiplexed over the same connection.

> > Another nice thing is that at the moment you receive a single byte from
> > read() you know a priori that every byte of early data you processed was
> > authentic.
>
> Well, one always knows that for any received data, one who sent it
> possesses the PSK secret and it can't be tampered with without the
> PSK secret.
>

Absolutely. But is it an authentic part of *this* connection? That's what
the client {Finished} tells you, later.


> However, receiving any 1-RTT data after 0-RTT data does not imply that
> the 0-RTT data was not replayed!
>

I think I may have been imprecise. By "received" I mean "received by the
application", i.e., "delivered by the stack". By the time the stack
delivers one byte of 1-RTT data to the application, we know:

(1) Client {Finished} has been received by the server
(2) which authenticates ClientHello
(3) which incorporates early_data
(4) Client {Finished} is protected by client_handshake_traffic_secret
(5) which incorporates ServerHello
(6) which incorporates the server random
(7) which means that secret is fresh (i.e., not subject to replay)
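
To make that chain (1)-(7) concrete, here's a toy rendering (Python over
SHA-256 with dummy inputs; it follows the shape of the TLS 1.3 key schedule
but is obviously not a real handshake):

    import hashlib, hmac

    def hkdf_expand_label(secret, label, context, length):
        # Minimal HKDF-Expand with the TLS 1.3 HkdfLabel encoding.
        full = b"tls13 " + label
        info = length.to_bytes(2, "big") + bytes([len(full)]) + full \
               + bytes([len(context)]) + context
        out, block, counter = b"", b"", 1
        while len(out) < length:
            block = hmac.new(secret, block + info + bytes([counter]), hashlib.sha256).digest()
            out += block
            counter += 1
        return out[:length]

    handshake_secret = b"\x01" * 32                  # stand-in for the real secret
    client_hello = b"ClientHello + early_data ext"   # (3) binds the 0-RTT offer
    server_hello = b"ServerHello + fresh random"     # (5)(6) fresh per connection

    transcript = hashlib.sha256(client_hello + server_hello).digest()
    # (4)(5) the client handshake traffic secret incorporates ServerHello
    c_hs_secret = hkdf_expand_label(handshake_secret, b"c hs traffic", transcript, 32)
    # (1)(2) client Finished is a MAC over the transcript, keyed from that secret;
    # verifying it proves the sender saw *this* ServerHello, hence (7) no replay.
    finished_key = hkdf_expand_label(c_hs_secret, b"finished", b"", 32)
    client_finished = hmac.new(finished_key, transcript, hashlib.sha256).digest()
    print(client_finished.hex())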

By this point in the connection, there is proof that early_data has not
been replayed. The application doesn't necessarily know this when the early
data is first delivered, but it can find out later, which may be all that
some applications want. Clearly not all, as you point out:

> > Do we want to support the case in which 0-RTT failure means the client can
> > send entirely different data? If so, then the above isn't general enough,
> > but the client API could offer an option to say "don't resend this data if
> > 0-RTT fails" with some flag set on this condition or (for event systems) a
> > callback registered to do something more interesting.
>
> There's the case where ALP mismatches (and unfortunately, due to how
> ALPN and 0-RTT interact, mismatches can happen in cases other than
> just that 0-RTT is fundamentally impossible).
>
> In that case, the data is obviously different. Then there are also
> things like the planned 0-RTT 

Re: [TLS] Application layer interactions and API guidance

2016-10-10 Thread Kyle Rose
On Mon, Oct 10, 2016 at 1:49 PM, Watson Ladd  wrote:

>
> The problem is with poorly-behaved senders and attackers resending
> 0-RTT data. Receivers should be able to ensure side-effectful
> operations are not carried out by 0-RTT data. Making 0-RTT silent in
> APIs transforms an interoperability issue into a silent security
> issue. This is not a good idea.
>

+1.

FWIW, Patrick McManus made a pretty eloquent and convincing case in Berlin
that the web is substantially broken without retry logic in the browsers,
which naturally makes application-level replay mitigation a necessity. But I
don't think (nor do I think he claimed) that the same is true of all
protocols or systems that might use TLS. So while 0-RTT-obliviousness may
be okay for browsers in particular given the other constraints under which
they operate, it is probably not good to bake that into the API for the
general case.

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Industry Concerns about TLS 1.3

2016-09-22 Thread Kyle Rose
On Thu, Sep 22, 2016 at 1:19 PM, BITS Security <
bitssecur...@fsroundtable.org> wrote:

> Like many enterprises, financial institutions depend upon the ability to
> decrypt TLS traffic to implement data loss protection, intrusion detection
> and prevention, malware detection, packet capture and analysis, and DDoS
> mitigation.  Unlike some other businesses, financial institutions also rely
> upon TLS traffic decryption to implement fraud monitoring and surveillance
> of supervised employees.  The products which support these capabilities
> will need to be replaced or substantially redesigned at significant cost
> and loss of scalability to continue to support the functionality financial
> institutions and their regulators require.
>

I do not think this difficulty should be a consideration for TLS. These
capabilities can be enabled by the endpoint.

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] PR #604 Change "supported_groups" to "supported_kems"

2016-09-13 Thread Kyle Rose
To be honest, the purist in me likes the general idea here, though I think
I prefer "kex" as I'm used to that with SSH.

Then again, that isn't even quite correct, as the most popular mechanism is
DH, which is key agreement based on the exchange of inputs to a formula: no
keys are actually exchanged.

I fear we're bike shedding with this proposal. Across all of computing,
there are plenty of mechanisms that appear to be misnamed for the same
reason: unforeseen but reasonable extension beyond their
originally-intended applications.

The actual name in TLS is "10", so as long as we all agree on that and
understand the history behind "supported_groups", I'm not sure it matters.

Kyle

On Sep 13, 2016 2:27 PM, "William Whyte" 
wrote:

I'd like to just check and see if there are any objections to this PR.
There seems no reason to bake a particular cryptographic family into our
terminology. This is a low-cost change that will save us from looking silly
in a few years time.

Cheers,

William

On Tue, Sep 13, 2016 at 1:19 PM, Sean Turner  wrote:

> There appears to be no consensus to adopt the change proposed by this PR.
>
> The small consolation here is that the name+semantics for this extension
> has been changed once before and if the extension really needs to be
> renamed in 5-7 years we’ve got precedent for doing so.
>
> spt
>
> > On Aug 29, 2016, at 15:52, Zhenfei Zhang 
> wrote:
> >
> > Hi list,
> >
> >
> >
> > I have created a pull request
> >
> > https://github.com/tlswg/tls13-spec/pull/604
> >
> >
> >
> > I would like to suggest that we change the terminology "NamedGroup" to
> "KeyExchangeMethod".
> >
> >
> >
> > In [1], it is suggested that we redefine the syntax, which leads to the
> > separation of public key crypto and symmetric crypto during a handshake.
> > Because of this separation, new terminology was defined for key exchange
> > algorithms and authentication algorithms for public key crypto in the key
> > exchange extension. "NamedGroup" was used to refer to the underlying key
> > exchange parameters, which come from the "Supported Elliptic Curves" in
> > previous versions.
> >
> >
> >
> > The use of "NamedGroup" implicitly requests the key exchange algorithm to
> > be Diffie-Hellman type. While it is safe for now, it would be nice to have
> > some crypto agility and future proofing. It would make the transition to
> > other key exchange primitives (such as lattice based key exchange) or
> > methods (such as key encapsulation mechanisms) easier in the future, if we
> > do not restrict the key exchange by certain "Group".
> >
> >
> >
> > Knowing that NIST has planned to standardize quantum-safe cryptography
> > within 7 years (which can and should be accelerated), and that those
> > algorithms cannot be described in terms of "group", the current terminology
> > will be due for a redesign by then. So I would suggest changing
> > "NamedGroup" now rather than later.
> >
> >
> >
> >
> > Overall, this will have the following impact
> >
> >
> >
> > 1. HelloRetryRequest
> >
> >
> >
> > Change HelloRetryRequest structure to
> >
> >
> >
> > struct {
> >     ProtocolVersion server_version;
> >     KeyExchangeMethod selected_kem;
> >     Extension extensions<0..2^16-1>;
> > } HelloRetryRequest;
> >
> >
> >
> > 2. Negotiated Groups
> >
> >
> >
> > Throughout, change "supported_groups" to "supported_kems"; change
> > "NamedGroupList" to "KeyExchangeMethodList"; change "named_group_list" to
> > "kem_list"; change NamedGroup to KeyExchangeMethod
> >
> >
> >
> > 3. Key Share:
> >
> > Change KeyShareEntry structure to
> >
> >
> >
> > struct {
> >     KeyExchangeMethod kem;
> >     opaque key_exchange<1..2^16-1>;
> > } KeyShareEntry;
> >
> >
> > [1] https://github.com/ekr/tls13-spec/blob/15126cf5a08c445aeed97c0c25c4f10c2c1b8f26/draft-ietf-tls-tls13.md
> >
> >
> >
> > Thanks for your time.
> >
> >
> >
> > Zhenfei Zhang
> >
> > ___
> > TLS mailing list
> > TLS@ietf.org
> > https://www.ietf.org/mailman/listinfo/tls
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Version negotiation, take two

2016-09-13 Thread Kyle Rose
On Thu, Sep 8, 2016 at 12:04 PM, David Benjamin 
wrote:

> The major arguments against this change seem to be:
>
> 1. It’s inelegant to have two mechanisms.
> 2. We should fix broken servers
>

There's also:

3. Implementors will find a way to screw this up, too.

But if you follow through with your plan to have Chrome randomly add a
really high version to the list to smoke out servers that fail when they
see unsupported versions, it's plausible version intolerance could be in
the noise next time around.

If you time-limit that behavior for a particular version of Chrome (say, to
a few months), you could even have it randomly add the next version or
current version + 2 to the list to detect and report on selective
next-version intolerance. I'd say this kind of failure mode is unlikely,
but... Murphy's Law.
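
Sketched in Python (constants and probe rate made up), that client-side
probing might look like:

    import random

    SUPPORTED = [0x0304, 0x0303]           # TLS 1.3, TLS 1.2 (illustrative)
    BOGUS_HIGH = [0x0A0A, 0x5A5A, 0xDADA]  # junk values no server should pick
    NEXT_VERSIONS = [0x0305, 0x0306]       # "next" versions that don't exist yet

    def offered_versions(probe_rate=0.05):
        offer = list(SUPPORTED)
        if random.random() < probe_rate:
            # A tolerant server skips values it doesn't recognize and still
            # negotiates something it supports; a handshake failure here is
            # version intolerance worth reporting.
            offer.insert(0, random.choice(BOGUS_HIGH + NEXT_VERSIONS))
        return offer

    print([hex(v) for v in offered_versions(probe_rate=1.0)])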

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] 3DES diediedie

2016-09-08 Thread Kyle Rose
On Thu, Sep 8, 2016 at 12:46 PM, Yoav Nir  wrote:

>
> Good questions, no doubt. But I don’t think they can be answered.
>
> Someone is going to specify protocols and algorithms. This could be the
> IETF. This could be the ITU, or IEEE, or some other SDO. Makes no
> difference.
> Someone is going to implement these protocols and algorithms in some
> combination of hardware and software.
> Someone is going to combine this implementation with other parts like
> network stack, wired or wireless communications, memory to create a
> “brains” for IoT devices.
> Someone is going to build a sensor or an actuator that includes that
> “brains” plus hardware and software. This could be a lightbulb, and smoke
> detector, a temperature sensor.
> Someone is going to use this sensor or actuator as part of a system: a
> car, a refrigerator, an HVAC system, a door alarm.
> Someone is going to deploy these systems in a home, in a data center, in a
> plane, in a nuclear power plant.
>
> It’s that last step that determines how attractive this is going to be to
> attackers and what value is going to be protected by a device using these
> protocols and algorithms. And the last two steps determine how isolated the
> “thing” will be.
>

Not necessarily. Manufacturers may be able to push some of those isolation
properties higher into the list. E.g., if the silicon simply cannot speak a
more general protocol even if compromised, or has some other physical
constraint (e.g., data rate, CPU power, etc.) that limits its
expressiveness, you may be able to make provable statements about the
potential impact of compromise. We obviously can't impose those
restrictions on manufacturers, but I don't know if it's completely hopeless
to create standards or compile best practices for mitigating compromise of
insecure devices.

But this brings me back to my earlier post where I said this sounds like a
research problem: the IETF is not anywhere near being in the position of
making such recommendations. And I'm not convinced that this path is a
productive use of our time, when hardware being hardware means that
increased functionality (better crypto, upgradability) gets cheaper all the
time.

OTOH, manufacturers being manufacturers means that security and
upgradability are baked in only when users demand it, and users being users
means that no one demands either until it's too late. So it's possible
there's real value in doing the research and then creating safety standards
for classes of devices that seem to be designed to atrophy, but it's
probably more useful for that standardization to focus on network isolation
rather than on crypto since the crypto is only one small part of the TCB.

Is the era in which the marginal cost of upgradability is too expensive to
support a short one, or is this a problem we're going to be dealing with
for a long time? If the latter, then I'm starting to see some value in not
having everything be globally-routable or even talk IP.

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] 3DES diediedie

2016-09-08 Thread Kyle Rose
On Thu, Sep 8, 2016 at 1:53 AM, Peter Gutmann 
wrote:

> The only data point I have is that every time I've tried to disable DES in
> a new release (and by DES I mean single DES, not 3DES) I've had a chorus of
> complaints about it vanishing.

...

>  Alarms, for example, send data
> quantities measured in bytes, so some academic attack that would take 500
> million years to acquire the necessary data isn't going to lose anyone any
> sleep.  It's a nice piece of work, but you need to look at what practical
> effect it has on real, deployed systems...
>

To this class of examples, the problem seems to be less the implications
for security of the specific systems making use of weak crypto, and more
the effect the survival of weak options for crypto might have on other
systems. We don't want more general systems to be subject to attacks that
may not be applicable to the resource-constrained target systems, but that
requires us to answer a few questions about those constrained systems:

(1) Is the target system isolated, such that a compromise cannot either
leverage transitive trust to another system or provide an attacker a beach
head from which to attack (surreptitiously probe, etc.) other systems?

(2) Is the weak crypto being used by the target system in a way that
renders both the known and expected attacks inapplicable?

If we can answer #1 "yes", then all a user is dealing with is a device that
might malfunction in a well-defined/delineable way upon compromise, with no
impact on other systems. It might be hard to definitively answer because,
for instance, a light bulb malfunctioning might create a safety incident if
the light bulb is a control panel indicator for some problem, so I don't
want to minimize the difficulty of coming to a "yes" answer here.

If the answer to #1 is "no" or is too difficult to answer, however, then we
have to actually analyze the weaknesses in the crypto with respect to how
the device could be used.

Where I'm going with this is that #1 is going to be particularly difficult
to answer if the protocol making use of the weak crypto is TLS and the
device is connected to the internet, simply because the potential for
complex interactions is so high. The safest course may be to continue to
deprecate weak crypto for TLS, IPsec, etc. under the assumption that the
systems making use of those protocols are both powerful enough and
well-connected enough to cause a problem if compromised. Nothing stops
resource-constrained systems from continuing to use old implementations of
TLS that do support weak crypto, though I question the wisdom of producing
parts with no upgrade path that speak such a complex, general transport
protocol rather than something naturally 0-RTT in the steady state that
would use less CPU and power, have a smaller TCB, and not rely on an ASN.1
implementation for domain isolation (e.g., Kerberos).

Which leads directly into the issue of the potential for implementation
vulnerabilities, something that is probably even more likely to lead to
loss of control than weak crypto, and which may ultimately force users to
demand IoT devices for which the answer to #1 is always "yes". So I wonder
if all the worrying about weak crypto is just a red herring compared with
the exploits we are actually going to encounter.

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] 3DES diediedie

2016-09-01 Thread Kyle Rose
On Mon, Aug 29, 2016 at 5:00 AM, Hubert Kario  wrote:

>
> we have enough problems weeding out implementation mistakes in TLS, we don't
> need yet another protocol and two dozen implementations that come with it
>

Strongly agreed.

Focusing energy on getting "something" working for low-power devices is
putting the cart before the horse. Security has to be a primary objective
here, in the standards world in general and in CFRG in particular. We can
surely consider tradeoffs---more frequent key rotations, security
guarantees reduced in a well-defined way, shorter lifetimes for
credentials, etc.---but these should be explicitly chosen, not determined
after the fact based on what happened to be in our toolbox at the time.
Keeping 3DES around in a general-purpose protocol headed for
standardization in spite of the known problems with small block sizes is
almost certain to create more work in the coming years for everyone simply
to benefit implementors of systems for which security is clearly not the
primary concern.

From following the discussion, low power crypto seems like a research area
at this point, not an implementation effort. (Of course, the flaws in
whatever ill-advised schemes get implemented will generate their own
research efforts and inevitable transitive trust problems with supposedly
more-secure systems. Alas, we haven't yet figured out a way to keep people
from generating sufficient rope to hang themselves with.)

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Refactoring the negotiation syntax

2016-07-18 Thread Kyle Rose
On Mon, Jul 18, 2016 at 5:07 PM, Ilari Liusvaara 
wrote:

> > Tl;dr: I think this is a good approach because it eliminates much of the
> > existing negotiation redundancy/complexity and combinatorial explosion in
> > the 1.3+ ciphersuite menu.
>
> I recently tried to write code to handle this ciphersuite negotiation, so
> that it always negotiates a valid choice if any valid choice exists.
>
> I think the end result was insane. The problem that makes it so difficult
> is that legacy signature types, group types and protection/PRFs interact,
> so not all supported offers that server has enabled are actually possible
> to chose.
>

It may also be that combinatorial explosion in ciphersuites just isn't a
problem in reality: that there should be so few choices offered in the
standard, with those believed to be secure, and new ones introduced only
when they are actually needed, not simply when there is something shiny and
new.


> If you are talking about "strength-matching", there is no sane notion of
> security equivalence with protection, key exchange and signatures (due to
> different sub-threshold, multi-key and period properties).
> 
> IMO, no, it is not necessary to tie the security margins. In case of
> authentication and encryption, there is also the issue that authentication
> is good if it is unbroken at that moment, whereas key exchange and
> encryption
> must remain unbroken for long time since.
>
> So that someone thinks ECDSA P-256 is OK for authentication does not mean
> that the same entity thinks ECDHE P-256 is OK for key exchange. Or to think
> that 128-bit symmetric encryption is OK.
>

Yeah, this is a really good point. This argues for a complete separation of
authentication (over which the client has no control) from the rest of the
security parameters. In some sense, the KX and symmetric cipher are still
tied together as a break in either in the future will reveal confidential
data, but I'll concede the point that there is no sane way to tie security
margins between KX and cipher suite.


> And coupling those in a way that doesn't lead to great difficulty (even
> greater than currently) REALLY causes the ciphersuite space to
> combinatorially explode.
>

The combinatorial explosion I was referring to was that of consuming code
points for multiple combinations of things that have a separate negotiation
mechanism (e.g., SignatureAlgorithms). But maybe it's not a problem in
practice.

> > Or maybe prospective security margin is too complex and/or too ill-defined
> > to bother with, and a server should just choose the least CPU-intensive
> > combination of everything.
>
> Well, yeah, it is ill-defined. Of course, there can be bad algorithms
> that one wants to disable.
>
> However, interactions between bad algorithms are much less likely than
> actual bad algorithms causing problems by themselves.
>

I'm more concerned about the a la carte approach admitting bad ciphers or
PRFs that then get implemented naïvely by completists, but I suppose that
problem still exists when they are tied together. And maybe this won't be a
problem in practice in the presence of a "recommended" column in the
registry.

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLS client puzzles

2016-07-07 Thread Kyle Rose
> I agree, and I think it is clear that client puzzles can be a useful
> addition to the DDoS defense toolbox.  However, most of this can be handled
> at the higher levels above TLS, or possibly as a custom extension that does
> not complicate TLS.
>

A custom extension is a promising approach: this is what Erik Nygren
proposed in nygren-tls-client-puzzles-00 following discussions with some
IETF folks and Akamai colleagues. That draft has expired and doesn't
reference any of the recent work on memory-hard puzzles, but it might be a
good starting point.

Except at the pre-TLS stage for applications that use STARTTLS-style
mechanisms, I'm not sure how this could work at levels above TLS: the
primary attack targeted by client puzzles would be a client doing almost no
work in order to force the server to do expensive crypto, which means it
must be engaged prior to the handshake.
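
As a strawman of what such an extension could carry, a toy hash-preimage
puzzle (Python; the encoding and difficulty knob are invented, and real
proposals, including the memory-hard ones mentioned above, differ):

    import hashlib, os

    def issue_challenge():
        return os.urandom(16)

    def solve(challenge, difficulty):
        # Client-side: search for a nonce whose hash has `difficulty` leading
        # zero bits; expected work is about 2**difficulty hashes.
        nonce = 0
        while True:
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
                return nonce
            nonce += 1

    def verify(challenge, nonce, difficulty):
        # Server-side: one hash to check, done before any expensive handshake work.
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

    challenge = issue_challenge()
    nonce = solve(challenge, difficulty=16)
    assert verify(challenge, nonce, difficulty=16)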

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLS client puzzles

2016-07-06 Thread Kyle Rose
On Wed, Jul 6, 2016 at 4:23 PM, Hannes Tschofenig  wrote:

> the question for me is whether it will be an effective mechanism when
> many devices just do not support it (for a number of reasons)? For IoT
> devices the reason is simple: they don't have MBs of memory.
>
> Even the regular puzzle technique has the problem that you have to
> adjust the puzzle difficulty and what is a piece of cake for a desktop
> computer kills the battery of an IoT device.
>
> (And note that I am not saying that IoT devices aren't used for DDoS
> attacks.)
>

The point I was making earlier was simply that many web properties have
client population profiles that are overwhelmingly web browsers, and others
that are overwhelmingly IoT devices: client puzzles might be helpful on the
former, and useless on the latter. Objections that "IoT devices can't
handle client puzzles" don't apply to web properties with a web browser
client profile: it's like arguing that liver tastes bad when presented with
strawberry shortcake: yeah, that might be true... but it's not relevant.

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLS client puzzles

2016-06-29 Thread Kyle Rose
Let's finish that last sentence:

I have to think a lot more about the IoT/resource-constrained client
problem, but I still don't think the existence of clients that would be
denied service by this scheme renders the concept completely inapplicable.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLS client puzzles

2016-06-29 Thread Kyle Rose
On Wed, Jun 29, 2016 at 5:41 PM, Christian Huitema <huit...@microsoft.com>
wrote:

> On Wednesday, June 29, 2016 2:08 PM, Kyle Rose wrote:
> >
> > Raising the cost of requests has a similar problem in that you're punishing
> > every client, but in doing so you do allow all clients capable of absorbing
> > the increased cost (e.g., memory, computing power) to get access to the
> > resources they need if the user is willing to accept that cost (e.g.,
> > energy, latency).
>
> The obvious issue with the "proof of work" defense against DDOS is that
> the bot nets can do more work than many legitimate clients. The puzzle
> approach will cut off the least capable legitimate clients, such as old
> phones or IOT devices. It will not cut off the PC enrolled in a bot net. It
> will merely slow it down a little. But then, you could have the same effect
> by just delaying the response and enforcing one connection per client.
>

I agree with you that the above seems equivalent in theory, but in practice
it might not be feasible.

The biggest obstacle seems to be enforcing one connection per client. Let's
say rate limiting on a per-client basis doesn't work because many of your
clients are behind a NAT; or because the attacker is using IPv6 and
generates a ton of temporary addresses that make the situation
indistinguishable from many legitimate clients in the same subnet. So you
can either serve one (or a small N) of them at a time, or you drop that
restriction and allow a single client to mount an asymmetric attack.

Alternatively, what if you have lots of geographically-distributed servers
and can't share client rate limiting state among them quickly enough to
detect and blacklist attackers?

It's possible there are additional asymmetric attack vectors I'm not
thinking of, which is why I like this as a general defense against a class
of attacks. I mostly agree it's mostly worthless when you have one server
facing a botnet of 100,000 machines, but frankly that one server is a
sitting duck regardless of countermeasures. OTOH, what if you have 20,000
servers facing such a botnet? Client puzzles effectively become a mechanism
for enforcing distributed rate limiting, and could be used to dramatically
raise the cost of mounting such an attack.

I have to think a lot more about the IoT/resource-constrained client
problem, but I still don't think the existence of clients that would be  by
this scheme

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [FORGED] Re: no fallbacks please [was: Downgrade protection, fallbacks, and server time]

2016-06-07 Thread Kyle Rose
I'm a big fan of the idea of a very strict qualification suite, as well, to
try to head off some of these problems before (faulty) implementations
proliferate.

Hackathon?

Kyle
On Jun 7, 2016 2:00 AM, "Peter Gutmann"  wrote:

> Dave Garrett  writes:
>
> >Also, as with any new system, we now have the ability to loudly stress to
> >TLS 1.3+ implementers to not screw it up and test for future-proofing this
> >time around.
>
> I think that's the main contribution of a new mechanism, it doesn't really
> matter whether it's communicated as a single value, a list, or interpretive
> dance, the main thing is that there needs to be a single location where the
> version is given (not multiple locations that can disagree with each other as
> for TLS < 1.3), and the spec should include a pseudocode algorithm for dealing
> with the version data rather than just "implementations should accept things
> that look about right".
>
> Peter.
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls
>
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] PR #493: Multiple concurrent tickets

2016-06-04 Thread Kyle Rose
>> (1) In many cases, the client can handle this unilaterally. Are there
>> examples of this kind of ticket-relevant state change that the client would
>> not be aware of? When the client is aware of a state change (such as client
>> auth negotiation), it can purge any tickets received before the state
>> change.


> Sure. HTTP-level authentication via a Web form.
>

Would that typically feed into the ticket state? I'm thinking of something
like Rails, which (at least when I worked with it) maintained its own
session state separately from TLS session state, and signaled via cookie or
Auth header.


>> Correctness seems achievable either way, so I'm not sure a purge mechanism
>> (beyond expiration) is justified by this specific use case in isolation.
>> Are there other use cases for which server-initiated purge of classes of
>> session tickets would be helpful?
>>
>
> Unexpected key changes.
>

Rotation of the ticket-encrypting key? This seems like a good argument for
it, because it's something that's likely to happen regularly, and maybe
even on a schedule, and almost certainly across all clients simultaneously
(meaning "load spike" if the clients aren't guided to the right behavior).

In general, I like the idea of having a way to purge state beyond simply
expiration; I wonder if generation is the right level of specificity, or if
it's too general and we should reopen the expiration time vs. issue
time+TTL discussion and allow purging by issue time. (I'm not actually
suggesting the latter.) But I also don't have a strong opinion about the
balance between complexity and performance benefit here.

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] PR #493: Multiple concurrent tickets

2016-06-04 Thread Kyle Rose
On Sat, Jun 4, 2016 at 1:15 PM, Eric Rescorla  wrote:

> I don't think it's principally about discarding keying material, but
> rather about allowing the server to attach state to a ticket and then have
> predictable behavior. Consider the obvious case of post-handshake client
> auth (which I know you hate) and a client which has tickets issued before
> and after the auth event. If it tries to use them both, that's going to be
> annoying (though I agree, not fatal).
>

I have several thoughts:

(1) In many cases, the client can handle this unilaterally. Are there
examples of this kind of ticket-relevant state change that the client would
not be aware of? When the client is aware of a state change (such as client
auth negotiation), it can purge any tickets received before the state
change.

(2) If the ticket isn't encrypted data but is instead just an index into
full session state on the server, the ticket might actually still be valid
for state changes that occur after the ticket was handed out. I have no
idea how common this brand of session caching is likely to be, but I'm
guessing "not very".

(3) Tickets already allow the server to encode state: the proposal here
seems to be about revealing additional ticket semantics to the client. The
server could after all encode the generation in the (encrypted) ticket and
then use that to reject old tickets: this results in more full handshakes,
but it would eliminate weird behavior when clients use old tickets.
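
A sketch of (3) in Python, with an HMAC'd blob standing in for the real
encrypted ticket and invented field names:

    import hmac, hashlib, json, os

    TICKET_KEY = os.urandom(32)
    current_generation = 1

    def issue_ticket(resumption_secret_id):
        # The generation rides inside the (here merely authenticated, in
        # reality encrypted) ticket, so the client never needs to see it.
        body = json.dumps({"gen": current_generation, "psk": resumption_secret_id}).encode()
        mac = hmac.new(TICKET_KEY, body, hashlib.sha256).digest()
        return body + mac

    def accept_ticket(ticket):
        body, mac = ticket[:-32], ticket[-32:]
        if not hmac.compare_digest(mac, hmac.new(TICKET_KEY, body, hashlib.sha256).digest()):
            return None
        state = json.loads(body)
        # Post-handshake client auth (or any state change) bumps the generation;
        # older tickets then fall back to a full handshake instead of resuming.
        return state if state["gen"] == current_generation else None

    t = issue_ticket("psk-123")
    current_generation += 1          # e.g. client auth happened
    print(accept_ticket(t))          # None -> full handshake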

Correctness seems achievable either way, so I'm not sure a purge mechanism
(beyond expiration) is justified by this specific use case in isolation.
Are there other use cases for which server-initiated purge of classes of
session tickets would be helpful?

Kyle
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Resumption ticket/PSK

2016-05-19 Thread Kyle Rose
I've modified the branch to use your wording. As Viktor said, it
doesn't address his objection, but it's still a more precise starting
point for further discussion.

Kyle

On Thu, May 19, 2016 at 4:37 PM, Martin Thomson
 wrote:
> On 19 May 2016 at 16:01, Viktor Dukhovni  wrote:
>> Nevertheless, some clients may want to attempt to gain fine-grained
>> protection against correlating back to back or parallel resumption
>> requests.  For this they'd have to ensure that all session tickets
>> are single use, and either perform new handshakes when increasing
>> the number of parallel connections to the server, or somehow obtain
>> more than one ticket within a single session.
>
> I believe that this is the intent of the PR.  I've suggested an
> alternative wording that cleaves closer to your text above.
>
> ___
> TLS mailing list
> TLS@ietf.org
> https://www.ietf.org/mailman/listinfo/tls

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Resumption ticket/PSK

2016-05-19 Thread Kyle Rose
On Thu, May 19, 2016 at 3:19 PM, Viktor Dukhovni  wrote:

> It is good enough.  Clients that want strong protection against
> tracking by session ids can disable session caching entirely, or
> set an idle timeout of ~5 seconds, ensuring that session re-use
> happens only for a quick burst of connections to the same server.
>
> This is only relevant to a particular type of client, and should
> not be default protocol behaviour.

I suspect the root of arguments for/against this proposal is
philosophical more than technical. I disagree with your contention
above that client-behavior-only is sufficient, because my definition
of "sufficient" includes something about resource usage on the server
in pursuit of the privacy desires of the client, which you implicitly
admit to below:

> If some clients desperately want a fresh ticket with every resumption,
> I think they should explicitly signal that via an optional new
> extension.  I'd have no objection to a zero-length payload extension
> that indicates that a fresh ticket is desired even if the session
> is resumed.

This IMO *would* be good enough from a technical standpoint.

I can't give you a technical argument for my proposal, because the
question fundamentally isn't a technical one: it's a philosophical one
regarding which properties we want this protocol to have by default. I
can only suggest a requirement that TLS, as a privacy protocol, must
at the very least not add to the ability to track clients. Reuse of
session tickets violates this requirement by adding an identifier
transmitted in the clear that allows a passive observer to track a
client much more easily than it would otherwise be able to do.

> I don't see how constantly generating and transmitting new tickets
> helps the server, or helps clients (at fixed network addresses)
> that don't need this protection.  Just a waste of bandwidth and
> needless churn in the client's session cache.

I'm not sympathetic to the bandwidth argument, given how typically
small these things are. I'm also not sympathetic to concerns about
generation, as session ticket generation is logically limited to
low-cost crypto operations (i.e., not public key crypto) that are
probably not significant in comparison to the rest of the key
derivation schedule: certainly, in the typical case thousands or tens
of thousands can be generated for the cost of a single extra PKC
operation. And nothing about this proposal suggests that a client MUST
not use a session ticket more than once, only that the server offer it
the opportunity to resume a session without doing so.

> There are clients where limiting ticket re-use makes sense, these
> clients can take appropriate measures.

My goal is to make it easy for the client to do this in a way that is
efficient for the server. A client can already simply turn off session
caching to gain added privacy, but there are ways to tweak
implementations to help resource-constrained servers, as well.

Kyle

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Resumption ticket/PSK

2016-05-19 Thread Kyle Rose
Regarding the ability for passive observers' tracking of clients
across connections (and potentially across IPs) via a session ticket
used more than once, should there be any language around recommended
practice here, especially for clients?

An appropriately-configured server can help the client avoid this
problem without performance penalty by issuing a new session ticket on
every connection (for non-overlapping handshakes) and/or multiple on
one (to cover that gap), and a client can help by keeping only the
most recent ticket for a particular session and/or using a given
ticket only once.
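
On the client side that could be as simple as the following sketch (Python,
hypothetical ticket store):

    from collections import defaultdict, deque

    class TicketStore:
        def __init__(self, max_per_server=4):
            self._tickets = defaultdict(deque)
            self._max = max_per_server

        def on_new_session_ticket(self, server, ticket):
            q = self._tickets[server]
            q.append(ticket)
            while len(q) > self._max:
                q.popleft()                 # keep only the newest tickets

        def take_for_resumption(self, server):
            q = self._tickets[server]
            return q.pop() if q else None   # single use: removed once offered

    store = TicketStore()
    store.on_new_session_ticket("example.com", b"ticket-1")
    store.on_new_session_ticket("example.com", b"ticket-2")
    print(store.take_for_resumption("example.com"))   # b"ticket-2"
    print(store.take_for_resumption("example.com"))   # b"ticket-1"
    print(store.take_for_resumption("example.com"))   # None -> full handshake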

Thoughts on adding language under "Implementation Notes" such as:

"Clients concerned with privacy against tracking by passive observers
SHOULD use a PSK/session ticket at most once. Servers SHOULD issue
more than one session ticket per handshake, or issue a new session
ticket on every resumption handshake, to assist in the privacy of the
client while maintaining the performance advantage of session
resumption."

For pure PSK I assume tracking is less of an issue, but I'm happy to
entertain thoughts there, as well.

Kyle

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Headerless records (was: padding)

2015-08-25 Thread Kyle Rose
 uint16 length = TLSPlaintext.length;

 You can't recover the plaintext without knowing how long it is.  This
 part at a minimum needs to be in the clear.  At which point you need
 it to be based on TLSCiphertext.length

Is that really true? You could decrypt the first block/few bytes to
get the length (without authentication, of course) and then decrypt
the remainder according to this candidate length. Then authenticate
the entire record to make sure the candidate length was correct.

(I am not claiming anything about the purity of this approach, only
that it is technically feasible.)
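
A toy of that order of operations (Python, with a SHA-256 keystream standing
in for the real cipher; deliberately not a secure construction, just the
shape):

    import hashlib, hmac, os

    KEY = os.urandom(32)

    def keystream(n, nonce):
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(KEY + nonce + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n]

    def seal(payload, nonce):
        # Headerless record: 2-byte length + payload, XORed, then MACed.
        plain = len(payload).to_bytes(2, "big") + payload
        ct = bytes(a ^ b for a, b in zip(plain, keystream(len(plain), nonce)))
        return ct + hmac.new(KEY, nonce + ct, hashlib.sha256).digest()

    def open_record(record, nonce):
        # Decrypt just the first two bytes to learn a *candidate* length...
        ks = keystream(2, nonce)
        cand_len = int.from_bytes(bytes(a ^ b for a, b in zip(record[:2], ks)), "big")
        ct, mac = record[:2 + cand_len], record[2 + cand_len:2 + cand_len + 32]
        # ...and only trust it once the MAC over the whole record verifies.
        if not hmac.compare_digest(mac, hmac.new(KEY, nonce + ct, hashlib.sha256).digest()):
            raise ValueError("bad record")   # candidate length wrong or forged
        plain = bytes(a ^ b for a, b in zip(ct, keystream(len(ct), nonce)))
        return plain[2:]

    nonce = os.urandom(12)
    print(open_record(seal(b"hello", nonce), nonce))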

Kyle

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls