Re: [TLS] Enforcing Protocol Invariants

2018-11-19 Thread Hannes Tschofenig

Hi Ryan,

Reading through your email and the subsequent exchanges, I am surprised that you show up right after years of TLS 1.3 standardization. Suggesting ideas at the right time matters.

Later in your emails you explain what you consider complex in TLS, and some of the ideas you suggest are alternative design approaches. What standardization helps you do is work out the details of these different proposals and then compare the approaches against each other.

> The state of cyber security is a horrible disappointment.

... and most (if not all) of that disappointment has nothing to do with TLS.

Ciao
Hannes


Sent: Thursday, 8 November 2018, 09:44
From: "Ryan Carboni" 
To: tls@ietf.org
Subject: [TLS] Enforcing Protocol Invariants

Hmm. TLS has gotten too complex. How does one create a new protocol? Maybe we should ask Google.
 

The SSHFP DNS record exists. DNSSEC exists.

 

This might be a radical proposal, but maybe the certificate hash could be placed in a DNS TXT record. In another DNS TXT record, a list of supported protocols could be listed.

A DNS SRV record would define the port one uses to connect to a service, because the URL scheme died after .onion was recognized as a domain and after Chromium decided that the presentation of subdomains was unimportant. So no browser has to show which port it is connected to.

Although, to be radical, all anyone needs is RSA-2048, ephemeral DH-3072, and SHAKE-128 as AEAD.

And maybe recommend that boot entropy could be obtained by using the timer entropy daemon for one second (and which would in theory provide enough entropy for perpetuity).

 

This isn’t rocket science. The state of cyber security is a horrible disappointment.
___ TLS mailing list TLS@ietf.org https://www.ietf.org/mailman/listinfo/tls






Re: [TLS] Enforcing Protocol Invariants

2018-11-18 Thread Christopher Wood
On Sun, Nov 18, 2018 at 1:52 PM Viktor Dukhovni  wrote:
>
>
>
> > On Nov 18, 2018, at 4:27 PM, Salz, Rich  wrote:
> >
> >>   [ I don't know why you would choose to argue this point, let's not
> >>  confuse TLS with the CA/B forum WebPKI in browsers.  My post was
> >>  about TLS.
> >
> > I am not.  You say TLS is CA/B WebPKI.
>
> No, I specifically say that TLS *is not* CA/B WebPKI.
> The OP to whom I responded was comparing WebPKI to
> DNSSEC, so my response was about WebPKI and its use
> in TLS (which also supports other models).
>
> Anyway, this is way off topic.  I've made my points,
> and stand by them.  I think we're done.

Yes, this is off topic. Let’s please leave this thread here and focus
on Ryan’s original post.

Thanks,
Chris (chair hat on)



Re: [TLS] Enforcing Protocol Invariants

2018-11-18 Thread Salz, Rich
> [ I don't know why you would choose to argue this point, let's not
>   confuse TLS with the CA/B forum WebPKI in browsers.  My post was
>   about TLS.
I am not.  You say TLS is CA/B WebPKI. I say TLS favors X509 CA-trust model, 
and in fact has it as its default.  For example, in TLS 1.3 page 44 (text 
format):
   "The signatures on certificates that are self-signed or certificates
   that are trust anchors are not validated, since they begin a
   certification path (see [RFC5280], Section 3.2)."

Section D.3, implementation notes in SSLv3 RFC 6101
   "Certificates should always be verified to ensure proper
   signing by a trusted certificate authority (CA).  The selection and
   addition of trusted CAs should be done very carefully.  Users should
   be able to view information about the certificate and root CA."
The glossary says this about certificates:
   "As part of the X.509 protocol (a.k.a.  ISO
  Authentication framework),"

> However, since bashing
>   DNSSEC is a popular sport, I may for the record, now and then
>   post corrections to messages that mischaracterize DNSSEC. ]
  
I said nothing about DNSSEC, certainly nothing that could remotely be taken as 
bashing.
  
> The X.509 trust-anchors are NOT specified in TLS, and need not be
> used.

They are specified, and if provided, how they are used is also specified and 
has been from the beginning, through and including TLS 1.3, as I showed via the 
excerpts above.

> The existing X.509 encapsulation
> works just fine, and makes it possible to transparently interoperate
> with both DANE and CA/B forum WebPKI or other PKIX peers.

The existing encoding could be used just fine, just indicate that this is a 
DANE-validated cert.  Then it will be clear and obvious to everyone how to 
validate it. Complex combinations are hard to reason about, cannot be intuited 
by the protocol but must be done by reading code and configuration, etc.  A 
DANE client could specify DANE in the acceptable cert-type, which is encoded as 
X509. I believe this approach was considered, but rejected. I was trying to 
offer advice on how that draft might be edited and brought forward more 
productively if the authors want to try again in the general sense of enabling 
DANE-style trust within TLS generally.  I don't care, I have no horse in that 
race.

/r$




Re: [TLS] Enforcing Protocol Invariants

2018-11-18 Thread Viktor Dukhovni
On Sun, Nov 18, 2018 at 02:30:53PM +, Salz, Rich wrote:

> > [ FWIW, TLS is trust-model agnostic, it is the WebPKI that uses the
> >   usual panoply of CAs. ]
>  
> No, it is not agnostic.  It does support other trust models -- raw keys,
> PGP web-of-trust -- but its default and primary model from its inception
> is X509 and its (so ingrained you might consider it implicit) "trust the
> issuer" model.  Look at the definitions of certs and CA's and things like
> path validation in PKIX and its predecessors, the "trust anchors" which
> builds on the chain model -- chained through *issuers* -- in the protocol,
> and so on.

[ I don't know why you would choose to argue this point, let's not
  confuse TLS with the CA/B forum WebPKI in browsers.  My post was
  about TLS.

  My post was emphatically not an attempt to revive the DANE chain
  discussion here, and was not even about DANE, nobody is looking
  forward to reopening that discussion here.  However, since bashing
  DNSSEC is a popular sport, I may for the record, now and then
  post corrections to messages that mischaracterize DNSSEC. ]

The X.509 trust-anchors are NOT specified in TLS, and need not be
used.  Long before DANE, Postfix supported "fingerprint" authentication,
especially for email submission clients, which bypassed the WebPKI
and never looked at the certificate issuer.  Peter Gutmann may
apprise you of similar usage in industrial automation.
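For concreteness, the Postfix "fingerprint" level mentioned above is configured roughly as follows. This is a hedged sketch: the parameter names are Postfix's, but the digest value shown is a placeholder that must be replaced with the actual server certificate's digest.

```
# main.cf -- pin the remote SMTP server's certificate by digest;
# no CA or issuer is consulted at this security level.
smtp_tls_security_level = fingerprint
smtp_tls_fingerprint_digest = sha256
# Placeholder value; substitute the real certificate digest.
smtp_tls_fingerprint_cert_match = 51:e9:...:2e
```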

> The "usual panoply of CAs" is the WebPKI instantiation of a trust model,
> but do not confuse it with the trust model itself. I have deployed several
> instances of the X509/PKI trust model at work, and none of them use a
> conventional WebPKI set of anchors.

This was explicitly acknowledged and discussed in more detail in
the message you're responding to.  Yes, PKIX is not always the
WebPKI, but in practice it typically is.

> If DANE-TLS is to come back, the authors should use a new TLS certificate
> type that is perhaps an X509 structure, but whose trust semantics are
> defined by DANE. The recent IEEE vehicle cert did similar, and all it took
> was a couple of pieces of email.

There is simply no need for that.  The existing X.509 encapsulation
works just fine, and makes it possible to transparently interoperate
with both DANE and CA/B forum WebPKI or other PKIX peers.

For clients that can do DNSSEC lookups directly and reliably, DANE
works in TLS with no friction.  With DANE-TA(2) and DANE-EE(3) you
get a different trust model, and DANE-TA(2) does use "trust the
issuer", but the ultimately trusted issuer is delivered via DNSSEC
TLSA records.

DANE does not have to "come back".  It is in use today, enabling
authenticated SMTP to over 337 thousand email domains and growing:

http://stats.dnssec-tools.org/#graphs

DANE TLS for SMTP is supported by Postfix, Exim, Halon, PowerMTA,
Cisco ESA, ...  There are also consumer email providers with millions
of users that employ DANE in both directions: comcast.net, web.de,
gmx.de, freenet.de, ... and more planned.

-- 
Viktor.



Re: [TLS] Enforcing Protocol Invariants

2018-11-18 Thread Salz, Rich
> [ FWIW, TLS is trust-model agnostic, it is the WebPKI that uses the
>   usual panoply of CAs. ]
 
No, it is not agnostic.  It does support other trust models -- raw keys, PGP 
web-of-trust -- but its default and primary model from its inception is X509 
and its (so ingrained you might consider it implicit) "trust the issuer" model. 
 Look at the definitions of certs and CA's and things like path validation in 
PKIX and its predecessors, the "trust anchors" which builds on the chain model 
-- chained through *issuers* -- in the protocol, and so on.

The "usual panoply of CAs" is the WebPKI instantiation of a trust model, but do 
not confuse it with the trust model itself. I have deployed several instances 
of the X509/PKI trust model at work, and none of them use a conventional WebPKI 
set of anchors.

If DANE-TLS is to come back, the authors should use a new TLS certificate type 
that is perhaps an X509 structure, but whose trust semantics are defined by 
DANE. The recent IEEE vehicle cert did similar, and all it took was a couple of 
pieces of email.





Re: [TLS] Enforcing Protocol Invariants

2018-11-17 Thread Viktor Dukhovni
> On Nov 17, 2018, at 6:07 AM, Lanlan Pan  wrote:
> 
> And TLS's distributed certificate exchange may be better than DNSSEC's
> centralized trust anchor.

In principle, yes, when one carefully selects just the appropriate
trust anchor(s) for a given task.  Some applications do use specific
trust-anchors (internal corporate CAs) at least some of the time.

[ FWIW, TLS is trust-model agnostic, it is the WebPKI that uses the
  usual panoply of CAs. ]

In practice, one generally uses the Mozilla or similar trust bundle,
and so it is still centralized, except that now the attacker has a
choice of multiple central authorities to compromise.

So most of the time the WebPKI is weaker, but you sometimes have
a choice when you can limit the set of peers with which you need
to communicate.

With DNSSEC validating resolvers can also configure trust-anchors
at any point in the tree, which also allows for internal corporate
trust-anchors, and if some TLD or similar followed the RFC5011 key
rollover process used at the root, one could also track the TLD's
keys independently of the delegation from ICANN, but AFAIK this is
not presently a common TLD practice.

-- 
Viktor.



Re: [TLS] Enforcing Protocol Invariants

2018-11-17 Thread Lanlan Pan
Personally I think the low rate of DNSSEC deployment on SLD authoritative
servers and recursive resolvers is the problem.

And TLS's distributed certificate exchange may be better than DNSSEC's
centralized trust anchor.

Ryan Carboni wrote on Thu, 8 Nov 2018 at 16:44:

> Hmm. TLS has gotten too complex. How does one create a new protocol? Maybe
> we should ask Google.
>
> The SSHFP DNS record exists. DNSSEC exists.
>
> This might be a radical proposal, but maybe the certificate hash could be
> placed in a DNS TXT record. In another DNS TXT record, a list of supported
> protocols could be listed.
> A DNS SRV record would define the port that one can use to connect to a
> service, because the URL scheme died after .onion was recognized as a
> domain and after Chromium decided that the presentation of subdomains
> was unimportant. So no browser has to show which port it is connected to.
> Although, to be radical, all anyone needs is RSA-2048, ephemeral DH-3072,
> and SHAKE-128 as AEAD.
> And maybe recommend that boot entropy could be obtained by using the timer
> entropy daemon for one second (and which would in theory provide enough
> entropy for perpetuity).
>
> This isn’t rocket science. The state of cyber security is a horrible
> disappointment.
>
-- 
Best Regards

Pan Lanlan


Re: [TLS] Enforcing Protocol Invariants

2018-11-16 Thread Hubert Kario
On Tuesday, 13 November 2018 00:13:58 CET Viktor Dukhovni wrote:
> [ I agree that this thread is off topic for this WG, thus below
>   just a short OT aside on some oft-repeated critiques of DNSSEC. ]
> 
> > On Nov 12, 2018, at 2:15 PM, Tony Arcieri  wrote:
> > 
> > The cryptography employed by the X.509 PKI is substantially more modern
> > than what's in DNSSEC. Much of DNSSEC's security comes down to 1024-bit
> > or 1280-bit RSA ZSKs.
> It is true that while the KSKs tend to be 2048-bit RSA, ZSKs are typically
> 1024-bits or 1280-bits.
> 
>   http://stats.dnssec-tools.org/#keysize
> 
> That said, all the TLDs are using 2048-bit KSKs, and we're seeing
> increasing adoption of ECDSA in DNSSEC:
> 
>   http://stats.dnssec-tools.org/#parameter
> 
> and the .CZ and .BR TLDs switched to ECDSA this year, and more will likely
> follow.
> 
> Furthermore, the weakest link in the chain for both WebPKI and DNSSEC is not
> the cryptography.  Rather it is operational weaknesses in the enrollment
> processes.
> 
> For WebPKI, we basically have TOFU by the CA based on apparent
> unauthenticated control of a TCP endpoint as the basis of certificate
> issuance, occasionally strengthened via DNSSEC(!) validated CAA records
> and/or ACME challenges.

but that TOFU is global, not local to a client

-- 
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 115, 612 00  Brno, Czech Republic



Re: [TLS] Enforcing Protocol Invariants

2018-11-12 Thread Viktor Dukhovni
[ I agree that this thread is off topic for this WG, thus below
  just a short OT aside on some oft-repeated critiques of DNSSEC. ]

> On Nov 12, 2018, at 2:15 PM, Tony Arcieri  wrote:
> 
> The cryptography employed by the X.509 PKI is substantially more modern than 
> what's in DNSSEC. Much of DNSSEC's security comes down to 1024-bit or 
> 1280-bit RSA ZSKs.

It is true that while the KSKs tend to be 2048-bit RSA, ZSKs are typically
1024-bits or 1280-bits.

http://stats.dnssec-tools.org/#keysize

That said, all the TLDs are using 2048-bit KSKs, and we're seeing
increasing adoption of ECDSA in DNSSEC:

http://stats.dnssec-tools.org/#parameter

and the .CZ and .BR TLDs switched to ECDSA this year, and more will likely 
follow.

Furthermore, the weakest link in the chain for both WebPKI and DNSSEC is not the
cryptography.  Rather it is operational weaknesses in the enrollment processes.

For WebPKI, we basically have TOFU by the CA based on apparent unauthenticated
control of a TCP endpoint as the basis of certificate issuance, occasionally
strengthened via DNSSEC(!) validated CAA records and/or ACME challenges.

For DNSSEC, the domain administrator actually has login credentials at the
registrar, and domain control does not require a leap of faith: it is a
fundamental fact of the registrant/registrar bilateral relationship, with
no third party trying to bootstrap trust from indirect evidence.

What's more anyone who can compromise the registrar account and take over your
DNS can quickly obtain a WebPKI certificate as a trophy of their 
accomplishment. :-)

So the WebPKI picture isn't especially better; there are pros and cons in each
space.

> Furthermore DNSSEC deployment in general lags behind the X.509 PKI 
> significantly.
> In general attempts to bolster browser security with DNSSEC have failed due to
> DNSSEC misconfigurations or outages.

Certificate renewals are also botched from time to time, but we've not abandoned
the WebPKI.  The main barriers to DNSSEC are last-mile issues, and poor support
for DS record enrollment at some registrars.  Zone signing tools have been
difficult to use, but are much improved lately.

-- 
Viktor.



Re: [TLS] Enforcing Protocol Invariants

2018-11-12 Thread Daniel Kahn Gillmor
On Thu 2018-11-08 18:31:28 -0800, Ryan Carboni wrote:
> Encrypting common knowledge is cargo cult fetishism for cryptography. The
> files could be sent unencrypted, and protected using subresource integrity.
> If you are sharing the same data to multiple second parties to serve to a
> single third party, the value of encryption is less than zero.

I agree that the widespread move to CDNs makes those CDNs a point of
vulnerability and potential compromise.

But from a harm reduction point of view, encrypting data that transits a
CDN does protect that traffic from surveillance by non-CDN network-based
adversaries.

There is more research and development work to be done to make that
protection even more robust: anti-traffic analysis work, for example.
But simply reverting to cleartext would be a mistake.

Ryan, your posts in this thread suggest an understandable frustration
with cryptographic deployment on the public Internet, and perhaps an
even more understandable frustration with cryptographic *deprecation* on
the public Internet.  However, the web suffers from the same two
problems as much of the public Internet: the curse of the deployed base,
and a small but non-negligible fraction of confused, interfering
middleboxes.

I love proposals that happily ignore these problems, because they tend
to be elegant, and more often correct than janky old stuff!  But, if we
want our protocol designs to actually eventually replace old, worse
protocol designs, we need to look at deployment/upgrade/deprecation
paths, which involves a *lot* of ugliness -- the main job of the TLS WG,
afaict.  Otherwise, our beautiful new designs will get rolled out, and
will simply co-exist alongside the old brokenness :/

 --dkg



Re: [TLS] Enforcing Protocol Invariants

2018-11-10 Thread Eric Rescorla
On Fri, Nov 9, 2018 at 10:20 AM Ryan Carboni  wrote:

> Okay, a modern browser connecting to a server owned by billion dollar
> corporations are able to implement the latest version of TLS, I’ll concede
> that. Regardless, I can only underline one point: any new protocol needs to
> break both compatibility and be downgradable, and require a small amount of
> code. It probably wasn’t wrong for the average browser implementation to
> downgrade upon connection failure before, it certainly seem more sound than
> any gritty details of recent protocol design.
>
> Furthermore, TLS 1.2 is perfectly fine, and so is TLS 1.3 by everyone’s
> statements. If so, a new protocol has no need to quickly replace either one
> of them, but instead have a high likelihood of being superior and simpler,
> and performs better according to current design of the internet.
>

This thread seems like it has drifted afield of the TLS WG, which is
chartered to work on TLS.

-Ekr

> And possibly list recommendations for how out of scope issues could be
> resolved in a subsection for the inevitable RFC describing it. Boot entropy
> can be solved by increasing boot times by one second. Reminders of various
> Javascript functions to ensure authenticity. Etc.
>
> Google’s idea to rush out experimental protocols looks disgusting to me.


Re: [TLS] Enforcing Protocol Invariants

2018-11-09 Thread Eric Mill
On Thu, Nov 8, 2018 at 9:31 PM Ryan Carboni  wrote:

> On Thursday, November 8, 2018, Eric Rescorla  wrote:
>
>>  It's also worth noting that in practice, many sites are served on
>> multiple CDNs which do not share keying material.
>>
>
> Encrypting common knowledge is cargo cult fetishism for cryptography. The
> files could be sent unencrypted, and protected using subresource integrity.
> If you are sharing the same data to multiple second parties to serve to a
> single third party, the value of encryption is less than zero.
>

This misunderstands the utility and deployability of SRI. SRI is based on
hashing data exactly, and so sites can only practically use it for files
that do not change (e.g. jQuery x.y.z) and not services that do change
(e.g. an analytics service, or really any live service). Encryption in
transit for public files, between services operated by separate entities,
is a practical necessity to preserve integrity.
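The point about SRI only fitting immutable files can be made concrete with a short sketch. This is an illustration, not a browser implementation: it computes an SRI `integrity` attribute value (sha384, base64-encoded) from resource bytes, and the sample script contents are made up.

```python
import base64
import hashlib

def sri_integrity(body: bytes, alg: str = "sha384") -> str:
    """Compute a Subresource Integrity value such as "sha384-..." for
    use in a <script integrity="..."> attribute."""
    digest = hashlib.new(alg, body).digest()
    return f"{alg}-{base64.b64encode(digest).decode('ascii')}"

# A pinned hash only matches the exact bytes it was computed over, so
# SRI suits static files (jQuery x.y.z) but not live, changing responses.
pinned = sri_integrity(b"console.log('static library build');")
assert sri_integrity(b"console.log('static library build');") == pinned
assert sri_integrity(b"console.log('changed response');") != pinned
```

Because any byte-level change invalidates the hash, a site would have to republish its HTML on every change to the referenced resource, which is exactly why SRI cannot stand in for transport integrity on live services.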


Re: [TLS] Enforcing Protocol Invariants

2018-11-09 Thread Patrick Mevzek





On 2018-11-08 20:41 -0500, Jim Reid wrote:

> On 8 Nov 2018, at 08:44, Ryan Carboni  wrote:
>
>> This might be a radical proposal, but maybe the certificate hash could be
>> placed in a DNS TXT record.

[..]

> If you need to put this hash in the DNS, you might as well get a type code
> assigned for a specific RR to do that.

Which is exactly what TLSA records are for (RFC 6698), and its type 3:

3 -- Certificate usage 3 is used to specify a certificate, or the
  public key of such a certificate, that MUST match the end entity
  certificate given by the server in TLS.  This certificate usage is
  sometimes referred to as "domain-issued certificate" because it
  allows for a domain name administrator to issue certificates for a
  domain without involving a third-party CA.  The target certificate
  MUST match the TLSA record.  The difference between certificate
  usage 1 and certificate usage 3 is that certificate usage 1
  requires that the certificate pass PKIX validation, but PKIX
  validation is not tested for certificate usage 3.
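The usage-3 matching step quoted above can be sketched in a few lines. This is a hedged illustration, not a DANE validator: the certificate bytes below are placeholders, the DNSSEC-validated TLSA lookup (e.g. for `_443._tcp.example.com`) is assumed to have already happened, and only selector 0 / matching type 1 (SHA-256 over the full certificate, a "3 0 1" record) is handled.

```python
import hashlib

def tlsa_matches(cert_der: bytes, tlsa_assoc_data: bytes,
                 selector: int = 0, matching_type: int = 1) -> bool:
    """Check a DANE-EE(3) TLSA association against the end-entity
    certificate presented in TLS.

    selector 0 = full certificate; matching type 1 = SHA-256 of it.
    (Selector 1, SubjectPublicKeyInfo, would need DER parsing; omitted.)
    """
    if selector != 0 or matching_type != 1:
        raise NotImplementedError("sketch covers selector 0, matching type 1")
    return hashlib.sha256(cert_der).digest() == tlsa_assoc_data

# Hypothetical stand-in for real DER certificate bytes:
cert = b"0\x82\x01\x00-dummy-der-bytes"
# The association data a "3 0 1" TLSA record would carry for this cert:
record = hashlib.sha256(cert).digest()

assert tlsa_matches(cert, record)          # usage 3: direct match, no PKIX
assert not tlsa_matches(b"other", record)  # any other certificate fails
```

Note how, per the quoted RFC text, no PKIX chain building occurs for usage 3: the record either matches the presented certificate or it does not.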
--
Patrick Mevzek



Re: [TLS] Enforcing Protocol Invariants

2018-11-08 Thread Viktor Dukhovni
> On Nov 8, 2018, at 9:51 PM, Eric Rescorla  wrote:
> 
> I don't know what you consider "widespread", but presently both Chrome and 
> Firefox support TLS 1.3, and our data shows that about 5% of Firefox 
> connections use TLS 1.3. Chrome's numbers are similar and the numbers from 
> the server side perspective are higher (last time I checked, Facebook was 
> reporting > 50% TLS 1.3).

Even my SMTP submission server is seeing its fair share of TLS 1.3
connections, from the iPhones of some of its users...

The long tail of TLS 1.2 will persist for quite some time, but TLS
1.3 adoption ramp-up is happening.

-- 
Viktor.



Re: [TLS] Enforcing Protocol Invariants

2018-11-08 Thread Dmitry Belyavsky
Hello,

Fri, 9 Nov 2018, 7:03 Ryan Carboni rya...@gmail.com:

> I think I have implied that ClientHello is unneccesary to an extent, it
> can be replaced by a DNS TXT record.
>
> I think I implied that self-signed certificates are acceptable given the
> precedent of Let’s Encrypt and the use of DNSSEC (has there been evidence
> of DNS spoofing attacks against a CA?).
>

Sure.
At least this proof-of-concept one.

https://blog.powerdns.com/2018/09/10/spoofing-dns-with-fragments/


Re: [TLS] Enforcing Protocol Invariants

2018-11-08 Thread Eric Rescorla
On Thu, Nov 8, 2018 at 6:31 PM Ryan Carboni  wrote:

> On Thursday, November 8, 2018, Eric Rescorla  wrote:
>
>>  It's also worth noting that in practice, many sites are served on
>> multiple CDNs which do not share keying material.
>>
>>
> Encrypting common knowledge is cargo cult fetishism for cryptography. The
> files could be sent unencrypted, and protected using subresource integrity.
> If you are sharing the same data to multiple second parties to serve to a
> single third party, the value of encryption is less than zero.
>

I believe you are misunderstanding me. The issue is not about
confidentiality of the records but about correctness.

Consider the case where example.com which is hosted on two CDNs, X and Y. X
and Y have different private keys (for the reasons you indicate) as well as
different crypto configurations. The client does an A/AAAA lookup for
example.com and gets an A for X and then does a TXT lookup for example.com
and gets the TXT for Y. This creates failures. We've spent quite a while
working on this problem for ESNI and there aren't a lot of great answers
right now. It seems like your proposal would suffer from the same issue.


In any case, Eric, you inadvertently contradict yourself. The whole point
> of WebPKI is to certify trust, and has been an issue over the years. But
> CDNs act as a intermediary between the creator of the content and the end
> user. CDNs do not have as strict requirements as do CAs in terms of
> auditing, and have their own issues outside the scope of this conversation.
>

Well, I agree that this is out of scope, but the guarantees that the WebPKI
claims to enforce (at least for DV) is effective control of the domain
name, and a CDN acts as the origin server for a given domain and hence
effectively controls it. I appreciate that many people don't like this, but
it's essentially the only design that's consistent with having the CDN act
for the origin, which is at present basically essential for good
performance.


In any case, TLS 1.3 won’t reach widespread adoption for another few years,
>

I don't know what you consider "widespread", but presently both Chrome and
Firefox support TLS 1.3, and our data shows that about 5% of Firefox
connections use TLS 1.3. Chrome's numbers are similar and the numbers from
the server side perspective are higher (last time I checked, Facebook was
reporting > 50% TLS 1.3).

-Ekr


Re: [TLS] Enforcing Protocol Invariants

2018-11-08 Thread Jim Reid
On 8 Nov 2018, at 08:44, Ryan Carboni  wrote:
> 
> This might be a radical proposal, but maybe the certificate hash could be 
> placed in a DNS TXT record.

This is a bad idea.

Overloading TXT records with special semantics rarely, if ever, has a happy 
ending. For instance application software would need to somehow work out which 
of the TXT records for some domain name was your hypothetical hash and which 
were SPF strings or whatever else has been dumped into TXT records.

If you need to put this hash in the DNS, you might as well get a type code 
assigned for a specific RR to do that.
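The disambiguation problem described above can be sketched briefly. This is an illustration under invented assumptions: the `certhash=` prefix is hypothetical (no such convention exists), and the sample RRset shows the kind of ad-hoc prefix matching every TXT consumer would have to reinvent.

```python
# Hypothetical TXT RRset for a domain: SPF, a site-verification token,
# and an imagined "certhash=" record all share the one TXT type.
txt_rrset = [
    "v=spf1 include:_spf.example.net ~all",
    "google-site-verification=abc123",
    "certhash=sha256:9f86d081884c7d659a2feaa0c55ad015"
    "a3bf4f1b2b0b822cd15d6c15b0f00a08",
]

def find_certhash(rrset):
    """Pick out the hypothetical cert-hash string by prefix matching --
    fragile, since nothing stops another application using the prefix."""
    for txt in rrset:
        if txt.startswith("certhash="):
            return txt[len("certhash="):]
    return None

assert find_certhash(txt_rrset).startswith("sha256:")
assert find_certhash(["v=spf1 ~all"]) is None
```

A dedicated RR type (as TLSA is) removes this guesswork entirely: the record type itself declares the semantics.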



Re: [TLS] Enforcing Protocol Invariants

2018-11-08 Thread Viktor Dukhovni
> On Nov 8, 2018, at 4:34 AM, Salz, Rich  wrote:
> 
> What makes you say that?  Please be specific about what you think should be 
> taken out.

One example area where the complexity is noticeably high:

 * Certificate selection metadata specificity seems to far exceed
   plausible diversity of choice on the peer end.  Are there
   really clients or servers out there with so many different
   certificates to choose from that we need:

a.  CA name hints from client to server?
b.  OID filters in the certificate request from server to client?
c.  signature_algorithms_cert (TLS -> X.509 layer violation, I
just ignore this one)?

In TLS 1.2 the signature_algorithms extension needs to be combined
with the certificate types list, and neither 5246 or 4492 provides
a means for the client to decide which curves the server supports
in the client certificate (addressed in TLS 1.3).

TLS 1.2 has fixed-(EC)DH ciphersuites that should probably never have
been defined, and for some unknown reason added MD5 as a valid signature
algorithm hash, even though MD5 had already reached its use-by date...

For simplicity, I am a fan of Mike Hamburg's STROBE (a protocol design
framework, not a finished protocol):

   https://eprint.iacr.org/2017/003

of course one still needs to plug in some sort of PKI to get
a complete system, but it robustly unifies many things that in
TLS need to be built with one's bare hands.  I do hope that
STROBE is getting used somewhere, and not just languishing as
a paper design.

-- 
Viktor.



Re: [TLS] Enforcing Protocol Invariants

2018-11-08 Thread Eric Rescorla
Hi Ryan,

Thanks for your comments.

On Thu, Nov 8, 2018 at 12:44 AM Ryan Carboni  wrote:

> Hmm. TLS has gotten too complex. How does one create a new protocol? Maybe
> we should ask Google.
>
> The SSHFP DNS record exists. DNSSEC exists.
>
> This might be a radical proposal, but maybe the certificate hash could be
> placed in a DNS TXT record.
>

There's actually an RFC for this: https://tools.ietf.org/rfcmarkup?doc=6968.
Unfortunately, it is not really a viable option for replacing the WebPKI
for TLS for two reasons:

1. A nontrivial number of network elements will not correctly pass DNSSEC,
and so attempting to use it will cause failures.
2. Essentially all clients and servers only support WebPKI authentication,
so at least for existing applications (such as the Web), endpoints will
need to support WebPKI for a very long time.

There are some specific applications that could potentially use this
method, but that's not going to work for most applications. It's also worth
noting that in practice, many sites are served on multiple CDNs which do
not share keying material. This is a real unsolved problem for Encrypted
SNI and would also likely be a problem if the keys in question were
endpoint keys.


> In another DNS TXT record, a list of supported protocols could be listed.
> A DNS SRV record would define the port that one can use to connect to a
> service, because the URL scheme died after .onion was recognized as a
> domain and after Chromium decided that the presentation of subdomains
> was unimportant. So no browser has to show which port it is connected to.
>

This is an orthogonal question to TLS, I believe. However, in general at
least the Web community has decided that it's not excited about SRV.
However, at least on the Web, the reason for the ubiquity of 443 isn't the
inability to indicate the right port in the URL (which has a slot for
this), but rather that other ports than 443 have much lower middlebox
penetration rates.


> Although, to be radical, all anyone needs is RSA-2048, ephemeral DH-3072,
> and SHAKE-128 as AEAD.
>

This is a fairly surprising proposed set of ciphers, given that the
Internet seems to be rapidly moving towards elliptic curve. This proposal
would certainly have significantly worse computational performance
than the mandatory TLS 1.3 ciphers.


> And maybe recommend that boot entropy could be obtained by using the timer
> entropy daemon for one second (and which would in theory provide enough
> entropy for perpetuity).
>

This also seems out of scope for TLS.

-Ekr


Re: [TLS] Enforcing Protocol Invariants

2018-11-08 Thread Ryan Carboni
I think I have implied that ClientHello is unnecessary to an extent, it can
be replaced by a DNS TXT record.

I think I implied that self-signed certificates are acceptable given the
precedent of Let’s Encrypt and the use of DNSSEC (has there been evidence
of DNS spoofing attacks against a CA?).

I think my suggestion has the implication that protocol negotiation is
unnecessary, although it can be kept.

I think my suggestion would have automatic backwards compatibility. A TLS
1.2-only client (likely to be found in regulated sectors that require
middlebox inspection and decryption) would attempt to connect using port
443, while my proposed protocol would first perform a DNS lookup to obtain
the relevant records. By using separate ports for each new protocol, it
maintains a greater level of cross-compatibility and allows for future
development.

These compounded extensions make the protocol less secure by making it
harder to implement, particularly given the recent spate of attacks against
unintuitively implemented state machines for key negotiation.

The entire protocol should be removed and replaced by something far simpler.

Is this sufficiently specific?

I hope that in the future, the IETF TLS working group will reach out to
middlebox designers to understand what exactly they are doing to TLS, so
that these things won’t be unexpected.

I must also say that CBC isn’t bad, as long as the padding is considered to
be part of the message to be authenticated.
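The point about authenticated padding corresponds to the encrypt-then-MAC construction. A minimal sketch, in which the "ciphertext" is a stand-in byte string for real CBC output (the Python stdlib ships no block cipher):

```python
import hashlib
import hmac
import secrets

# Encrypt-then-MAC sketch: the MAC covers the ciphertext *including* the
# CBC padding, so tampered padding is rejected before it is ever
# inspected, closing the classic padding-oracle window. The "ciphertext"
# below is a dummy standing in for real CBC output.
BLOCK = 16

def pad(msg: bytes) -> bytes:
    """PKCS#7-style padding: n bytes of value n, 1 <= n <= BLOCK."""
    n = BLOCK - len(msg) % BLOCK
    return msg + bytes([n]) * n

mac_key = secrets.token_bytes(32)
ciphertext = pad(b"attack at dawn")        # stand-in for CBC-encrypt(pad(m))
tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

def accept(ct: bytes, t: bytes) -> bool:
    """Receiver verifies the MAC over ciphertext+padding before touching it."""
    return hmac.compare_digest(t, hmac.new(mac_key, ct, hashlib.sha256).digest())

assert accept(ciphertext, tag)
# Flipping even one padding byte now fails authentication outright:
tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 1])
assert not accept(tampered, tag)
```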

On Thursday, November 8, 2018, Salz, Rich  wrote:

> *>*Hmm. TLS has gotten too complex.
>
>
>
> What makes you say that?  Please be specific about what you think should
> be taken out.
>
>
>


Re: [TLS] Enforcing Protocol Invariants

2018-11-08 Thread Salz, Rich
>Hmm. TLS has gotten too complex.

What makes you say that?  Please be specific about what you think should be 
taken out.



Re: [TLS] Enforcing Protocol Invariants

2018-06-14 Thread Kyle Nekritz
That’s definitely a possibility if using a single key that never changes. With 
periodically rolling new keys, I’m not sure the risk is much different than 
with periodically rolling new versions. Ossifying on updated versions of either 
requires the middlebox to take a hard dependency on having the updated version 
available. Updating for arbitrary changes in the protocol is more complex than 
just updating a config for an encryption key, but I suspect we could also roll 
out an updated key mapping much faster than a new version that required TLS 
library code changes.

Using a negotiated key from a previous connection completely avoids that issue 
(for example the server sends an encrypted extension with identifier X and key 
Y, which the client remembers for future connections).

From: Steven Valdez 
Sent: Thursday, June 14, 2018 10:35 AM
To: Kyle Nekritz 
Cc: David Benjamin ;  
Subject: Re: [TLS] Enforcing Protocol Invariants

This scheme probably isn't sufficient by itself, since a middlebox just has to 
be aware of the anti-ossification extension and can parse the server's response 
by decrypting it with the known mapping (either from the RFC or fetching the 
latest updated mapping), and then ossifying on the contents of the 'real' 
ServerHello. To keep the ServerHello from ossifying, you'll need to change the 
serialization and codepoints of the ServerHello at each rolling version.

On Wed, Jun 13, 2018 at 8:29 PM Kyle Nekritz <knekr...@fb.com> wrote:
I think there may be lower overhead ways (that don’t require frequent TLS 
library code changes) to prevent ossification of the ServerHello using a 
mechanism similar to the cleartext cipher in quic. For example, a client could 
offer an “anti-ossification” extension containing an identifier that 
corresponds to a particular key. The identifier->key mapping can be established 
using a couple of mechanisms, depending on the level of defense desired against 
implementations that know about this extension:
* static mapping defined in RFC
* periodically updated mapping shared among implementations
* negotiated on a previous connection to the server, similar to a PSK
This key can then be used to “encrypt” the ServerHello such that it is 
impossible for a middlebox without the key to read (though would not add actual 
confidentiality and would probably involve aead nonce-reuse). There’s a couple 
of options to do this:
* Simply replace the plaintext record layer for the ServerHello with an 
encrypted record layer, using this key (this would not be compatible with 
existing middleboxes that have caused us trouble)
* Put a “real” encrypted ServerHello in an extension in the “outer” plaintext 
ServerHello
* Send a fake ServerHello (similar to how we encapsulate HelloRetryRequest in a 
ServerHello), and then send a real ServerHello in a following encrypted record
All of these would allow a server to either use this mechanism or negotiate 
standard TLS 1.3 (and the client to easily tell which one is in use).

With the small exception of potentially updating the identifier->key mapping, 
this would not require any TLS library changes (once implemented in the first 
place), and I believe would still provide almost all of the benefits.

From: TLS <tls-boun...@ietf.org> On Behalf Of David Benjamin
Sent: Tuesday, June 12, 2018 12:28 PM
To: <tls@ietf.org>
Subject: [TLS] Enforcing Protocol Invariants

Hi all,

Now that TLS 1.3 is about done, perhaps it is time to reflect on the 
ossification problems.

TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may be 
incrementally rolled out in an existing compliant TLS 1.2 deployment. Yet we 
had problems. Widespread non-compliant servers broke on the TLS 1.3 
ClientHello, so versioning moved to supported_versions. Widespread 
non-compliant middleboxes attempted to parse someone else’s ServerHellos, so 
the protocol was further hacked to weave through their many defects.

I think I can speak for the working group that we do not want to repeat this 
adventure again. In general, I think the response to ossification is two-fold:

1. It’s already happened, so how do we progress today?
2. How do we avoid more of this tomorrow?

The workarounds only answer the first question. For the second, TLS 1.3 has a 
section which spells out a few protocol 
invariants <https://tlswg.github.io/tls13-spec/draft-ietf-tls-tls13.html#rfc.section.9.3>.
 It is all corollaries of existing TLS specification text, but hopefully 
documenting it explicitly will help. But experience has shown specification 
text is only necessary, not sufficient.

For extensibility problems in servers, we have 
GREASE <https://tools.ietf.org/html/draft-ietf-tls-grease>.

Re: [TLS] Enforcing Protocol Invariants

2018-06-13 Thread Kyle Nekritz
I think there may be lower overhead ways (that don’t require frequent TLS 
library code changes) to prevent ossification of the ServerHello using a 
mechanism similar to the cleartext cipher in quic. For example, a client could 
offer an “anti-ossification” extension containing an identifier that 
corresponds to a particular key. The identifier->key mapping can be established 
using a couple of mechanisms, depending on the level of defense desired against 
implementations that know about this extension:
* static mapping defined in RFC
* periodically updated mapping shared among implementations
* negotiated on a previous connection to the server, similar to a PSK
This key can then be used to “encrypt” the ServerHello such that it is 
impossible for a middlebox without the key to read (though would not add actual 
confidentiality and would probably involve aead nonce-reuse). There’s a couple 
of options to do this:
* Simply replace the plaintext record layer for the ServerHello with an 
encrypted record layer, using this key (this would not be compatible with 
existing middleboxes that have caused us trouble)
* Put a “real” encrypted ServerHello in an extension in the “outer” plaintext 
ServerHello
* Send a fake ServerHello (similar to how we encapsulate HelloRetryRequest in a 
ServerHello), and then send a real ServerHello in a following encrypted record
All of these would allow a server to either use this mechanism or negotiate 
standard TLS 1.3 (and the client to easily tell which one is in use).

With the small exception of potentially updating the identifier->key mapping, 
this would not require any TLS library changes (once implemented in the first 
place), and I believe would still provide almost all of the benefits.
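As a very rough sketch of the data flow described above (all names invented here; a real design would use proper record protection rather than this throwaway HMAC-derived XOR stream, and as noted the goal is ossification-resistance, not confidentiality):

```python
import hashlib
import hmac
import secrets

# Sketch of the identifier->key idea. The client offers an identifier;
# both endpoints map it to a key, and the server uses the key to obscure
# its ServerHello so a middlebox without the mapping cannot parse (and
# thus ossify on) the message. The XOR keystream below only makes the
# flow concrete -- it is not a real protection scheme.

KEY_MAPPING = {                          # established via RFC, periodic
    0x01: secrets.token_bytes(32),       # update, or remembered from a
}                                        # previous connection (PSK-style)

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def obscure(identifier: int, payload: bytes) -> bytes:
    key = KEY_MAPPING[identifier]
    return bytes(x ^ k for x, k in zip(payload, keystream(key, len(payload))))

server_hello = b"\x02placeholder ServerHello bytes"   # stand-in, not real TLS
on_the_wire = obscure(0x01, server_hello)    # what a middlebox would see
assert on_the_wire != server_hello
assert obscure(0x01, on_the_wire) == server_hello    # XOR is an involution
```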

From: TLS  On Behalf Of David Benjamin
Sent: Tuesday, June 12, 2018 12:28 PM
To:  
Subject: [TLS] Enforcing Protocol Invariants

Hi all,

Now that TLS 1.3 is about done, perhaps it is time to reflect on the 
ossification problems.

TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may be 
incrementally rolled out in an existing compliant TLS 1.2 deployment. Yet we 
had problems. Widespread non-compliant servers broke on the TLS 1.3 
ClientHello, so versioning moved to supported_versions. Widespread 
non-compliant middleboxes attempted to parse someone else’s ServerHellos, so 
the protocol was further hacked to weave through their many defects.

I think I can speak for the working group that we do not want to repeat this 
adventure again. In general, I think the response to ossification is two-fold:

1. It’s already happened, so how do we progress today?
2. How do we avoid more of this tomorrow?

The workarounds only answer the first question. For the second, TLS 1.3 has a 
section which spells out a few protocol 
invariants.
 It is all corollaries of existing TLS specification text, but hopefully 
documenting it explicitly will help. But experience has shown specification 
text is only necessary, not sufficient.

For extensibility problems in servers, we have 
GREASE.
 This enforces the key rule in ClientHello processing: ignore unrecognized 
parameters. GREASE enforces this by filling the ecosystem with them. TLS 1.3’s 
middlebox woes were different. The key rule is: if you did not produce a 
ClientHello, you cannot assume that you can parse the response. Analogously, we 
should fill the ecosystem with such responses. We have an idea, but it is more 
involved than GREASE, so we are very interested in the TLS community’s feedback.
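The GREASE mechanism referenced here can be sketched in a few lines. The placement logic below is invented for illustration, but the code point pattern matches what was eventually published as RFC 8701:

```python
import random

# The GREASE code points (RFC 8701) follow a recognizable 0x?A?A pattern:
# 0x0A0A, 0x1A1A, ..., 0xFAFA. A client sprinkles one into its
# ClientHello lists (cipher suites, extensions, named groups, ...); any
# peer that fails to ignore unknown values then breaks immediately and
# visibly, instead of ossifying silently.
GREASE_VALUES = [0x0A0A + 0x1010 * i for i in range(16)]

def grease_extensions(real_extensions: list[int]) -> list[int]:
    """Insert one random GREASE value at a random position. A sketch of
    what a client might do; actual placement rules vary by implementation."""
    exts = list(real_extensions)
    exts.insert(random.randrange(len(exts) + 1), random.choice(GREASE_VALUES))
    return exts

# 0x0000 = server_name, 0x000A = supported_groups, 0x002B = supported_versions
offered = grease_extensions([0x0000, 0x000A, 0x002B])
```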

In short, we plan to regularly mint new TLS versions (and likely other 
sensitive parameters such as extensions), roughly every six weeks matching 
Chrome’s release cycle. Chrome, Google servers, and any other deployment that 
wishes to participate, would support two (or more) versions of TLS 1.3: the 
standard stable 0x0304, and a rolling alternate version. Every six weeks, we 
would randomly pick a new code point. These versions will otherwise be 
identical to TLS 1.3, save maybe minor details to separate keys and exercise 
allowed syntax changes. The goal is to pave the way for future versions of TLS 
by simulating them (“draft negative one”).
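The rolling code-point scheme could look roughly like the following sketch. The avoidance rules mirror the safeguards David lists (rolling versions will not start with 0x03), plus steering clear of the GREASE pattern; the function name and selection details are invented here, not Google's actual procedure:

```python
import random

# Sketch of the rolling-version idea: each cycle, pick a fresh random
# two-byte code point for the alternate version, avoiding the 0x03..
# space where real SSL/TLS versions live and the RFC 8701 GREASE
# pattern. Illustrative only.
def pick_rolling_version(rng: random.Random) -> int:
    while True:
        cp = rng.randrange(0x0000, 0x10000)
        if cp >> 8 == 0x03:              # real (SSL 3.0 / TLS) versions
            continue
        if cp & 0x0F0F == 0x0A0A:        # GREASE code points
            continue
        return cp

rng = random.Random(2018)                # seeded only to make the demo repeatable
six_weekly = [pick_rolling_version(rng) for _ in range(8)]
```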

Of course, this scheme has some risk. It grabs code points everywhere. Code 
points are plentiful, but we do sometimes have collisions (e.g. 26 and 40). The 
entire point is to serve and maintain TLS’s extensibility, so we certainly do 
not wish to hamper it! Thus we have some 

Re: [TLS] Enforcing Protocol Invariants

2018-06-13 Thread Daniel Migault
The two mechanisms address different targets but overall I prefer the
design of the new proposal.
Yours,
Daniel

On Wed, Jun 13, 2018 at 4:29 PM, David Benjamin 
wrote:

> Are you asking about this new proposal (which still needs an amusing
> name), or the original GREASE mechanism?
>
> The original GREASE mechanism was only targeting ClientHello intolerance
> in servers. It's true that it uses specific values, and indeed there is
> nothing stopping buggy implementations from treating them differently. The
> thought then was that ClientHello intolerance in servers is usually just
> accidental. It takes a certain willful ignorance to forget the default in
> your switch-case, and then go out of your way to special-case things,
> rather than recheck the spec as to what you're supposed to do. It was also
> meant to be lightweight (a one-time implementation cost and a one-time
> allocation). It's imperfect, but it does seem to help with the problem.
>
> This new proposal is targeting ServerHello intolerance problems. Rather
> than fixing a set of values initially, it regularly rerolls random values
> over time, with no fixed pattern. It should hopefully be more resilient to
> this sort of misbehavior. On the flip side, it is more work to maintain and
> only implementations that update sufficiently frequently can participate,
> whereas, in theory, anyone could deploy the original GREASE.
>
> On Wed, Jun 13, 2018 at 3:15 PM Daniel Migault <
> daniel.miga...@ericsson.com> wrote:
>
>> I also support something being done in this direction. I like the idea
>> of taking ephemeral non-allocated code points.
>>
>> What is not so clear to me is how GREASE prevents a buggy implementation
>> from behaving correctly for GREASE-allocated code points, while remaining
>> buggy for the other (unallocated) code points.
>> Yours,
>> Daniel
>>
>> On Wed, Jun 13, 2018 at 2:06 PM, Alessandro Ghedini <
>> alessan...@ghedini.me> wrote:
>>
>>> On Tue, Jun 12, 2018 at 12:27:39PM -0400, David Benjamin wrote:
>>> > Hi all,
>>> >
>>> > Now that TLS 1.3 is about done, perhaps it is time to reflect on the
>>> > ossification problems.
>>> >
>>> > TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may
>>> be
>>> > incrementally rolled out in an existing compliant TLS 1.2 deployment.
>>> Yet
>>> > we had problems. Widespread non-compliant servers broke on the TLS 1.3
>>> > ClientHello, so versioning moved to supported_versions. Widespread
>>> > non-compliant middleboxes attempted to parse someone else’s
>>> ServerHellos,
>>> > so the protocol was further hacked to weave through their many
>>> defects.
>>>
>>> >
>>> > I think I can speak for the working group that we do not want to repeat
>>> > this adventure again. In general, I think the response to ossification
>>> is
>>> > two-fold:
>>> >
>>> > 1. It’s already happened, so how do we progress today?
>>> > 2. How do we avoid more of this tomorrow?
>>> >
>>> > The workarounds only answer the first question. For the second, TLS
>>> 1.3 has
>>> > a section which spells out a few protocol invariants
>> <https://tlswg.github.io/tls13-spec/draft-ietf-tls-tls13.html#rfc.section.9.3>.
>>> > It is all corollaries of existing TLS specification text, but hopefully
>>> > documenting it explicitly will help. But experience has shown
>>> specification
>>> > text is only necessary, not sufficient.
>>> >
>>> > For extensibility problems in servers, we have GREASE
>>> > . This enforces
>>> the
>>> > key rule in ClientHello processing: ignore unrecognized parameters.
>>> GREASE
>>> > enforces this by filling the ecosystem with them. TLS 1.3’s middlebox
>>> woes
>>> > were different. The key rule is: if you did not produce a ClientHello,
>>> you
>>> > cannot assume that you can parse the response. Analogously, we should
>>> fill
>>> > the ecosystem with such responses. We have an idea, but it is more
>>> involved
>>> > than GREASE, so we are very interested in the TLS community’s feedback.
>>> >
>>> > In short, we plan to regularly mint new TLS versions (and likely other
>>> > sensitive parameters such as extensions), roughly every six weeks
>>> matching
>>> > Chrome’s release cycle. Chrome, Google servers, and any other
>>> deployment
>>> > that wishes to participate, would support two (or more) versions of TLS
>>> > 1.3: the standard stable 0x0304, and a rolling alternate version.
>>> Every six
>>> > weeks, we would randomly pick a new code point. These versions will
>>> > otherwise be identical to TLS 1.3, save maybe minor details to separate
>>> > keys and exercise allowed syntax changes. The goal is to pave the way
>>> for
>>> > future versions of TLS by simulating them (“draft negative one”).
>>> >
>>> > Of course, this scheme has some risk. It grabs code points everywhere.
>>> Code
>>> > points are plentiful, but we do sometimes have collisions (e.g. 26 and
>>> 40).
>>> > The entire point is to serve and maintain TLS’s 

Re: [TLS] Enforcing Protocol Invariants

2018-06-13 Thread David Benjamin
On Wed, Jun 13, 2018 at 5:04 PM Christopher Wood <
christopherwoo...@gmail.com> wrote:

> On Wed, Jun 13, 2018 at 11:06 AM Alessandro Ghedini
>  wrote:
> >
> > On Tue, Jun 12, 2018 at 12:27:39PM -0400, David Benjamin wrote:
> > > Hi all,
> > >
> > > Now that TLS 1.3 is about done, perhaps it is time to reflect on the
> > > ossification problems.
> > >
> > > TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may
> be
> > > incrementally rolled out in an existing compliant TLS 1.2 deployment.
> Yet
> > > we had problems. Widespread non-compliant servers broke on the TLS 1.3
> > > ClientHello, so versioning moved to supported_versions. Widespread
> > > non-compliant middleboxes attempted to parse someone else’s
> ServerHellos,
> > > so the protocol was further hacked to weave through their many defects.
> > >
> > > I think I can speak for the working group that we do not want to repeat
> > > this adventure again. In general, I think the response to ossification
> is
> > > two-fold:
> > >
> > > 1. It’s already happened, so how do we progress today?
> > > 2. How do we avoid more of this tomorrow?
> > >
> > > The workarounds only answer the first question. For the second, TLS
> 1.3 has
> > > a section which spells out a few protocol invariants
> > > <
> https://tlswg.github.io/tls13-spec/draft-ietf-tls-tls13.html#rfc.section.9.3
> >.
> > > It is all corollaries of existing TLS specification text, but hopefully
> > > documenting it explicitly will help. But experience has shown
> specification
> > > text is only necessary, not sufficient.
> > >
> > > For extensibility problems in servers, we have GREASE
> > > . This enforces
> the
> > > key rule in ClientHello processing: ignore unrecognized parameters.
> GREASE
> > > enforces this by filling the ecosystem with them. TLS 1.3’s middlebox
> woes
> > > were different. The key rule is: if you did not produce a ClientHello,
> you
> > > cannot assume that you can parse the response. Analogously, we should
> fill
> > > the ecosystem with such responses. We have an idea, but it is more
> involved
> > > than GREASE, so we are very interested in the TLS community’s feedback.
> > >
> > > In short, we plan to regularly mint new TLS versions (and likely other
> > > sensitive parameters such as extensions), roughly every six weeks
> matching
> > > Chrome’s release cycle. Chrome, Google servers, and any other
> deployment
> > > that wishes to participate, would support two (or more) versions of TLS
> > > 1.3: the standard stable 0x0304, and a rolling alternate version.
> Every six
> > > weeks, we would randomly pick a new code point. These versions will
> > > otherwise be identical to TLS 1.3, save maybe minor details to separate
> > > keys and exercise allowed syntax changes. The goal is to pave the way
> for
> > > future versions of TLS by simulating them (“draft negative one”).
> > >
> > > Of course, this scheme has some risk. It grabs code points everywhere.
> Code
> > > points are plentiful, but we do sometimes have collisions (e.g. 26 and
> 40).
> > > The entire point is to serve and maintain TLS’s extensibility, so we
> > > certainly do not wish to hamper it! Thus we have some safeguards in
> mind:
> > >
> > > * We will document every code point we use and what it refers to. (If
> the
> > > volume is fine, we can email them to the list each time.) New
> allocations
> > > can always avoid the lost numbers. At a rate of one every 6 weeks, it
> will
> > > take over 7,000 years to exhaust everything.
> > >
> > > * We will avoid picking numbers that the IETF is likely to allocate, to
> > > reduce the chance of collision. Rolling versions will not start with
> 0x03,
> > > rolling cipher suites or extensions will not be contiguous with
> existing
> > > blocks, etc.
> > >
> > > * BoringSSL will not enable this by default. We will only enable it
> where
> > > we can shut it back off. On our servers, we of course regularly deploy
> > > changes. Chrome is also regularly updated and, moreover, we will gate
> it on
> > > our server-controlled field trials
> > > 
> mechanism. We
> > > hope that, in practice, only the last several code points will be in
> use at
> > > a time.
> > >
> > > * Our clients would only support the most recent set of rolling
> parameters,
> > > and our servers the last handful. As each value will be short-lived,
> the
> > > ecosystem is unlikely to rely on them as de facto standards.
> Conversely,
> > > like other extensions, implementations without them will still
> interoperate
> > > fine. We would never offer a rolling parameter without the
> corresponding
> > > stable one.
> > >
> > > * If this ultimately does not work, we can stop at any time and only
> have
> > > wasted a small portion of code points.
> > >
> > > * Finally, if the working group is open to it, these values could be
> > > summarized in regular documents to 

Re: [TLS] Enforcing Protocol Invariants

2018-06-13 Thread Christopher Wood
On Wed, Jun 13, 2018 at 11:06 AM Alessandro Ghedini
 wrote:
>
> On Tue, Jun 12, 2018 at 12:27:39PM -0400, David Benjamin wrote:
> > Hi all,
> >
> > Now that TLS 1.3 is about done, perhaps it is time to reflect on the
> > ossification problems.
> >
> > TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may be
> > incrementally rolled out in an existing compliant TLS 1.2 deployment. Yet
> > we had problems. Widespread non-compliant servers broke on the TLS 1.3
> > ClientHello, so versioning moved to supported_versions. Widespread
> > non-compliant middleboxes attempted to parse someone else’s ServerHellos,
> > so the protocol was further hacked to weave through their many defects.
> >
> > I think I can speak for the working group that we do not want to repeat
> > this adventure again. In general, I think the response to ossification is
> > two-fold:
> >
> > 1. It’s already happened, so how do we progress today?
> > 2. How do we avoid more of this tomorrow?
> >
> > The workarounds only answer the first question. For the second, TLS 1.3 has
> > a section which spells out a few protocol invariants
> > .
> > It is all corollaries of existing TLS specification text, but hopefully
> > documenting it explicitly will help. But experience has shown specification
> > text is only necessary, not sufficient.
> >
> > For extensibility problems in servers, we have GREASE
> > . This enforces the
> > key rule in ClientHello processing: ignore unrecognized parameters. GREASE
> > enforces this by filling the ecosystem with them. TLS 1.3’s middlebox woes
> > were different. The key rule is: if you did not produce a ClientHello, you
> > cannot assume that you can parse the response. Analogously, we should fill
> > the ecosystem with such responses. We have an idea, but it is more involved
> > than GREASE, so we are very interested in the TLS community’s feedback.
> >
> > In short, we plan to regularly mint new TLS versions (and likely other
> > sensitive parameters such as extensions), roughly every six weeks matching
> > Chrome’s release cycle. Chrome, Google servers, and any other deployment
> > that wishes to participate, would support two (or more) versions of TLS
> > 1.3: the standard stable 0x0304, and a rolling alternate version. Every six
> > weeks, we would randomly pick a new code point. These versions will
> > otherwise be identical to TLS 1.3, save maybe minor details to separate
> > keys and exercise allowed syntax changes. The goal is to pave the way for
> > future versions of TLS by simulating them (“draft negative one”).
> >
> > Of course, this scheme has some risk. It grabs code points everywhere. Code
> > points are plentiful, but we do sometimes have collisions (e.g. 26 and 40).
> > The entire point is to serve and maintain TLS’s extensibility, so we
> > certainly do not wish to hamper it! Thus we have some safeguards in mind:
> >
> > * We will document every code point we use and what it refers to. (If the
> > volume is fine, we can email them to the list each time.) New allocations
> > can always avoid the lost numbers. At a rate of one every 6 weeks, it will
> > take over 7,000 years to exhaust everything.
> >
> > * We will avoid picking numbers that the IETF is likely to allocate, to
> > reduce the chance of collision. Rolling versions will not start with 0x03,
> > rolling cipher suites or extensions will not be contiguous with existing
> > blocks, etc.
> >
> > * BoringSSL will not enable this by default. We will only enable it where
> > we can shut it back off. On our servers, we of course regularly deploy
> > changes. Chrome is also regularly updated and, moreover, we will gate it on
> > our server-controlled field trials
> >  mechanism. We
> > hope that, in practice, only the last several code points will be in use at
> > a time.
> >
> > * Our clients would only support the most recent set of rolling parameters,
> > and our servers the last handful. As each value will be short-lived, the
> > ecosystem is unlikely to rely on them as de facto standards. Conversely,
> > like other extensions, implementations without them will still interoperate
> > fine. We would never offer a rolling parameter without the corresponding
> > stable one.
> >
> > * If this ultimately does not work, we can stop at any time and only have
> > wasted a small portion of code points.
> >
> > * Finally, if the working group is open to it, these values could be
> > summarized in regular documents to reserve them, so that they are
> > ultimately reflected in the registries. A new document every six weeks is
> > probably impractical, but we can batch them up.
> >
> > We are interested in the community’s feedback on this proposal—anyone who
> > might participate, better safeguards, or thoughts on the mechanism as a
> > whole. We 

Re: [TLS] Enforcing Protocol Invariants

2018-06-13 Thread David Benjamin
Are you asking about this new proposal (which still needs an amusing name),
or the original GREASE mechanism?

The original GREASE mechanism was only targeting ClientHello intolerance
in servers. It's true that it uses specific values, and indeed there is
nothing stopping buggy implementations from treating them differently. The
thought then was that ClientHello intolerance in servers is usually just
accidental. It takes a certain willful ignorance to forget the default in
your switch-case, and then go out of your way to special-case things,
rather than recheck the spec as to what you're supposed to do. It was also
meant to be lightweight (a one-time implementation cost and a one-time
allocation). It's imperfect, but it does seem to help with the problem.

This new proposal is targeting ServerHello intolerance problems. Rather
than fixing a set of values initially, it regularly rerolls random values
over time, with no fixed pattern. It should hopefully be more resilient to
this sort of misbehavior. On the flip side, it is more work to maintain and
only implementations that update sufficiently frequently can participate,
whereas, in theory, anyone could deploy the original GREASE.

On Wed, Jun 13, 2018 at 3:15 PM Daniel Migault 
wrote:

> I also support something being done in this direction. I like the idea
> of taking ephemeral non-allocated code points.
>
> What is not so clear to me is how GREASE prevents a buggy implementation
> from behaving correctly for GREASE-allocated code points, while remaining
> buggy for the other (unallocated) code points.
> Yours,
> Daniel
>
> On Wed, Jun 13, 2018 at 2:06 PM, Alessandro Ghedini <alessan...@ghedini.me> wrote:
>
>> On Tue, Jun 12, 2018 at 12:27:39PM -0400, David Benjamin wrote:
>> > Hi all,
>> >
>> > Now that TLS 1.3 is about done, perhaps it is time to reflect on the
>> > ossification problems.
>> >
>> > TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may
>> be
>> > incrementally rolled out in an existing compliant TLS 1.2 deployment.
>> Yet
>> > we had problems. Widespread non-compliant servers broke on the TLS 1.3
>> > ClientHello, so versioning moved to supported_versions. Widespread
>> > non-compliant middleboxes attempted to parse someone else’s
>> ServerHellos,
>> > so the protocol was further hacked to weave through their many defects.
>> >
>> > I think I can speak for the working group that we do not want to repeat
>> > this adventure again. In general, I think the response to ossification
>> is
>> > two-fold:
>> >
>> > 1. It’s already happened, so how do we progress today?
>> > 2. How do we avoid more of this tomorrow?
>> >
>> > The workarounds only answer the first question. For the second, TLS 1.3
>> has
>> > a section which spells out a few protocol invariants
>> > <
>> https://tlswg.github.io/tls13-spec/draft-ietf-tls-tls13.html#rfc.section.9.3
>> >.
>> > It is all corollaries of existing TLS specification text, but hopefully
>> > documenting it explicitly will help. But experience has shown
>> specification
>> > text is only necessary, not sufficient.
>> >
>> > For extensibility problems in servers, we have GREASE
>> > . This enforces
>> the
>> > key rule in ClientHello processing: ignore unrecognized parameters.
>> GREASE
>> > enforces this by filling the ecosystem with them. TLS 1.3’s middlebox
>> woes
>> > were different. The key rule is: if you did not produce a ClientHello,
>> you
>> > cannot assume that you can parse the response. Analogously, we should
>> fill
>> > the ecosystem with such responses. We have an idea, but it is more
>> involved
>> > than GREASE, so we are very interested in the TLS community’s feedback.
>> >
>> > In short, we plan to regularly mint new TLS versions (and likely other
>> > sensitive parameters such as extensions), roughly every six weeks
>> matching
>> > Chrome’s release cycle. Chrome, Google servers, and any other deployment
>> > that wishes to participate, would support two (or more) versions of TLS
>> > 1.3: the standard stable 0x0304, and a rolling alternate version. Every
>> six
>> > weeks, we would randomly pick a new code point. These versions will
>> > otherwise be identical to TLS 1.3, save maybe minor details to separate
>> > keys and exercise allowed syntax changes. The goal is to pave the way
>> for
>> > future versions of TLS by simulating them (“draft negative one”).
>> >
>> > Of course, this scheme has some risk. It grabs code points everywhere.
>> Code
>> > points are plentiful, but we do sometimes have collisions (e.g. 26 and
>> 40).
>> > The entire point is to serve and maintain TLS’s extensibility, so we
>> > certainly do not wish to hamper it! Thus we have some safeguards in
>> mind:
>> >
>> > * We will document every code point we use and what it refers to. (If
>> the
>> > volume is fine, we can email them to the list each time.) New
>> allocations
>> > can always avoid the lost numbers. At a rate of one every 6 weeks, it
>> will

Re: [TLS] Enforcing Protocol Invariants

2018-06-13 Thread Daniel Migault
I also support something being done in this direction. I like the idea
of taking ephemeral non-allocated code points.

What is not so clear to me is how GREASE prevents a buggy implementation
from behaving correctly for GREASE-allocated code points, while remaining
buggy for the other (unallocated) code points.
Yours,
Daniel

On Wed, Jun 13, 2018 at 2:06 PM, Alessandro Ghedini 
wrote:

> On Tue, Jun 12, 2018 at 12:27:39PM -0400, David Benjamin wrote:
> > Hi all,
> >
> > Now that TLS 1.3 is about done, perhaps it is time to reflect on the
> > ossification problems.
> >
> > TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may be
> > incrementally rolled out in an existing compliant TLS 1.2 deployment. Yet
> > we had problems. Widespread non-compliant servers broke on the TLS 1.3
> > ClientHello, so versioning moved to supported_versions. Widespread
> > non-compliant middleboxes attempted to parse someone else’s ServerHellos,
> > so the protocol was further hacked to weave through their many defects.
> >
> > I think I can speak for the working group that we do not want to repeat
> > this adventure again. In general, I think the response to ossification is
> > two-fold:
> >
> > 1. It’s already happened, so how do we progress today?
> > 2. How do we avoid more of this tomorrow?
> >
> > The workarounds only answer the first question. For the second, TLS 1.3 has
> > a section which spells out a few protocol invariants
> >  tls13.html#rfc.section.9.3>.
> > It is all corollaries of existing TLS specification text, but hopefully
> > documenting it explicitly will help. But experience has shown specification
> > text is only necessary, not sufficient.
> >
> > For extensibility problems in servers, we have GREASE
> > . This enforces the
> > key rule in ClientHello processing: ignore unrecognized parameters. GREASE
> > enforces this by filling the ecosystem with them. TLS 1.3’s middlebox woes
> > were different. The key rule is: if you did not produce a ClientHello, you
> > cannot assume that you can parse the response. Analogously, we should fill
> > the ecosystem with such responses. We have an idea, but it is more involved
> > than GREASE, so we are very interested in the TLS community’s feedback.
> >
> > In short, we plan to regularly mint new TLS versions (and likely other
> > sensitive parameters such as extensions), roughly every six weeks matching
> > Chrome’s release cycle. Chrome, Google servers, and any other deployment
> > that wishes to participate, would support two (or more) versions of TLS
> > 1.3: the standard stable 0x0304, and a rolling alternate version. Every six
> > weeks, we would randomly pick a new code point. These versions will
> > otherwise be identical to TLS 1.3, save maybe minor details to separate
> > keys and exercise allowed syntax changes. The goal is to pave the way for
> > future versions of TLS by simulating them (“draft negative one”).
> >
> > Of course, this scheme has some risk. It grabs code points everywhere. Code
> > points are plentiful, but we do sometimes have collisions (e.g. 26 and 40).
> > The entire point is to serve and maintain TLS’s extensibility, so we
> > certainly do not wish to hamper it! Thus we have some safeguards in mind:
> >
> > * We will document every code point we use and what it refers to. (If the
> > volume is fine, we can email them to the list each time.) New allocations
> > can always avoid the lost numbers. At a rate of one every 6 weeks, it will
> > take over 7,000 years to exhaust everything.
> >
> > * We will avoid picking numbers that the IETF is likely to allocate, to
> > reduce the chance of collision. Rolling versions will not start with 0x03,
> > rolling cipher suites or extensions will not be contiguous with existing
> > blocks, etc.
> >
> > * BoringSSL will not enable this by default. We will only enable it where
> > we can shut it back off. On our servers, we of course regularly deploy
> > changes. Chrome is also regularly updated and, moreover, we will gate it on
> > our server-controlled field trials
> >  mechanism. We
> > hope that, in practice, only the last several code points will be in use at
> > a time.
> >
> > * Our clients would only support the most recent set of rolling parameters,
> > and our servers the last handful. As each value will be short-lived, the
> > ecosystem is unlikely to rely on them as de facto standards. Conversely,
> > like other extensions, implementations without them will still interoperate
> > fine. We would never offer a rolling parameter without the corresponding
> > stable one.
> >
> > * If this ultimately does not work, we can stop at any time and only have
> > wasted a small portion of code points.
> >
> > * Finally, if the working group is open to it, these values could be

Re: [TLS] Enforcing Protocol Invariants

2018-06-13 Thread Alessandro Ghedini
On Tue, Jun 12, 2018 at 12:27:39PM -0400, David Benjamin wrote:
> Hi all,
> 
> Now that TLS 1.3 is about done, perhaps it is time to reflect on the
> ossification problems.
> 
> TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may be
> incrementally rolled out in an existing compliant TLS 1.2 deployment. Yet
> we had problems. Widespread non-compliant servers broke on the TLS 1.3
> ClientHello, so versioning moved to supported_versions. Widespread
> non-compliant middleboxes attempted to parse someone else’s ServerHellos,
> so the protocol was further hacked to weave through their many defects.
> 
> I think I can speak for the working group that we do not want to repeat
> this adventure again. In general, I think the response to ossification is
> two-fold:
> 
> 1. It’s already happened, so how do we progress today?
> 2. How do we avoid more of this tomorrow?
> 
> The workarounds only answer the first question. For the second, TLS 1.3 has
> a section which spells out a few protocol invariants
> .
> It is all corollaries of existing TLS specification text, but hopefully
> documenting it explicitly will help. But experience has shown specification
> text is only necessary, not sufficient.
> 
> For extensibility problems in servers, we have GREASE
> . This enforces the
> key rule in ClientHello processing: ignore unrecognized parameters. GREASE
> enforces this by filling the ecosystem with them. TLS 1.3’s middlebox woes
> were different. The key rule is: if you did not produce a ClientHello, you
> cannot assume that you can parse the response. Analogously, we should fill
> the ecosystem with such responses. We have an idea, but it is more involved
> than GREASE, so we are very interested in the TLS community’s feedback.
> 
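[Editor's note: the GREASE pattern referenced above reserves a fixed family of code points, later standardized in RFC 8701. A minimal sketch of that pattern (not BoringSSL's actual implementation):]

```python
def grease_values():
    """Return the sixteen reserved GREASE code points (0x0A0A .. 0xFAFA).

    Each value is a 16-bit code point whose two bytes are equal and end
    in the nibble 0xA: 0x0A0A, 0x1A1A, 0x2A2A, ..., 0xFAFA.
    """
    return [((0x0A + 0x10 * n) << 8) | (0x0A + 0x10 * n) for n in range(16)]


def is_grease(code_point):
    """True if a 16-bit code point matches the GREASE pattern."""
    return (code_point >> 8) == (code_point & 0xFF) and (code_point & 0x0F) == 0x0A
```

[A compliant peer must ignore these unknown values, so advertising them routinely keeps the "ignore unrecognized parameters" rule exercised in practice.]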
> In short, we plan to regularly mint new TLS versions (and likely other
> sensitive parameters such as extensions), roughly every six weeks matching
> Chrome’s release cycle. Chrome, Google servers, and any other deployment
> that wishes to participate, would support two (or more) versions of TLS
> 1.3: the standard stable 0x0304, and a rolling alternate version. Every six
> weeks, we would randomly pick a new code point. These versions will
> otherwise be identical to TLS 1.3, save maybe minor details to separate
> keys and exercise allowed syntax changes. The goal is to pave the way for
> future versions of TLS by simulating them (“draft negative one”).
> 
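[Editor's note: the six-weekly draw described above might look roughly like the following sketch. The function name, rejection rules, and `used` set are illustrative assumptions, not the actual Chrome/BoringSSL mechanism:]

```python
import random


def pick_rolling_version(used):
    """Draw a random 16-bit code point for a rolling TLS version.

    Rejects the 0x03?? block (real TLS version numbers), the GREASE
    pattern (0x?A?A with equal bytes), and previously used values.
    """
    while True:
        cp = random.randrange(0x10000)
        if (cp >> 8) == 0x03:
            continue  # avoid TLS's historical version block
        if (cp >> 8) == (cp & 0xFF) and (cp & 0x0F) == 0x0A:
            continue  # avoid GREASE code points
        if cp in used:
            continue
        return cp
```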
> Of course, this scheme has some risk. It grabs code points everywhere. Code
> points are plentiful, but we do sometimes have collisions (e.g. 26 and 40).
> The entire point is to serve and maintain TLS’s extensibility, so we
> certainly do not wish to hamper it! Thus we have some safeguards in mind:
> 
> * We will document every code point we use and what it refers to. (If the
> volume is fine, we can email them to the list each time.) New allocations
> can always avoid the lost numbers. At a rate of one every 6 weeks, it will
> take over 7,000 years to exhaust everything.
> 
> * We will avoid picking numbers that the IETF is likely to allocate, to
> reduce the chance of collision. Rolling versions will not start with 0x03,
> rolling cipher suites or extensions will not be contiguous with existing
> blocks, etc.
> 
> * BoringSSL will not enable this by default. We will only enable it where
> we can shut it back off. On our servers, we of course regularly deploy
> changes. Chrome is also regularly updated and, moreover, we will gate it on
> our server-controlled field trials
>  mechanism. We
> hope that, in practice, only the last several code points will be in use at
> a time.
> 
> * Our clients would only support the most recent set of rolling parameters,
> and our servers the last handful. As each value will be short-lived, the
> ecosystem is unlikely to rely on them as de facto standards. Conversely,
> like other extensions, implementations without them will still interoperate
> fine. We would never offer a rolling parameter without the corresponding
> stable one.
> 
> * If this ultimately does not work, we can stop at any time and only have
> wasted a small portion of code points.
> 
> * Finally, if the working group is open to it, these values could be
> summarized in regular documents to reserve them, so that they are
> ultimately reflected in the registries. A new document every six weeks is
> probably impractical, but we can batch them up.
> 
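[Editor's note: the "over 7,000 years" figure in the first safeguard is simple arithmetic over the 16-bit code point space, consumed at one value every six weeks:]

```python
# Sanity-check the exhaustion estimate for 16-bit code points.
code_points = 2 ** 16                        # 65,536 possible values
weeks_per_value = 6                          # one new code point per Chrome cycle
years = code_points * weeks_per_value / 52   # ~52 weeks per year
assert years > 7000                          # roughly 7,500+ years
```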
> We are interested in the community’s feedback on this proposal—anyone who
> might participate, better safeguards, or thoughts on the mechanism as a
> whole. We hope it will help the working group evolve its protocols more
> smoothly in the future.

This looks interesting, and I very much agree that we should do *something* to
try to avoid the pain we've seen with deploying TLS 1.3.