Re: [TLS] ESNIKeys over complex

2018-12-08 Thread Ilari Liusvaara
On Sat, Dec 08, 2018 at 11:42:56AM -0700, David Fifield wrote:
> On Sat, Dec 08, 2018 at 06:38:30PM +0200, Ilari Liusvaara wrote:
> > While thinking about the previous, I ran into some issues with the
> > split mode. Firstly, if the fronting server does not encrypt the
> > client_hello when transmitting it to backend server, passive attack
> > can match incoming connections with backend servers. This reduces
> > anonymity set to a single backend server (a lot smaller set).
> > 
> > And secondly, even if server encrypts the client_hello, but does not
> > use a tunnel to backend, if server does not have client hello replay
> > filtering (and such filtering is hard on typical fronting servers),
> > replay attacks and some very simple traffic analysis can discover the
> > backend server (again reducing the anonymity set by a lot).
> > 
> > This means that the fronting server should have an encrypted tunnel
> > with the backend server (and there is likely double encryption).
> 
> I see--you're thinking of an observer who can see "both sides" of a
> connection: the link between the Client and the Client-Facing Server,
> and the link between the Client-Facing Server and the Backend Server.
> I agree: such an observer could, through timing analysis, deanonymize
> client connections (i.e., tell which Backend Server the Client is
> accessing, as if it were accessing directly and not through the
> Client-Facing Server). I suppose to really defend against a global
> observer, you would need some kind of mixnet.

If the client hello is not encrypted to the backend server, one does not
need timing analysis if one is on both sides: things like the key share
and client random allow much more robust correlation.
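To make this concrete, here is a toy sketch (all field names, addresses, and hostnames are made up) of how a passive observer on both links could pair connections by exact ClientHello fields rather than timing:

```python
def correlate(front_side_hellos, back_side_hellos):
    """Match connections seen on the client<->front link with those on the
    front<->backend link by exact ClientHello fields; no timing needed."""
    # Index backend-side ClientHellos by (client_random, key_share):
    # both fields are unique per connection and sent in the clear.
    index = {(h["random"], h["key_share"]): h["backend"] for h in back_side_hellos}
    matches = []
    for h in front_side_hellos:
        key = (h["random"], h["key_share"])
        if key in index:
            matches.append((h["client_ip"], index[key]))  # client -> backend
    return matches

front = [{"random": b"\x01" * 32, "key_share": b"\xaa" * 32, "client_ip": "198.51.100.7"}]
back = [{"random": b"\x01" * 32, "key_share": b"\xaa" * 32, "backend": "hidden.example"}]
print(correlate(front, back))  # pairs each client with the backend it reached
```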

> It looks like a little bug in the draft. I've checked just now and I
> don't see that it says exactly *where* the observer is. I had assumed
> that the observer only sees the Client–Server link (in Shared Mode) or
> the link between Client and Client-Facing Server (in Split Mode).

I think there have actually been real-world attacks where the link from
the client to the CDN is encrypted, but the link from the CDN to the
backend is not, and the attacker attacked the CDN-to-backend link.

And those were active attacks; the attacks I am talking about are
passive-only (not as exciting, but more dangerous).

> I don't understand your point about replay and encrypting the Client
> Hello. An observer doesn't need to see the contents of the Client Hello
> to identify a Backend Server--the destination IP address is enough. I
> don't see what specific attack you have in mind using replay, but it
> seems to me that even an encrypted tunnel is insufficient. Unless the
> encrypted tunnel additionally obfuscates packet sizes and timing, an
> observer can correlate e.g. burst sizes and match incoming flows to
> outgoing flows, in the manner of website fingerprinting.

From the destination IP address, one can only tell that _some_ client
is accessing the backend. One presumably wants to know _which_ client
is doing so (e.g., by correlating the IP address of the client), because
otherwise the client might actually be someone from a jurisdiction that
is not friendly to you.

And yes, even with an encrypted tunnel, traffic analysis can extract
great amounts of data. It is just that if client hellos are replayable
and there is no tunnel, it is much easier for the attacker to
correlate connections on both sides (since the attacker can then observe
new connections rather than just some data exchange).
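As a sketch of what the missing replay filtering would look like (names illustrative): a single-machine filter is trivial, but a real fronting fleet would have to share this state across many machines, which is exactly why such filtering is hard there:

```python
import hashlib

class ReplayFilter:
    """Naive in-memory ClientHello replay filter (a sketch only; real
    fronting servers would need this state shared fleet-wide)."""
    def __init__(self):
        self.seen = set()

    def accept(self, client_hello: bytes) -> bool:
        digest = hashlib.sha256(client_hello).digest()
        if digest in self.seen:
            return False  # replay: drop instead of opening a backend connection
        self.seen.add(digest)
        return True

f = ReplayFilter()
hello = b"...captured ClientHello bytes..."
print(f.accept(hello))  # first delivery: opens a backend connection
print(f.accept(hello))  # attacker's replay of the same bytes: rejected
```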



-Ilari

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] ESNIKeys over complex

2018-12-08 Thread David Fifield
On Sat, Dec 08, 2018 at 06:38:30PM +0200, Ilari Liusvaara wrote:
> While thinking about the previous, I ran into some issues with the
> split mode. Firstly, if the fronting server does not encrypt the
> client_hello when transmitting it to backend server, passive attack
> can match incoming connections with backend servers. This reduces
> anonymity set to a single backend server (a lot smaller set).
> 
> And secondly, even if server encrypts the client_hello, but does not
> use a tunnel to backend, if server does not have client hello replay
> filtering (and such filtering is hard on typical fronting servers),
> replay attacks and some very simple traffic analysis can discover the
> backend server (again reducing the anonymity set by a lot).
> 
> This means that the fronting server should have an encrypted tunnel
> with the backend server (and there is likely double encryption).

I see--you're thinking of an observer who can see "both sides" of a
connection: the link between the Client and the Client-Facing Server,
and the link between the Client-Facing Server and the Backend Server.
I agree: such an observer could, through timing analysis, deanonymize
client connections (i.e., tell which Backend Server the Client is
accessing, as if it were accessing directly and not through the
Client-Facing Server). I suppose to really defend against a global
observer, you would need some kind of mixnet.

It looks like a little bug in the draft. I've checked just now and I
don't see that it says exactly *where* the observer is. I had assumed
that the observer only sees the Client–Server link (in Shared Mode) or
the link between Client and Client-Facing Server (in Split Mode).

I don't understand your point about replay and encrypting the Client
Hello. An observer doesn't need to see the contents of the Client Hello
to identify a Backend Server--the destination IP address is enough. I
don't see what specific attack you have in mind using replay, but it
seems to me that even an encrypted tunnel is insufficient. Unless the
encrypted tunnel additionally obfuscates packet sizes and timing, an
observer can correlate e.g. burst sizes and match incoming flows to
outgoing flows, in the manner of website fingerprinting.
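A toy illustration of that correlation step (burst sizes and names invented): even without reading any plaintext, flows on the two sides of the client-facing server can be paired by their burst-size sequences:

```python
def match_flows(incoming, outgoing, tolerance=64):
    """Pair each incoming flow with the outgoing flow whose burst-size
    sequence is closest: a toy version of the correlation step used in
    website-fingerprinting-style attacks."""
    def distance(a, b):
        if len(a) != len(b):
            return float("inf")
        return sum(abs(x - y) for x, y in zip(a, b))

    pairs = {}
    for name_in, bursts_in in incoming.items():
        best = min(outgoing, key=lambda name: distance(bursts_in, outgoing[name]))
        if distance(bursts_in, outgoing[best]) <= tolerance * len(bursts_in):
            pairs[name_in] = best
    return pairs

incoming = {"client-A": [1500, 4200, 900], "client-B": [600, 300, 8000]}
outgoing = {"backend-1": [1480, 4180, 910], "backend-2": [590, 310, 7990]}
print(match_flows(incoming, outgoing))
# pairs client-A with backend-1 and client-B with backend-2
```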



Re: [TLS] ESNIKeys over complex

2018-12-08 Thread Ilari Liusvaara
On Tue, Nov 20, 2018 at 09:45:51PM +, Stephen Farrell wrote:
> 
> I'm fine that such changes don't get done for a while (so
> I or my student get time to try make stuff work:-) and
> it might in any case take a while to figure out how to
> handle the multi-CDN use-case discussed in Bangkok which
> would I guess also affect this structure some, but I wanted
> to send this to the list while it's fresh for me.

For an even nastier combination, combine zone apex (so no CNAMEs)
with multi-CDN. Due to atomicity constraints (the unit of atomicity is
the RRset), plain includes will not work, even if the site is only
using one CDN at a time.

A nasty hack would be to include valid prefixes (with usual more-
specific rules for matching) in include directives. The client could
then match the includes with addresses they belong to.
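A sketch of that hack (prefixes and names invented): publish each include with an address prefix and let the client do the usual longest-prefix matching against the address it resolved:

```python
import ipaddress

# Each include directive carries an address prefix; the client uses the
# ESNI record whose prefix most specifically matches the address it is
# about to connect to. All prefixes and names here are illustrative.
includes = [
    ("192.0.2.0/24", "esni-cdn-a.example"),
    ("198.51.100.0/24", "esni-cdn-b.example"),
    ("0.0.0.0/0", "esni-default.example"),
]

def select_include(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    candidates = [(ipaddress.ip_network(prefix), name)
                  for prefix, name in includes
                  if ip in ipaddress.ip_network(prefix)]
    # Usual more-specific rule: longest matching prefix wins.
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

print(select_include("192.0.2.55"))   # matches the /24 for CDN A
print(select_include("203.0.113.9"))  # falls through to the default
```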

In the proposed include syntax, there is an issue that the base64
encoding prevents a recursive server from returning the referenced
ESNI records in one query. The include would have to be specified as a
DNS wire-format name field for DNS recursors to be able to perform such
a performance optimization.

For CNAME multi-CDNs, it should be enough to ensure that the ESNI
and address records share owner name and class, as this should
suffice to ensure that address and ESNI always come from the same CDN.

The problems with using CDNs at the zone apex are well-known, and there
are attempts at solving them. Any such solution, if practical, would
also solve the "address CDN" issue for ESNI.


The "recovery" issue seems to be Capital-H Hard. In TLS 1.3, a
really nasty hack would be to restart the handshake after the server's
Finished without resetting encryption (since encryption is not reset,
the client_hello and server_hello would be encrypted, with keys
known by the client and fronting server). However, I do not think that
would work in DTLS 1.3 (as it assumes that keys can not change during
an epoch, and there is space for only one handshake epoch). And even
that hack is likely too much. And of course fallbacks would be a
horrible idea here.

Then it seems to me that other ways to do recovery are even worse,
as they would dink even more with the internals of TLS 1.3, which is
not something to do lightly (the "extreme care" remark from RFC 8446
definitely applies here). This seems to apply even if somebody invents
a snappy shortcut using some more exotic cryptographic primitive.

In summary, I do not think "recovery" will work. This is not the
first DNS record with which one can take down a site for a long time,
but it is perhaps the first where one can do that at mass scale...


While thinking about the previous, I ran into some issues with the
split mode. Firstly, if the fronting server does not encrypt the
client_hello when transmitting it to the backend server, a passive
attacker can match incoming connections with backend servers. This
reduces the anonymity set to a single backend server (a much smaller set).

And secondly, even if the server encrypts the client_hello but does not
use a tunnel to the backend, then if the server does not have client
hello replay filtering (and such filtering is hard on typical fronting
servers), replay attacks and some very simple traffic analysis can
discover the backend server (again reducing the anonymity set by a lot).

This means that the fronting server should have an encrypted tunnel
with the backend server (and there is likely double encryption).


Then there is a future compatibility issue: if one has a PQ IND-CPA
KEM with sufficiently small sizes (over a dozen of those in NISTPQC),
one can extend base TLS 1.3 to post-quantum in a straightforward manner
(the client generates a public key and sticks it into key_share, the
server encapsulates a random session key to that public key and sticks
the ciphertext into its key_share, and the client decrypts the session
key with the private key; PFS is possible if key generation is cheap
enough for the client to do per-connection). However, such an extension
is not compatible with the present ESNI design.
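The flow described in parentheses above, as a toy sketch: the "KEM" here is a placeholder XOR construction purely to show the message shapes, not a real (let alone post-quantum) KEM.

```python
import os, hashlib

def kem_keygen():
    """Client side: generate a per-connection keypair (toy derivation)."""
    sk = os.urandom(32)
    pk = hashlib.sha256(b"pk" + sk).digest()  # placeholder, not real math
    return sk, pk

def kem_encapsulate(pk):
    """Server side: pick a random session key, 'encrypt' it to pk."""
    shared = os.urandom(32)
    mask = hashlib.sha256(b"ss" + pk).digest()
    ct = bytes(a ^ b for a, b in zip(shared, mask))
    return ct, shared

def kem_decapsulate(sk, ct):
    """Client side: recover the session key from the ciphertext."""
    pk = hashlib.sha256(b"pk" + sk).digest()
    mask = hashlib.sha256(b"ss" + pk).digest()
    return bytes(a ^ b for a, b in zip(ct, mask))

sk, pk = kem_keygen()                  # public key goes into the client's key_share
ct, server_key = kem_encapsulate(pk)   # ciphertext goes into the server's key_share
client_key = kem_decapsulate(sk, ct)   # both ends now hold the same session key
assert client_key == server_key
```

The incompatibility with ESNI falls out of this shape: the client's key_share holds a public key, so there is nothing the client can encrypt the SNI *to* before the server has spoken.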

The problem is that key_share in the client_hello carries a public key,
not a ciphertext. And besides, one can not encrypt messages with an
IND-CPA KEM (one needs at least an IND-CCA KEM). But even if the key
exchange algorithm were IND-CCA, it still would not help because of the
first problem. There are some ways to both perform key agreement and
encrypt using the same PQ key, but all of them are way too slow and
have had way too little analysis.

Of course, there does not seem to be a straightforward way to fix this:
decoupling the keys would require some way to ensure one can not copy-
paste ESNI between client hellos (a nasty hack that could work would be
to hash the client random and client key_share and include that in the
key derivation for the ESNI).
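The hack in parentheses could look roughly like this (the extract step and inputs are simplified stand-ins, not the draft's actual HKDF schedule):

```python
import hashlib, hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract per RFC 5869 is just HMAC(salt, ikm).
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def esni_key(shared_secret: bytes, client_random: bytes, key_share: bytes) -> bytes:
    # Fold a hash of client_random and the client key_share into the
    # derivation, binding the ESNI extension to this particular hello.
    binder = hashlib.sha256(client_random + key_share).digest()
    return hkdf_extract(binder, shared_secret)

secret = b"\x11" * 32  # the ESNI shared secret (illustrative value)
k1 = esni_key(secret, b"\xaa" * 32, b"\xbb" * 32)
k2 = esni_key(secret, b"\xcc" * 32, b"\xbb" * 32)  # same ESNI, different hello
assert k1 != k2  # a copy-pasted ESNI no longer derives the same key
```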

Except the above might run into trouble in PSK mode. And reading the
editor's draft: what prevents using pure-PSK "resumption" with
ESNI (which would be pretty stupid)? As the binding between ESNI and
connection does not apply in straightforward 

Re: [TLS] ESNIKeys over complex

2018-11-21 Thread Eric Rescorla
On Tue, Nov 20, 2018 at 11:28 PM Paul Wouters  wrote:

> Although, if I am correct, the expectation is that all of this data
> will be used without mandating DNSSEC validation, so all these
> security parameters could be modified by any DNS party in transit
> to try and break the protocol or privacy of the user.
>

Yes, because being able to modify the A/AAAA records is generally
sufficient to determine the SNI. See:
https://tools.ietf.org/html/draft-ietf-tls-esni-02#section-7.1

-Ekr


Re: [TLS] ESNIKeys over complex

2018-11-21 Thread Florian Weimer
* Paul Wouters:

> On Wed, 21 Nov 2018, Stephen Farrell wrote:
>
>>> We currently permit >1 RR, but
>>> actually
>>> I suspect that it would be better to try to restrict this.
>>
>> Not sure we can and I suspect that'd raise DNS-folks' hackles,
>> but maybe I'm wrong.
>
> I think the SOA record is the only exception allowed (and there
> is an exception to that when doing AXFR I believe)
>
> Usually these things are defined as "pick the first DNS RRTYPE
> that satisfies you".

Not sure what you mean by that (RRTYPE?).

The DNAME algorithm (RFC 6672) only works if there is a single DNAME
record for an owner name.  RFC 1034 is also pretty clear that only one
CNAME record is permitted per owner name.

To be honest, I don't expect much opposition from DNS people, as long
as the DNS layer is not expected to reject multiple records.  If the
higher-level protocol treats non-singleton RRsets as a hard error, I
expect that would be fine.

DNS treats RRsets as an atomic unit, so there is no risk here that a
zone file change ends up producing a multi-record RRset due to caching.

Thanks,
Florian



Re: [TLS] ESNIKeys over complex

2018-11-20 Thread Paul Wouters

On Wed, 21 Nov 2018, Stephen Farrell wrote:


>> We currently permit >1 RR, but
>> actually
>> I suspect that it would be better to try to restrict this.

> Not sure we can and I suspect that'd raise DNS-folks' hackles,
> but maybe I'm wrong.

I think the SOA record is the only exception allowed (and there
is an exception to that when doing AXFR I believe)

Usually these things are defined as "pick the first DNS RRTYPE
that satisfies you".

>>> - get rid of not_before/not_after - I don't believe those
>>>   are useful given TTLs and they'll just lead to failures

>> I'm mostly ambivalent on this, but on balance, I think these are useful,
>> as they are not tied to potentially fragile DNS TTLs.

> If there were a justification offered for 'em I'd be
> ok with it, but TBH, I'm not seeing it. And my main
> experience of the similar dates on RRSIGs are that they
> just break things and don't help.

>> This has a totally different expiry behavior from RRSIGs, so I'm
>> not sure that's that useful an analogy.

> Disagree. They're both specifying a time window for DNS data.
> Same problems will arise is my bet.

You mean the problem of not being able to replay old data? :)

> My main ask though for these time values is that their presence
> be explicitly justified. That's missing now, and I suspect won't
> be convincing, but maybe I'll turn out to be wrong.


Note that TTLs are about the caching of data and nothing else. If
the content of your record requires some specific time of death,
you cannot rely on the TTL. Note that the TTL on a received record can
have any value under the TTL published on the authoritative server,
if the record flowed through caching recursive servers. So you cannot
use a TTL as some other kind of expiry value for another protocol.
Also, DNS software sometimes enforces maximum and minimum TTL values,
so again, do not use DNS TTLs for other protocols' timing parameters.
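A small illustration of the clamping and decay described above (numbers invented): the TTL a client finally sees depends on every cache in the path, so it cannot carry a protocol expiry.

```python
def ttl_seen_by_client(authoritative_ttl, cache_ages, resolver_max=86400, resolver_min=0):
    """Model a record flowing through caching resolvers: each cache hands
    out only the *remaining* TTL, and resolvers may clamp to local limits."""
    ttl = min(authoritative_ttl, resolver_max)   # maximum-TTL clamp
    for age in cache_ages:                       # seconds spent in each cache
        ttl = max(resolver_min, ttl - age)
    return ttl

# Published as 3600s, but after sitting in two caches the client sees 600s:
print(ttl_seen_by_client(3600, [1200, 1800]))
```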

Although, if I am correct, the expectation is that all of this data
will be used without mandating DNSSEC validation, so all these
security parameters could be modified by any DNS party in transit
to try to break the protocol or the privacy of the user. The "not
DNSSEC camel" is growing fast.

Paul



Re: [TLS] ESNIKeys over complex

2018-11-20 Thread Eric Rescorla
On Tue, Nov 20, 2018 at 7:40 PM Salz, Rich  wrote:

>
>- No, I don't think so. The server might choose to not support one of
>the TLS 1.3 ciphers, for instance. And even if that weren't true, how would
>we add new ciphers?
>
>
>
> Standard TLS negotiation. I don’t see that we need to specify ciphers at
> the DNS layer. A client with new ciphers will add it in the hello message
> and the server will pick one it supports.  It seems complex and fragile
> (keeping the server cipher config, not just the fronted hosts, in sync with
> DNS).
>

I'm sorry, I'm not quite following. In this draft, ESNI ciphers are
orthogonal to the ciphers used to encrypt the TLS records. This is perhaps
easier to see in a split configuration, where (for instance) the
client-facing server might support only AES-128-GCM and the back-end server
might support only ChaCha/Poly1305. As you say, the negotiation works well
for the TLS records, but that doesn't influence the ESNI encryption cipher
suite selection (because that happens before the Hello exchange). So, if we
don't provide a list of the ESNI ciphers in the ESNIKeys record, then we
are effectively creating a fixed list.
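A sketch of the selection order described above (the suite names are real TLS 1.3 code point names, but the record contents and preference order are invented): the client must intersect its own suites with the ESNIKeys list before it ever sends a Hello.

```python
CLIENT_SUITES = ["TLS_CHACHA20_POLY1305_SHA256", "TLS_AES_128_GCM_SHA256"]

def pick_esni_suite(esnikeys_suites):
    """Choose the ESNI cipher before the Hello exchange, by intersecting
    the client's suites with those published in the ESNIKeys record."""
    for suite in CLIENT_SUITES:        # client preference order
        if suite in esnikeys_suites:
            return suite
    raise ValueError("no common ESNI cipher suite; cannot encrypt the SNI")

# Client-facing server publishes only AES-128-GCM in its ESNIKeys record:
print(pick_esni_suite(["TLS_AES_128_GCM_SHA256"]))
```

Without the list in the record, the client has nothing to intersect against, which is the "fixed list" problem.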

Am I missing something here?

-Ekr


>


Re: [TLS] ESNIKeys over complex

2018-11-20 Thread Salz, Rich
  *   No, I don't think so. The server might choose to not support one of the 
TLS 1.3 ciphers, for instance. And even if that weren't true, how would we add 
new ciphers?

Standard TLS negotiation. I don’t see that we need to specify ciphers at the 
DNS layer. A client with new ciphers will add it in the hello message and the 
server will pick one it supports.  It seems complex and fragile (keeping the 
server cipher config, not just the fronted hosts, in sync with DNS).



Re: [TLS] ESNIKeys over complex

2018-11-20 Thread Eric Rescorla
On Tue, Nov 20, 2018 at 6:04 PM Salz, Rich  wrote:

> > Sure a list of ciphersuites isn't bad. But the current
> > design has a set of keys and a set of ciphersuites and a
> > set of extensions and a set of Rdata values in the RRset.
>
> Since this is defined for TLS 1.3 with all known-good ciphers, can't that
> field be eliminated?
>

No, I don't think so. The server might choose to not support one of the TLS
1.3 ciphers, for instance. And even if that weren't true, how would we add
new ciphers?

-Ekr


> > I'd bet a beer on such complexity being a source of bugs
> > every time.
>
> All sorts of aphorisms come to mind. :)
>
> > This has a totally different expiry behavior from RRSIGs, so I'm
> > not sure that's that useful an analogy.
>
> Disagree. They're both specifying a time window for DNS data.
> Same problems will arise is my bet.
>
> I am inclined to agree.
>
>
>
>


Re: [TLS] ESNIKeys over complex

2018-11-20 Thread Salz, Rich
> Sure a list of ciphersuites isn't bad. But the current
> design has a set of keys and a set of ciphersuites and a
> set of extensions and a set of Rdata values in the RRset.
  
Since this is defined for TLS 1.3 with all known-good ciphers, can't that field 
be eliminated?

> I'd bet a beer on such complexity being a source of bugs
> every time.

All sorts of aphorisms come to mind. :) 

> This has a totally different expiry behavior from RRSIGs, so I'm
> not sure that's that useful an analogy.

> Disagree. They're both specifying a time window for DNS data.
> Same problems will arise is my bet.

I am inclined to agree.

 



Re: [TLS] ESNIKeys over complex

2018-11-20 Thread Stephen Farrell

(Trimming bits down...)

On 21/11/2018 00:59, Eric Rescorla wrote:
> On Tue, Nov 20, 2018 at 4:36 PM Stephen Farrell 
>> Aren't DNS answers RRsets? I may be wrong but I thought DNS
>> clients have to handle that anyway,
> 
> 
> Not really, because any of them is co-valid. 

Sure, in DNS terms.

> We currently permit >1 RR, but
> actually
> I suspect that it would be better to try to restrict this. 

Not sure we can and I suspect that'd raise DNS-folks' hackles,
but maybe I'm wrong.

>> That said, >1 ciphersuite wouldn't be so bad if that were
>> the only list per RData instance. Or maybe one could get
>> rid of it entirely via some conditional link to the set
>> of suites in the CH, not sure. (Or just go fully experimental
>> and say everyone doing esni for now has to use the same
>> suite all the time.)
>>
> 
> I've implemented this and did not find it to be a major obstacle.
> I do not think unnecessary duplication is a good tradeoff for
> such a trivial implementation complexity reduction.

Sure a list of ciphersuites isn't bad. But the current
design has a set of keys and a set of ciphersuites and a
set of extensions and a set of Rdata values in the RRset.

Surely we can collapse at least most of those down to
one list without too much of a problem. And as to trivial,
I'd bet a beer on such complexity being a source of bugs
every time.

> I don't see any advantage to choosing a suboptimal design, just
> based on it being Experimental.

All designs are suboptimal for someone:-)

>>> - get rid of not_before/not_after - I don't believe those
>>>   are useful given TTLs and they'll just lead to failures

>>>
>>> I'm mostly ambivalent on this, but on balance, I think these are useful,
>>> as they are not tied to potentially fragile DNS TTLs.
>>
>> If there were a justification offered for 'em I'd be
>> ok with it, but TBH, I'm not seeing it. And my main
>> experience of the similar dates on RRSIGs are that they
>> just break things and don't help.
> 
> 
> This has a totally different expiry behavior from RRSIGs, so I'm
> not sure that's that useful an analogy.

Disagree. They're both specifying a time window for DNS data.
Same problems will arise is my bet.

My main ask though for these time values is that their presence
be explicitly justified. That's missing now, and I suspect won't
be convincing, but maybe I'll turn out to be wrong.

>> Put another way, I
>> don't know what sensible code to write to decide between
>> not connecting or sending SNI in clear if one of these
>> dates is out of whack. (I'd be tempted to just ignore the
>> time constraints and try send the SNI encrypted instead.)
>>
> 
> You should connect with SNI in the clear.

As a generic browser, I guess so. As some specialised
privacy sensitive application, not sure. As a library
that could be used for either, I'm not clear there's a
good answer other than ignoring the artificial time
window and encrypting anyway.

>> And having to deploy a cron job equivalent for the DNS
>> data is an order of magnitude harder than not.
>>
> 
> Nothing stops you having an infinite expiry.

Nothing stops us deleting the useless dates:-)

> They will also use different keys for x and y, so they will
> have different records and can have different pad lengths.

All going well, yes. All not going well, the pad lengths
may get out of whack, exposing names.

>> How about rounding up to the nearest power of 2 that's
>> bigger than 5? (Or some such.)
> 
> I don't know what this means.

Ah sorry. I meant just take the length of the server
name and pad to the shortest of 32, 64, 128 or 256
octets. (Or some other breakpoints.) I'm sure we could
do some measurement so that an acceptable fraction of
names fit in the shortest bucket. (Didn't DKG do work
on that for DNS padding?)
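That bucket scheme is tiny in code (breakpoints as suggested above; the padding byte is illustrative):

```python
# Pad the server name up to the smallest of a few fixed sizes, so a name
# only reveals which bucket it falls in, not its exact length.
BUCKETS = [32, 64, 128, 256]

def pad_name(name: bytes) -> bytes:
    for size in BUCKETS:
        if len(name) <= size:
            return name + b"\x00" * (size - len(name))
    raise ValueError("name longer than largest bucket")

print(len(pad_name(b"example.com")))  # an 11-byte name lands in the 32 bucket
print(len(pad_name(b"a" * 100)))      # a 100-byte name lands in the 128 bucket
```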

>> (As a
>> nasty hack, you could even derive the padded_length
>> from the value of the key_share and fronters could just
>> keep generating shares until they get one that works:-)
> 
> I thought you were complaining about complexity

That's not complex, it's just waay hacky:-) It'd actually
be simpler for the client to just take some (e.g. low order)
bits of the key share as the padded_length. More work for
the people generating the key share yes, (they need to keep
iterating 'till they find a key share that works for their
preferred padding_length) but that's easy, offline, done by
fewer folks and removes a way of screwing up the ops.

All-in-all, while it's too hacky it's not complex at all.
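A sketch of that grinding idea (the bit layout and bucket sizes are invented, and os.urandom stands in for real key generation):

```python
import os

BUCKETS = [32, 64, 128, 256]

def padded_length_from_share(key_share: bytes) -> int:
    """Client side: read padded_length out of low-order bits of the share."""
    return BUCKETS[key_share[-1] & 0x03]  # low 2 bits pick a bucket

def grind_key_share(preferred: int) -> bytes:
    """Fronting operator, offline: keep generating shares until the
    low-order bits encode the preferred padding length (~4 tries avg)."""
    while True:
        share = os.urandom(32)  # stand-in for real keypair generation
        if padded_length_from_share(share) == preferred:
            return share

share = grind_key_share(128)
print(padded_length_from_share(share))  # every client derives 128 from it
```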

Cheers,
S.

> 
> -Ekr
> 
> 
>>> - I'm not convinced the checksum is useful, but it's not
>>>   hard to handle
>>> - (Possibly) drop the base64 encoding, make it DNS operator
>>>   friendly text (or else binary with a zonefile text format
>>>   defined in addition)

>>>
>>> We are likely to drop the base64 encoding.
>>
>> Ack.
>>
>> And just to note again - I suspect a bunch of the above would
>> be better sorted out as ancillary changes once a multi-CDN
>> proposal is figured out.
>>
>> 

Re: [TLS] ESNIKeys over complex

2018-11-20 Thread Eric Rescorla
On Tue, Nov 20, 2018 at 4:36 PM Stephen Farrell 
wrote:

>
> Hiya,
>
> On 20/11/2018 23:30, Eric Rescorla wrote:
> > On Tue, Nov 20, 2018 at 1:46 PM Stephen Farrell <
> stephen.farr...@cs.tcd.ie>
> > wrote:
> >
> >>
> >> Hiya,
> >>
> >> I've started to try code up an openssl version of this. [1]
> >> (Don't be scared though, it'll likely be taken over by a
> >> student in the new year:-)
> >>
> >
> > Thanks for your comments. Responses below.
>
> Ditto.
>
> >
> >> From doing that I think the ESNIKeys structure is too
> >> complicated and could do with a bunch of changes. The ones
> >> I'd argue for would be:
> >>
> >> - use a new RR, not TXT
> >>
> >
> > This is likely to happen.
> >
> >> - have values of ESNIKey each only encode a single option
> >>   (so no lists at all) since >1 value needs to be supported
> >>   at the DNS level anyway
> >>- that'd mean exactly one ciphersuite
> >>- exactly one key share
> >>
> >
> > I don't agree with this. It is going to lead to a lot of redundancy
> because
> > many
> > servers will support >1 cipher suite with the same key share. Moreover,
> from
> > an implementation perspective, supporting >1 RR would be quite a bit more
> > work.
>
> Aren't DNS answers RRsets? I may be wrong but I thought DNS
> clients have to handle that anyway,


Not really, because any of them is co-valid. We currently permit >1 RR, but
actually
I suspect that it would be better to try to restrict this. This would be
especially
true if we get rid of expiry, because then there would be no good reason to
have
>1 ESNIKeys record with a given version.


> and I'd expect use of
> RRsets to be a part of figuring out a multi-CDN story.
>

That's not at all obvious.


> That said, >1 ciphersuite wouldn't be so bad if that were
> the only list per RData instance. Or maybe one could get
> rid of it entirely via some conditional link to the set
> of suites in the CH, not sure. (Or just go fully experimental
> and say everyone doing esni for now has to use the same
> suite all the time.)
>

I've implemented this and did not find it to be a major obstacle.
I do not think unnecessary duplication is a good tradeoff for
such a trivial implementation complexity reduction.


> >> - no extensions (make an even newer RR or version-bump:-)
> >
> > Again, not a fan of this. It leads to redundancy.
>
> That's reasonable. OTOH, it's equally reasonable to say that
> we're dealing with an experimental draft and a future PS
> version could use another RRtype and add extensions if they
> end up needed.
>

I don't see any advantage to choosing a suboptimal design, just
based on it being Experimental.

Incidentally, there seems to be some uncertainty about the status.
I'm not quite sure why this is marked Experimental (I think it may
have just been a thinko on my part), and the chairs didn't
ask at call for acceptance time, so I'd encourage them to sort that
out.




> >
> >
> >> - get rid of not_before/not_after - I don't believe those
> >>   are useful given TTLs and they'll just lead to failures
> >>
> >
> > I'm mostly ambivalent on this, but on balance, I think these are useful,
> > as they are not tied to potentially fragile DNS TTLs.
>
> If there were a justification offered for 'em I'd be
> ok with it, but TBH, I'm not seeing it. And my main
> experience of the similar dates on RRSIGs are that they
> just break things and don't help.


This has a totally different expiry behavior from RRSIGs, so I'm
not sure that's that useful an analogy.



> Put another way, I
> don't know what sensible code to write to decide between
> not connecting or sending SNI in clear if one of these
> dates is out of whack. (I'd be tempted to just ignore the
> time constraints and try send the SNI encrypted instead.)
>

You should connect with SNI in the clear.


And having to deploy a cron job equivalent for the DNS
> data is an order of magnitude harder than not.
>

Nothing stops you having an infinite expiry.

>
> >
> >> - get rid of padded_length - just say everyone must
> >>   always use the max (260?) -
> >
> >
> > I'm not in favor of this. The CH is big enough as it is, and this has a
> > pretty big impact on that, especially for QUIC. There are plenty of
> > scenarios where the upper  limit is known and << 160.
>
> True, big CH's are a bit naff, but my (perhaps wrong)
> assumption was that nobody cared since the F5 bug.


This has nothing to do with the F5 bug. It's about not exceeding one
packet in the QUIC CH.


> It
> seems a bit wrong though to have every domain that's
> behind the same front have to publish this.


They also have to publish the key, so I don't really see a problem.


> I'm also
> not sure it'll work well if we ever end up with cases
> where domains A and B both use fronts/CDNs x and y and
> can't figure out a good value as x prefers 132 and y
> prefers 260.
>

They will also use different keys for x and y, so they will
have different records and can have different pad lengths.


How about rounding up to 

Re: [TLS] ESNIKeys over complex

2018-11-20 Thread Stephen Farrell

Hiya,

On 20/11/2018 23:30, Eric Rescorla wrote:
> On Tue, Nov 20, 2018 at 1:46 PM Stephen Farrell 
> wrote:
> 
>>
>> Hiya,
>>
>> I've started to try code up an openssl version of this. [1]
>> (Don't be scared though, it'll likely be taken over by a
>> student in the new year:-)
>>
> 
> Thanks for your comments. Responses below.

Ditto.

> 
>> From doing that I think the ESNIKeys structure is too
>> complicated and could do with a bunch of changes. The ones
>> I'd argue for would be:
>>
>> - use a new RR, not TXT
>>
> 
> This is likely to happen.
> 
>> - have values of ESNIKey each only encode a single option
>>   (so no lists at all) since >1 value needs to be supported
>>   at the DNS level anyway
>>- that'd mean exactly one ciphersuite
>>- exactly one key share
>>
> 
> I don't agree with this. It is going to lead to a lot of redundancy because
> many
> servers will support >1 cipher suite with the same key share. Moreover, from
> an implementation perspective, supporting >1 RR would be quite a bit more
> work.

Aren't DNS answers RRsets? I may be wrong but I thought DNS
clients have to handle that anyway, and I'd expect use of
RRsets to be a part of figuring out a multi-CDN story.

That said, >1 ciphersuite wouldn't be so bad if that were
the only list per RData instance. Or maybe one could get
rid of it entirely via some conditional link to the set
of suites in the CH, not sure. (Or just go fully experimental
and say everyone doing esni for now has to use the same
suite all the time.)

> 
>> - no extensions (make an even newer RR or version-bump:-)
>>
> 
> Again, not a fan of this. It leads to redundancy.

That's reasonable. OTOH, it's equally reasonable to say that
we're dealing with an experimental draft and a future PS
version could use another RRtype and add extensions if they
end up needed.

> 
> 
>> - get rid of not_before/not_after - I don't believe those
>>   are useful given TTLs and they'll just lead to failures
>>
> 
> I'm mostly ambivalent on this, but on balance, I think these are useful,
> as they are not tied to potentially fragile DNS TTLs.

If there were a justification offered for 'em I'd be
ok with it, but TBH, I'm not seeing it. And my main
experience of the similar dates on RRSIGs are that they
just break things and don't help. Put another way, I
don't know what sensible code to write to decide between
not connecting or sending SNI in clear if one of these
dates is out of whack. (I'd be tempted to just ignore the
time constraints and try send the SNI encrypted instead.)
And having to deploy a cron job equivalent for the DNS
data is an order of magnitude harder than not.

> 
>> - get rid of padded_length - just say everyone must
>>   always use the max (260?) -
> 
> 
> I'm not in favor of this. The CH is big enough as it is, and this has a
> pretty big impact on that, especially for QUIC. There are plenty of
> scenarios where the upper limit is known and << 160.

True, big CHs are a bit naff, but my (perhaps wrong)
assumption was that nobody cared since the F5 bug. It
seems a bit wrong though to have every domain that's
behind the same front have to publish this. I'm also
not sure it'll work well if we ever end up with cases
where domains A and B both use fronts/CDNs x and y and
can't figure out a good value as x prefers 132 and y
prefers 260.

How about rounding up to the nearest power of 2 that's
bigger than 5? (Or some such.) Very long names might
lose some protection, but I'm not sure that's a big
deal and one can likely just register a shorter name
for applications using ESNI.
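The power-of-2 rounding idea above might look like the following sketch; the 32-byte floor and the 260-byte cap are my own illustrative choices, not values from the draft:

```python
# Round the SNI padding target up to the next power of two, with an
# assumed floor so short names still get some cover, and a cap at the
# draft's suggested maximum. Floor and cap values are illustrative.
def padded_sni_length(name_len, floor=32, cap=260):
    n = max(name_len, floor)
    p = 1
    while p < n:
        p <<= 1
    return min(p, cap)
```

This removes the need for every domain behind a front to coordinate on one published value, at the cost of leaking a coarse length bucket for very long names.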

> 
> 
>>   that needs to be the same
>>   for all encrypted sni values anyway so depending on
>>   'em all to co-ordinate the same value in DNS seems
>>   fragile
>>
> 
> It only has to be the same for all the ones in the anonymity set, and they
> already need to coordinate on the key.

Saying that every key share in DNS needs to be published
with the same padded_length would be ok actually. (As a
nasty hack, you could even derive the padded_length
from the value of the key_share and fronters could just
keep generating shares until they get one that works:-)
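The "nasty hack" above can be sketched as follows. The derivation (hash of the share, mod a small set of lengths) is entirely made up here; the point is only that publishing the share then fixes the padding, and a fronting server can grind shares until the derived value matches:

```python
import hashlib
import os

# Hypothetical derivation of padded_length from the key_share bytes.
# The candidate lengths and the hash-mod rule are illustrative only.
LENGTHS = (132, 164, 196, 228, 260)

def derive_padded_length(key_share):
    h = hashlib.sha256(key_share).digest()
    return LENGTHS[h[0] % len(LENGTHS)]

def generate_share_for_length(target, size=32):
    """Keep drawing random shares until one derives the target length."""
    while True:
        share = os.urandom(size)
        if derive_padded_length(share) == target:
            return share
```

With five candidate lengths this costs ~5 key generations on average, which is cheap for an operator but clearly a hack rather than a design.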

>> - I'm not convinced the checksum is useful, but it's not
>>   hard to handle
>> - (Possibly) drop the base64 encoding, make it DNS operator
>>   friendly text (or else binary with a zonefile text format
>>   defined in addition)
>>
> 
> We are likely to drop the base64 encoding.

Ack.

And just to note again - I suspect a bunch of the above would
be better sorted out as ancillary changes once a multi-CDN
proposal is figured out.

Cheers,
S.

> 
> -Ekr
> 
> 
> 
>> [1] https://github.com/sftcd/openssl/tree/master/esnistuff
>> ___
>> TLS mailing list
>> TLS@ietf.org
>> https://www.ietf.org/mailman/listinfo/tls
>>
> 



Re: [TLS] ESNIKeys over complex

2018-11-20 Thread Eric Rescorla
On Tue, Nov 20, 2018 at 1:46 PM Stephen Farrell 
wrote:

>
> Hiya,
>
> I've started to try code up an openssl version of this. [1]
> (Don't be scared though, it'll likely be taken over by a
> student in the new year:-)
>

Thanks for your comments. Responses below.

> From doing that I think the ESNIKeys structure is too
> complicated and could do with a bunch of changes. The ones
> I'd argue for would be:
>
> - use a new RR, not TXT
>

This is likely to happen.

> - have values of ESNIKey each only encode a single option
>   (so no lists at all) since >1 value needs to be supported
>   at the DNS level anyway
>    - that'd mean exactly one ciphersuite
>    - exactly one key share
>

I don't agree with this. It is going to lead to a lot of redundancy
because many servers will support >1 cipher suite with the same key
share. Moreover, from an implementation perspective, supporting >1 RR
would be quite a bit more work.

>    - no extensions (make an even newer RR or version-bump:-)
>

Again, not a fan of this. It leads to redundancy.


> - get rid of not_before/not_after - I don't believe those
>   are useful given TTLs and they'll just lead to failures
>

I'm mostly ambivalent on this, but on balance, I think these are useful,
as they are not tied to potentially fragile DNS TTLs.


> - get rid of padded_length - just say everyone must
>   always use the max (260?) -


I'm not in favor of this. The CH is big enough as it is, and this has a
pretty big impact on that, especially for QUIC. There are plenty of
scenarios where the upper limit is known and << 160.


>   that needs to be the same
>   for all encrypted sni values anyway so depending on
>   'em all to co-ordinate the same value in DNS seems
>   fragile
>

It only has to be the same for all the ones in the anonymity set, and they
already need to coordinate on the key.




> - I'm not convinced the checksum is useful, but it's not
>   hard to handle
> - (Possibly) drop the base64 encoding, make it DNS operator
>   friendly text (or else binary with a zonefile text format
>   defined in addition)
>

We are likely to drop the base64 encoding.

-Ekr



> [1] https://github.com/sftcd/openssl/tree/master/esnistuff


[TLS] ESNIKeys over complex

2018-11-20 Thread Stephen Farrell

Hiya,

I've started to try code up an openssl version of this. [1]
(Don't be scared though, it'll likely be taken over by a
student in the new year:-)

From doing that I think the ESNIKeys structure is too
complicated and could do with a bunch of changes. The ones
I'd argue for would be:

- use a new RR, not TXT
- have values of ESNIKey each only encode a single option
  (so no lists at all) since >1 value needs to be supported
  at the DNS level anyway
   - that'd mean exactly one ciphersuite
   - exactly one key share
   - no extensions (make an even newer RR or version-bump:-)
- get rid of not_before/not_after - I don't believe those
  are useful given TTLs and they'll just lead to failures
- get rid of padded_length - just say everyone must
  always use the max (260?) - that needs to be the same
  for all encrypted sni values anyway so depending on
  'em all to co-ordinate the same value in DNS seems
  fragile
- I'm not convinced the checksum is useful, but it's not
  hard to handle
- (Possibly) drop the base64 encoding, make it DNS operator
  friendly text (or else binary with a zonefile text format
  defined in addition)
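A minimal encoding of the simplified, single-option record argued for above might look like this sketch. The field layout (u16 version, u16 ciphersuite, u16 named group, length-prefixed key) is my guess at TLS-style conventions, not anything from the draft, and the checksum/padding fields are deliberately absent per the proposal:

```python
import struct

# Hypothetical wire format for a single-option ESNIKeys RData:
# no lists, no extensions, no validity window, no padded_length.
def encode_esni_key(version, ciphersuite, group, public_key):
    return struct.pack("!HHHH", version, ciphersuite,
                       group, len(public_key)) + public_key

def decode_esni_key(blob):
    version, suite, group, klen = struct.unpack_from("!HHHH", blob)
    key = blob[8:8 + klen]
    assert len(key) == klen, "truncated key share"
    return version, suite, group, key
```

The parsing code for this is a dozen lines rather than the nested list handling the current structure needs, which is roughly the implementation-complexity argument being made.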

I'm fine that such changes don't get done for a while (so
I or my student get time to try to make stuff work:-) and
it might in any case take a while to figure out how to
handle the multi-CDN use-case discussed in Bangkok, which
I guess would also affect this structure some, but I wanted
to send this to the list while it's fresh for me.

Cheers,
S.

[1] https://github.com/sftcd/openssl/tree/master/esnistuff

