Re: [TLS] Certificate compression draft

2017-03-06 Thread Vlad Krasnov
I don't know about a neutral dictionary, but simply compressing the Cloudflare cert 
using the Google cert as a dictionary gives an additional 6% using brotli -15.
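If anyone wants to reproduce that kind of measurement, here is a rough sketch using
zlib's preset-dictionary support (file names are placeholders; brotli's
custom-dictionary mode needs a third-party binding, so plain zlib stands in here):

import zlib

def deflate_len(data, zdict=b""):
    # Compressed size of `data`, optionally primed with a preset dictionary.
    c = zlib.compressobj(level=9, zdict=zdict) if zdict else zlib.compressobj(level=9)
    return len(c.compress(data) + c.flush())

# Placeholder file names -- substitute any two DER-encoded certificates.
target = open("cloudflare_leaf.der", "rb").read()
dictionary = open("google_leaf.der", "rb").read()

plain = deflate_len(target)
primed = deflate_len(target, zdict=dictionary)
print("no dict: %d bytes, with dict: %d bytes (%.1f%% additional saving)"
      % (plain, primed, 100.0 * (plain - primed) / plain))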

I would rather have a biased dictionary than none at all :)

Cheers,
Vlad

> On Mar 6, 2017, at 4:38 PM, Martin Thomson <martin.thom...@gmail.com> wrote:
> 
> Seems like you might get some traction with adding www. .com, some DN
> fields (CN=, O=, C=), common OIDs, with some OIDs attached to values
> (like key usage and signature algorithm).  Most of that is relatively
> short though.
> 
>> On 7 March 2017 at 11:15, Victor Vasiliev <vasi...@google.com> wrote:
>> Hi Vlad,
>> 
>> This is still an open issue:
>> https://github.com/ghedo/tls-certificate-compression/issues/2
>> 
>> The problem here is creating a dictionary that is both neutral with respect
>> to
>> the certificate's issuing authority, and actually has a noticeable effect.
>> So
>> far my personal attempts at making such a dictionary have not been very
>> successful, but this might change.  Even if we get a dictionary, I do not
>> expect the effect to be large compared to the effect of just compressing the
>> chain in the first place.
>> 
>>  -- Victor.
>> 
>>> On Mon, Mar 6, 2017 at 6:32 PM, Vlad Krasnov <v...@cloudflare.com> wrote:
>>> 
>>> Hi Victor,
>>> 
>>> Have you considered creating a common dictionary, similarly to what SPDY
>>> did for header compression?
>>> 
>>> Cheers,
>>> Vlad
>>> 
>>> 
>>> On Mar 6, 2017, at 3:23 PM, Victor Vasiliev <vasi...@google.com> wrote:
>>> 
>>> Hi Martin,
>>> 
>>> I've measured the effect of compression on a corpus of popular website
>>> certificate chains I had lying around (Alexa Top 100k from a few years
>>> ago),
>>> and the effect seems to be about -30% of size at the median and -48% at
>>> 95th
>>> percentile (with Brotli, subtract 3-5% for zlib).
>>> 
>>> I think the most dramatic effect from the compression is observed for the
>>> certificates with a lot of SNI values, which is not uncommon.
>>> 
>>>  -- Victor.
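(A rough way to reproduce the corpus measurement quoted above, assuming a directory
of DER-encoded chain files and the third-party brotli binding:)

import glob, statistics, zlib
import brotli   # third-party binding: pip install brotli

def saving(data, compress):
    return 100.0 * (1 - len(compress(data)) / len(data))

# Placeholder path: one DER-encoded certificate chain per file.
chains = [open(p, "rb").read() for p in glob.glob("chains/*.der")]

for name, fn in [("zlib",   lambda d: zlib.compress(d, 9)),
                 ("brotli", lambda d: brotli.compress(d, quality=11))]:
    savings = sorted(saving(c, fn) for c in chains)
    median = statistics.median(savings)
    p95 = savings[int(0.95 * (len(savings) - 1))]
    print("%s: median %.0f%% smaller, 95th percentile %.0f%% smaller" % (name, median, p95))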
>>> 
>>> On Mon, Mar 6, 2017 at 6:06 PM, Martin Thomson <martin.thom...@gmail.com>
>>> wrote:
>>>> 
>>>> Hi Victor,
>>>> 
>>>> Do you have any evidence to suggest that this reduces size in any
>>>> meaningful way?  Certificates tend to include both repetitious values
>>>> (OIDs), and non-repetitious values (keys).
>>>> 
>>>>> On 7 March 2017 at 09:58, Victor Vasiliev <vasi...@google.com> wrote:
>>>>> Certificate compression has been discussed on this list briefly before,
>>>>> and
>>>>> there was some interest in at least considering a draft for it.  The
>>>>> draft
>>>>> now
>>>>> exists (co-authored by Alessandro and myself), and it can be found at:
>>>>> 
>>>>> 
>>>>> https://datatracker.ietf.org/doc/draft-ghedini-tls-certificate-compression/
>>>>>  [ GitHub repo: https://github.com/ghedo/tls-certificate-compression ]
>>>>> 
>>>>> The proposed scheme allows a client and a server to negotiate a
>>>>> compression
>>>>> algorithm for the server certificate message.  The scheme is purely
>>>>> opt-in
>>>>> on
>>>>> both sides.  The current version of the draft defines zlib and Brotli
>>>>> compression, both of which are well-specified formats with an existing
>>>>> deployment experience.
>>>>> 
>>>>> There are multiple motivations to compress certificates.  The first one
>>>>> is
>>>>> that
>>>>> the smaller they are, the faster they arrive (both due to the transfer
>>>>> time
>>>>> and
>>>>> a decreased chance of packet loss).
>>>>> 
>>>>> The second, and more interesting one, is that having small certificates
>>>>> is
>>>>> important for QUIC in order to achieve 1-RTT handshakes while limiting
>>>>> the
>>>>> opportunities for amplification attacks.  Currently, TLS 1.3 over TCP
>>>>> without
>>>>> client auth looks like this:
>>>>> 
>>>>>  Round trip 1: client sends SYN, server sends SYN ACK
>>>>> Here, the server provides its own random value which client will have to echo in the future.

Re: [TLS] Certificate compression draft

2017-03-06 Thread Vlad Krasnov
Hi Victor,

Have you considered creating a common dictionary, similarly to what SPDY did 
for header compression?
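For reference, SPDY primed zlib with a fixed, spec-defined dictionary of common header
strings. A certificate analogue might look roughly like this; the dictionary contents
below are illustrative guesses, not taken from any draft:

import zlib

# Illustrative shared dictionary: DER encodings of a few very common OIDs plus
# common ASCII substrings. zlib matches against the end of the dictionary first,
# so the most frequent strings should go last.
CERT_DICT = (
    b"\x06\x03\x55\x04\x06"    # OID 2.5.4.6   countryName
    b"\x06\x03\x55\x04\x0a"    # OID 2.5.4.10  organizationName
    b"\x06\x03\x55\x04\x03"    # OID 2.5.4.3   commonName
    b"\x06\x03\x55\x1d\x0f"    # OID 2.5.29.15 keyUsage
    b"\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01\x01\x0b"  # sha256WithRSAEncryption
    b"http://" b"https://" b"www." b".com"
)

def compress_cert(cert_der):
    c = zlib.compressobj(level=9, zdict=CERT_DICT)
    return c.compress(cert_der) + c.flush()

def decompress_cert(blob):
    d = zlib.decompressobj(zdict=CERT_DICT)
    return d.decompress(blob) + d.flush()

Both endpoints would have to ship byte-identical copies of the dictionary, so its
contents would need to be pinned in the spec.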

Cheers,
Vlad


> On Mar 6, 2017, at 3:23 PM, Victor Vasiliev  wrote:
> 
> Hi Martin,
> 
> I've measured the effect of compression on a corpus of popular website
> certificate chains I had lying around (Alexa Top 100k from a few years ago),
> and the effect seems to be about -30% of size at the median and -48% at 95th
> percentile (with Brotli, subtract 3-5% for zlib).
> 
> I think the most dramatic effect from the compression is observed for the
> certificates with a lot of SNI values, which is not uncommon.
> 
>   -- Victor.
> 
> On Mon, Mar 6, 2017 at 6:06 PM, Martin Thomson wrote:
> Hi Victor,
> 
> Do you have any evidence to suggest that this reduces size in any
> meaningful way?  Certificates tend to include both repetitious values
> (OIDs), and non-repetitious values (keys).
> 
> On 7 March 2017 at 09:58, Victor Vasiliev wrote:
> > Certificate compression has been discussed on this list briefly before, and
> > there was some interest in at least considering a draft for it.  The draft
> > now
> > exists (co-authored by Alessandro and myself), and it can be found at:
> >
> > https://datatracker.ietf.org/doc/draft-ghedini-tls-certificate-compression/
> >   [ GitHub repo: https://github.com/ghedo/tls-certificate-compression ]
> >
> > The proposed scheme allows a client and a server to negotiate a compression
> > algorithm for the server certificate message.  The scheme is purely opt-in
> > on
> > both sides.  The current version of the draft defines zlib and Brotli
> > compression, both of which are well-specified formats with an existing
> > deployment experience.
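(In server pseudo-code, the opt-in negotiation described above amounts to roughly the
following sketch; the algorithm names and fallback behaviour are illustrative, not the
draft's wire format:)

import zlib
import brotli   # third-party binding

# Illustrative algorithm table; not the draft's codepoints or framing.
COMPRESSORS = {
    "zlib":   lambda msg: zlib.compress(msg, 9),
    "brotli": lambda msg: brotli.compress(msg, quality=11),
}

def choose_certificate_encoding(client_algorithms, server_preference, cert_msg):
    # Opt-in on both sides: compress only with an algorithm both peers support;
    # otherwise fall back to the ordinary Certificate message.
    for alg in server_preference:
        if alg in client_algorithms and alg in COMPRESSORS:
            return alg, COMPRESSORS[alg](cert_msg)
    return None, cert_msg

alg, payload = choose_certificate_encoding({"brotli"}, ["brotli", "zlib"], b"\x0b" + bytes(2000))
print(alg, len(payload))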
> >
> > There are multiple motivations to compress certificates.  The first one is
> > that
> > the smaller they are, the faster they arrive (both due to the transfer time
> > and
> > a decreased chance of packet loss).
> >
> > The second, and more interesting one, is that having small certificates is
> > important for QUIC in order to achieve 1-RTT handshakes while limiting the
> > opportunities for amplification attacks.  Currently, TLS 1.3 over TCP
> > without
> > client auth looks like this:
> >
> >   Round trip 1: client sends SYN, server sends SYN ACK
> > Here, the server provides its own random value which client will
> > have to echo in the future.
> >   Round trip 2: client sends ACK, ClientHello, server sends
> > ServerHello...Finished
> > Here, ACK confirms to server that the client can receive packets and is
> > not
> > just spoofing its source address.  Server can send the entire
> > ServerHello to
> > Finished flight.
> >
> > In QUIC, we are trying to merge those two rounds into one.  The problem,
> > however, is that the ClientHello is one packet, and ServerHello...Finished
> > can
> > span multiple packets, meaning that this could be used as an amplification
> > attack vector since the client's address is not yet authenticated at this
> > point.
> > In order to address this, the server has to limit the number of packets it
> > sends
> > during the first flight (i.e. ServerHello...Finished flight).  Since
> > certificates make up the majority of data in that flight, making them
> > smaller
> > can push them under the limit and save a round-trip.
> >
> > Cheers,
> >   Victor.
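To make the amplification argument concrete, here is back-of-the-envelope arithmetic
in Python; every number below is an illustrative assumption, not a value from the
draft or from QUIC:

CLIENT_FIRST_FLIGHT = 1280        # assumed padded ClientHello size on the wire
AMPLIFICATION_FACTOR = 3          # assumed limit before the client address is validated
budget = CLIENT_FIRST_FLIGHT * AMPLIFICATION_FACTOR

other_handshake_bytes = 1000      # ServerHello, EncryptedExtensions, CertificateVerify, Finished
chain_uncompressed = 4000         # an assumed typical chain
chain_compressed = int(chain_uncompressed * 0.6)   # roughly -40%, in line with the numbers above

for label, chain in [("uncompressed", chain_uncompressed), ("compressed", chain_compressed)]:
    total = other_handshake_bytes + chain
    verdict = "fits in the first flight" if total <= budget else "exceeds the limit (extra round trip)"
    print("%-12s %5d bytes vs budget %d -> %s" % (label, total, budget, verdict))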
> >

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] record layer limits of TLS1.3

2016-11-24 Thread Vlad Krasnov
A) `openssl speed` does not measure actual TLS record performance (nonce 
construction, additional data, etc.), but rather just the speed of the main 
encryption loop.

B) Still, I agree with Yoav. In my experience, the difference in throughput between 
16K records and 64K records is negligible, as is the difference in network overhead. On 
the other hand, using larger records increases the risk of head-of-line (HoL) blocking.
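A quick way to sanity-check (B) while also addressing (A), i.e. including per-record
nonce construction and additional data, is the sketch below; it uses the third-party
`cryptography` package, and widens the length field in the additional data only so
that a 64 KiB record can be expressed:

import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
iv = os.urandom(12)

def seal_throughput(record_len, total_bytes):
    # Encrypt `total_bytes` in records of `record_len`, with per-record nonce
    # construction and additional data included; returns MB/s.
    data = os.urandom(record_len)
    aad = b"\x17\x03\x03" + (record_len + 16).to_bytes(4, "big")  # 4 bytes: 2 can't hold 64 KiB
    records = total_bytes // record_len
    start = time.perf_counter()
    for seq in range(records):
        nonce = (int.from_bytes(iv, "big") ^ seq).to_bytes(12, "big")
        aead.encrypt(nonce, data, aad)
    return records * record_len / (time.perf_counter() - start) / 1e6

for size in (16 * 1024, 64 * 1024):
    print("%2d KiB records: %.0f MB/s" % (size // 1024, seal_throughput(size, 256 * 1024 * 1024)))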

Cheers,
Vlad

> On Nov 24, 2016, at 6:16 AM, Yoav Nir  wrote:
> 
> 
>> On 24 Nov 2016, at 15:47, Hubert Kario  wrote:
>> 
>> On Wednesday, 23 November 2016 10:50:37 CET Yoav Nir wrote:
>>> On 23 Nov 2016, at 10:30, Nikos Mavrogiannopoulos wrote:
>>>> On Wed, 2016-11-23 at 10:05 +0200, Yoav Nir wrote:
>>>>> Hi, Nikos
>>>>> 
>>>>> On 23 Nov 2016, at 9:06, Nikos Mavrogiannopoulos wrote:
>>>> That to my understanding is a way to reduce
>>>> latency in contrast to cpu costs. An increase to packet size targets
>>>> bandwidth rather than latency (speed).
>>> 
>>> Sure, but running ‘openssl speed’ on either aes-128-cbc or hmac or sha256
>>> (there’s no test for AES-GCM or ChaCha-poly) you get smallish differences
>>> in terms of kilobytes per second between 1024-byte buffers and 8192-byte
>>> buffers. And the difference is going to be even smaller going to 16KB buffers,
>>> let alone 64KB buffers.
>> 
>> this is not a valid comparison. openssl speed doesn't use the hardware
>> accelerated codepath
>> 
>> you need to use `openssl speed -evp aes-128-gcm` to see it (and yes, 
>> aes-gcm and chacha20-poly1305 are supported then)
>> 
>> What I see is nearly a 1GB/s throughput increase between 1024 and 8192 byte 
>> blocks for AES-GCM:
>> 
>> type                   16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
>> aes-128-gcm          614979.91k  1388369.31k  2702645.76k  3997320.76k  4932512.79k
>> 
>> While indeed, for chacha20 there's little to no difference at the high end:
>> type                   16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
>> chacha20-poly1305    242518.50k   514356.72k  1035220.57k  1868933.46k  1993609.50k  1997438.98k
>> 
>> (aes-128-gcm performance from openssl-1.0.2j-1.fc24.x86_64, 
>> chacha20-poly1305 from openssl master, both on 
>> Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz)
> 
> Cool. So you got a 23% improvement, and I got an 18% improvement for AES-GCM. 
> I still claim (but cannot prove without modifying openssl code - maybe I’ll do 
> that over the weekend) that the jump from 16KB to 64KB will be far, far less 
> pronounced.
> 
> Yoav
> 

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Confirming consensus: TLS1.3->TLS*

2016-11-19 Thread Vlad Krasnov
 "Then why is the library still
> called OpenSSL?"

All those arguments show a basic confusion about what TLS is. Version numbers won't 
help solve that.

Only going back to using the SSL name might.
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Confirming consensus: TLS1.3->TLS*

2016-11-18 Thread Vlad Krasnov

> People changing browser settings?  Really?

I was thinking about site admins.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Confirming consensus: TLS1.3->TLS*

2016-11-18 Thread Vlad Krasnov

> Well, for example, your website has twice as many mentions of SSL as TLS.  
> Why?  Why don't you have a product called "Universal TLS"? The ratio is the 
> same for letsencrypt.org. TLS 1.0 had already existed for more than a decade 
> before either place existed.  BTW, at google, it's 20:1, and that's just 
> google, not the web.  (Counts were done in the obvious dumb way 
> "site:letsencrypt.org tls" and then with "ssl" and noting the summary stats 
> at the top of the return results.) 
> 
> People are confused because we treat them as the same thing. 

Well, if the result of the confusion were people *disabling* TLS 1.* in 
favor of SSL 3.0, they would discover very quickly what TLS is, and why no 
major browser works for them.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Confirming consensus: TLS1.3->TLS*

2016-11-18 Thread Vlad Krasnov
First: where can we see the study that proves people are indeed confused about 
whether TLS > SSL? I don’t buy into that. Are people really confused after 17 years 
of TLS?

Second: I don’t think the changes between TLS 1.3 and TLS 1.2 should be considered 
major: just look at the difference between HTTP/2 and HTTP/1 - those are completely 
different protocols.

Most of TLS 1.3 could be implemented on top of TLS 1.2 with extensions (which is how 
it actually looks on the wire, considering that even client_version stays the same).

Third: There was already *some* marketing of TLS 1.3, and changing the name now would 
just tell the public the WG is confused and doesn’t know what it’s doing.

I vote for TLS 1.3.


> On 18 Nov 2016, at 10:07, D. J. Bernstein  wrote:
> 
> The largest number of users have the least amount of information, and
> they see version numbers as part of various user interfaces. It's clear
> how they will be inclined to guess 3>1.3>1.2>1.1>1.0 (very bad) but
> 4>3>1.2>1.1>1.0 (eliminating the problem as soon as 4 is supported).
> 
> We've all heard anecdotes of 3>1.2>1.1>1.0 disasters. Even if this type
> of disaster happens to only 1% of site administrators, it strikes me as
> more important for security than any of the arguments that have been
> given for "TLS 1.3". So I would prefer "TLS 4".
> 
> Yes, sure, we can try to educate people that TLS>SSL (but then we're
> fighting against tons of TLS=SSL messaging), or educate them to use
> server-testing tools (so that they can fix the problem afterwards---but
> I wonder whether anyone has analyzed the damage caused by running SSLv3
> for a little while before switching the same keys to a newer protocol),
> and hope that this education fights against 3>1.3 more effectively than
> it fought against 3>1.2. But it's better to switch to a less error-prone
> interface that doesn't require additional education in the first place.
> 
> ---Dan
> 

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Version negotiation, take two

2016-09-20 Thread Vlad Krasnov
Another concern here is that, in order to reduce memory footprint, some 
implementations will probably introduce bugs by trying to optimize and infer 
the version by observing the cipher suites in the ClientHello instead of waiting 
for the extension.
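For what it's worth, parsing far enough to see the extension list is not much code;
a rough sketch (the extension codepoint is an assumption here, and the sample
ClientHello body is synthetic):

SUPPORTED_VERSIONS = 43   # assumed codepoint for the version-negotiation extension

def clienthello_extension_types(body):
    # Walk a ClientHello body (after the 4-byte handshake header) and collect
    # extension types, instead of branching on client_version or cipher suites.
    off = 2 + 32                                          # legacy version + random
    off += 1 + body[off]                                  # session_id
    off += 2 + int.from_bytes(body[off:off + 2], "big")   # cipher_suites
    off += 1 + body[off]                                  # compression_methods
    types = set()
    if off < len(body):
        end = off + 2 + int.from_bytes(body[off:off + 2], "big")
        off += 2
        while off + 4 <= end:
            ext_type = int.from_bytes(body[off:off + 2], "big")
            ext_len = int.from_bytes(body[off + 2:off + 4], "big")
            types.add(ext_type)
            off += 4 + ext_len
    return types

# Synthetic ClientHello body: TLS 1.2 legacy version, empty session id, one
# cipher suite, null compression, and a single (assumed) version extension.
ext = SUPPORTED_VERSIONS.to_bytes(2, "big") + (3).to_bytes(2, "big") + b"\x02\x03\x04"
hello_body = (b"\x03\x03" + bytes(32) + b"\x00"
              + b"\x00\x02\x13\x01" + b"\x01\x00"
              + len(ext).to_bytes(2, "big") + ext)
print(SUPPORTED_VERSIONS in clienthello_extension_types(hello_body))   # True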

Cheers,
Vlad

> On 19 Sep 2016, at 03:42, Hubert Kario  wrote:
> 
> On Saturday, 17 September 2016 01:04:07 CEST David Benjamin wrote:
>> On Fri, Sep 16, 2016 at 4:29 PM Andrei Popov wrote:
>>> At the very least, if version is negotiated as extension it must be the
>>> very first extension advertised. I don't think it's a good idea to impose
>>> extension ordering requirements.
>> 
>> Agreed. If we're concerned with the order, I suppose there are other options like
>> smuggling them in the front of the cipher list or hacky things like that. :-)
>> But using extensions is cleaner, and still perfectly deployable.
>>> Some implementations out there rely on the fact that they can read the
>>> first two bytes of the client hello, and take the appropriate code path on
>>> the spot.  Yes, these implementations (Windows TLS stack included) will need to do
>>> more elaborate/slightly slower pre-parsing if we use TLS version
>>> negotiation via TLS extension(s). Not something I like, but can be done.
>> 
>> 
>> TLS already does not strictly permit sniff-based implementations like this.
>> A handshake message may be fragmented pathologically or even interspersed
>> with warning alerts. It's doable if you reject such fragmentations (no one
>> would send a ClientHello this way...), but you need to be careful because
>> this fragmentation does not figure into the handshake transcript. In
>> particular, you cannot have an else clause in your dispatch. The dispatcher
>> must reject anything it can't definitively resolve rather than blindly
>> forward to your pre-TLS-1.3 implementation.
> 
> I don't see how that prevents a streaming implementation - warning alerts are 
> something you can handle in the dispatcher (though I'm not sure why it's 
> something you should worry about /before/ the first client hello is received), then to 
> the specific implementation you pass the buffer with the current record and the 
> socket, the first of which may be empty if the record boundary landed right 
> after the client_version
> 
>> CVE-2014-3511 is an example of OpenSSL's 1.0.x sniff-based implementation
>> going wrong (OpenSSL 1.1.x is no longer sniff-based). It is a particularly
>> silly instance, but it's the sort of failure mode you can get.
>> 
>> Further, with the current trajectory, TLS 1.3 servers will need to do
>> version-negotiation based on extensions anyway. All the various
>> implementors have been using this "draft_version" extension to experiment
>> with TLS 1.3. (draft_version is really just a worse version of this
>> proposal.)
>> https://github.com/tlswg/tls13-spec/wiki/Implementations#version-negotiation 
>> 
> 
> for experimental implementations memory usage is not such a big problem; that's 
> not the case for everybody
> 
>> I don't think anyone has actually enabled client code by default yet, but
>> once anyone does, servers will need to process extensions for versioning
>> until draft TLS 1.3 clients are out of the ecosystem. This seems the worst
>> of both worlds. We'll have extensions in versioning and an undeployable
>> protocol. I think we should go for the latter and, if we must have the
>> former, at least do it properly.
> 
> hmm, what if we did define both mechanisms? so that clients that worry about 
> compatibility with the broken servers can advertise TLSv1.3 through extension 
> while ones that don't, advertise through client_version?
> 
> similar to how secure renegotiation indication works
> 
> -- 
> Regards,
> Hubert Kario
> Senior Quality Engineer, QE BaseOS Security team
> Web: www.cz.redhat.com 
> Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls