There was a general sentiment (during the design phase of QUIC) that it was
a mistake to ever have supported plain-text HTTP on the web, and that the
world would have been better off with HTTPS everywhere (which much of the
industry has since moved toward).

However, if you look back to those early days at Netscape, you'd also find
that the cost of encryption was relatively high, and hence it was common to
"offload" encryption to high-performance specialized hardware.  Given that
extra processing time and cost, supporting plain HTTP was probably a good
decision at the time.  Today that is far less of a problem: with ECC we do
a pile of crypto on cell phones, and the server-side cost of crypto is a
minuscule component of performance.

The original Google QUIC crypto design/implementation also sought to remove
attack surfaces.  Any potential for downgrading a connection to "auth only"
would have had to be carefully handled. There were also some
"state-sponsored monitors" that would/could/can "encourage" the use of
unencrypted elements of any protocol.  Moreover, some early
discussion of QUIC's crypto demonstrated that TLS carried legacy baggage
that could be dropped (or at least minimized).  That simplification further
reduced attack surfaces, and it is part of what drove QUIC's crypto
development, in addition to the desire for 0-RTT negotiation (greatly
reducing connection-setup time).

As an example of "extra attack surfaces," there was historically a
significant difference between original session establishment and session
resumption, which meant that both portions of the protocol had to be
defended.  Going back to the "then final" approval of an early version
of SSL (the predecessor of TLS), I recall sadly seeing a downgrade attack
being discovered during the very last round of review by an array
of cryptographers.  We were glad it was caught... but then several years
later, I heard about a formal analysis of TLS (coming out of Stanford?)...
wherein they found... sadly... a downgrade attack, which (this time, as I
recall) was instead <sigh> part of session resumption <the other place to
defend!>.

Crypto is hard.  We should be happy when it works and the implementations
are correct (secure?).  It is IMO a bad idea to add "features" that
give us more ways to be less secure. Each new feature is an attack
surface.

YMMV,

Jim



On Thu, Jan 25, 2024 at 2:05 PM Nick Harper <[email protected]> wrote:

>
>
> On Thu, Jan 25, 2024 at 3:29 AM Ted Hardie <[email protected]> wrote:
>
>> On Thu, Jan 25, 2024 at 7:58 AM Mikkel Fahnøe Jørgensen <
>> [email protected]> wrote:
>>
>>> As to why TLS 1.3 specifically, I recall early on asking for schemes
>>> that were more IoT friendly.
>>>
>>>
>> You may recall that Google QUIC did not use TLS.  If memory serves me
>> correctly that was partly because Google wanted 0-RTT resumption and a few
>> other capabilities that TLS did not provide at the time.  When those
>> facilities were added to TLS in TLS 1.3, it made sense to re-use TLS rather
>> than maintain a separate crypto stack.
>>
>
> This is my recollection as well. The Google QUIC crypto handshake protocol
> was designed knowing that it would be replaced. The doc (
> https://docs.google.com/document/d/1g5nIXAIkN_Y-7XJW5K45IblHd_L2f5LTaDUDwvZ5L6g/edit?tab=t.0)
> calls out that the protocol was destined to die. It was created because
> there wasn't something that did 0-RTT at that point, but it would be a
> feature of TLS 1.3.
>
>>
>> I no longer work for Google, so I don't have access to my notes on this;
>> you may want to confirm with Ian Swett or one of the folks who was both
>> there at the time and is still a Googler.
>>
>> regards,
>>
>> Ted
>>
>>
>>
>>> The consensus at the time was that encryption is important for
>>> ossification prevention and that non-encryption was a deal breaker, as has
>>> been explained in this thread. However, nothing prevented encryption
>>> schemes other than TLS from being added separately in later QUIC versions;
>>> for QUIC 1.0, IoT would not be a priority, in part to get something out
>>> the door and finalize 1.0, and in part because TLS 1.3 allows for
>>> negotiation in case some encryption turns out to be weak.
>>>
>>> As to network transparency for troubleshooting, various tooling has been
>>> mentioned, but there are also header bits explicitly exposed so that the
>>> pacing and round trips of encrypted packets can be measured by modulating
>>> a signal an observer can see. That was deemed sufficient after some
>>> testing, although there was a push for more insight on the operator side
>>> of things.
>>>
>>> A QUIC load-balancing protocol was also discussed, partly in order to
>>> avoid early TLS termination. LBs require access to some confidential
>>> information in order to route packets correctly. I have not studied this
>>> closely, but I guess one could introduce a load balancer to gain more
>>> insight?
>>>
>>> Mikkel
>>>
>>>