On Tue, Jun 25, 2024, at 23:57, Paul Vixie wrote:
> On Jun 25, 2024 10:14, Lucas Pardue <[email protected]> wrote:
>
> <<Secure proxy discovery and configuration would be useful for a number of
> use cases beyond just QUIC. I totally support the idea of people working on
> that in the appropriate IETF venue.>>
>
> Because it would fly in the face of RFC 8890 and because connection
> annealing would be a protocol change to QUIC, there is no other possible
> venue than this working group. I do not think it is possible to achieve
> consensus on the goal set but the proof of that one way or another will be
> here. I'll help if there's hope.

Speaking only in my capacity as an individual.

That's merging two unrelated things that can easily be decomposed. Proxy discovery standards are lacking, and work to solve that problem can be done independently; it doesn't suit the QUIC WG. The other stuff is separate, and I don't have a position on it wearing any hats.

> But back to the topic at hand, the controversial activism included among
> the protocol's goals will limit QUIC deployment. CISO sensitivities being
> one example.

There are 101 ways to configure a network that disrupt traffic flows, some intentional, some not so intentional. For example, people who decide to run networks that can't support QUIC's minimum MTU requirement also affect its deployability. That sounds obvious; it is less so once tunnelling encapsulation is considered.
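To make that concrete, here's a trivial back-of-the-envelope sketch in Python. The overhead figures are rough illustrative numbers, not claims about any particular tunnel deployment:

# Rough illustration: how much UDP payload survives once tunnel
# encapsulation has been paid for? Overhead figures are approximate
# and illustrative, not authoritative for any given deployment.

IPV6_HEADER = 40
UDP_HEADER = 8
QUIC_MIN_UDP_PAYLOAD = 1200  # RFC 9000: paths must carry 1200-byte UDP payloads

scenarios = [
    # (path description, link MTU, approx. encapsulation overhead in bytes)
    ("1500 MTU, no tunnel",           1500, 0),
    ("1500 MTU, IPsec ESP tunnel",    1500, 73),
    ("1280 MTU (IPv6 minimum), bare", 1280, 0),
    ("1280 MTU, GRE over IPv6",       1280, 44),
]

for label, mtu, overhead in scenarios:
    room = mtu - overhead - IPV6_HEADER - UDP_HEADER
    verdict = "fine" if room >= QUIC_MIN_UDP_PAYLOAD else "QUIC Initials won't fit"
    print(f"{label:32s}: {room:4d} bytes of UDP payload -> {verdict}")

A single tunnel layer over a healthy 1500-byte link usually leaves plenty of headroom; it's tunnels stacked over smaller-MTU segments that quietly dip below QUIC's floor.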
Coming back to Paul's "limit": that word implies some statistical significance. Based on the real-world data presented here and elsewhere, I find it hard to believe that such networks are a major contributor to QUIC not being used.

This entire thread started from quite an odd premise. One reading of it is that all traffic should be expected to run over QUIC on all networks and, by extension, that any measurement showing less than ~100% usage is an indicator of a problem. I believe that to be a fallacy. The QUIC WG standardised a new transport protocol and application protocol that fit seamlessly into the mix of live Internet use cases and existing technologies. The diversity of the Internet means there are endless opportunities, constraints and rationales for selecting technologies at any given moment.

It is useful to look for blockers to technology choice, and to understand whether there is collective interest in trying to unblock them. This thread has explored some of that, so I thank Chris for starting it. However, IMHO any fixation on absolute percentage figures of aggregate traffic is a distraction.

Daniel Stenberg's blog is a good example. There are practical engineering challenges for that project related to library dependencies. Should those get resolved and HTTP/3 enabled by default, I would expect HTTP servers with curl-heavy workloads to start to see a shift in traffic share. However, there's a long tail of deployments. curl's 2024 annual survey asked, for the first time, about the version in use. Of the 1289 respondents, the median was version 8.5.0 (meaning half of respondents were within three versions of current). However, version 7.81 is almost 2.5 years old and still holds a 5-10% share. The survey is of course a self-selecting population, and those results might not be indicative of broader deployments. Perhaps a server operator might be so kind as to share the curl version distribution data they see.
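If anyone does want to share, here's a minimal sketch of the kind of tally I mean. It assumes the common "combined" access-log format, where the User-Agent is the final double-quoted field; the file name and regex are placeholders to adapt to your own layout:

import re
from collections import Counter

# Hypothetical sketch: tally curl versions seen in a web server access log.
# Assumes the "combined" log format, where the User-Agent is the final
# double-quoted field on each line; "access.log" is a placeholder path.
CURL_UA = re.compile(r'"curl/(\d+\.\d+)[^"]*"\s*$')

counts = Counter()
with open("access.log") as log:
    for line in log:
        match = CURL_UA.search(line)
        if match:
            counts[match.group(1)] += 1

total = sum(counts.values())
for version, n in counts.most_common():
    print(f"curl/{version}: {n} requests ({100 * n / total:.1f}%)")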
Drawing conclusions from top-level metrics alone risks getting them wrong. Stuff is complicated and multi-faceted, and there's no magic bullet here. I encourage folks to dig deeper in order to find specific challenges or opportunities that can be actioned through engineering, standardisation or advocacy.

Cheers
Lucas

> p vixie
>
> On Jun 25, 2024 10:14, Lucas Pardue <[email protected]> wrote:
>>
>> On Tue, Jun 25, 2024, at 17:39, Paul Vixie wrote:
>>> On Jun 25, 2024 09:00, Lucas Pardue <[email protected]> wrote:
>>>
>>> <<As others have noted, some folks are risk averse and are happy enough
>>> with the status quo. If there are performance wins to be had, they'll
>>> likely be wanting to see case studies before the CTO signs off on turning
>>> something on that might even have the slightest hint of being able to
>>> cause some regression in performance or stability.>>
>>>
>>> I know that the designated topic is performance, but please give a
>>> thought to the CISO's sign-off. In a secure private network, a protocol
>>> designed to prohibit monitoring is a nonstarter. We could get further
>>> faster with QUIC deployment to these networks if there was support for
>>> secure proxy discovery with connection annealing after the proxy had
>>> applied its policies to a flow.
>>
>> Secure proxy discovery and configuration would be useful for a number of
>> use cases beyond just QUIC. I totally support the idea of people working
>> on that in the appropriate IETF venue.
>>
>> Cheers
>> Lucas
>>
>>> p vixie
>>>
>>> On Jun 25, 2024 09:00, Lucas Pardue <[email protected]> wrote:
>>>>
>>>> On Tue, Jun 25, 2024, at 15:20, Chris Box wrote:
>>>>> On Tue, 25 Jun 2024 at 14:05, Lucas Pardue <[email protected]> wrote:
>>>>>>
>>>>>> It can be difficult to monitor and prove with actual data that
>>>>>> end-users see meaningful performance improvements due to leveraging
>>>>>> the features of QUIC.
>>>>>
>>>>> I'm surprised, as I'd expected the faster connection setup to be a win
>>>>> across all networks. But yes the specifics matter, and it's important
>>>>> to be able to spot areas where it's slower.
>>>>
>>>> That sort of depends also. A key question has to be: how much does a
>>>> use case see connection time dominating the performance of the user
>>>> journey?
>>>>
>>>> The web has several techniques to pull connection times out of the
>>>> critical path; things such as link preload or preconnect can ask a
>>>> browser to do that.
>>>>
>>>> TCP+TLS with regular session resumption or 0-RTT resumption is pretty
>>>> fast too.
>>>>
>>>> In an ideal scenario for fresh connections, you're looking at saving an
>>>> RTT with QUIC. For many reasons, short RTTs are desired for performance
>>>> (indeed, there's a whole CDN industry that provides this). If a
>>>> complete page load of a poorly designed page takes 20 seconds, shaving
>>>> off 20 milliseconds is peanuts. The bigger opportunities are in
>>>> improving the web page design itself, then leveraging other
>>>> technologies at the content and content-loading level that will make
>>>> more substantial improvements than a single RTT saving.
>>>>
>>>> For some use cases, absolutely, saving any concrete milliseconds is a
>>>> win. It's just that the story isn't cut and dried as soon as you use an
>>>> advanced user agent with a complex website. Where the RTT is larger,
>>>> shaving it off is a no-brainer. However, that might indicate you'll
>>>> have issues with BDP that can affect all transports equally. QUIC
>>>> doesn't change the laws of physics :-)
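(To put rough numbers on the arithmetic above: what one saved RTT is worth depends entirely on the ratio of RTT to total load time. The figures below are illustrative round numbers, not measurements.)

# Illustrative only: what fraction of a page load does one saved RTT buy?
scenarios = [
    # (description, RTT in ms, total page load time in ms)
    ("nearby CDN edge, heavy page", 20, 20_000),
    ("nearby CDN edge, light page", 20, 800),
    ("long-haul path, light page", 300, 1_500),
]

for label, rtt_ms, load_ms in scenarios:
    saving = rtt_ms / load_ms  # one RTT saved on connection setup
    print(f"{label:30s}: {rtt_ms} ms of {load_ms} ms ({saving:.1%})")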
>>>> My anecdata is that the majority of page-load-oriented metric reports
>>>> declare a connection duration of 0, meaning that the user is already
>>>> connected. Connection time optimisation therefore targets the subset of
>>>> journeys where a fresh connection is needed; for example, the user
>>>> discovers a shopping website via an Internet search engine and clicks
>>>> through. Capturing this is sometimes a different business metric. Not
>>>> everyone knows, or cares, to measure it.
>>>>
>>>> It shouldn't be so hard to weigh up all these factors. Ideally there'd
>>>> be some kind of common definition and a more objective way to consider
>>>> them, but to my knowledge that's a bit lacking.
>>>>
>>>>>> As a platform or service operator, it's likely that you'll have no
>>>>>> idea what the KPIs of the web applications that pass through you are.
>>>>>> Focusing on low-level observables, such as connection or request
>>>>>> times, doesn't give the full, actual picture. There is a disjoint
>>>>>> that can be hard to fill.
>>>>>
>>>>> My day job involves a lot of mobile network optimisation. We regularly
>>>>> conduct drive tests to see the effect of network changes on end-user
>>>>> performance. The services tested are necessarily limited. Seeing the
>>>>> results of changes immediately, and across the entire country, would
>>>>> be a big win. But perhaps not a topic for this working group.
>>>>
>>>> It's a fascinating topic! The capabilities for simulating this are
>>>> woefully limited. I think some others can speak to their experience of
>>>> how L2 technologies interact with their QUIC stacks. See Christian's
>>>> blog post on WiFi for example [1].
>>>>
>>>> Maybe getting better at testing this is a good item for the WG
>>>> discussion.
>>>>
>>>>>> There's only so many engineering hours to go around, and companies
>>>>>> make a value judgement about where to invest them for peak ROI.
>>>>>> If there are performance wins to be had, they'll likely be wanting to
>>>>>> see case studies before the CTO signs off on turning something on
>>>>>> that might even have the slightest hint of being able to cause some
>>>>>> regression in performance or stability.
>>>>>
>>>>> Great point. So in a ranking of potential web performance gains,
>>>>> HTTP/3 may not place high enough. And in other companies, there could
>>>>> simply be a shortage of data to understand where it should be ranked.
>>>>
>>>> That's a good way to phrase it. I'm keen to make this ranking easier to
>>>> do, or to eliminate the need for it altogether by just "doing the right
>>>> thing". It's hard when the web keeps evolving at its current pace :)
>>>>
>>>> Cheers
>>>> Lucas
>>>>
>>>> [1]
>>>> http://www.privateoctopus.com/2023/05/18/the-weird-case-of-wifi-latency-spikes.html
>>>>
>>>>>> It also behoves me to mention we have QUIC powering the web in
>>>>>> different ways than the OP question presents it.
>>>>>
>>>>> Oh, definitely. QUIC goes far beyond this use case, and MASQUE for
>>>>> example is an important building block for privacy partitioning. I'm
>>>>> all for further deployment of QUIC. It's the first truly *evolvable*
>>>>> internet-scale transport protocol.
>>>>>
>>>>> Chris
