On Wed, Sep 30, 2020 at 3:17 PM Olivier Bonaventure <
[email protected]> wrote:

> Lucas,
>
> > It's this special sauce that concerns me. I don't know how I'd
> > objectively measure that the MP-QUIC design and all
> > the implementation effort would actually result in improvement. For
> > instance, more bandwidth via aggregation can still be misused; some
> > types of streams will be more latency sensitive than others. Putting the
> > decision making into the transport library could also be seen as a black
> > box.
>
> I would suggest that we start by considering MPQUIC as a black box, as we
> did with MPTCP. This worked well, and getting the multipath mechanisms
> right will be easy.
>
> > I also want to draw some parallels between uniflows and the HTTP/2
> > priority tree. The tree was a fully expressive model that allowed a
> > client to hint to the server about the relationship between streams. The
> > problem of actioning this signal was left up to the server. Deployment
> > experience reveals that getting this well-tuned, just for a single TCP
> > connection, is hard. Some implementers just never got around to fixing
> > some serious and easily detectable performance problems.
> >
> > Presenting a bunch of uniflows to a server and leaving it responsible for
> > using them properly seems to me quite a similar problem.
>
> That's a policy decision and policy can be very complex. In MPTCP, we
> have very limited support for policies:
> - clients use a path manager to decide when they create subflows
> - servers basically never create subflows (due to NATs and firewalls)
> - clients and servers can use the backup bit to indicate that a subflow
> should only be used if all the non-backup subflows failed or have problems
>
> We discussed multiple times how to exchange policies over MPTCP. It
> turned out that this was very difficult given the middleboxes. In the
> end, MPTCP does not support the exchange of policies and the current
> deployments embed the policies on the hosts that use MPTCP with a
> specific path manager. This works well enough.
>
> For MPQUIC, the situation could be different as there will be no direct
> interference from middleboxes that modify options. We could exchange
> information such as the priority of a flow, mapping streams to flows,
> rtt preference, capping flows, ... My fear is that if we open this up,
> the discussion could never stop and would result in something too
> complex to implement. I would suggest focusing initially on very simple
> policies that are implemented locally (i.e. the application that uses
> QUIC would control when and how to advertise addresses and when and how
> to create flows, QUIC implementations could include different packet
> schedulers) and discuss the possibility of exchanging this policy
> information for a revision of MPQUIC.
>
>
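To make the "very simple, locally implemented policies" suggestion above concrete, here is a minimal sketch (not from either email; names, the `Subflow` structure, and the RTT fields are illustrative assumptions) of the kind of packet scheduler a QUIC implementation could embed locally: prefer the usable non-backup subflow with the lowest smoothed RTT, and fall back to backup subflows only when all non-backup ones have failed, mirroring the MPTCP backup-bit semantics described earlier in the thread.

```python
from dataclasses import dataclass

@dataclass
class Subflow:
    name: str
    srtt_ms: float        # smoothed RTT estimate for this subflow
    backup: bool = False  # MPTCP-style backup bit
    usable: bool = True   # False once the subflow has failed

def pick_subflow(subflows):
    """Return the usable non-backup subflow with the lowest RTT;
    use backup subflows only if no non-backup subflow is usable."""
    primaries = [s for s in subflows if s.usable and not s.backup]
    backups = [s for s in subflows if s.usable and s.backup]
    candidates = primaries or backups
    if not candidates:
        return None
    return min(candidates, key=lambda s: s.srtt_ms)

# Hypothetical example: wifi wins while it is up; the backup cellular
# subflow is used only after both non-backup subflows fail.
paths = [
    Subflow("wifi", srtt_ms=12.0),
    Subflow("lte", srtt_ms=45.0),
    Subflow("cell-backup", srtt_ms=80.0, backup=True),
]
print(pick_subflow(paths).name)   # wifi
paths[0].usable = False
paths[1].usable = False
print(pick_subflow(paths).name)   # cell-backup
```

The point of the sketch is that such a policy needs no on-the-wire policy exchange at all: it is purely local state, which is exactly the first step the email proposes before any revision of MPQUIC considers exchanging policy information.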
Defining the capability, presenting little evidence to support its
applicability, and punting on the hard problems is exactly what got HTTP/2
priorities into the pickle they're in.

Personally, I think starting on a basis of ignoring QUIC transport's core
method of exchanging application data is a bad idea.

Cheers,
Lucas
