Of course, you could also put such info in an HTTP header :) [1]

[1] https://tools.ietf.org/html/draft-ohanlon-transport-info-header-01
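For illustration, such a header could expose recently observed path metrics on a response, roughly along the lines below. The field name and members shown here are only illustrative assumptions; the referenced draft defines the actual header syntax.

    Transport-Info: rtt=0.042; cwnd=120000; rate=2500000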
On Tue, 27 Oct 2020, 00:15 Matt Joras, <[email protected]> wrote:

> What Christian describes, a sort of "BDP Token", is something we are currently prototyping. Specifically, we are using a new frame type to proactively send information from server -> client containing metrics of interest. The first such metric is "goodput", as measured recently by the server, and we intend to use it as an input to the client-side ABR scheme for the video player.
>
> In general I thought this idea could be extended and standardized. It seems likely that different applications will have interest in the same set of metrics as measured by the peer (e.g., RTT and bandwidth).
>
> Matt Joras
>
> On Mon, Oct 26, 2020 at 4:43 PM Kazuho Oku <[email protected]> wrote:
>
>> 2020年10月27日(火) 8:24 Christian Huitema <[email protected]>:
>>
>>> On 10/26/2020 2:36 PM, Kazuho Oku wrote:
>>> > Hello,
>>> >
>>> > Thank you for the draft. I just had the opportunity to read it, and therefore am leaving comments.
>>> >
>>> > First of all, I think it is a good idea to write down how endpoints can reuse information from previous connections when reconnecting. Many people have been talking about the idea, it seems that there are pitfalls, and having a compilation of good practices could help.
>>> >
>>> > At the same time, I was puzzled by the following two aspects of the draft:
>>> >
>>> > 1. The draft requires that the characteristics of paths be communicated via session tickets. IIUC, QUIC resumes the properties of the *endpoint* using TLS session tickets, while the properties of a path (e.g., peer's IP address) are to be remembered by a NEW_TOKEN token. The draft's use of session tickets goes against that principle.
>>>
>>> I also think that forcing the information into the session ticket is a bad idea. As Kazuho says, the session ticket is used to resume sessions, not necessarily from the same network location at which the session ticket was acquired. Using a "token" is better from that point of view. It also has the advantage that tokens are fully managed within the QUIC layer, without any dependency on the TLS stack, which makes the implementation significantly simpler. However, there is still an issue because New Tokens are normally sent just once at the beginning of the connection, and are used to manage the "stateless retry" process. If the server sends several New Tokens, the client is expected to remember all of them, and use each of them only once.
>>>
>>> It might be simpler to create a new frame, very similar to the "New Token" frame, maybe calling it "BDP_TOKEN", and a "bdp_token" transport parameter. The frame carries a binary blob that encodes server-defined data. The server sends the client whatever blob it wants. It may send it several times, in which case the client only remembers the last one. The client puts the blob in the bdp_token TP, or, if no token is available, sends an empty blob to signal its support for the process. The server may reply with an empty token if it does support the process.
>>
>> I tend to agree with the high-order design, and it is my understanding that use of NEW_TOKEN tokens is fine for the purpose.
>>
>> The transport draft has a paragraph stating that a server might send new NEW_TOKEN tokens as the state of the connection changes, and that it makes sense for the client to use the most recently received NEW_TOKEN token.
>>
>> https://quicwg.org/base-drafts/draft-ietf-quic-transport.html#section-8.1.3-9
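A minimal sketch of the client-side bookkeeping described above, assuming the hypothetical "BDP_TOKEN" frame and "bdp_token" transport parameter names from Christian's proposal (or, equivalently, a NEW_TOKEN token used as Kazuho suggests). The class and method names are illustrative only.

    # Illustrative sketch only: "BDP_TOKEN" and "bdp_token" are hypothetical
    # names from the discussion above, not standardized QUIC elements.

    class BdpTokenCache:
        """Keeps the most recently received server blob, one per server."""

        def __init__(self):
            self._blobs = {}  # server name -> opaque bytes received from that server

        def on_token_from_server(self, server_name, blob):
            # The server may send the blob several times during a connection;
            # only the last one received is remembered.
            self._blobs[server_name] = blob

        def token_for_next_connection(self, server_name):
            # Returned value goes into the bdp_token transport parameter (or token
            # field) of the next connection; an empty blob signals support for the
            # mechanism without any cached data.
            return self._blobs.get(server_name, b"")

Whether the blob is carried in a NEW_TOKEN token or a dedicated frame, the client-side behavior is the same: keep only the most recent blob and return it on the next connection.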
>>> > 2. The draft suggests that the server's properties (e.g., server's BDP) be shared with the client. Is there any reason why they have to be shared? I tend to think that each endpoint unilaterally remembering what it prefers provides the most agility without the need to standardize something.
>>>
>>> I think that "the endpoint remembers" should be the starting point. For one thing, we should also take care of the scenario in which the client pushes data to the server, in which case the client needs to remember the BDP and RTT of the previous connection to the server. The server could do the same thing, just keep an LRU list of recent client connection addresses and the associated BDP and RTT.
>>>
>>> The whole business of sending blobs is just an optimization on top of the "endpoint remembers" strategy. We may be concerned that servers have to remember data for too many clients, that local server storage would not scale, or maybe that the same client hits a different server in the farm each time it reconnects. Hence the idea of storing these parameters in a blob, sending the blob to the client during the previous connection, and having the client provide the blob back during the new connection. Also, we seem concerned that the server does not trust the client, because otherwise the client could just add a couple of TPs such as "previous RTT" and "previous Bandwidth", based on what the client observed before. Managing blobs has some complexity, so the tradeoffs should be explored.
>>>
>>> --
>>> Christian Huitema
>>
>> --
>> Kazuho Oku
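A sketch of the "endpoint remembers" baseline that Christian takes as the starting point: the server (or, symmetrically, a client that pushes data) keeps a bounded LRU map from recent peer addresses to the BDP and RTT observed on the last connection. The class name, capacity, and units below are illustrative assumptions, not taken from the draft.

    from collections import OrderedDict

    class PathMemory:
        """Bounded LRU map: peer address -> (rtt_seconds, bdp_bytes) from the last connection."""

        def __init__(self, capacity=10000):
            self._entries = OrderedDict()
            self._capacity = capacity

        def remember(self, peer_addr, rtt_seconds, bdp_bytes):
            self._entries[peer_addr] = (rtt_seconds, bdp_bytes)
            self._entries.move_to_end(peer_addr)
            if len(self._entries) > self._capacity:
                self._entries.popitem(last=False)  # evict the least recently used peer

        def recall(self, peer_addr):
            entry = self._entries.get(peer_addr)
            if entry is not None:
                self._entries.move_to_end(peer_addr)
            return entry  # None if this peer has not connected recently

The blob mechanism discussed above is then just an optimization that moves this per-peer state off the server and into an opaque token held by the client.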
