Comments on the general idea:

Kazuho Oku:
2. The draft suggests that the server's properties (e.g., server's BDP) be 
shared with the client. Is there any reason why they have to be shared? I tend 
to think that each endpoint unilaterally remembering what they prefer provides 
the most agility without the need to standardize something.

Christian Huitema:
I think that "the endpoint remembers" should be the starting point. For one 
thing, we should also consider the scenario in which the client pushes data to 
the server, in which case the client needs to remember the BDP and RTT of the 
previous connection to the server. The server could do the same thing, just 
keep an LRU list of recent client connection addresses and the associated BDP 
and RTT.
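
A minimal sketch of this "endpoint remembers" strategy (in Python; the 
PathCache class and its field names are illustrative, not taken from the 
draft):

    from collections import OrderedDict

    class PathCache:
        """LRU table of path characteristics remembered by one endpoint."""

        def __init__(self, capacity=1024):
            self.capacity = capacity
            self.entries = OrderedDict()  # peer address -> (bdp_bytes, rtt_ms)

        def remember(self, peer_addr, bdp_bytes, rtt_ms):
            # Insert or refresh an entry; evict the least recently used one
            # when the table is full.
            self.entries.pop(peer_addr, None)
            self.entries[peer_addr] = (bdp_bytes, rtt_ms)
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)

        def recall(self, peer_addr):
            # Return (bdp_bytes, rtt_ms) from a previous connection, or None.
            value = self.entries.pop(peer_addr, None)
            if value is not None:
                self.entries[peer_addr] = value  # mark as recently used
            return value

A client pushing data to the server would keep the same kind of table, keyed 
by server name or address.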

[NK] While each client and server can have their own dedicated solution for 
storing path parameters, having a standard way of exchanging this information 
helps improve the ramp-up. If one endpoint sends such BDP information, it 
should be readable by the other endpoint, which may or may not take it into 
account in its algorithms.

Christian Huitema:
Reading the draft again, I am concerned with statements such as "for 0-RTT data 
to be sent, the server must remember [various congestion control parameters]". 
In QUIC, servers do not send 0-RTT data, only the clients do. Servers MAY start 
sending data in response to a 0-RTT packet sent by the client, but at that 
point the server has received the new values of the client's transport 
parameters, so it can just use the correct values, no remembering needed. Is 
the current text a typo, writing "server" where "client" was meant?

[NK] Indeed, the server side of 0-RTT is missing; our proposal helps tune the 
server's ramp-up immediately after it receives the ClientHello of a 0-RTT 
reconnection.

Christian Huitema:
There is also some ambiguity in the draft regarding the status of the "bdp 
parameters". Why do we have identifiers associated with these parameters? Is 
the intent to send these as unencrypted transport parameters?

[NK] The BDP_parameters are sent after the handshake, so they are encrypted.
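
For concreteness, a sketch of how such parameters could be carried in an 
extension frame sent in 1-RTT packets, and hence encrypted like any other 
frame; the frame type and the field set (saved BDP, saved RTT) are 
illustrative, not the draft's wire format:

    def encode_varint(value):
        # QUIC variable-length integer encoding (RFC 9000, Section 16).
        if value < 0x40:
            return value.to_bytes(1, "big")
        if value < 0x4000:
            return (value | 0x4000).to_bytes(2, "big")
        if value < 0x40000000:
            return (value | 0x80000000).to_bytes(4, "big")
        return (value | 0xC000000000000000).to_bytes(8, "big")

    def encode_bdp_frame(frame_type, saved_bdp_bytes, saved_rtt_us):
        # Illustrative layout: an extension frame type followed by the
        # values the sender wants to share with its peer.
        return (encode_varint(frame_type)
                + encode_varint(saved_bdp_bytes)
                + encode_varint(saved_rtt_us))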

Christian Huitema:
That's certainly doable, although I do not fully understand the "max packet 
number" parameter. That seems redundant with the "initial_max_data" and 
"initial_max_stream_data" parameters.
Of course, if these are clear text parameters, we have to be concerned with 
issues of privacy and security. Can the value of the parameters be used to 
track clients? Can the client send creative values and affect server behavior?

[NK] initial_max_data is the maximum amount of data that can be sent on the 
connection; the client may ask for a smaller value with recon_max_pkt_number. 
What do you think?
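
Illustratively, a server reusing a remembered value could clamp it against 
both limits so that the client-advertised bounds always win; seed_cwnd is a 
hypothetical helper, and recon_max_pkt_number is treated as a byte limit here, 
as in the reply above:

    def seed_cwnd(default_cwnd, remembered_bdp, initial_max_data,
                  recon_max_pkt_number=None):
        # Never seed the congestion window above what the remembered path
        # supported or what the client's advertised limits allow, and never
        # go below the standard initial window.
        limit = initial_max_data
        if recon_max_pkt_number is not None:
            limit = min(limit, recon_max_pkt_number)
        return max(default_cwnd, min(remembered_bdp, limit))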

Christian Huitema:
The whole business of sending blobs is just an optimization on top of the 
"endpoint remembers" strategy. We may be concerned that servers have to 
remember data for too many clients, that local server storage would not scale, 
or maybe that the same client hits a different server in the farm each time it 
reconnects. Hence the idea of storing these parameters in a blob, sending the 
blob to the client during the previous connection, and having the client 
provide the blob back during the new connection. Also, we seem concerned that 
the server does not trust the client, because otherwise the client could just 
add a couple of transport parameters such as "previous RTT" and "previous 
bandwidth", based on what the client observed before. Managing blobs has some 
complexity, so the trade-offs should be explored.

[NK] Indeed – we agree that these trade-offs should be discussed.
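
A minimal sketch of that blob mechanism, assuming the server seals the 
parameters with an AEAD key that only it holds; the field set and the 
address/freshness checks are illustrative, not taken from the draft:

    import json, os, time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    KEY = AESGCM.generate_key(bit_length=128)  # held only by the server

    def seal_blob(client_ip, rtt_ms, bdp_bytes):
        # Server side: seal the remembered path parameters into an opaque,
        # integrity-protected blob the client can neither read nor modify.
        aead = AESGCM(KEY)
        nonce = os.urandom(12)
        plaintext = json.dumps({
            "ip": client_ip, "rtt_ms": rtt_ms,
            "bdp": bdp_bytes, "ts": int(time.time()),
        }).encode()
        return nonce + aead.encrypt(nonce, plaintext, b"bdp-blob")

    def open_blob(blob, client_ip, max_age=24 * 3600):
        # Server side, on reconnection: reject blobs that fail authentication,
        # were issued for another address, or are too old.
        aead = AESGCM(KEY)
        try:
            data = json.loads(aead.decrypt(blob[:12], blob[12:], b"bdp-blob"))
        except Exception:
            return None
        if data["ip"] != client_ip or time.time() - data["ts"] > max_age:
            return None
        return data["rtt_ms"], data["bdp"]

The client treats the blob as opaque and simply echoes it back; a server that 
cannot authenticate the blob falls back to a normal slow start.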


Nicolas and Emile

From: Ian Swett <[email protected]>
Sent: Tuesday, 27 October 2020 14:39
To: Kazuho Oku <[email protected]>
Cc: Christian Huitema <[email protected]>; Kuhn Nicolas 
<[email protected]>; IETF QUIC WG <[email protected]>; Matt Joras 
<[email protected]>; Lucas Pardue <[email protected]>
Subject: Re: New Version Notification for draft-kuhn-quic-0rtt-bdp-07.txt

I agree with most of the points made by Kazuho and Christian.

In particular, I think relying on information the client repeats back to the 
server which is not encrypted by the server is potentially quite dangerous.  
Even with that, there's real potential for abuse, particularly when combined 
with 0-RTT, so if we're going to document something, I believe we need a lot of 
MUST NOTs to prevent that abuse.  Using the same token across many connections 
seems particularly concerning.

On Mon, Oct 26, 2020 at 10:30 PM Kazuho Oku <[email protected]> wrote:


On Tue, Oct 27, 2020 at 11:14 Christian Huitema <[email protected]> wrote:


On 10/26/2020 6:20 PM, Lucas Pardue wrote:

On Tue, 27 Oct 2020, 01:12 Matt Joras <[email protected]> wrote:
Indeed, but as much as I love HTTP it's not the only protocol we have on top of 
QUIC. A consistency argument can also be made for having a connection-level 
metric tied to a connection-level semantic (i.e. a QUIC frame) rather than the 
transactional-level semantic (HTTP header).

I agree! A QUIC-level frame could be the most appropriate thing in this case. I 
think this will be an interesting space for innovation. And let's not forget 
all the other datagram-oriented protocols that have preceded - so perhaps 
there will be some re-innovation.



I am glad that we agree. Now, we have an issue regarding security. Nicolas Kuhn 
and his coauthors have pursued a design in which the server sends an encrypted 
blob to the client, and then the client echoes it in the new connection. This 
is largely based on concerns about potential attacks. Suppose the client said 
"I received at 1 Gbps last time", when in fact it can only absorb 10 Mbps. 
Some bad 
stuff might well be happening along the way. But then, Matt is looking at 
passing statistics from server to client, so the client can debug issues, 
display statistics in the app, and potentially also reuse the statistics to 
inform the server that the last connection had an RTT of 600 ms and a data 
rate of 
100 Mbps, and maybe we should shortcut initial congestion control in order to 
gain lots of time. How do we reconcile these multiple goals?

I think that the answer is simple.

Loss recovery and CC states are properties maintained by an endpoint. The peer 
should not be given the right to change them (e.g., CWND, RTT). Servers might 
offload the state to the client, but that state has to be encrypted and 
integrity-protected. NEW_TOKEN tokens cover this purpose.

OTOH, servers can expose any information to the client regarding the quality of 
the connection. It is totally reasonable to give the client the bandwidth 
information that the server retains, so that the client can choose a good 
bandwidth. There are no security concerns in providing such information, 
whether the medium is a QUIC frame or an HTTP header. And that's because it 
does not affect loss recovery or CC.


-- Christian Huitema


--
Kazuho Oku
