To pick up on one point:

On Mon, 6 Nov 2023, 13:38 Gorry Fairhurst, <[email protected]> wrote:

> See below:
>
> On 06/11/2023 10:59, Kazuho Oku wrote:
>
>
>
> 2023年11月6日(月) 10:08 Gorry Fairhurst <[email protected]>:
>
>> On 05/11/2023 15:49, Kazuho Oku wrote:
>>
>>
>>
>> 2023年11月5日(日) 16:41 Marten Seemann <[email protected]>:
>>
>>> > On different CC's: the set of parameters exchanged are fairly generic,
>>> and I think it's very likely a client will use the same CC to talk to the
>>> same server when it next resumes a session, so I am unsure I share the
>>> concern about different CCs.
>>>
>>> Gorry, I might be misreading the draft, but in my understanding the
>>> BDP_FRAME frame is used by servers to offload state to the client, not the
>>> other way around, so your argument should be that the server will use the
>>> same CC on the original and the resumed connection. The client might also
>>> remember CC parameters, but they wouldn't be sent on the wire.
>>> My argument here is twofold: if this frame is supposed to be read and
>>> acted upon by the client, you now have to deal with the situation where
>>> client and server use different CCs, which will lead to problems. On the
>>> other hand, if the frame is not supposed to be acted upon by the client,
>>> there's no reason to use a frame in the first place, as servers can just
>>> unilaterally decide to put the information into the token.
>>>
>>
>> FWIW, in case of quicly I'm not sure we'd want to use CWND and current
>> RTT to calculate the jump CWND.
>>
>>
>> That is because quicly has a delivery rate estimate based on ACK clock
>> that gives us a better estimation of the available bandwidth; we'd prefer
>> using that and the min RTT.
>>
>> Sure. That makes this interesting, and I know that an ACK'ed-rate
>> estimate can have advantages.
>>
>>
>> As such, I'm not sure if having a standard definition of a BDP frame
>> would help server implementers.
>>
>> I don't agree (yet?). The CC params that are used are to characterise the
>> path, and they have two roles: to enable "reconnaissance" in CR and to
>> decide upon the unvalidated jump size. This needs at least a saved RTT
>> value and a rate/window.
>>
>> While I agree using rate is different to cwnd, for this specific purpose,
>> isn't it possible to simply use saved cwnd = rate * RTT? I think we might
>> be able to agree on parameters for this specific use.
>>
> The problem is that the kind of RTT that we want to choose can be
> different depending on whether we have an ACK clock or rely on CWND.
>
> When an ACK clock is available, it would make sense to use the idle RTT,
> because that resembles the state of the connection when we are not sending
> much data (i.e., at the beginning of the connection). I think we should be
> comparing the previous idle RTT against the new initial RTT in this case.
>
> Right, this seems fine. This is why we called it "capacity" in recent
> drafts; let's agree on a term for the next rev!
>
>
> But when relying on CWND, it would make more sense to send SRTT, as CWND
> changes depending on the size of the bottleneck buffer as well.
>
> Also, because CWND is an unconfirmed estimate of the path, it can be 2x too
> large (at the end of slow start), so the calculation for CR has to be more
> conservative than when jumping based on the ACK clock.
>
> Sure, so in the observation phase CR stores a "saved_capacity" (if we
> continue to call it that), which ought to be calculated in the correct
> way by the sender to measure the capacity. We have some text in the Careful
> Resume draft saying this ought not to include over-buffering or
> data-limited moments; if people think it useful, we can certainly add more
> text for scenarios using different CCs. (We haven't yet, because the CC
> saving the params is, in most cases, the CC using the params for CR.)
>
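The "saved cwnd = rate * RTT" conversion Gorry suggests above can be sketched as follows; the function name and units are illustrative, not from any draft:

```python
def saved_cwnd(delivery_rate_bps: float, min_rtt_s: float) -> int:
    """Derive a saved congestion window (in bytes) from an ACK-clock
    delivery-rate estimate (bits/s) and the minimum RTT (seconds),
    i.e. the cwnd = rate * RTT conversion discussed above."""
    return int(delivery_rate_bps / 8 * min_rtt_s)

# For example, an 80 Mbit/s delivery-rate estimate with a 50 ms min RTT
# yields a saved cwnd of 500,000 bytes.
print(saved_cwnd(80e6, 0.05))
```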

From a protocol deployability perspective, we should expect that larger
fleets of servers will want to run multivariate testing of CCs in a way
that doesn't require any coordination with the peer. For example, if I
want to experiment with something, I might deploy to canary machines or some
subset of machines, and clients have limited-to-no influence on how resumed
connections are traffic-steered.

A token-based approach has some attractions: I can encode private
information there to help me decide what to do with it later.
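As a hypothetical sketch of that idea (the field layout and names are mine, not from any draft), the server can pack its CC state into an opaque blob that only it needs to interpret on resumption; a real deployment would also authenticate and encrypt the token:

```python
import struct

def encode_cc_token(cc_id: int, saved_rtt_us: int, capacity_bytes: int) -> bytes:
    # Opaque to the client: one byte identifying the CC (or experiment arm)
    # plus two 64-bit fields, all in network byte order.
    return struct.pack("!BQQ", cc_id, saved_rtt_us, capacity_bytes)

def decode_cc_token(token: bytes) -> dict:
    cc_id, saved_rtt_us, capacity_bytes = struct.unpack("!BQQ", token)
    return {"cc_id": cc_id,
            "saved_rtt_us": saved_rtt_us,
            "capacity_bytes": capacity_bytes}

# Round trip: because the server alone defines the format, it can change it
# (e.g. during multivariate CC testing) without any peer coordination.
tok = encode_cc_token(cc_id=1, saved_rtt_us=50_000, capacity_bytes=500_000)
print(decode_cc_token(tok))
```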

Cheers
Lucas