I’ll cut some clutter:

>> What you’re doing is jumping ahead. I suggest doing this with research 
>> rather than an email discussion, but that’s what we’re now already into.
> 
>       [SM] Sure, except my day job
(snip)

That’s ok - I wasn’t trying to tell you off for discussing ideas! I was just 
trying to clarify:
1. we agree there’s a problem;
2. our paper made the point that research is needed;
3. now you’re discussing possible research ideas, which is the step that comes 
after.

… and trying to discuss 3. in depth easily gets hand-wavy unless one actually 
does the research first. So all I’m saying is that this discussion is a bit 
pointless (I’m trying to end it - please!  :-)  )


> Maybe a reason for me to respectfully bow out of this discussion as talk is 
> cheap and easy to come by even without me helping.

Well, let’s at least try to keep this short…


>> Okay, there is ONE thing that such a flow gets: the RTT. “Blind except for 
>> RTT measurements”, then.
> 
>       [SM] I guess your point is "the flow does not know the maximal 
> capacity it could have gotten"?

Yes.


>> Importantly, such a flow never learns how large its cwnd *could* have become 
>> without ever causing a problem. Perhaps 10 times more? 100 times?
> 
>       [SM] Sure. ATM the only way to learn a path's capacity is actually to 
> saturate the path*, but if a flow is done with its data transfer, having it 
> exchange dummy data just to probe capacity seems like a waste all around.

That’s an example of the details of one possible future mechanism. And yes, I 
wouldn’t advocate this particular one.


> I guess what I want to ask is: how would knowing how much untapped capacity 
> was available at one point help?
> 
> 
> *) stuff like deducing capacity from the packet pair interval at the receiver 
> (assuming packets sent back to back) is notoriously imprecise, so unless 
> "chirping" overcomes that imprecision without costing too many round trips' 
> worth of noise suppression, measuring capacity by causing congestion is the 
> only way. Not great.

We’re dealing with a world of heuristics here; nothing about Internet 
transfers in general is ever 100% known beforehand. So something like your 
aside would also never be a binary, 100%-bad case - how bad it is depends on 
many parameters, which are worth investigating per envisioned mechanism - and 
then we’re doing research. So “I imagine mechanism X, but this will never 
work” is exactly the kind of discussion that's a waste of time, IMO.
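
For concreteness, here is roughly the estimator your footnote refers to, and 
why a single sample is so noise-sensitive - a minimal sketch in Python, with 
made-up numbers (nothing here comes from a real measurement):

    # Packet-pair capacity estimate (illustrative sketch): two packets sent
    # back to back leave the bottleneck spaced by its per-packet
    # serialization time, so  capacity ~= packet_size / inter_arrival_gap.
    def packet_pair_estimate(packet_size_bytes, gap_seconds):
        return packet_size_bytes * 8 / gap_seconds  # bits per second

    # 1500-byte packets arriving 120 us apart -> ~100 Mbit/s
    print(packet_pair_estimate(1500, 120e-6) / 1e6)  # 100.0

    # Just 30 us of cross-traffic jitter on that gap -> ~80 Mbit/s: a 20%
    # error from one sample. Hence the need for many samples (or chirps)
    # plus filtering, i.e. the "noise suppression" mentioned above.
    print(packet_pair_estimate(1500, 150e-6) / 1e6)  # 80.0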


>> I don’t even think that this name has that kind of history. My point was 
>> that they’re called PEPs because they’re *meant* to improve performance;
> 
>       [SM] That is not really how our economic system works... products are 
> primarily intended to generate more revenue than cost; it helps if they offer 
> something to the customer, but that is really just a means to extract 
> revenue...
> 
> 
>> that’s what they’re designed for. You describe “a PEP that does not enhance 
>> performance”, which, to me, is like talking about a web server that doesn’t 
>> serve web pages. Sure, not all PEPs may always work well, but they should - 
>> that’s their raison d’être.
> 
>       [SM] That is a very optimistic view, I would love to be able to share.

This is not about optimism. If a device doesn’t improve performance, it 
shouldn’t be called a PEP. If someone calls it that nevertheless, okay, but 
then that’s not the device we’re discussing here.
I.e.: I say “but we’re discussing a web server, not a DNS server” and you say 
“this is optimistic”. That’s just weird.


>>>> There are plenty of useful things that they can do and yes, I personally 
>>>> think they’re the way of the future - but **not** in their current form, 
>>>> where they must “lie” to TCP, cause ossification,
>>> 
>>>     [SM] Here I happily agree: if we can get the negative side-effects 
>>> removed that would be great - however, is that actually feasible or just 
>>> desirable?
>>> 
>>>> etc. PEPs have never been considered as part of the congestion control 
>>>> design - when they came on the scene, in the IETF, they were despised for 
>>>> breaking the architecture, and then all the trouble with how they need to 
>>>> play tricks was discovered (spoofing IP addresses, making assumptions 
>>>> about header fields, and whatnot). That doesn’t mean that a very different 
>>>> kind of PEP - one which is authenticated and speaks an agreed-upon 
>>>> protocol - couldn’t be a good solution.
>>> 
>>>     [SM] Again, I agree it could in theory, especially if well-architected. 
>> 
>> That’s what I’m advocating.
> 
>       [SM] Well, can you give an example of an existing well-architected PEP 
> as proof of principle?

It doesn’t really exist yet - else I wouldn’t need to advocate it :-) - but, 
e.g., for QUIC, one could extend MASQUE proxies with performance-enhancing 
functions. I believe that colleagues from Ericsson are advocating that. For 
TCP, RFC 8803 sets a very good example, IMO.


>>>     [SM] This is no invention, but how capitalism works, sorry. The party 
>>> paying for the PEP decides on using it based on the advantages it offers 
>>> for them. E.g. a mobile carrier that (in the past) forcibly downgraded 
>>> the quality of streaming video over mobile links without giving the 
>>> paying end-user the option of either choppy high-resolution or smooth 
>>> low-resolution video. By the way, that does not make the operator evil; 
>>> it is just that the operator's and the paying customers' goals and 
>>> desires are not all that well aligned (e.g. the operator wants to maximize 
>>> revenue, the customer to minimize cost).
>> 
>> You claim that these goals and desires are not well aligned (and a PEP is 
>> then an instrument in this evil)
> 
>       [SM] Again, this is expected behavior in our economic system; I have 
> not and will not classify that as "evil", but I will also not start believing 
> that companies offer products just to get a warm and fuzzy feeling. It is 
> part of the principle of how a market economy works that the goals of the 
> entities involved are opposed to each other; that is how a market is 
> supposed to optimize resource allocation.
> 
>> - do you have any proof, or even anecdotes, to support that claim?
> 
>       [SM] The claim that sellers want the highest revenue/cost ratio while 
> buyers want the lowest cost/utility ratio hardly seems controversial or in 
> need of a citation.

Aha - no, now I understand you. But it *is* more subtle than that, because the 
market will also make unhappy customers switch to a different company, except 
when they have no choice (monopoly).


>> I would think that operators generally try to make their customers happy (or 
>> they would switch to different operators).  Yes there may be some 
>> misalignments in incentives, but I believe that these are more subtle 
>> points. E.g., who wants a choppy high resolution video? Do such users really 
>> exist?
> 
>       [SM] I might be able to buffer that choppy video well enough to allow 
> smooth playback at the desired higher resolution/quality (or I might be happy 
> with a few seconds to compare the quality of displays); given that I 
> essentially buy internet access from my mobile carrier, that carrier should 
> get out of my way. (However, if the carrier also offers "video optimization" 
> as an opt-in feature that end-users can toggle at will, that is a different 
> kettle of fish and something I would consider good service.) IIRC a German 
> carrier was simply forcing down the quality of all video streaming at all 
> times, mostly to minimize cost and bandwidth usage, which pretty much looks 
> like an exercise in minimizing operational cost rather than increasing 
> customer satisfaction. So yes, there are "misalignments in incentives" that 
> are inherent and structural to the way we organize our society. (I am sort 
> of okay living with that, but I will not sugar-coat it.)

You say “you” might be able to buffer - but was this about a specific 
application?
Anyway, let’s say a provider downgrades Netflix and instead tries to get 
people to opt in to its own, costly service. I suspect that this would make 
most customers want to switch their provider.
So, in this way, the market does enforce a certain alignment of interests 
nevertheless.


>> Now, if we just had an in-network device that could divide the path into a 
>> “core” segment where it’s safe to use a pretty large IW value, and a 
>> downstream segment where the IW value may need to be smaller, but where a 
>> certain workable range might be known to the device, because that device 
>> sits right at the edge…
> 
>       [SM] This seems to be problematic if end-to-end encryption is desired, 
> no? But essentially this also seems to be implemented already, except that we 
> call these things CDNs instead of proxies ;) (kidding!)

Potentially solvable with MASQUE proxies (see above).
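
To make the quoted idea a bit more concrete, here is a minimal sketch of how 
such an edge device might pick the downstream IW. All names and numbers are 
hypothetical - no such mechanism exists today, which is rather the point:

    # Hypothetical split-IW logic (illustrative only): a proxy at the edge
    # uses a large IW on the well-provisioned "core" segment, and derives a
    # downstream IW from what it knows about its own access links.
    CORE_IW_SEGMENTS = 100  # assumed safe on the core path

    def downstream_iw(access_capacity_bps, min_rtt_s, mss_bytes=1460):
        # Cap the downstream IW at half the access link's BDP, and never go
        # below the standard IW of 10 segments (RFC 6928).
        bdp_segments = access_capacity_bps * min_rtt_s / (8 * mss_bytes)
        return max(10, int(bdp_segments / 2))

    # 50 Mbit/s access link at 20 ms RTT: BDP ~ 86 segments -> IW of 42
    print(downstream_iw(50e6, 0.020))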


>>>     [SM] I understand, however I see clear reasons why L4S is detrimental 
>>> to your stated goals, as it will make getting more information from the 
>>> network less likely. I also tried to explain why I believe that to be a 
>>> theoretically viable way forward to improve slow-start dynamics. Maybe you 
>>> can show why my proposal is bunk while completely ignoring L4S? Or is that 
>>> the kind of "particular solution" you do not want to discuss at the current 
>>> stage?
>> 
>> I’d say the latter. We could spend weeks of time and tons of emails 
>> discussing explicit-feedback-based schemes… instead, if you think your idea 
>> is good, why not build it, test it, and evaluate its trade-offs?
> 
>       [SM] In all honesty, because my day-job is in a pretty different field 
> and I do not believe I can or even would want to perform publishable CS 
> research after hours (let alone find a journal accepting layman submissions 
> without any relevant affiliation).

That’s totally fine, but then it’s not worth the time to discuss the idea, 
frankly.


>> I don’t see L4S as being *detrimental* to our stated goals, BTW - but, as it 
>> stands, I see limitations in its usefulness because TCP Prague (AFAIK) only 
>> changes Congestion Avoidance, at least up to now. I’m getting the impression 
>> that Congestion Avoidance with a greedy sender is a rare animal.
> 
>       [SM] Erm, DASH-type traffic seems quite common, no? There the 
> individual segments transmitted can be large enough to reach (close to) 
> capacity?

Here I don’t have enough data (though there are some papers we could dig 
into)… but I suspect that the answer is no: they are probably not large enough 
to reach capacity (and “close to”: who knows?).
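
A quick back-of-the-envelope (with assumed, not measured, numbers - the real 
answer needs data) of how much a flow sends before slow start first reaches a 
path’s BDP:

    # Rough slow-start arithmetic (illustrative assumptions, not data):
    # bytes sent before cwnd first reaches the bandwidth-delay product.
    def bytes_until_bdp(capacity_bps, rtt_s, iw_segments=10, mss_bytes=1460):
        bdp_bytes = capacity_bps * rtt_s / 8
        cwnd, sent, rounds = iw_segments * mss_bytes, 0, 0
        while cwnd < bdp_bytes:  # cwnd doubles once per RTT in slow start
            sent += cwnd
            cwnd *= 2
            rounds += 1
        return sent, rounds

    print(bytes_until_bdp(50e6, 0.040))  # 50 Mbit/s, 40 ms: ~453 kB, 5 RTTs
    print(bytes_until_bdp(1e9, 0.020))   # 1 Gbit/s, 20 ms: ~3.7 MB, 8 RTTs

So at access-link speeds a multi-megabyte segment may well get past slow 
start, while at higher capacities a whole segment can fit inside the ramp-up - 
and if cwnd is reset after each idle period between segments, the flow repeats 
the ramp every time. Which regime dominates in practice is exactly the kind of 
thing to measure rather than argue about.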


Cheers,
Michael
