I don't think this feature really hurts TCP. TCP is robust to that in any
case, even if the average RTT and the RTT standard deviation increase.

And I agree that what matters more is the performance of sparse flows,
which is not affected by this feature.

There is one little thing that might appear negligible, but from my point
of view it is not: giving transport end-points an incentive to behave in
the right way. For instance, a transport end-point that paces its traffic
should be considered better behaved than one that sends in bursts, and it
should be rewarded for that.

Flow isolation creates an incentive to pace transmissions and thus to put
less queueing into the network. This feature reduces that incentive. I am
not saying it eliminates the incentive, because there is still flow
isolation, but it makes it less effective: if you send fewer bursts, you no
longer get lower latency in return.
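
To make concrete what I mean by pacing versus bursting, here is a minimal
sketch (the rate, packet size and send() hook are hypothetical, just to
illustrate the two behaviours a fair queue can reward or not):

    import time

    RATE_BPS = 10_000_000            # assumed bottleneck rate, 10 Mbit/s
    PKT_BYTES = 1514                 # full-size Ethernet frame

    def paced_send(packets, send):
        # Space packets out at the bottleneck rate, so the queue in the
        # network stays near-empty and intra-flow delay stays low.
        gap = PKT_BYTES * 8 / RATE_BPS   # ~1.2 ms per packet at 10 Mbit/s
        for p in packets:
            send(p)
            time.sleep(gap)

    def bursty_send(packets, send):
        # Dump the whole window back-to-back; every packet after the
        # first sits in the bottleneck queue behind its siblings.
        for p in packets:
            send(p)

With flow isolation the paced sender sees only its own (short) queue and is
rewarded with lower latency; this feature shrinks that reward.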

When I say transport end-point I am not thinking only of TCP, but also of
QUIC and all the other possible TCPs; as we all know, TCP is really a
family of protocols.

But I understand Jonathan's point.

Luca


On Thu, Apr 19, 2018 at 12:33 PM, Toke Høiland-Jørgensen <t...@toke.dk>
wrote:

> Jonathan Morton <chromati...@gmail.com> writes:
>
> >>>> your solution significantly hurts performance in the common case
> >>>
> >>> I'm sorry - did someone actually describe such a case?  I must have
> >>> missed it.
> >>
> >> I started this whole thread by pointing out that this behaviour results
> >> in the delay of the TCP flows scaling with the number of active flows;
> >> and that for 32 active flows (on a 10Mbps link), this results in the
> >> latency being three times higher than for FQ-CoDel on the same link.
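
[A rough back-of-envelope of where that scaling comes from, assuming a
DRR-style fair queue with one full-size packet queued per bulk flow; the
numbers are illustrative only:]

    RATE_BPS = 10_000_000                     # 10 Mbit/s bottleneck
    PKT_BYTES = 1514                          # full-size Ethernet frame
    serialize = PKT_BYTES * 8 / RATE_BPS      # ~1.2 ms on the wire per packet

    for n_flows in (1, 8, 32):
        # A packet at the head of one flow's queue waits for roughly one
        # packet from every other active bulk flow before it is served.
        print(n_flows, "flows -> ~%.0f ms per round" % (n_flows * serialize * 1e3))
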
> >
> > Okay, so intra-flow latency is impaired for bulk flows sharing a
> > relatively low-bandwidth link. That's a metric which few people even
> > know how to measure for bulk flows, though it is of course important
> > for sparse flows. I was hoping you had a common use-case where
> > *sparse* flow latency was impacted, in which case we could actually
> > discuss it properly.
> >
> > But *inter-flow* latency is not impaired, is it? Nor intra-sparse-flow
> > latency? Nor packet loss, which people often do measure (or at least
> > talk about measuring) - quite the opposite? Nor goodput, which people
> > *definitely* measure and notice, and is influenced more strongly by
> > packet loss when in ingress mode?
>
> As I said, I'll run more tests and post more data once I have time.
>
> > The measurement you took had a baseline latency in the region of 60ms.
>
> The baseline link latency is 50 ms, which is sorta what you'd expect
> from a median non-CDN'ed internet connection.
>
> > That's high enough for a couple of packets per flow to be in flight
> > independently of the bottleneck queue.
>
> Yes. As is the case for most flows going over the public internet...
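
[For reference, a quick per-flow bandwidth-delay check with the same assumed
numbers (10 Mbit/s shared by 32 bulk flows, 50 ms base RTT):]

    RATE_BPS = 10_000_000
    N_FLOWS = 32
    BASE_RTT = 0.050                          # seconds
    PKT_BYTES = 1514

    per_flow_bdp = RATE_BPS / N_FLOWS * BASE_RTT / 8     # bytes in flight per flow
    print(per_flow_bdp / PKT_BYTES)           # ~1.3 full-size packets per flow

i.e. roughly one to two packets per flow can indeed sit "in the air" rather
than in the bottleneck queue.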
>
> > I would take this argument more seriously if a use-case that mattered
> > was identified.
>
> Use cases where intra-flow latency matters, off the top of my head:
>
> - Real-time video with congestion response
> - Multiple connections multiplexed over a single flow (HTTP/2 or
>   QUIC-style)
> - Anything that behaves more sanely than TCP at really low bandwidths.
>
> But yeah, you're right, no one uses any of those... /s
>
> > So far, I can't even see a coherent argument for making this tweak
> > optional (which is of course possible), let alone removing it
> > entirely; we only have a single synthetic benchmark which shows one
> > obscure metric move in the "wrong" direction, versus a real use-case
> > identified by an actual user in which this configuration genuinely
> > helps.
>
> And I've been trying to explain why you are the one optimising for
> pathological cases at the expense of the common case.
>
> But I don't think we are going to agree based on a theoretical
> discussion. So let's just leave this and I'll return with some data once
> I've had a chance to run some actual tests of the different use cases.
>
> -Toke
>
