Re: Lossy cogent p2p experiences?

2023-09-04 Thread Masataka Ohta

William Herrin wrote:


> Well it doesn't show up in long slow pipes because the low
> transmission speed spaces out the packets,


Wrong. That is a phenomenon seen with slow access links and a fast
backbone, which has nothing to do with this thread.

If the backbone is as slow as the access link, no such "spacing out"
is possible.


> and it doesn't show up in
> short fat pipes because there's not enough delay to cause the
> burstiness.


A short pipe means the burst arrives continuously, without
interruption.

> So I don't know how you figure it has nothing to do with
> long fat pipes,

That's your problem.

Masataka Ohta


Re: Lossy cogent p2p experiences?

2023-09-04 Thread William Herrin
On Mon, Sep 4, 2023 at 7:07 AM Masataka Ohta
 wrote:
> William Herrin wrote:
> > So, I've actually studied this in real-world conditions and TCP
> > behaves exactly as I described in my previous email for exactly the
> > reasons I explained.
>
> Yes of course, which is my point. Your problem is that your
> point of slow start has nothing to do with long fat pipe.

Well it doesn't show up in long slow pipes because the low
transmission speed spaces out the packets, and it doesn't show up in
short fat pipes because there's not enough delay to cause the
burstiness. So I don't know how you figure it has nothing to do with
long fat pipes, but you're plain wrong.
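The three cases can be made concrete with a bandwidth-delay product calculation (the rates and RTTs below are illustrative figures, not measurements from this thread):

```python
# Bandwidth-delay product (BDP): the amount of data that must be in
# flight to fill a path. A slow-start burst only "piles up" when the
# path holds many packets' worth of data at once.

def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bytes in flight needed to keep the pipe full."""
    return rate_bps * rtt_s / 8

# Long fat pipe: 10 Gb/s at 80 ms RTT -> ~100 MB in flight (bursty)
print(f"long fat : {bdp_bytes(10e9, 0.080) / 1e6:8.2f} MB")
# Long slow pipe: 10 Mb/s at 80 ms RTT -> the low rate spaces packets out
print(f"long slow: {bdp_bytes(10e6, 0.080) / 1e6:8.2f} MB")
# Short fat pipe: 10 Gb/s at 1 ms RTT -> too little delay to burst
print(f"short fat: {bdp_bytes(10e9, 0.001) / 1e6:8.2f} MB")
```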

Regards,
Bill Herrin


-- 
William Herrin
b...@herrin.us
https://bill.herrin.us/


Re: Lossy cogent p2p experiences?

2023-09-04 Thread Masataka Ohta

William Herrin wrote:


> > No, not at all. First, though you explain slow start,
> > it has nothing to do with long fat pipe. Long fat
> > pipe problem is addressed by window scaling (and SACK).


> So, I've actually studied this in real-world conditions and TCP
> behaves exactly as I described in my previous email for exactly the
> reasons I explained.


Yes of course, which is my point. Your problem is that your
point of slow start has nothing to do with long fat pipe.

> Window scaling and SACK make it possible for TCP to grow to consume
> the whole end-to-end pipe when the pipe is at least as large as the
> originating interface and -empty- of other traffic.

Totally wrong.

Unless the pipe is long and fat, even a plain TCP without window
scaling or SACK will grow to consume the whole end-to-end pipe when
the pipe is at least as large as the originating interface and
-empty- of other traffic.

> Those
> conditions are rarely found in the real world.

It is usual that TCP consumes all the available bandwidth.

Exceptions, not so rare in the real world, are plain TCPs over
long fat pipes.
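The classic reason plain TCP is the exception on long fat pipes is its unscaled 16-bit window field, which caps a connection at window/RTT; a quick sketch (RTT figures illustrative):

```python
# Without the window scale option (RFC 7323), TCP's 16-bit window field
# caps the advertised window at 65535 bytes, so a single connection
# cannot exceed window / RTT no matter how fat the link is.

MAX_UNSCALED_WINDOW = 65535  # bytes

def max_throughput_mbps(rtt_s: float) -> float:
    return MAX_UNSCALED_WINDOW * 8 / rtt_s / 1e6

print(f"{max_throughput_mbps(0.001):6.1f} Mb/s at  1 ms RTT (short pipe: fine)")
print(f"{max_throughput_mbps(0.080):6.1f} Mb/s at 80 ms RTT (long fat pipe: capped)")
```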

Masataka Ohta




Re: Lossy cogent p2p experiences?

2023-09-04 Thread William Herrin
On Mon, Sep 4, 2023 at 12:13 AM Masataka Ohta
 wrote:
> William Herrin wrote:
> > That sounds like normal TCP behavior over a long fat pipe.
>
> No, not at all. First, though you explain slow start,
> it has nothing to do with long fat pipe. Long fat
> pipe problem is addressed by window scaling (and SACK).

So, I've actually studied this in real-world conditions and TCP
behaves exactly as I described in my previous email for exactly the
reasons I explained. If you think it doesn't, you don't know what
you're talking about.

Window scaling and SACK make it possible for TCP to grow to consume
the whole end-to-end pipe when the pipe is at least as large as the
originating interface and -empty- of other traffic. Those conditions
are rarely found in the real world.

Regards,
Bill Herrin


-- 
William Herrin
b...@herrin.us
https://bill.herrin.us/


Re: Lossy cogent p2p experiences?

2023-09-04 Thread Nick Hilliard

Masataka Ohta wrote on 04/09/2023 12:04:

> Are you saying you thought a 100G Ethernet link actually consisting
> of 4 parallel 25G links, which is an example of "equal speed multi
> parallel point to point links", was relying on hashing?


this is an excellent example of what we're not talking about in this thread.

A 100G serdes is an unbuffered mechanism which includes a PLL, and this 
allows the style of clock/signal synchronisation required for the 
deserialised 4x25G lanes to be reserialised at the far end.  This is one 
of the mechanisms used for packet / cell / bit spray, and it works 
really well.
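A much-simplified sketch of the lane-striping idea (real 100GBASE-R distributes 66-bit blocks and uses alignment markers, which this toy version omits):

```python
# Toy model of lane striping: blocks are dealt round-robin onto N lanes
# and re-serialised at the far end. This only works because the lanes
# are clock-synchronised and skew-corrected, so block order is preserved.

def stripe(blocks, nlanes=4):
    lanes = [[] for _ in range(nlanes)]
    for i, block in enumerate(blocks):
        lanes[i % nlanes].append(block)
    return lanes

def reserialize(lanes):
    # Read one block from each lane in turn; valid because the lanes
    # stay in lockstep (no independent buffering or variable latency).
    return [block for group in zip(*lanes) for block in group]

blocks = list(range(12))
assert reserialize(stripe(blocks)) == blocks  # nothing reordered
```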


This thread is talking about buffered transmission links on routers / 
switches on systems which provide no clocking synchronisation and not 
even a guarantee that the bearer circuits have comparable latencies. 
ECMP / hash based load balancing is a crock, no doubt about it; it's 
just less crocked than other approaches where there are no guarantees 
about device and bearer circuit behaviour.
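For reference, hash-based load balancing in its simplest form (a toy sketch; real routers hash in hardware with vendor-specific fields and seeds):

```python
# Per-flow (hash-based) load balancing in miniature: hash the 5-tuple,
# take it modulo the number of member links. Every packet of a flow
# lands on the same link, so no reordering -- but flows do not spread
# evenly, and one elephant flow can saturate a single member link.
import zlib

def pick_link(src, dst, proto, sport, dport, nlinks):
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    return zlib.crc32(key) % nlinks

flow = ("192.0.2.1", "198.51.100.7", 6, 40000, 443)
link = pick_link(*flow, 4)
# The same flow always hashes to the same member link:
assert all(pick_link(*flow, 4) == link for _ in range(1000))
```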


Nick


Re: Lossy cogent p2p experiences?

2023-09-04 Thread Masataka Ohta

Mark Tinka wrote:


> > ECMP, surely, is a too abstract concept to properly manage/operate
> > simple situations with equal speed multi parallel point to point links.


> I must have been doing something wrong for the last 25 years.


Are you saying you thought a 100G Ethernet link actually consisting
of 4 parallel 25G links, which is an example of "equal speed multi
parallel point to point links", was relying on hashing?

Masataka Ohta



Re: Lossy cogent p2p experiences?

2023-09-04 Thread Masataka Ohta

William Herrin wrote:


> Hi David,
>
> That sounds like normal TCP behavior over a long fat pipe.


No, not at all. First, though you explain slow start,
it has nothing to do with long fat pipe. Long fat
pipe problem is addressed by window scaling (and SACK).

As David Hubbard wrote:

: I've got a non-rate-limited 10gig circuit

and

: The initial and recurring packet loss occurs on any flow of
: more than ~140 Mbit.

the problem is caused not by the wire-speed limit of a "fat" pipe
but by artificial policing at ~140 Mb/s.
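A token-bucket policer of the kind being hypothesised behaves roughly like this (the rate and burst values are illustrative, not the actual configuration of any device in this thread):

```python
# Token-bucket policer sketch: packets arriving when the bucket lacks
# credit are dropped, which is what a >140 Mbit flow would hit if an
# in-path device polices at that rate.

class Policer:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8   # credit refill, bytes per second
        self.burst = burst_bytes   # bucket depth
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, now: float, pkt_bytes: int) -> bool:
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False

# 1500-byte packets offered at 200 Mb/s against a 140 Mb/s policer:
p = Policer(rate_bps=140e6, burst_bytes=15000)
interval = 1500 * 8 / 200e6  # packet spacing at the offered 200 Mb/s
passed = sum(p.allow(i * interval, 1500) for i in range(10000))
print(f"{passed / 10000:.0%} of packets passed")  # roughly 140/200 = 70%
```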

Masataka Ohta



Re: Lossy cogent p2p experiences?

2023-09-04 Thread Masataka Ohta

Nick Hilliard wrote:


> > In this case, "Without buffer bloat" is an essential assumption.


> I can see how this conclusion could potentially be reached in
> specific styles of lab configs,


I'm not interested in how poorly you configure your
lab.


> but the real world is more complicated and


And, this thread was initiated because of unreasonable behavior
apparently caused by stupid attempts at automatic flow detection
followed by policing.

That is the real world.

Moreover, it has been well known, both in theory and in practice,
that a flow-driven architecture relying on automatic detection of
flows does not scale and is no good, though MPLS relies on that
broken flow-driven architecture.

> Generally in real world situations on the internet, packet reordering
> will happen if you use round robin, and this will impact performance
> for higher speed flows.

That is my point, already stated. You don't have to repeat it.
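The reordering mechanism being referred to is easy to demonstrate: with per-packet round robin over member links of unequal latency, the receiver sees packets out of sequence (latency and spacing values illustrative):

```python
# Why per-packet round robin reorders: two member links with different
# latency deliver alternating packets out of order at the receiver.

def round_robin_arrivals(n_pkts, latencies, spacing):
    arrivals = []
    for seq in range(n_pkts):
        link = seq % len(latencies)  # round-robin link choice
        arrivals.append((seq * spacing + latencies[link], seq))
    arrivals.sort()  # order seen by the receiver
    return [seq for _, seq in arrivals]

# Links with 1.0 ms vs 1.5 ms latency, packets sent every 0.2 ms:
order = round_robin_arrivals(8, [1.0e-3, 1.5e-3], 0.2e-3)
print(order)  # [0, 2, 1, 4, 3, 6, 5, 7] -- TCP sees persistent reordering
```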

> It's true that per-hash load
> balancing is a nuisance, but it works better in practice on larger
> heterogeneous networks than RR.

Here, you implicitly assume a large number of slower flows, against
your own statement about "higher speed flows".

Masataka Ohta