Saku Ytti wrote:
I'm afraid you imply too much bufferbloat, only to cause
unnecessary and unpleasant delay.
With an M/M/1 queue at 99% load, 500 packets (750 kB at 1500 B MTU) of
buffer is enough to keep the packet drop probability below
1%. With 98% load, the probability is 0.0041%.
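The quoted figures are consistent with the stationary M/M/1 tail probability P(N >= K) = rho^K, the chance that K or more packets are already queued, taken as a proxy for overflowing a K-packet buffer. A minimal sketch (my own illustration, not code from the thread):

```python
def overflow_prob(rho: float, k: int) -> float:
    # P(N >= K) = rho**K for a stationary M/M/1 queue at utilisation rho,
    # used as a proxy for the chance a K-packet buffer overflows.
    return rho ** k

print(overflow_prob(0.99, 500))  # ~0.0066, i.e. < 1%
print(overflow_prob(0.98, 500))  # ~4.1e-05, i.e. ~0.0041%
```

The exponential decay in K is the whole argument: at 98% load, adding buffer beyond a few hundred packets buys almost nothing in drop probability while adding delay.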
I feel like I'll
If it's of any help... the bloat mailing list at lists.bufferbloat.net has
the largest concentration of
queue theorists and network operators and developers I know of. (Also, bloat
readers, this ongoing thread on nanog about 400Gbit is fascinating.)
There is 10+ years worth of debate in the archives:
Buffering is a near-religious topic across a large swath of the network
industry, but here are some opinions of mine:
A LOT of operators/providers need more buffering than you can realistically put
directly onto the ASIC die. Fast chips without external buffers measure
capacity in tens of
There are MANY real-world use cases which require high throughput at 64-byte
packet size. Denying those use cases because they don’t fit your world view is
short-sighted. The world of networking is not all I-Mix.
> On Aug 7, 2022, at 7:16 AM, Masataka Ohta
> wrote:
>
> Saku Ytti wrote:
>
On Sun, 7 Aug 2022 at 17:58, wrote:
> There are MANY real-world use cases which require high throughput at 64-byte
> packet size. Denying those use cases because they don’t fit your world view
> is short-sighted. The world of networking is not all I-Mix.
Yes, but it's not an addressable market.
Disclaimer: I often use the M/M/1 queuing assumption for much of my work to
keep the maths simple, and I believe I am reasonably aware of the contexts in
which it is a right or wrong application :). Also, I don't intend to
change the core topic of the thread, but since this has come up, I couldn't
Masataka Ohta wrote on 07/08/2022 12:16:
Ethernet switches with small buffers are enough for IXes
That would not be the experience of IXP operators.
Nick
You're getting to the core of the question (sorry, I could not resist...) --
and again the complexity is as much in the terminology as anything else.
In EZChip, at least as we used it on the ASR9k, the chip had a bunch of
processing cores, and each core performed some of the work on each
On Sun, 7 Aug 2022 at 14:16, Masataka Ohta
wrote:
> When many TCPs are running, bursts are averaged and the traffic
> is Poisson.
If you grow a window, and the sender sends the delta at 100G while the
receiver is 10G, eventually you'll hit that 10G port at 100G rate.
It's largely an edge problem, not a
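The arithmetic behind that speed mismatch is simple; a sketch with hypothetical numbers (mine, not from the thread): while a burst arrives at 100 Gb/s, a 10 Gb/s egress drains only a tenth of it, so the remaining nine tenths must sit in a buffer or be dropped.

```python
# Sketch (hypothetical numbers, not from the thread): how much of a burst
# must be buffered when it arrives faster than the egress line rate.
def queue_buildup_bytes(burst_bytes: float, in_gbps: float, out_gbps: float) -> float:
    burst_seconds = burst_bytes * 8 / (in_gbps * 1e9)     # time the burst occupies the ingress
    drained_bytes = (out_gbps * 1e9 / 8) * burst_seconds  # what the egress drains meanwhile
    return burst_bytes - drained_bytes

# A 1 MB window delta sent at 100G toward a 10G port leaves roughly
# 0.9 MB that the buffer must absorb.
print(queue_buildup_bytes(1_000_000, 100, 10))
```

This is why such bursts stress edge ports far more than well-multiplexed core links, where many flows average out.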
You are incredibly incorrect; in fact, the market for those devices is in the
billions of dollars. But you continue to pretend that it doesn’t exist.
Shane
> On Aug 7, 2022, at 11:57 AM, Saku Ytti wrote:
>
> On Sun, 7 Aug 2022 at 17:58, wrote:
>
>> There are MANY real world use cases which
On Sat, 6 Aug 2022 at 17:08, wrote:
> For a while, GSR and CRS type systems had linecards where each card had a
> bunch of chips that together built the forwarding pipeline. You had chips
> for the L1/L2 interfaces, chips for the packet lookups, chips for the
> QoS/queueing math, and chips
ljwob...@gmail.com wrote:
Buffer designs are *really* hard in modern high speed chips, and
there are always lots and lots of tradeoffs. The "ideal" answer is
an extremely large block of memory that ALL of the
forwarding/queueing elements have fair/equal access to... but this
physically looks
On Sun, 7 Aug 2022 at 12:16, Masataka Ohta
wrote:
> I'm afraid you imply too much bufferbloat, only to cause
> unnecessary and unpleasant delay.
>
> With an M/M/1 queue at 99% load, 500 packets (750 kB at 1500 B MTU) of
> buffer is enough to keep the packet drop probability below
> 1%. With 98% load, the
On Sat, 6 Aug 2022 at 23:08, Eric Kuhnke wrote:
> I have a morbid curiosity about what the CRM database looks like inside
> Cogent, for the stale/cold leads that get passed on to a new junior sales rep
> every six months.
>
> The number of people's names/email addresses/phone numbers in there