Buffer size has nothing to do with feature richness.
Assuming you are asking about DC: in a wide-radix, low-oversubscription
network, shallow buffers do just fine. Some applications (think map-reduce/ML
model training) have many-to-one traffic patterns and suffer incast as a
result; deep buffers help absorb those bursts.
On 4/9/21 00:19, Eric Kuhnke wrote:
As an anecdotal data point, the only effect this has had is teaching
random 14 year olds how to use ordinary consumer grade VPNs, which
work just fine.
One way or the other, you can't keep the kids from what they want :-).
Mark.
It will not be easy to get a straight answer; say more about your
environment and applications. If you consider the classical TCP algorithm
and ignore latency, the answer is large buffers, but what about microbursts?
LG
From: NANOG on behalf of William Herrin
On Fri, Apr 9, 2021 at 6:05 AM Mike Hammett wrote:
> What I've observed is that it's better to have a big buffer device
> when you're mixing port speeds. The more dramatic the port
> speed differences (and the more of them), the more buffer you need.
>
> If you have all the same port speed, small buffers are fine. If you have
> 100G and 1G ports, you'll need big buffers wherever the transition to the
> smaller port speed is located.
❦ 9 April 2021 17:20 +03, Saku Ytti:
> If we'd change the TCP sender to bandwidth estimation, and newly created
> window space were serialised at the estimated receiver rate, then we would
> need dramatically fewer buffers. However, this less aggressive TCP algorithm
> would be outcompeted by New Reno.
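The pacing idea in the quote can be sketched with toy numbers; the rates and window increment below are illustrative assumptions, not figures from the thread:

```python
# If new window space is serialised at the estimated receiver rate
# instead of the sender's line rate, the queue at the speed step-down
# shrinks to roughly the bandwidth-estimation error.

def queue_built_bytes(burst_bytes: int, send_gbps: float, drain_gbps: float) -> float:
    """Bytes queued at a step-down while one burst is transmitted."""
    if send_gbps <= drain_gbps:
        return 0.0                                   # port drains as fast as we send
    send_time = burst_bytes * 8 / (send_gbps * 1e9)  # seconds the burst occupies the fast link
    drained = drain_gbps * 1e9 / 8 * send_time       # bytes the slow port drains meanwhile
    return burst_bytes - drained

burst = 100 * 1460  # a 100-segment window increment, 1460-byte MSS
print(queue_built_bytes(burst, 100.0, 1.0))  # unpaced 100G burst into 1G: 144540.0 bytes
print(queue_built_bytes(burst, 1.0, 1.0))    # paced at the receiver's 1G rate: 0.0
```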
The reason why we need larger buffers for some applications is a TCP
implementation detail. When the TCP window grows in size (it grows
exponentially during slow start), the newly created window space is bursted
onto the wire at sender speed.
If the sender is significantly faster than the receiver, someone needs to
buffer that burst.
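A sketch of how that burst grows during slow start, using an assumed 100G sender feeding a 1G receiver (all numbers illustrative, not measurements from the thread):

```python
# Each RTT of slow start the congestion window doubles, and the window's
# worth of segments can leave a fast sender nearly back-to-back. At a
# 100G -> 1G step-down, ~99% of each burst has to be queued somewhere.

SENDER_GBPS = 100.0
RECEIVER_GBPS = 1.0
MSS = 1460  # bytes per segment

def burst_queue_bytes(cwnd_segments: int) -> float:
    """Bytes queued at the step-down while one cwnd-sized burst arrives."""
    burst = cwnd_segments * MSS
    send_time = burst * 8 / (SENDER_GBPS * 1e9)    # burst duration on the fast link
    drained = RECEIVER_GBPS * 1e9 / 8 * send_time  # what the 1G port drains meanwhile
    return burst - drained

cwnd = 10
for _ in range(6):  # six RTTs of slow start
    print(f"cwnd={cwnd:4d} segments -> queue ~{burst_queue_bytes(cwnd) / 1024:.1f} KiB")
    cwnd *= 2  # window doubles each RTT
```

After a handful of RTTs the per-burst queue already reaches hundreds of KiB per flow, which is exactly the burst a deep buffer is there to absorb.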
I have seen the opposite, where small buffers impacted throughput.
Then again, it was observation only, no research into why, other than
superficial.
-
Mike Hammett
Intelligent Computing Solutions
Midwest Internet Exchange
The Brothers WISP
- Original Message -
From: "T
>
> If you have all the same port speed, small buffers are fine. If you have
> 100G and 1G ports, you'll need big buffers wherever the transition to the
> smaller port speed is located.
With the larger buffer there, you are likely severely impacting
application throughput.
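The downside being described can be illustrated with the queueing delay a full buffer adds in front of a slow port; the buffer size and rate below are assumed for illustration:

```python
# Every byte sitting in the buffer ahead of your packet adds
# serialization delay at the egress rate; a big buffer that actually
# fills becomes a big standing latency (the bufferbloat effect).

def standing_delay_ms(buffer_bytes: int, port_gbps: float) -> float:
    """Delay added by a full buffer draining at the given port rate."""
    return buffer_bytes * 8 / (port_gbps * 1e9) * 1000

# a hypothetical 64 MiB buffer in front of a 1G port:
print(standing_delay_ms(64 * 2**20, 1.0))  # ~537 ms
```

Half a second of standing queue stalls interactive traffic and inflates the RTT that loss- and delay-based congestion control react to, so past some point more buffer just trades loss for latency.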
There is no easy, one-size-fits-all answer to this question. It's a complex
subject, and the answer will often be different depending on the
environment and traffic profile.
On Fri, Apr 9, 2021 at 8:58 AM Dmitry Sherman wrote:
> Once again, which is better: shared-buffer feature-rich or fat-buffer switches?
>
> As an anecdotal data point, the only effect this has had is teaching
> random 14 year olds how to use ordinary consumer grade VPNs, which work
> just fine.
>
Or, perhaps some kid watched that and said, "Oh, that's cool, I want to know
more about how that works!" and planted a seed for a future
What I've observed is that it's better to have a big buffer device when you're
mixing port speeds. The more dramatic the port speed differences (and the more
of them), the more buffer you need.
If you have all the same port speed, small buffers are fine. If you have 100G
and 1G ports, you'll need big buffers wherever the transition to the smaller
port speed is located.
Once again, which is better: shared-buffer feature-rich switches or fat-buffer
switches? When is it better to deploy a big-buffer switch? When is it better
to drop and retransmit instead of queueing?
Thanks.
Dmitry