Hi Abe,
The problem is that the AMS-IX data only covers the public fabric; the
peering connections between the big CDNs/clouds and the large ISPs
all happen on private dedicated circuits, as the traffic volume is so
large that it does not make sense to run it over a public IX fabric (in addition to
Nicolas Fevrier has a very detailed blog post on how Cisco handles the prefixes
on their Broadcom Jericho-based NCS 5500 gear:
https://xrdocs.io/cloud-scale-networking/tutorials/2017-08-03-understanding-ncs5500-resources-s01e02/
I'm pretty sure the principle is more or less the same for the
I would not recommend doing that.
If you really do this, please make sure that the owner of the supernet (in this
case the university) also provides transit for the subnet (which they should, as
they are supposed to accept and forward traffic for the whole aggregate that
they are announcing).
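The point about the aggregate can be made concrete with a quick check: any prefix covered by the announced supernet will be routed toward whoever announces it. A minimal sketch using Python's `ipaddress` module; the /16 and /24 here are made-up example prefixes, not the actual university allocation:

```python
import ipaddress

# Hypothetical prefixes for illustration only: the university announces
# the aggregate, a downstream uses a more-specific carved out of it.
aggregate = ipaddress.ip_network("10.20.0.0/16")
subnet = ipaddress.ip_network("10.20.5.0/24")

# Whoever announces the aggregate attracts traffic for every covered
# prefix, so they must be willing to forward it on to the subnet owner.
print(subnet.subnet_of(aggregate))  # True
```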
On 26/04/16 02:03, Tom Hill wrote:
On 19/04/16 14:46, Chris Welti wrote:
According to some slides from a Russian Cisco Connect event, the
upcoming small-size NCS 5501 and NCS 5502 will support 1M+ FIB and
50 ms per-port buffers. They seem to be killer boxes: 48x100GE in 2RU with
large FIB & buffers? Loving it already.
I wonder what prices will look like for those.
On 20/04/16 16:27, Leo Bicknell wrote:
90%+ of the stacks deployed will be too small. Modern Unix generally
has "autotuning" TCP stacks, but I don't think Windows or OS X have
those features yet (but I'd be very happy to be wrong on that point).
Regardless of satellite uplink/downlink speeds,
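The reason undersized stacks matter on satellite paths is the bandwidth-delay product: the sender needs at least bandwidth x RTT of data in flight to fill the pipe. A minimal back-of-the-envelope sketch; the link speed and geostationary RTT below are illustrative assumptions, not figures from the thread:

```python
# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
# Assumed example numbers: a 50 Mbit/s link over a geostationary
# satellite path with ~600 ms round-trip time.
bandwidth_bps = 50_000_000   # 50 Mbit/s
rtt_s = 0.6                  # ~600 ms RTT

bdp_bytes = int(bandwidth_bps / 8 * rtt_s)
print(f"Window needed to fill the pipe: {bdp_bytes} bytes")

# A fixed 64 KiB window (no window scaling, no autotuning) caps
# throughput at roughly window / RTT, far below link speed:
capped_bps = 64 * 1024 * 8 / rtt_s
print(f"64 KiB window caps throughput at ~{capped_bps / 1e6:.2f} Mbit/s")
```

On this assumed path the required window is a few megabytes, which is why a stack without autotuning (or without window scaling enabled) never gets near the link rate no matter how fast the satellite uplink/downlink is.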