Even with fabric-enabled blades in the C6500, you are going to see massive 
output buffer overflows. Market data has very uneven traffic patterns that 
cause microburst effects: average utilization can look low while individual 
egress queues overflow in microseconds. What supervisor engines and line 
cards are in the boxes? What type of multicast replication is being used 
(ingress or egress)? QoS policies typically make matters worse. What type of 
interfaces are on the 6500? The show commands below should help answer the 
replication and drop questions.
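
These are the checks I'd run first on the Sup720 box. Just a sketch, 
assuming 12.2(33)SX-style syntax; the interface names are placeholders:

    ! Hardware replication mode and per-module replication capability
    show mls ip multicast capability

    ! Per-queue output drops on a customer-facing port -- microbursts
    ! show up here even when the 5-minute averages look low
    show queueing interface TenGigabitEthernet1/1
    show interface TenGigabitEthernet1/1 counters

    ! Switch fabric channel load on a fabric-enabled chassis
    show fabric utilization all

On a Sup720 you can force egress replication with "mls ip multicast 
replication-mode egress" in config mode, but verify that every line card 
in the chassis supports it before changing the mode.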
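
On the ASR1001 it is also worth confirming what the current throughput 
license actually permits before the new feed turns up. Again, only a 
sketch:

    ! Current licensed throughput level on the ASR1000 ESP
    show platform hardware throughput level

    ! Installed licenses (the 2.5G level should show up here)
    show license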


----
Matthew Huff             | 1 Manhattanville Rd
Director of Operations   | Purchase, NY 10577
OTA Management LLC       | Phone: 914-460-4039
aim: matthewbhuff        | Fax:   914-694-5669

> -----Original Message-----
> From: cisco-nsp [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of
> james list
> Sent: Thursday, October 13, 2016 10:45 AM
> To: cisco-nsp@puck.nether.net
> Subject: [c-nsp] huge amount of mcast traffic
> 
> Dear experts,
> 
> I have a multicast financial market feed connected to my
> infrastructure. I've been informed that a new multicast data flow
> could reach up to 6 Gb/s, so a huge amount of traffic needs to be
> replicated.
> 
> The market feed is connected to an ASR 1001, then to a C6807-XL, and
> customers are connected to C6500s.
> 
> The ASR1001 is running 15.3(3)S1 and currently has a license for
> 2.5 Gb/s (to be upgraded).
> 
> The C6807 has a Supervisor Engine 2T 10GE and runs IOS 15.1(2)SY4.
> 
> The C6500 has a Supervisor Engine 720 10GE and runs IOS 12.2(33)SXI5.
> 
> I'd like to understand, from your experience, whether this
> infrastructure could suffer performance, throughput, or other
> problems when replicating that amount of traffic.
> 
> Thanks in advance for any feedback
> 
> Cheers
> 
> James
_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
