The 6748 blades are going to be an issue with buffer overruns. Whether this 
will be a minor or major issue depends on the application that uses the 
multicast data.
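To put the buffer concern in rough numbers, here is a minimal back-of-the-envelope sketch. The figures are assumptions for illustration only: a per-port egress buffer on the order of 1.2 MB for a 6748 gig port (actual carving depends on the QoS configuration) and a microburst arriving from the fabric at 10 Gb/s toward a 1 Gb/s edge port:

```python
# Back-of-the-envelope: how long a microburst a 6748-class gig port can absorb.
# ASSUMPTIONS (illustrative, not vendor-confirmed): ~1.2 MB of egress buffer
# per port, burst arriving at 10 Gb/s, draining out a 1 Gb/s access port.

def time_to_overflow(buffer_bytes: float, in_bps: float, out_bps: float) -> float:
    """Seconds until the egress buffer fills when input exceeds output."""
    fill_rate_bps = in_bps - out_bps           # net rate at which the queue grows
    return (buffer_bytes * 8) / fill_rate_bps  # bits of headroom / growth rate

t = time_to_overflow(buffer_bytes=1.2e6, in_bps=10e9, out_bps=1e9)
print(f"buffer absorbs a burst of roughly {t * 1e3:.2f} ms")  # ~1.07 ms
```

Anything longer than about a millisecond of sustained burst at that ratio overflows the buffer and drops packets, which is why the impact depends so heavily on how tolerant the consuming application is.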

----
Matthew Huff             | 1 Manhattanville Rd
Director of Operations   | Purchase, NY 10577
OTA Management LLC       | Phone: 914-460-4039
aim: matthewbhuff        | Fax:   914-694-5669

From: james list [mailto:jameslis...@gmail.com]
Sent: Thursday, October 13, 2016 12:25 PM
To: Matthew Huff <mh...@ox.com>
Cc: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] huge amount of mcast traffic


Hi

I'm not able to find the multicast replication mode on the ASR.

On core routers:

C6807 has Supervisor Engine 2T 10GE and IOS 15.1(2)SY4
xxx>sh module
Mod Ports Card Type                              Model              Serial No.
 --- ----- -------------------------------------- ------------------ -----------
   1   20  DCEF2T 4 port 40GE / 16 port 10GE      WS-X6904-40G       xx
   2   20  DCEF2T 4 port 40GE / 16 port 10GE      WS-X6904-40G       xx
   3    5  Supervisor Engine 2T 10GE w/ CTS (Acti VS-SUP2T-10G       xx
   5   48  CEF720 48 port 10/100/1000mb Ethernet  WS-X6848-GE-TX     xx


Mod  Sub-Module                  Model              Serial       Hw     Status
---- --------------------------- ------------------ ----------- ------- -------
  1  Distributed Forwarding Card WS-F6K-DFC4-E      xxx          1.2    Ok
  2  Distributed Forwarding Card WS-F6K-DFC4-E      xxx          1.2    Ok
  3  Policy Feature Card 4       VS-F6K-PFC4        xxx          3.0    Ok
  3  CPU Daughterboard           VS-F6K-MSFC5       xxx          3.0    Ok
  5  Distributed Forwarding Card WS-F6K-DFC4-A      xxx          1.4    Ok

xxx#sh platform multicast routing replication

Current mode of replication is Egress
Configured mode of replication is Egress

Switch  Slot                Multicast replication capability
 1       1                      Egress
 1       2                      Egress
 1       3                      Egress
 1       5                      Egress
 2       1                      Egress
 2       2                      Egress
 2       3                      Egress
 2       5                      Egress
 4       1                      Ingress
 3       1                      Ingress
 5       1                      Ingress
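For context on why the ingress/egress distinction in the capability output matters, here is a rough, simplified comparison of fabric load under the two replication modes. This is a hedged model, not the exact 6500/6807 behavior (which also depends on fabric channels and DFC support): assume ingress mode pushes one copy across the fabric per egress module, while egress mode sends a single copy and lets each egress card replicate locally.

```python
# Rough comparison of fabric load for ingress vs egress multicast replication.
# ASSUMPTIONS (simplified, illustrative): ingress mode = one fabric copy per
# egress module; egress mode = one copy crosses the fabric and each egress
# card replicates locally to its own ports.

def fabric_load_gbps(stream_gbps: float, egress_modules: int, mode: str) -> float:
    if mode == "ingress":
        return stream_gbps * egress_modules  # one fabric copy per egress module
    if mode == "egress":
        return stream_gbps                   # single copy; replication at egress
    raise ValueError(mode)

for mode in ("ingress", "egress"):
    print(mode, fabric_load_gbps(6.0, egress_modules=4, mode=mode))
# ingress: 24.0 Gb/s of fabric load vs egress: 6.0 Gb/s for the same 6 Gb/s feed
```

Under this model, a box forced into ingress replication multiplies the 6 Gb/s feed across the fabric, which is why confirming the replication mode on every hop is worth the effort.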

C6500 has Supervisor Engine 720 10GE and IOS 12.2(33)SXI5

xxx>sh module
Mod Ports Card Type                              Model              Serial No.
--- ----- -------------------------------------- ------------------ -----------
  1   48  CEF720 48 port 10/100/1000mb Ethernet  WS-X6748-GE-TX     xxx
  2    8  CEF720 8 port 10GE with DFC            WS-X6708-10GE      xxx
  3   48  CEF720 48 port 10/100/1000mb Ethernet  WS-X6748-GE-TX     xxx
  4   48  CEF720 48 port 1000mb SFP              WS-X6748-SFP       xxx
  5    5  Supervisor Engine 720 10GE (Active)    VS-S720-10G        xxx

Mod  Sub-Module                  Model              Serial       Hw     Status
---- --------------------------- ------------------ ----------- ------- -------
  1  Distributed Forwarding Card WS-F6700-DFC3C     xxx          1.6    Ok
  2  Distributed Forwarding Card WS-F6700-DFC3C     xxx          1.8    Ok
  3  Distributed Forwarding Card WS-F6700-DFC3C     xxx          1.6    Ok
  4  Centralized Forwarding Card WS-F6700-CFC       xxx          4.2    Ok
  5  Policy Feature Card 3       VS-F6K-PFC3C       xxx          1.1    Ok
  5  MSFC3 Daughterboard         VS-F6K-MSFC3       xxx          1.0    Ok

xxx>show mls ip multicast capability

Current mode of replication is Egress
Configured replication mode is Auto

 Slot           Multicast replication capability
    1                        Egress
    2                        Egress
    3                        Egress
    4                        Egress
    5                        Egress

Cheers

2016-10-13 17:59 GMT+02:00 Matthew Huff <mh...@ox.com>:
Even with fabric-enabled blades in the C6500, you are going to get massive 
output-buffer overflows. Market data has very uneven traffic patterns, causing 
microburst effects. What sup engines/blades are in the boxes? What type of 
multicast replication is being used (ingress or egress)? QoS policies typically 
make matters worse. What type of interfaces are on the 6500?
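A quick way to see whether those output buffers are already overflowing is to watch the drop counters. The interface name below is a placeholder; the commands themselves are standard IOS on the SXI/SY trains:

```
show interface GigabitEthernet1/1 | include output drops
show queueing interface GigabitEthernet1/1
```

Rising output-drop counts during market opens or data bursts are the usual signature of microburst-driven buffer exhaustion.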


----
Matthew Huff             | 1 Manhattanville Rd
Director of Operations   | Purchase, NY 10577
OTA Management LLC       | Phone: 914-460-4039
aim: matthewbhuff        | Fax:   914-694-5669

> -----Original Message-----
> From: cisco-nsp [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of
> james list
> Sent: Thursday, October 13, 2016 10:45 AM
> To: cisco-nsp@puck.nether.net
> Subject: [c-nsp] huge amount of mcast traffic
>
> Dear experts,
>
> I have a multicast financial market feed connected to my infrastructure,
> and I've been informed that a new multicast data flow could reach up to
> 6 Gb/s, so a huge amount of traffic needs to be replicated.
>
> The market is connected to an ASR 1001, then to a C6807-XL, and customers
> are connected to C6500s.
>
> The ASR1001 is running 15.3(3)S1 and currently has a license for 2.5 Gb/s
> (to be upgraded).
>
> The C6807 has Supervisor Engine 2T 10GE and IOS 15.1(2)SY4.
>
> The C6500 has Supervisor Engine 720 10GE and IOS 12.2(33)SXI5.
>
> I'd like to understand, from your experience, whether this infrastructure
> could suffer in performance, throughput, or otherwise when replicating
> that amount of traffic.
>
> Thanks in advance for any feedback.
>
> Cheers
>
> James
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/

