WS-X6148A-GE-TX performance question

2009-09-10 Thread Scott Spencer
Are the X6148A cards a dedicated 1 Gb/s uplink for each port (onto the shared 32 Gb/s bus), or does each port share its 1 Gb/s uplink with 7 other ports, making it effectively just 125 Mb/s per port if all ports are used at full/even capacity? I can't really find anything much on

Re: WS-X6148A-GE-TX performance question

2009-09-10 Thread Bill Blackford
There was a good thread on cisco-nsp regarding this exact subject recently. My recollection is that both the X6148 and the X6148A have just six 1 Gb/s ASICs, so the oversubscription rate is 8:1. The biggest difference between these LCs is that the X6148A supports large MTU whereas the X6148 does not. -b
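The arithmetic behind that 8:1 figure can be sketched as follows. This is a back-of-envelope check using the numbers from the thread (48 ports, six ASICs, 1 Gb/s of uplink per ASIC), not figures taken from a Cisco datasheet:

```python
# Oversubscription sketch for the WS-X6148A-GE-TX, using the
# thread's assumed figures: 48 ports served by 6 ASICs, each
# ASIC with a 1 Gb/s uplink toward the shared bus.

PORTS = 48
ASICS = 6
ASIC_UPLINK_MBPS = 1000  # 1 Gb/s per ASIC (per the thread)

ports_per_asic = PORTS // ASICS                # 8 ports share one ASIC
oversubscription = ports_per_asic             # hence 8:1
worst_case_per_port_mbps = ASIC_UPLINK_MBPS / ports_per_asic

print(ports_per_asic, oversubscription, worst_case_per_port_mbps)
# 8 ports per ASIC, 8:1 oversubscription, 125.0 Mb/s per port
# when all 8 ports on an ASIC push traffic at the same time
```

This matches the 125 Mb/s worst case raised in the original question: the bottleneck is the per-ASIC uplink, reached only when all eight ports behind one ASIC are simultaneously saturated.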

Re: WS-X6148A-GE-TX performance question

2009-09-10 Thread Tim Lampman
http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SXF/native/release/notes/OL_4164.html#wp2563293 Scott Spencer wrote: Are the X6148A cards dedicated 1 Gb/s uplink for each port (shared 32 Gb/s bus, as long as each port is its own 1 Gb/s still to the 32 Gb/s bus and not

RE: WS-X6148A-GE-TX performance question

2009-09-10 Thread Crooks, Sam

RE: WS-X6148A-GE-TX performance question

2009-09-10 Thread Holmes,David A

Re: WS-X6148A-GE-TX performance question

2009-09-10 Thread Nick Hilliard
On 10/09/2009 22:17, Scott Spencer wrote: I can't really find anything much on the X6148A internal architecture online, but it would seem that each port gets its own 1 Gb/s link to the card/backplane, and that the bottleneck then is the 32 Gb/s backplane (which is fine, as long as it's not 1 Gb/s per
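It is worth comparing the two candidate bottlenecks the thread raises: the per-ASIC 1 Gb/s uplink versus the shared 32 Gb/s bus. A rough sketch, again using the thread's assumed figures rather than datasheet numbers (and ignoring that the 32 Gb/s bus is shared with every other classic module in the chassis, so the bus figure here is only an upper bound):

```python
# Which limit binds first on a fully loaded card: the per-ASIC
# uplink or the shared 32 Gb/s bus? Figures are assumptions from
# the thread, not from a Cisco datasheet.

PORTS = 48
BUS_MBPS = 32 * 1000                 # shared 32 Gb/s bus
ASIC_LIMIT_MBPS = 1000 / 8           # 8 ports per 1 Gb/s ASIC -> 125 Mb/s
BUS_LIMIT_MBPS = BUS_MBPS / PORTS    # ~666.7 Mb/s, even if this one
                                     # card had the whole bus to itself

binding_mbps = min(ASIC_LIMIT_MBPS, BUS_LIMIT_MBPS)
print(ASIC_LIMIT_MBPS, round(BUS_LIMIT_MBPS, 1), binding_mbps)
# 125.0 vs ~666.7 -> the ASIC uplink, not the bus, caps each port first
```

So under these assumptions the per-ASIC uplink is the binding constraint on the card itself; the shared bus only becomes the dominant limit when aggregate traffic from all modules on the bus approaches 32 Gb/s.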