Re: [c-nsp] DWDM optics on 6500s

2009-10-20 Thread Church, Charles
Thanks.  I assume that even though the 6509-V-E is available, until the 80gig 
line cards and Sup are available, you'd be stuck at 40gig/slot?

Chuck 


-Original Message-
From: cisco-nsp-boun...@puck.nether.net 
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Tony Varriale
Sent: Monday, October 19, 2009 5:07 PM
To: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] DWDM optics on 6500s


It will shortly but it won't do you any good with the existing family of 
sups.  The 2T will be the first (and last?) sup that can push the bandwidth 
to all those slots.

You can also reference the 6509-V-E...it's ready for 80gbps/slot.  You can 
order that today.  Note that it's a NEBS chassis.

tv


Re: [c-nsp] DWDM optics on 6500s

2009-10-20 Thread Kevin Graham
 I assume that even though the 6509-V-E is available, until the 80gig
 line cards and Sup are available, you'd be stuck at 40gig/slot?

Correct (nothing special about the 09-V-E in this respect compared to
any of the other -E's as far as I know). This is the same as how the
traditional (pre-E) 6500 chassis was capable of 2x20Gb per slot, but was
a step ahead of the rest of the system, which would only deliver 8Gb
(w/ SFM/SFM2) until the Sup720 was released.


Re: [c-nsp] DWDM optics on 6500s

2009-10-20 Thread Tony Varriale
Yup!  No 80g cards yet.  I haven't been debriefed on the architecture but 
I'm assuming it is going to be 2 x 40g or 4 x 20g backplane lanes.


tv


Re: [c-nsp] DWDM optics on 6500s

2009-10-19 Thread Nick Hilliard

On 05/10/2009 22:41, Mark Tinka wrote:

That said, it's also clear the 6500 isn't done yet, and it's
still got a number of tricks up its sleeve. The question is,
"Will you wait?"


The c6500 is just a chassis.  So, if you're referring to the trick of 
upgrading both the line cards and the supervisor engine to something 
better, then yes, it's got more tricks up its sleeve.


As a side issue, there are electrical limitations imposed by the physical 
cross-bar unit inside the actual chassis, but I don't know how much of a 
problem these limitations are in practice.  Perhaps the problem of getting 
reliable 20G+ parallel data transfers across the backplane is greater than 
dealing with the bandwidth limitations imposed by the electrical 
characteristics of the physical crossbar.  Being hardware-related, this 
sort of stuff is well beyond my sphere of knowledge, and for all I know, 
c65ks operate using hordes of Maxwell's demons being slave-driven by
microscopic evil pixies.


Nick


Re: [c-nsp] DWDM optics on 6500s

2009-10-19 Thread Mark Tinka
On Monday 19 October 2009 07:42:10 pm Nick Hilliard wrote:

 The c6500 is just a chassis.  So, if you're referring to
 the trick of upgrading both the line cards and the
 supervisor engine to something better, then yes, it's got
 more tricks up its sleeve.

Of course, that's what I meant :-).

I'd think it's implied that a chassis without any useful 
line cards is just a rock taking up space :-).

My reference was more in terms of the platform than just a
series of chassis.

Cheers,

Mark.



Re: [c-nsp] DWDM optics on 6500s

2009-10-19 Thread Kevin Graham
 As a side issue, there are electrical limitations imposed by the physical
 cross-bar unit inside the actual chassis, but I don't know how much of a
 problem these limitations are in practice.

The 6500E was the key for this. Besides nutty amounts of PoE capacity, it
also picked up an improved backplane for 20G+ fabric channels, extended to
all 11 LC slots in the 6513.

(Still need to dig up details, as faster SSO time is also tied to the
chassis, though I can't recall why.)


Re: [c-nsp] DWDM optics on 6500s

2009-10-19 Thread Church, Charles
Are you saying a 6513-E chassis exists?  I can't find any reference to it.
That would solve a few of the problems we currently have (density issue).

Chuck 




Re: [c-nsp] DWDM optics on 6500s

2009-10-19 Thread Peter Rathlev
On Mon, 2009-10-19 at 14:12 -0400, Church, Charles wrote:
 Are you saying a 6513-E chassis exists?  I can't find any reference to
 it.  That would solve a few of the problems we currently have (density
 issue)

We were told about it at a local tech update on the Catalyst
platform about a month ago, but I also can't find any material on it.

-- 
Peter




Re: [c-nsp] DWDM optics on 6500s

2009-10-19 Thread Kevin Graham


 Are you saying a 6513-E chassis exists?  I can't find any reference to it.

Apparently not yet. (I had never paid attention to availability, as any
places we might use it would depend on full fabric connectivity).

Quick search turned up (the rather depressing):

   http://www.cisco.com/web/AP/partners/assets/docs/Day1_03a_Catalyst_Update.pdf

...it would appear the intention is to release the chassis with the new
supervisor. (Obviously the timelines there are out of date, since the same
deck cites an EARL8-based 720 in '09 and VTOR, which I'm guessing we'll
never see on a 6k.)

Several used-equipment vendors have matches for WS-C6513-E, so it may well
be on the global price list.

 That would solve a few of the problems we currently have (density issue)


My understanding to date is that it won't do any good until the next-gen
sup is out (as presumably there would be no other reason to hold back on
an 11x2 fabric configuration).


Re: [c-nsp] DWDM optics on 6500s

2009-10-05 Thread Jeff Bacon
 If you want to cut delay for switching, you may want to consider the
 new top-of-rack 10G boxes, which are typically cut-through.  You may
 find...

I'm thinking about that for within the datacenter. It's hard finding a
justification for the C-vendor's products though - an N7K is just too
much, and I guess the N5K is OK, but even starting out it's not...
inexpensive, esp. when it doesn't even include a meaningful layer-3
capability.

I guess I understand the product reasoning - a large datacenter is built
around N7Ks, with N5K distro - which is great, if you have a massive
datacenter... but what about us poor saps in the middle and lower tiers?
Or aren't we interesting anymore? Oh, I understand, we're supposed to be
virtualizing and buying the 1000v switches... except virtual servers
don't do me any good...


 Personally, I have a bit of a thing against X2, but that's just me.
 Make your own mind up.

Fair enough. 

  3) Does 6500 switching performance blow super-hard, or just so-so hard?
  (6-15us is ok.) Yes a 4900M might be faster, or a J-product, but I don't
  want to change platform really, I need NAT and don't want to use
  routers, I want to keep box count down (co-lo), and having a whole box
  just for passing 10G doesn't IMO make sense because I'd still have to
  get it into the 6500 anyway.
 The 6500 is a great 1G switch platform, but doesn't excel in the 10G
 range, particularly with 6704 blades.

Admittedly, for the cost, I can buy an arista 1U for wave passthru and
just tap multiple 1Gs over to the 6500. 

Why particularly with 6704 blades? Is there something particularly
wrong with them? 

Thanks,
-bacon




Re: [c-nsp] DWDM optics on 6500s

2009-10-05 Thread Nick Hilliard

On 05/10/2009 15:35, Jeff Bacon wrote:

Admittedly, for the cost, I can buy an arista 1U for wave passthru and
just tap multiple 1Gs over to the 6500.


Aristas use SFP+.  Good luck running colours over them. :-)

Actually, Optoway in Taiwan produce CWDM SFP+ transceivers.  I don't know 
anyone using them, but given the power constraints imposed by the SFP+ form 
factor, I wouldn't expect long reach or anything.



Why particularly with 6704 blades? Is there something particularly
wrong with them?


Depends on what you do with them.  They are a first-generation blade and
six-year-old technology at this stage; well, things have moved on since
2003.  XENPAK is moribund as a transceiver type, which means that any money
you invest in transceivers will probably be written off when you retire
the blade.


If you're concerned about storm control (which, personally, I am), the 6704
can only limit to 0.33% of port capacity, which means that if you get a
broadcast / multicast storm on a 6704 port, it will bang out 33 megs of
data per second before storm control even notices.  Most hosts will happily
ignore the multicast traffic, but the broadcast traffic could cause serious
trouble.
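
For concreteness, the arithmetic behind that number: 0.33% of 10Gb/s is
about 33Mb/s that will flow untouched below the suppression threshold.
Configuration-wise it's just the interface-level knob - a minimal sketch
(interface numbering illustrative; check the minimum level your code
actually accepts):

  interface TenGigabitEthernet1/1
   ! 0.33 is the floor on the 6704 per the above; storms below
   ! ~33Mb/s of broadcast cannot be suppressed at all
   storm-control broadcast level 0.33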


If you need to push wire-speed 10G on a 6704, there are conflicting reports 
as to whether this works well.  Some people say yes; others no - there's 
lots of discussion about this in the c-nsp archives.  It can help to use a 
DFC if you're banging out a lot of traffic, but that's extra €€€ on top of 
a product which already has a high cost per port.


The 6708 is lots better than the 6704 if you operate it in
non-oversubscribed mode; apart from anything else, it has a built-in DFC,
which means that you don't need to retrofit one for high-traffic
environments.


As I said, it depends on what you want to do.  If you're running just a 
couple of gigs and don't care about the broadcast traffic problem or, say, 
are using them for L3 traffic instead of L2, then they are great. 
Similarly, the C65k+Sup720 makes a really nice high-density,
feature-rich 1G platform.  But if you're planning to run lots of very high
bandwidth stuff, it might be better to use a different platform.


Nick


Re: [c-nsp] DWDM optics on 6500s

2009-10-05 Thread Azher Mughal
To use SFP+ optics from other vendors in an Arista, you need to get them
enabled first.


-Azher



Re: [c-nsp] DWDM optics on 6500s

2009-10-05 Thread Richard A Steenbergen
On Mon, Oct 05, 2009 at 04:06:31PM +0100, Nick Hilliard wrote:
 Depends on what you do with them.  They are a first generation blade, and 
 are 6yo technology at this stage and, well, things have moved on since 
 2003.  XENPAK is moribund as a transceiver type which means that any money 
 you invest into buying transceivers will probably be written off when you 
 retire the blade.

Don't forget they are absurdly under-buffered (16MB per card, compared
to 256MB for the 6708), and you can easily cause head-of-line blocking with
certain traffic profiles. If you want to run anywhere close to line rate
on them you need to monitor for drops or overruns and be prepared to
play the port shuffle game to find an arrangement that works. Passing a
lot of traffic within the same fabric channel (from port 1 to 2, or
3 to 4) is the biggest sin; it will start dropping at 7 Gbps.
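
If you want something concrete to watch, the quick checks are along these
lines (a sketch only - exact counters and output vary by IOS release, so
treat these as starting points):

  ! per-port drop/overrun counters on the suspect 6704 ports
  show interfaces TenGigabitEthernet1/1 | include drops|overrun
  ! fabric channel load and drop counters chassis-wide
  show fabric utilization all
  show fabric drop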

-- 
Richard A Steenbergen r...@e-gerbil.net   http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)


Re: [c-nsp] DWDM optics on 6500s

2009-10-05 Thread Jeff Bacon
 Don't forget they are absurdly under-buffered (16MB per card, compared
 to 256MB for 6708), and you can easily cause head of line blocking with
 certain traffic profiles. If you want to run anywhere close to line rate
 on them you need to monitor for drops or overruns and be prepared to
 play the port shuffle game to find an arrangement that works. Passing a
 lot of traffic within the same fabric channel (from port 1 to 2, or
 3 to 4) is the biggest sin, it will start dropping at 7 Gbps.

Well that's wonderfully comforting. Though I really probably only need
two ports anyway - ring-in and ring-out. Maybe not so bad. I'd consider
a 720-VS-10G head if I had some confidence that those two ports on the
sup were actually connected to the fabric. 

I don't really need to run line rate - this is more about latency and
burst capacity than sustained throughput. I have loads that burst from 0
to 500Mb/sec (then back) in nothing flat, and multiple of those may run
through the wire at the same time. Or not. 

Someone pointed out that the X2 and SFP+ xcvrs don't have much punch,
and I'm going to be shooting 20-30km through passive MUXes. So that
might matter.
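
(Rough link budget, for my own sanity: standard fibre runs about 0.25dB/km
at 1550nm, so 30km is ~7.5dB, plus a couple of dB of insertion loss per
passive mux pass - call it 12-15dB end to end. Numbers are back-of-envelope,
not from any datasheet.)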

(This is a bit of a roll-yer-own local metro NYC ring, which I'm doing
because I can get the wave for not much more than I'd pay for the
switched gig.) 



Re: [c-nsp] DWDM optics on 6500s

2009-10-05 Thread Richard A Steenbergen
On Mon, Oct 05, 2009 at 03:47:05PM -0500, Jeff Bacon wrote:
 Well that's wonderfully comforting. Though I really probably only need
 two ports anyway - ring-in and ring-out. Maybe not so bad. I'd consider
 a 720-VS-10G head if I had some confidence that those two ports on the
 sup were actually connected to the fabric. 

Can't tell you anything about the VS-10G, but if you're doing it on a 6704
make sure you use ports 1 and 3, or 2 and 4, not 1 and 2, etc.
Unfortunately I have to deal with many hundreds of 10GE ports on 6704s
(what can I say, they're cheap :P), so we tend to pair them up as
port-channels (i.e. members 1/1 and 1/2, 2/1 and 2/2, etc) since this
guarantees traffic will never go in port 1 and out port 2 on any given
fabric channel.
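
Roughly like this, if it helps (a sketch only - slot numbering illustrative,
and use whatever channel-group mode your far end supports):

  ! ports 1-2 share one 6704 fabric channel, 3-4 the other; bundling
  ! each pair means traffic arriving on one member is never switched
  ! back out the other member of the same channel
  interface range TenGigabitEthernet1/1 - 2
   channel-group 1 mode on
  interface range TenGigabitEthernet1/3 - 4
   channel-group 2 mode on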

 I don't really need to run line rate - this is more about latency and
 burst capacity than sustained throughput. I have loads that burst from 0
 to 500Mb/sec (then back) in nothing flat, and multiple of those may run
 through the wire at the same time. Or not. 

Yeah ok that won't challenge pretty much any hardware. :)

 Someone pointed out that the X2 and SFP+ xcvrs don't have much punch,
 and I'm going to be shooting 20-30km through passive MUXes. So that
 might matter.

X2 is nothing more than a physically smaller XENPAK case; the interface
and, for the most part, the components (if you take apart a modern XENPAK,
you'll see most of it is empty space) are exactly the same. Basically X2
only exists so lazy companies who don't want to redesign their boards
(Hi Cisco!) can keep using the same components from their old XENPAK
designs.

SFP+ is an entirely different beast, two generations removed from XENPAK
(XENPAK -> XFP -> SFP+), and with very low max power caps which prevent it
from being used for most long reach/DWDM applications. Basically SFP+
only exists so you can stuff 48 10GE ports into a blade or 1U switch,
but it's really only useful if you need to do a large number of short
reach ports (i.e. datacenter aggregation). The only redeeming quality of
SFP+ is you can finally get LR for them (I won't touch SR outside of
same-rack applications, way too many problems) at not unreasonable
prices.

XFP is still the best all-around optics platform for the full range of
features, but unfortunately you'll see less and less focus here as
everyone jumps on the SFP+ bandwagon as the next new thing, even when
it is completely unnecessary and in fact only serves to limit function.

Slightly dated now (from feb 08) but mostly still accurate:

http://www.nanog.org/meetings/nanog42/presentations/pluggables.pdf

-- 
Richard A Steenbergen r...@e-gerbil.net   http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)


Re: [c-nsp] DWDM optics on 6500s

2009-10-05 Thread Tim Durack
 Well that's wonderfully comforting. Though I really probably only need
 two ports anyway - ring-in and ring-out. Maybe not so bad. I'd consider
 a 720-VS-10G head if I had some confidence that those two ports on the
 sup were actually connected to the fabric.

The 10Gig ports on the VS-S720 are fabric-attached.

 I don't really need to run line rate - this is more about latency and
 burst capacity than sustained throughput. I have loads that burst from 0
 to 500Mb/sec (then back) in nothing flat, and multiple of those may run
 through the wire at the same time. Or not.

I think lots of people are in the latency-not-bandwidth situation.
That's probably why most vendors aren't producing dense 10Gig cards
yet. We have the situation where GigE latency is too high for some
apps, but 10Gig is okay.

 Someone pointed out that the X2 and SFP+ xcvrs don't have much punch,
 and I'm going to be shooting 20-30km through passive MUXes. So that
 might matter

Opnext claims ER SFP+. Haven't seen anyone doing ZR or anything more
exotic yet. Sure it will come though. Something FEC/EFEC in the 200km
range would be interesting for many people.


-- 
Tim:
Sent from New York, NY, United States


Re: [c-nsp] DWDM optics on 6500s

2009-10-05 Thread Tim Durack
We've selected the 6708 for our 10Gig installs. DFCs and good-sized
buffers. Lots of availability on the used market. Can be run in
line-rate or over-subscribed mode, whichever suits your deployment.

I have hopes for SFP+ linecards to drive 10Gig costs down, but I don't
think much is going to happen until 40Gig/100Gig is the new backbone.

Tim:



Re: [c-nsp] DWDM optics on 6500s

2009-10-05 Thread Mark Tinka
On Monday 05 October 2009 11:06:31 pm Nick Hilliard wrote:

 As I said, it depends on what you want to do.  If you're
 running just a couple of gigs and don't care about the
 broadcast traffic problem or, say, are using them for L3
 traffic instead of L2, then they are great. Similarly,
 the C65k+sup720 platform makes a really nice high
 density, feature rich 1G platform.  But if you're
 planning to run lots of very high bandwidth stuff, it
 might be better to use a different platform.

From Cisco, I think that if the goal is to aggregate n x 
10Gbps Ethernet in abundance, for pure Layer 2 core 
switching over a limited distance (within the data centre), 
the Nexus 5000 might not be such a bad consideration.

Preliminary pricing for this vs. a couple of WS-X6708s is
comparable, and better in certain cases. YMMV.

That said, it's also clear the 6500 isn't done yet, and it's
still got a number of tricks up its sleeve. The question is,
"Will you wait?"

Cheers,

Mark.



[c-nsp] DWDM optics on 6500s

2009-10-02 Thread Jeff Bacon
I am looking at getting some metro waves (mostly 20-40km) between sites;
I'm working with a provider who is using passive splitters on dark runs
and they're willing to split me out a wave for near the same cost as
just running a gig switched.

Currently, I am thinking of using a 6704 blade with a DFC3B (I'm using
Sup720-3Bs - no call for the C model in my environ) and buying tuned
optics.

The goal is primarily serialization delay reduction, not
actually running 10G of traffic - I'll be lucky to run 1-2Gb/s (though
it'll mostly be 60-100 byte packets).
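
(Back-of-envelope on why 10G helps even at those rates: a 100-byte packet
is 800 bits, so it serializes in roughly 0.8us at 1Gb/s versus 0.08us at
10Gb/s, and a 1500-byte frame drops from ~12us to ~1.2us.)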

1) The Cisco optics appear to be in short supply and damn expensive. I
may have to go third-party. I know it's a gamble. Any other issues I
should think about besides what's been discussed here?

2) Is XENPAK on 6704 viable? Any gotchas I should know about with
XENPAKs vs X2? 

3) Does 6500 switching performance blow super-hard, or just so-so hard?
(6-15us is ok.) Yes a 4900M might be faster, or a J-product, but I don't
want to change platform really, I need NAT and don't want to use
routers, I want to keep box count down (co-lo), and having a whole box
just for passing 10G doesn't IMO make sense because I'd still have to
get it into the 6500 anyway. 

4) What's reliability on the tuned optics (vs say using a freq-shift box
with a normal 10G optic)? Is this of the level that I should expect
significant BER, or keep a spare on the shelf, or is it pretty much
rock-solid?

As you might guess, this is my first foray into actually implementing
DWDM runs - I've studied it and planned it, but this is now at the
"buy stuff" level, and I don't want to assume I know everything. Relevant
document pointers appreciated.

Thanks,
-bacon



Re: [c-nsp] DWDM optics on 6500s

2009-10-02 Thread Nick Hilliard

On 02/10/2009 17:44, Jeff Bacon wrote:

I am looking at getting some metro waves (mostly 20-40km) between sites;
I'm working with a provider who is using passive splitters on dark runs
and they're willing to split me out a wave for near the same cost as
just running a gig switched.


I went through this some while back, and on the basis that:

- coloured xenpaks are exotic, expensive and only produced by a single 
manufacturer in the world (opnext, as you ask)
- coloured xenpaks will only last as long as your 6704 card, meaning that 
when you retire this kit, your entire coloured optics investment is lost
- transponders were not hugely more expensive than the prices I was quoted 
for cisco coloured optics


... I decided that coloured xenpaks, while marginally cheaper in the short 
term, were actually a bad strategic move in the long term.   Given the way 
that our network has changed since we made that decision, it turns out that 
it was a good decision to make, as we're completely flexible about what kit 
we use at each end of the link, and have chosen to exercise that flexibility.


There is also a much better selection of coloured XFPs on the market than 
coloured xenpak.



The goal is primarily serialization latency delay reduction, not
actually running 10G of traffic - I'll be lucky to run 1-2GB (though
it'll mostly be 60-100byte packets).


If you want to cut delay for switching, you may want to consider the new
top-of-rack 10G boxes, which are typically cut-through.  You may find that
these boxes + SR SFP+ + WDM transponders are quite cost-favourable compared
to c6500 chassis space + 6704 + coloured xenpak.  The Cisco N5K may be a
good option here.  But other vendors have similar-style boxes (Brocade
Ti24X, Extreme X650, F10 S2410, Arista Networks *.*, etc).  Oh, and the
SFP+ boxes will also run 1G ethernet on SFPs (although the N2K has some
limitations).  This is a nice feature win.



1) The cisco optics appear to be in short supply and damn expensive. I
may have to go third-party. I know it's a gamble. Any other issues I
Should think about besides what's been discussed here?


Coloured xenpaks are a single-vendor product and I have heard that they are
mostly made to order, hence the delay.



2) Is XENPAK on 6704 viable? Any gotchas I should know about with
XENPAKs vs X2?


X2 == xenpak version 2.  Their power draw is slightly less than xenpak, but
lots more than xfp / sfp+.  Only HP and Cisco use X2 for ethernet switches -
everyone else uses XFP and latterly SFP+, which means that there is less
pricing pressure and more vendor lock-in if you go down the X2 route.


Personally, I have a bit of a thing against X2, but that's just me.  Make 
your own mind up.



3) Does 6500 switching performance blow super-hard, or just so-so hard?
(6-15us is ok.) Yes a 4900M might be faster, or a J-product, but I don't
want to change platform really, I need NAT and don't want to use
routers, I want to keep box count down (co-lo), and having a whole box
just for passing 10G doesn't IMO make sense because I'd still have to
get it into the 6500 anyway.


The 6500 is a great 1G switch platform, but doesn't excel in the 10G range, 
particularly with 6704 blades.


Nick