Re: Routers in Data Centers

2010-09-27 Thread Ingo Flaschberger

But it seems that the NetFPGA does not have enough memory to hold a full view
(currently 340k routes).


It's just a development platform for prototyping designs, not
something you would use in production...
I want to use it to implement and test ideas that I have, and play
with some different forwarding architectures, not use it as a final
product :)


also, does a datacenter router/switch need a full table? isn't that
the job of the peering/transit routers in your scheme?



In my small network the datacenter router is also the peering/transit 
router.





Re: Routers in Data Centers

2010-09-26 Thread Chris Adams
Once upon a time, Joel Jaeggli joe...@bogus.com said:
 On Sep 25, 2010, at 9:05, Seth Mattinen se...@rollernet.us wrote:
  From the datacenter operator perspective, it would be nice if some of 
  these vendors would acknowledge the need for front-to-back cooling. I 
  mean, it is 2010.
 
 Backplanes make direct front-to-back cooling hard. Non-modular platforms can 
 do it just fine, however.

There are servers and storage arrays that have a front that is nothing
but hot-swap hard drive bays (plugged into backplanes), and they've been
doing front-to-back cooling since day one.  Maybe the router vendors
need to buy a Dell, open the case, and take a look.

The server vendors also somehow manage to make an empty case that costs
less than $10,000 (they'll even fill it up with useful stuff for less
than that).

-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.



Re: Routers in Data Centers

2010-09-26 Thread Joel Jaeggli


On Sep 26, 2010, at 8:26, Chris Adams cmad...@hiwaay.net wrote:
 Once upon a time, Joel Jaeggli joe...@bogus.com said:
 On Sep 25, 2010, at 9:05, Seth Mattinen se...@rollernet.us wrote:
 From the datacenter operator perspective, it would be nice if some of 
 these vendors would acknowledge the need for front-to-back cooling. I 
 mean, it is 2010.
 
 Backplanes make direct front-to-back cooling hard. Non-modular platforms can 
 do it just fine, however.
 
 There are servers and storage arrays that have a front that is nothing
 but hot-swap hard drive bays (plugged into backplanes), and they've been
 doing front-to-back cooling since day one.  Maybe the router vendors
 need to buy a Dell, open the case, and take a look.

The backplane for a sata disk array is 8 wires per drive plus a common power 
bus.
  
 
 The server vendors also somehow manage to make an empty case that costs
 less than $10,000 (they'll even fill it up with useful stuff for less
 than that).

Unit volume is a little higher, and the margins kind of suck. There's a reason 
why HP would rather sell you a blade server chassis than 16 1Us.

Equating servers and routers is like equating bouncy castle prices with renting 
an oil platform.

 -- 
 Chris Adams cmad...@hiwaay.net
 Systems and Network Administrator - HiWAAY Internet Services
 I don't speak for anybody but myself - that's enough trouble.
 



Re: Routers in Data Centers

2010-09-26 Thread Chris Adams
Once upon a time, Joel Jaeggli joe...@bogus.com said:
 On Sep 26, 2010, at 8:26, Chris Adams cmad...@hiwaay.net wrote:
  There are servers and storage arrays that have a front that is nothing
  but hot-swap hard drive bays (plugged into backplanes), and they've been
  doing front-to-back cooling since day one.  Maybe the router vendors
  need to buy a Dell, open the case, and take a look.
 
 The backplane for a sata disk array is 8 wires per drive plus a common power 
 bus.

Server vendors managed cooling just fine for years with 80 pin SCA
connectors.  Hard drives are also harder to cool, as they are a solid
block, filling the space, unlike a card of chips.

I'm not saying the problems are the same, but I am saying that a
backplane making cooling hard is not a good excuse, especially when
the small empty chassis costs $10K+.
-- 
Chris Adams cmad...@hiwaay.net
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.



Re: Routers in Data Centers

2010-09-26 Thread Joel Jaeggli


Joel's widget number 2

On Sep 26, 2010, at 10:47, Chris Adams cmad...@hiwaay.net wrote:

 Once upon a time, Joel Jaeggli joe...@bogus.com said:
 On Sep 26, 2010, at 8:26, Chris Adams cmad...@hiwaay.net wrote:
 There are servers and storage arrays that have a front that is nothing
 but hot-swap hard drive bays (plugged into backplanes), and they've been
 doing front-to-back cooling since day one.  Maybe the router vendors
 need to buy a Dell, open the case, and take a look.
 
 The backplane for a sata disk array is 8 wires per drive plus a common power 
 bus.
 
 Server vendors managed cooling just fine for years with 80 pin SCA
 connectors.  Hard drives are also harder to cool, as they are a solid
 block, filling the space, unlike a card of chips.

It's the same 80 wires on every single drive in the string.

There are fewer conductors embedded in a 12-drive SCA backplane than there are 
in a 12-drive SATA backplane, and in both cases they are generally two-layer 
PCBs. Compare that with the 10+ layer PCBs, approaching 1/4" thick, on the 
router. 

Hard drives are 6-12 W each; a processor complex that's north of 200 W per card 
is a rather different cooling exercise.  

 I'm not saying the problems are the same, but I am saying that a
 backplane making cooling hard is not a good excuse, especially when
 the small empty chassis costs $10K+.
 -- 
 Chris Adams cmad...@hiwaay.net
 Systems and Network Administrator - HiWAAY Internet Services
 I don't speak for anybody but myself - that's enough trouble.
 



Re: Routers in Data Centers

2010-09-26 Thread Seth Mattinen
On 9/26/10 11:09 AM, Joel Jaeggli wrote:
 
 
 Joel's widget number 2
 
 On Sep 26, 2010, at 10:47, Chris Adams cmad...@hiwaay.net wrote:
 
 Once upon a time, Joel Jaeggli joe...@bogus.com said:
 On Sep 26, 2010, at 8:26, Chris Adams cmad...@hiwaay.net wrote:
 There are servers and storage arrays that have a front that is nothing
 but hot-swap hard drive bays (plugged into backplanes), and they've been
 doing front-to-back cooling since day one.  Maybe the router vendors
 need to buy a Dell, open the case, and take a look.

 The backplane for a sata disk array is 8 wires per drive plus a common 
 power bus.

 Server vendors managed cooling just fine for years with 80 pin SCA
 connectors.  Hard drives are also harder to cool, as they are a solid
 block, filling the space, unlike a card of chips.
 
 It's the same 80 wires on every single drive in the string.
 
 There are fewer conductors embedded in a 12-drive SCA backplane than there are 
 in a 12-drive SATA backplane, and in both cases they are generally two-layer 
 PCBs. Compare that with the 10+ layer PCBs, approaching 1/4" thick, on the 
 router. 

Aw come on, that's no reason you can't just drill it full of holes. I
mean, it is 2010. It should be wireless by now.

~Seth



Re: Routers in Data Centers

2010-09-26 Thread Adam Armstrong

 On 24/09/2010 11:22, Venkatesh Sriram wrote:

Hi,

Can somebody educate me on (or pass some pointers to) what differentiates
a router operating in and optimized for data centers versus, say, a router
working in the metro Ethernet space? What is it that's required for
routers operating in data centers? High throughput, what else?


Depending upon the specific requirements of the scenario at each type of 
site, the optimal devices could be either identical, or completely 
different.


:)

adam.



Re: Routers in Data Centers

2010-09-26 Thread ym1r . jr
As far as I know, open-source solutions don't have support for fabrics or 
high-speed ASICs, so the throughput will always be a big difference, unless you 
are comparing against a pure software-interrupt packet platform.
--Original Message--
From: Adam Armstrong
To: Venkatesh Sriram
To: nanog@nanog.org
Subject: Re: Routers in Data Centers
Sent: Sep 25, 2010 7:18 PM

  On 24/09/2010 11:22, Venkatesh Sriram wrote:
 Hi,

 Can somebody educate me on (or pass some pointers to) what differentiates
 a router operating in and optimized for data centers versus, say, a router
 working in the metro Ethernet space? What is it that's required for
 routers operating in data centers? High throughput, what else?

Depending upon the specific requirements of the scenario at each type of 
site, the optimal devices could be either identical, or completely 
different.

:)

adam.



Sent via my BlackBerry® device from Claro

Re: Routers in Data Centers

2010-09-26 Thread Adrian Chadd
On Sun, Sep 26, 2010, ym1r...@gmail.com wrote:
 As far as I know, open-source solutions don't have support for fabrics or 
 high-speed ASICs, so the throughput will always be a big difference, unless 
 you are comparing against a pure software-interrupt packet platform.

Hasn't there been a post about this to the contrary?

Isn't someone from Google presenting at NANOG about this?



Adrian




Re: Routers in Data Centers

2010-09-26 Thread Rubens Kuhl
On Sun, Sep 26, 2010 at 8:54 PM,  ym1r...@gmail.com wrote:
 As far as I know, open-source solutions don't have support for fabrics or 
 high-speed ASICs, so the throughput will always be a big difference, unless 
 you are comparing against a pure software-interrupt packet platform.

Not high-speed ASICs, but there are hardware-forwarding, open-source (in
a broad definition) solutions:
http://netfpga.org

There are 3 related presentations at NANOG 50, which suggests these
solutions are reaching real ops quality.

Rubens



Re: Routers in Data Centers

2010-09-26 Thread Adrian Chadd
On Sun, Sep 26, 2010, Rubens Kuhl wrote:

 Not high-speed ASICs, but there are hardware-forwarding, open-source (in
 a broad definition) solutions:
 http://netfpga.org
 
 There are 3 related presentations at NANOG 50, which suggests these
 solutions are reaching real ops quality.

I hate to sound (more) like a broken record but if people want
to see open source hardware forwarding platforms succeeding
(and the software platforms get better), then look at trying to be
involved in their development.

Too many companies seem to think open source equates to "free stuff
that I can use and not pay for", rather than thinking of it as
a normal product (with development cycles, resources, etc. that any
commercial development requires) that gives them the ability to
choose their own direction rather than be beholden to the whims
of a vendor.

One of the fun divides in open source at times is the big gap between
"works" and "works in practice". The only way to get ops-ready stuff
is to work with open source people to make it actually work in your
environment rather than what works for them. :-)

(Or you could wait for Google - but doesn't that make you beholden
to them as your vendor? :)


Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $24/pm+GST entry-level VPSes w/ capped bandwidth charges available in WA -



Re: Routers in Data Centers

2010-09-26 Thread Heath Jones
I'm more than interested in developing a much cheaper, hardware-forwarding
router..
I think there is a lot of room for innovation - especially for the
target market in this thread.
If anyone wants to work with me on this, just let me know!
I've got a tonne of ideas and a bit of free time..

NetFPGA is a good platform; I'm saving my pennies to buy one and do
some development.
It's only a 4-port device, so not a device you would really use in
production, however.


 I hate to sound (more) like a broken record but if people want
 to see open source hardware forwarding platforms succeeding
 (and the software platforms get better), then look at trying to be
 involved in their development.



RE: Routers in Data Centers

2010-09-26 Thread Alex Rubenstein

 
 I'm not saying the problems are the same, but I am saying that a
 backplane making cooling hard is not a good excuse, especially when
 the small empty chassis costs $10K+.


And, not to mention that some vendors do it sometimes.

"The 9-slot Cisco Catalyst 6509 Enhanced Vertical Switch (6509-V-E) provides 
[stuff]. It also provides front-to-back airflow that is optimized for hot and 
cold aisle designs in colocated data center and service provider deployments 
and is compliant with Network Equipment Building Standards (NEBS) deployments."

It only took 298 years from the inception of the 6509 to get a front-to-back 
version. If you can do it with that oversized thing, it certainly can be done 
on a 7200, XMR, juniper whatever, or whatever else you fancy.

There is no good excuse. The datacenter of today (and yesterday) really needs 
front to back cooling; the datacenter of tomorrow requires and demands it.

If vendors cared, they'd do it. Problem is, there is a disconnect between 
datacenter designer, datacenter builder, datacenter operator, IT operator, and 
IT manufacturer. No one is smart enough, yet, to say, "if you want to put that 
hunk of crap in my datacenter, it needs to suck in the front and put out in the 
back, otherwise my PUE will be 1.3 instead of 1.2 and you will be to blame for 
my oversized utility bills."

Perhaps when a bean-counter paying the power bill sees the difference, it will 
matter. I dunno.

I'll crawl back under my rock now.









Re: Routers in Data Centers

2010-09-26 Thread Ingo Flaschberger

I'm more than interested in developing a much cheaper, hardware-forwarding
router..
I think there is a lot of room for innovation - especially for the
target market in this thread.
If anyone wants to work with me on this, just let me know!
I've got a tonne of ideas and a bit of free time..

NetFPGA is a good platform; I'm saving my pennies to buy one and do
some development.
It's only a 4-port device, so not a device you would really use in
production, however.


But it seems that the NetFPGA does not have enough memory to hold a full view 
(currently 340k routes).
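
A rough back-of-the-envelope sketch in Python; the bytes-per-entry figures are
assumptions (a flat hardware FIB entry modelled as prefix + mask + next-hop
index), not NetFPGA specifics:

# FIB sizing sketch; entry sizes below are assumptions, not NetFPGA specifics.
def fib_bytes(routes: int, bytes_per_entry: int) -> int:
    """Total bytes for a flat hardware FIB at a given entry size."""
    return routes * bytes_per_entry

full_view = 340_000  # approximate IPv4 full table, late 2010

for entry_size in (8, 16, 32):  # packed, TCAM-style, generous
    mib = fib_bytes(full_view, entry_size) / 2**20
    print(f"{entry_size:>2} B/entry -> {mib:4.1f} MiB")

Even the packed case works out to a couple of MiB of lookup memory, which is
more than a small on-chip table will hold.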





Re: Routers in Data Centers

2010-09-26 Thread Richard A Steenbergen
On Sun, Sep 26, 2010 at 09:24:54PM -0400, Alex Rubenstein wrote:
 
 And, not to mention that some vendors do it sometimes.
 
 "The 9-slot Cisco Catalyst 6509 Enhanced Vertical Switch (6509-V-E) 
 provides [stuff]. It also provides front-to-back airflow that is 
 optimized for hot and cold aisle designs in colocated data center and 
 service provider deployments and is compliant with Network Equipment 
 Building Standards (NEBS) deployments."

A classic 6509 is under 15U, a 6509-V-E is 21U. Anyone can do front to 
back airflow if they're willing to bloat the size of the chassis (in 
this case by 40%) to do all the fans and baffling, but then you'd have 
people whining about the size of the box. :)

 It only took 298 years from the inception of the 6509 to get a 
 front-to-back version. If you can do it with that oversized thing, it 
 certainly can be done on a 7200, XMR, juniper whatever, or whatever 
 else you fancy.

Well, a lot of people who buy 7200s, baby XMRs, etc, are doing it for 
the size. Lord knows I certainly bought enough 7606s instead of 6509s 
over the years for that very reason. I'm sure the vendors prefer to 
optimize the size footprint on the smaller boxes, and only do front to 
back airflow on the boxes with large thermal loads (like all the modern 
16+ slot chassis that are rapidly approaching 800W/card). Also, remember 
the 6509 has been around since its 9 slots were lucky to see 100W/card, 
which is a far cry from a box loaded with 6716s at 400W/card or other 
power hungry configs.
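
A quick sketch of what those per-card numbers mean for airflow, using the usual
sensible-heat rule of thumb (CFM ~= 3.16 * watts / delta-T in degrees F); the
20 F rise across the chassis is an assumption:

# Airflow back-of-the-envelope; wattages from the figures above, and an
# assumed 20 F temperature rise from intake to exhaust.
def cfm_required(watts: float, delta_t_f: float = 20.0) -> float:
    return 3.16 * watts / delta_t_f

chassis = {
    "9 slots @ 100 W/card": 9 * 100,
    "9 slots @ 400 W/card": 9 * 400,
    "16 slots @ 800 W/card": 16 * 800,
}
for label, watts in chassis.items():
    print(f"{label}: {watts / 1000:.1f} kW, ~{cfm_required(watts):.0f} CFM")

Going from roughly 0.9 kW to 12.8 kW per chassis is about 14x the air you have
to move through the same box.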

Remember the original XMR 32 chassis, which had side-to-side airflow? 
They quickly disappeared that sucker and replaced it with the much 
larger version they have today; I can only imagine how bad that was. :)

-- 
Richard A Steenbergen r...@e-gerbil.net   http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)



Re: Routers in Data Centers

2010-09-26 Thread Christian Martin

On Sep 26, 2010, at 10:29 PM, Richard A Steenbergen r...@e-gerbil.net wrote:

 On Sun, Sep 26, 2010 at 09:24:54PM -0400, Alex Rubenstein wrote:
 
 And, not to mention that some vendors do it sometimes.
 
  "The 9-slot Cisco Catalyst 6509 Enhanced Vertical Switch (6509-V-E) 
  provides [stuff]. It also provides front-to-back airflow that is 
  optimized for hot and cold aisle designs in colocated data center and 
  service provider deployments and is compliant with Network Equipment 
  Building Standards (NEBS) deployments."
 
 A classic 6509 is under 15U, a 6509-V-E is 21U. Anyone can do front to 
 back airflow if they're willing to bloat the size of the chassis (in 
 this case by 40%) to do all the fans and baffling, but then you'd have 
 people whining about the size of the box. :)

I would point out that it is quite possible to build a compact (say 4-5 U) 
front-to-back airflow platform if vendors were willing to pay to engineer a 
small midplane and leverage modular I/O cards in a single vertical arrangement. 
As an example, envisage an M10i with 8 single-height PIC slots, rear-mounted 
RE/PFE combos, and a top/bottom impeller. Same for a 72/73xx, or whatever 
platform you fancy. But would it make business sense? You'd lose rack space 
in favor of thermal efficiency.  

I think the push toward cloud computing and the re-emergence of big datacenters 
with far more stringent power and heat restrictions may actually drive such a 
move.   I guess we'll see...


C

 
 It only took 298 years from the inception of the 6509 to get a 
 front-to-back version. If you can do it with that oversized thing, it 
 certainly can be done on a 7200, XMR, juniper whatever, or whatever 
 else you fancy.
 
 Well, a lot of people who buy 7200s, baby XMRs, etc, are doing it for 
 the size. Lord knows I certainly bought enough 7606s instead of 6509s 
 over the years for that very reason. I'm sure the vendors prefer to 
 optimize the size footprint on the smaller boxes, and only do front to 
 back airflow on the boxes with large thermal loads (like all the modern 
 16+ slot chassis that are rapidly approaching 800W/card). Also, remember 
 the 6509 has been around since its 9 slots were lucky to see 100W/card, 
 which is a far cry from a box loaded with 6716s at 400W/card or other 
 power hungry configs.
 
 Remember the original XMR 32 chassis, which had side-to-side airflow? 
 They quickly disappeared that sucker and replaced it with the much 
 larger version they have today; I can only imagine how bad that was. :)
 
 -- 
 Richard A Steenbergen r...@e-gerbil.net   http://www.e-gerbil.net/ras
 GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
 



Re: Routers in Data Centers

2010-09-26 Thread Christopher Morrow
On Sun, Sep 26, 2010 at 11:02 PM, Heath Jones hj1...@gmail.com wrote:
 But it seems that the NetFPGA does not have enough memory to hold a full view
 (currently 340k routes).

 It's just a development platform for prototyping designs, not
 something you would use in production...
 I want to use it to implement and test ideas that I have, and play
 with some different forwarding architectures, not use it as a final
 product :)

also, does a datacenter router/switch need a full table? isn't that
the job of the peering/transit routers in your scheme?



Re: Routers in Data Centers

2010-09-26 Thread James P. Ashton


- Original Message -
On Sun, Sep 26, 2010 at 11:02 PM, Heath Jones hj1...@gmail.com wrote:
 But it seems that the NetFPGA does not have enough memory to hold a full view
 (currently 340k routes).

 It's just a development platform for prototyping designs, not
 something you would use in production...
 I want to use it to implement and test ideas that I have, and play
 with some different forwarding architectures, not use it as a final
 product :)

also, does a datacenter router/switch need a full table? isn't that
the job of the peering/transit routers in your scheme?


Sometimes, but often you get odd results when internal gateway routers only see 
a pair of default gateways via OSPF or IS-IS. Sometimes the only real fix is to 
have a full table on these routers as well as your border/peering routers. 
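
A toy longest-prefix-match sketch of the effect; the prefixes and next-hop
names below are purely illustrative:

# With only default routes the core can exit via either border; a more
# specific route from the full table steers traffic to the right one.
import ipaddress

def lookup(rib, dst):
    dst = ipaddress.ip_address(dst)
    best = max((net for net in rib if dst in net), key=lambda n: n.prefixlen)
    return rib[best]

defaults_only = {ipaddress.ip_network("0.0.0.0/0"): ["border-A", "border-B"]}

full_table = dict(defaults_only)
full_table[ipaddress.ip_network("198.51.100.0/24")] = ["border-B"]

dst = "198.51.100.10"
print("defaults only:", lookup(defaults_only, dst))  # either border, hash-dependent
print("full table   :", lookup(full_table, dst))     # the border that actually has the path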

James



RE: Routers in Data Centers

2010-09-26 Thread Simon Lyall


A few blog posts on datacentre network equipment that people may find 
interesting and relevant:



http://perspectives.mvdirona.com/2009/12/19/NetworkingTheLastBastionOfMainframeComputing.aspx

http://mvdirona.com/jrh/TalksAndPapers/JamesHamilton_CleanSlateCTO2009.pdf

http://perspectives.mvdirona.com/2010/08/01/EnergyProportionalDatacenterNetworks.aspx




--
Simon Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.




Re: Routers in Data Centers

2010-09-25 Thread Steven King
Cisco uses their own ASICs in their higher-end flagship devices, such as
the Catalyst 6500 series or the 2960 switches. You pretty much singled out
all the major players, including those who have been bought out (Foundry by
HP), and claimed they do not provide their own ASICs but rather flawed
3rd-party ones. I am actually surprised you didn't mention HP, Linksys or
Dell, as they are the most guilty of using 3rd-party ASICs and shoddy
software. If you are buying data-center-grade equipment from these vendors,
it will be quality hardware backed by their support (if purchased), such as
Cisco's SmartNet agreements.

Moral of the story: do your research on the devices you plan to
implement and ask for data sheets on how the features you need are
handled (in software or hardware). I know Juniper and Cisco provide such
documentation for their devices. Quality hardware, though more
expensive, will give you less trouble in the long run. You truly get
what you pay for in the networking industry.

On 9/24/10 9:28 PM, Richard A Steenbergen wrote:
 On Fri, Sep 24, 2010 at 03:52:22PM +0530, Venkatesh Sriram wrote:
 Hi,

  Can somebody educate me on (or pass some pointers to) what differentiates
  a router operating in and optimized for data centers versus, say, a router
  working in the metro Ethernet space? What is it that's required for
  routers operating in data centers? High throughput, what else?
 A datacenter router is a box which falls into a particular market 
 segment, characterized by extremely low cost, low latency, and high 
 density ethernet-centric boxes, at the expense of advanced features 
 typically found in more traditional routers. For example, these boxes 
 tend to lack any support for non-ethernet interfaces, MPLS, advanced 
 VLAN tag manipulation, advanced packet filters, and many have limited 
 FIB sizes. These days it also tends to mean you'll be getting a box with 
 only (or mostly) SFP+ interfaces, which are cheaper and easier to do 
 high density 10GE with, but at the expense of long reach optic 
 availability.

 A metro ethernet box also implies a particular market segment, 
 typically a smaller box (1-2U) that has certain advanced features which 
 are typically not found in other small boxes. Specifically, you're 
 likely to see advanced VLAN tag manipulation and stacking capabilities, 
 MPLS support for doing pseudowire/vpn PE termination, etc, that you 
 might normally only expect to see on a large carrier-class router.

 Also, an interesting side-effect of the quest for high density 10GE at 
 low prices is that modern datacenter routers are largely built on third 
 party commodity silicon rather than the traditional in-house ASIC 
 designs. Many of the major router vendors (Cisco, Juniper, Foundry, 
 Force10, etc) are currently producing datacenter routers which are 
 actually just their software (or worse, someone else's software with a 
 little search and replace action on a few strings) wrapped around third 
 party ASICs (EZchip, Marvell, Broadcom, Fulcrum, etc). These boxes can 
 definitely offer some excellent price/performance numbers, but one 
 unfortunate side effect is that many (actually, most) of these chips 
 have not been fully baked by the years of experience the more 
 traditional router vendors have developed. Many of them have some very 
 VERY serious design flaws, causing everything from preventing them from 
  fully implementing some of the features you would normally expect from a 
  quality router (multi-label stack MPLS, routed VLAN interface counters, 
  proper control-plane DoS filter/policing capabilities, etc.), or worse 
  (in some cases, much, much worse). YMMV, but the 30-second summary is 
 that many vendors consider datacenter users and/or use cases to be 
 unsophisticated, and they're hoping you won't notice or care about some 
 of these serious design flaws, just the price per port. Depending on 
 your application, that may or may not be true. :)


-- 
Steve King

Senior Linux Engineer - Advance Internet, Inc.
Cisco Certified Network Associate
CompTIA Linux+ Certified Professional
CompTIA A+ Certified Professional




Re: Routers in Data Centers

2010-09-25 Thread Seth Mattinen
On 9/24/10 5:28 PM, Alex Rubenstein wrote:
 While this question has many dimensions and there is no real definition of
 either, I suspect that what many people mean when they talk about a "DC
 router" is:
 
From the datacenter operator perspective, it would be nice if some of these 
vendors would acknowledge the need for front-to-back cooling. I mean, it is 
2010.
 


Well, if you look at the hardware it's dead obvious: airflow goes across
the linecards. Nexus 7k 10-slot has front bottom to back top airflow
because it uses vertically oriented cards.

~Seth



Re: Routers in Data Centers

2010-09-25 Thread Steven King


On 9/25/10 5:35 AM, Richard A Steenbergen wrote:
 On Sat, Sep 25, 2010 at 03:11:25AM -0400, Steven King wrote:
 Cisco uses their own ASICs in their higher-end flagship devices, such as 
 the Catalyst 6500 series or the 2960 switches. You pretty much singled out 
 all the major players, including those who have been bought out (Foundry by 
 HP), and claimed they do not provide their own ASICs but rather flawed 
 3rd-party ones. I am actually surprised you didn't mention HP, Linksys or 
 Dell, as they are the most guilty of using 3rd-party ASICs and shoddy 
 software. If you are buying data-center-grade equipment from these vendors, 
 it will be quality hardware backed by their support (if purchased), such as 
 Cisco's SmartNet agreements.
 My point was that every major vendor, even the ones who normally make 
 their own in-house ASICs, is also actively selling third party silicon 
 (or in some cases complete third party boxes) in order to compete in the 
 cheap datacenter optimized space. Folks like HP and Dell were never 
 in the business of making real routers to begin with, so them selling a 
 Broadcom reference design with 30 seconds of search and replace action 
 on the bundled software is not much of a shocker. The guys who do a 
 better job of it, like Foundry (who was bought by Brocade, not HP), at 
 least manage to use their own OS as a wrapper around the third party 
 hardware. But my other major point was that almost all of these third 
 party ASICs are sub-par in some way compared to the more traditional 
 in-house hardware. Many of them have critical design flaws that will 
 limit them greatly, and many of these design flaws are only just now 
 being discovered by the router vendors who are selling them.

 BTW, Cisco is actually the exception to the datacenter optimized boxes 
 being third party, as their Nexus 7K is an evolution of the 6500/7600 
 EARL ASICs, and their third party hw boxes are EZchip based ASR9k's. Of 
 course their Nexus software roadmap looks surprisingly similar to other 
 vendors doing it with third party hw, go figure. :)
Cisco definitely is doing some interesting things with the Nexus. Have
you seen the virtualized version?
 Moral of the story: do your research on the devices you plan to 
 implement and ask for data sheets on how the features you need are 
 handled (in software or hardware). I know Juniper and Cisco provide 
 such documentation for their devices. Quality hardware, though more 
 expensive, will give you less trouble in the long run. You truly get 
 what you pay for in the networking industry.
 It takes a pretty significant amount of experience and inside knowledge 
 to know who is producing the hardware and what the particular issues 
 are, which is probably well beyond most people. The vendors aren't going 
 to come out and tell you "Oh woops, we can't actually install a full 
 routing table in our FIB like we said we could," or "Oh btw, this box 
 can't filter control-plane traffic and any packet kiddie with a T1 can 
 take you down," or "FYI, you won't be able to bill your customers 'cause 
 the vlan counters don't work," or "just so you know, this box can't load 
 balance for shit, and L2 netflow won't work," or "yeah, sorry, you'll 
 never be able to do a double-stack MPLS VPN." The devil is in the 
 caveats, and the commodity silicon that's all over the datacenter space 
 right now is certainly full of them.
I agree it takes a significant amount of experience to know that
information off the top of your head, but I am able to find block
diagrams and part information for 98% of Cisco's hardware, old or new.
One needs to do their research on the device to know if it meets their
needs. The caveats are everywhere, I agree; even some of the experienced
network guys get tripped up by them if they aren't careful. Planning
is the key to overcoming these problems.

-- 
Steve King

Senior Linux Engineer - Advance Internet, Inc.
Cisco Certified Network Associate
CompTIA Linux+ Certified Professional
CompTIA A+ Certified Professional




Re: Routers in Data Centers

2010-09-24 Thread Valdis . Kletnieks
On Fri, 24 Sep 2010 15:52:22 +0530, Venkatesh Sriram said:

 Can somebody educate me on (or pass some pointers to) what differentiates
 a router operating in and optimized for data centers versus, say, a router
 working in the metro Ethernet space? What is it that's required for
 routers operating in data centers? High throughput, what else?

There's corporate data centers and there's colo data centers. The two are
sufficiently different that the requirements are divergent.  For starters,
in a colo, the guy on blade 3 port 5 is quite possibly a competitor of
the guy who's got blade 2 port 17.  In the corporate data center, we
maintain the polite fiction that those two are working together for
a common goal.  This has implications for security features, billing,
bandwidth engineering, and almost every other feature on a router.






Re: Routers in Data Centers

2010-09-24 Thread James P. Ashton
The biggest difference that I see is that you generally use different resources 
in a datacenter (colo datacenter).

For example, I run out of HSRP groups on a 6500 long before I run out of ports 
or capacity. I don't need to worry about QoS much, but a less complex rate-limit 
command (as opposed to policing) is very useful. Also, front-to-back cooling is 
optimal in a datacenter and often not available.
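
Both a simple rate-limit and a full policer come down to some variant of a
token bucket; a minimal sketch, with made-up rate and burst values rather than
any vendor's defaults:

# Single-rate token-bucket policer sketch (illustrative values only).
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.burst = burst_bytes         # bucket depth
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def conforms(self, pkt_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True                  # conform: forward
        return False                     # exceed: drop or remark

policer = TokenBucket(rate_bps=10_000_000, burst_bytes=64_000)  # ~10 Mbit/s, 64 KB burst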

James



- Original Message -
From: Venkatesh Sriram vnktshsri...@gmail.com
To: nanog@nanog.org
Sent: Friday, September 24, 2010 6:22:22 AM
Subject: Routers in Data Centers

Hi,

Can somebody educate me on (or pass some pointers to) what differentiates
a router operating in and optimized for data centers versus, say, a router
working in the metro Ethernet space? What is it that's required for
routers operating in data centers? High throughput, what else?

Thanks, Venkatesh




Re: Routers in Data Centers

2010-09-24 Thread Marshall Eubanks

On Sep 24, 2010, at 6:22 AM, Venkatesh Sriram wrote:

 Hi,
 
 Can somebody educate me on (or pass some pointers to) what differentiates
 a router operating in and optimized for data centers versus, say, a router
 working in the metro Ethernet space? What is it that's required for
 routers operating in data centers? High throughput, what else?
 
 Thanks, Venkatesh

Well, they generally have to be rack mountable. Besides that, I have seen 
everything from 
tiny Linux boxes to big refrigerator sized units (of course, the latter
may be on the floor). I don't think you are going to find much commonality 
there, so you
need to refine what it is you want to do. (For example, to move 10 Mbps or 100 
Gbps or... ? 
Run BGP or NAT or ... ?)

Regards
Marshall 


 
 




Re: Routers in Data Centers

2010-09-24 Thread Warren Kumari


On Sep 24, 2010, at 6:22 AM, Venkatesh Sriram wrote:


Hi,

Can somebody educate me on (or pass some pointers to) what differentiates
a router operating in and optimized for data centers versus, say, a router
working in the metro Ethernet space? What is it that's required for
routers operating in data centers? High throughput, what else?



While this question has many dimensions and there is no real definition of  
either, I suspect that what many people mean when they talk about a "DC  
router" is:

Primarily Ethernet interfaces
High port density
Designed to deal with things like VRRP / VLAN / ethernet type features.
Possibly CAM based, possibly smaller buffers.
Less likely to be taking full routes.

This is very similar to the religious debate about "What's the  
difference between a 'real' router and an L3 switch?"


Just my 2 cents.
W




Thanks, Venkatesh



--
Consider orang-utans.
In all the worlds graced by their presence, it is suspected that they  
can talk but choose not to do so in case humans put them to work,  
possibly in the television industry. In fact they can talk. It's just  
that they talk in Orang-utan. Humans are only capable of listening in  
Bewilderment.

-- Terry Pratchett





Re: Routers in Data Centers

2010-09-24 Thread bmanning


the power/cooling budget for a rack full of routers vs a rack
full of cores might be a distinction to make.  I know that 
historically, the data center operator made no distinction
and a client decided to push past the envelope and replaced
their kit with space heaters.  most data centers now are fairly
restrictive on the power/cooling budget for a given footprint.


--bill



On Fri, Sep 24, 2010 at 01:08:23PM -0400, Warren Kumari wrote:
 
 On Sep 24, 2010, at 6:22 AM, Venkatesh Sriram wrote:
 
 Hi,
 
 Can somebody educate me on (or pass some pointers to) what differentiates
 a router operating in and optimized for data centers versus, say, a router
 working in the metro Ethernet space? What is it that's required for
 routers operating in data centers? High throughput, what else?
 
 
 While this question has many dimensions and there is no real definition of  
 either, I suspect that what many people mean when they talk about a "DC  
 router" is:
 Primarily Ethernet interfaces
 High port density
 Designed to deal with things like VRRP / VLAN / ethernet type features.
 Possibly CAM based, possibly smaller buffers.
 Less likely to be taking full routes.
 
 This is very similar to the religious debate about "What's the  
 difference between a 'real' router and an L3 switch?"
 
 Just my 2 cents.
 W
 
 
 
 Thanks, Venkatesh
 
 
 --
 Consider orang-utans.
 In all the worlds graced by their presence, it is suspected that they  
 can talk but choose not to do so in case humans put them to work,  
 possibly in the television industry. In fact they can talk. It's just  
 that they talk in Orang-utan. Humans are only capable of listening in  
 Bewilderment.
 -- Terry Practhett