Re: [ppml] too many variables

2007-08-14 Thread Leo Bicknell
In a message written on Mon, Aug 13, 2007 at 05:53:00PM -0700, Scott Whyte 
wrote:
 Pick a newly released Core 2 Duo.  How long will Intel be selling it?
 How does that compare with getting it into your RP design, tested,
 produced, OS support integrated, sold, and stocked in your depots?

Intel designates chips for long life.  That's why Vendor J is
still souring P-III 600's, which were new almost 8 years ago now.
Which one of the current chips is in that bucket, I don't know, but
the vendors could find out.

Plus, your argument doesn't hold for the simple reason that servers
have the same lifespan as routers in most companies.  HP, Dell,
IBM, they don't seem to be going under with changes in Intel's line
of chips.  They don't seem to have support issues.  As the vendors
move to off the shelf parts the arguments about testing, stocking,
and so forth start to go out the window.

More importantly, why specialize?  Vendor J's RE is basically a PC
connected to the backplane with FastEthernet.  They did a lot of
engineering in airflow, sheet metal, and other packaging issues to
put it in a Juniper package, but to what end?

Compare with Avaya.  When they moved to a Linux brain in their phone
switch line they moved the brain out of the specialized forwarding
hardware (the old Definity PBX) and into a, wait for it, PC!  Yes,
an off the shelf 2U PC they source from a third party, connected
to the backplane with Gigabit Ethernet.

Vendors also kill themselves on the depot side because they hate
to give you a free upgrade.  If vendors changed their maintenance
policies to be what you have or faster, when it became cost
prohibitive to stock P-III 600's they could stop, giving you the
P-III 1.2g that came afterwards when you RMA a part.  It's probably
cheaper to stop stocking multiple parts and provide free upgrades
on failure than to stock all the varieties.

Of course, I think if the RE were an external 2RU PC that they sold
for $5,000 (which is still highway robbery) ISP's might upgrade
more than once every 10 years

The problem here is that large companies don't like to take risk,
and any change is perceived as a risk.  Cisco and Juniper will not
be creative in finding a solution, particularly when it may reduce
cost (and thus, revenue).  Small startups that might take the risk
can't play in the specialized forwarding side of things.  We can exist
in this state,  primarily because we're not pushing the cutting edge.

Route Processors are WAY behind current technology.

Fordwarding hardware is WAY ahead of current need.

Cost/bit is the problem, and has been for some number of years.  We
have OC-768, but can't afford to deploy the DWDM systems to support
it.  We have 32 way Xenon boxes, but we can't afford to change the
design to use them as route processors.

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org


pgpbPBueRSuyK.pgp
Description: PGP signature


Re: [ppml] too many variables

2007-08-14 Thread Adrian Chadd

On Tue, Aug 14, 2007, Leo Bicknell wrote:

 Of course, I think if the RE were an external 2RU PC that they sold
 for $5,000 (which is still highway robbery) ISP's might upgrade
 more than once every 10 years

Sounds like an experiment. Anyone have a spare J M40?

(*duck*)





Adrian



Re: [ppml] too many variables

2007-08-13 Thread Eliot Lear


Leo Bicknell wrote:

To Bill's original e-mail.  Can we count on 2x every 18 months going
forward?  No.  But betting on 2x every 24 months, and accounting for the
delta between currently shipping and currently available hardware seems
completely reasonable when assessing the real problem.
  


This assumes the real problem is CPU performance, where many have 
argued that the real problem is memory bandwidth.  Memory doesn't track 
Moore's Law.  Besides, Moore's Law isn't a law.  What's your Plan B?  
This is where a lot of RRG/RAM work is going on right now.


Eliot





Re: [ppml] too many variables

2007-08-13 Thread Eliot Lear


Leo Bicknell wrote:

Now, once the FIB is computed, can we push it into line cards, is
there enough memory on them, can they do wire rate lookups, etc are
all good questions and all quickly drift into specialized hardware.
There are no easy answers at that step...
  


I think we're agreeing that it's the FIB management that's going to kill 
you with all the entropy that Paul keeps alluding to.


Eliot


RE: [ppml] too many variables

2007-08-12 Thread michael.dillon

 And yet people still say the sky is falling with 
 respect to routing convergence and FIB size.  
 Probably a better comparison BTW, would be with a

Actually, the better comparison is with the power of current processors
used in Juniper and Cisco gear with the current Moore's law power of
common off-the-shelf PC processors. Then go back to the point in time
when there were real actual issues with FIB size on routers, and look at
the same relative power. 

Today, is there a bigger or a smaller gap than way back when there were
real problems on the net. If the gap is bigger or the same, then we are
nuts to worry about it. If the gap is significantly smaller then we
should get some serious researchers to figure out the real limits of
current technology and the future technology at the point where the gap
has gone to zero.

--Michael Dillon





Re: [ppml] too many variables

2007-08-10 Thread Leo Bicknell
In a message written on Thu, Aug 09, 2007 at 04:21:37PM +, [EMAIL 
PROTECTED] wrote:
 (1) there are technology factors we can't predict, e.g.,
 moore's law effects on hardware development

Some of that is predictable though.  I'm sitting here looking at a
heavily peered exchange point router with a rather large FIB.  It
has in it a Pentium III 700Mhz processor.  Per Wikipedia
(http://en.wikipedia.org/wiki/Pentium_III) it appears they were
released in late 1999 to early 2000.  This box is solidly two,
perhaps three, and maybe even 4 doublings behind things that are
already available at your local best buy off the shelf.

Heck, this chip is slower than the original Xbox chip, a $400
obsolete game console.

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org


pgpBlRhs1cHBd.pgp
Description: PGP signature


Re: [ppml] too many variables

2007-08-10 Thread vijay gill
On 8/10/07, John Paul Morrison [EMAIL PROTECTED] wrote:

  And yet people still say the sky is falling with respect to routing
 convergence and FIB size.  Probably a better comparison BTW, would be with a
 Nintendo or Playstation, as they are MIPS and PowerPC based. Even the latest
 route processor for a decent peering box is only a 1.2 GHz PowerPC with 2
 GB RAM (RSP720) - so basically an old iBook is enough for the BGP control
 plane load these days? I think this has something to do with the vendors
 giving you just enough to keep you going, but not so much that you delay
 hardware upgrades :-)

 There have been big gains in silicon for the fast switched path, but the
 route processors even on high end routers are still pretty low end in
 comparison to what's common on the average desktop.
 I would say that when control plane/processor power becomes critical, I
 would hope to see better processors inside.

 With the IETF saying that speed and forwarding path are the bottlenecks
 now, not FIB size, perhaps there just isn't enough load to push Core Duo
 processors in your routers. (If Apple can switch, why not Cisco?)
 http://www3.ietf.org/proceedings/07mar/slides/plenaryw-3.pdf



I guess people are still spectacularly missing the real point. The point
isn't that  the latest generation hardware cpu du jour you can pick up from
the local hardware store is doubling processing power every n months. The
point is that getting them qualified, tested, verified, and then deployed is
a non trivial task. We need to be substantially behind moores observation to
be economically viable. I have some small number of route processors in my
network and it is a major hassle to get even those few upgraded. In other
words, if you have a network that you can upgrade the RPs on every 18
months, let me know.

/vijay


John Paul Morrison, CCIE 8191

 A better comparison would be with a Playstation or Nintendo,

 Leo Bicknell wrote:

 In a message written on Thu, Aug 09, 2007 at 04:21:37PM +, [EMAIL 
 PROTECTED] wrote:

  (1) there are technology factors we can't predict, e.g.,
 moore's law effects on hardware development

  Some of that is predictable though.  I'm sitting here looking at a
 heavily peered exchange point router with a rather large FIB.  It
 has in it a Pentium III 700Mhz processor.  Per Wikipedia
 (http://en.wikipedia.org/wiki/Pentium_III) it appears they were
 released in late 1999 to early 2000.  This box is solidly two,
 perhaps three, and maybe even 4 doublings behind things that are
 already available at your local best buy off the shelf.

 Heck, this chip is slower than the original Xbox chip, a $400
 obsolete game console.

   --

 ___
 PPML
 You are receiving this message because you are subscribed to the ARIN Public 
 Policy
 Mailing List ([EMAIL PROTECTED]).
 Unsubscribe or manage your mailing list subscription at:
 http://lists.arin.net/mailman/listinfo/ppml Please contact the ARIN Member 
 Services
 Help Desk at [EMAIL PROTECTED] if you experience any issues.



 ___
 PPML
 You are receiving this message because you are subscribed to the ARIN
 Public Policy
 Mailing List ([EMAIL PROTECTED]).
 Unsubscribe or manage your mailing list subscription at:
 http://lists.arin.net/mailman/listinfo/ppml Please contact the ARIN Member
 Services
 Help Desk at [EMAIL PROTECTED] if you experience any issues.




Re: [ppml] too many variables

2007-08-10 Thread Leo Bicknell
In a message written on Fri, Aug 10, 2007 at 11:08:26AM -0700, vijay gill wrote:
substantially behind moores observation to be economically viable. I
have some small number of route processors in my network and it is a
major hassle to get even those few upgraded. In other words, if you
have a network that you can upgrade the RPs on every 18 months, let me

You're mixing problems.

Even though you may only be able to put in a new route processor
every 3-5 years doesn't mean the vendor shouldn't have a faster
version every 18 months, or even sooner.  It's the addition of the
two that's the problem.  You're 5 year cycle may come a year before
the vendors 5 year cycle, putting you on 9 year old gear before you
refresh next.

Vendor J got it half right.  The RP is a separately replaceable
component based on a commodity motherboard, hooked in with commodity
ethernet, using the most popular CPU and ram on the market.  And
yes, I understand needing to pay extra for the sheet metal, cooling
calculations, and other items.

But, they still cost 10x a PC based on the same components, and are
upgraded perhaps every 3 years, at best.  They don't even take
advantage of perhaps going from a 2.0Ghz processor to a 2.4, using
the same motherboard, RAM, disk, etc.

But I think the point still stands, I bet Vendor J in particular
could pop out a Core 2 Duo based RP with 8 gig of ram and a 300+
gig hard drive in under 6 months while holding the price point if
BGP convergence demanded it, and their customers made it a priority.

To Bill's original e-mail.  Can we count on 2x every 18 months going
forward?  No.  But betting on 2x every 24 months, and accounting for the
delta between currently shipping and currently available hardware seems
completely reasonable when assessing the real problem.

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org


pgpRoMmRfNM6s.pgp
Description: PGP signature


Re: [ppml] too many variables

2007-08-10 Thread Steven M. Bellovin

On Fri, 10 Aug 2007 18:42:23 +
Paul Vixie [EMAIL PROTECTED] wrote:

 
   ... is that system level (combinatorial) effects would limit
   Internet routing long before moore's law could do so.
  
  It is an easy derivative/proxy for the system level effect is all.
  Bandwidth for updates (inter and intra system) are another choking
  point but folks tend to be even less aware of those than cpu.
 
 is bandwidth the only consideration?  number of graph nodes and
 number of advertised endpoints and churn rate per endpoint don't
 enter into the limits? at what system size does speed of light begin
 to enter into the equation?
 
Right.  What is the computational complexity of the current algorithm?



--Steve Bellovin, http://www.cs.columbia.edu/~smb


Re: [ppml] too many variables

2007-08-09 Thread Randy Bush

the fib in a heavily peered dfz router does not often converge now.  the
question is when will the router not be able to process the volume of
churn, i.e. fall behind further and further?  as there is non-trivial
headroom in the algorithms, moore's law on the processors, etc. etc.,
your message is as operationally meaningful as dave and john telling us
they can handle 2m prefixes today.

randy


Re: too many variables

2007-08-09 Thread Leigh Porter



Yes a very big unless. Multi-core processors are already available that 
would make very large BGP convergence possible. Change the algorithm as 
well and perhaps add some multi-threading to it and it's even better.



--
Leigh Porter


Patrick Giagnocavo wrote:



On Aug 9, 2007, at 12:21 PM, [EMAIL PROTECTED] wrote:


so putting a stake in the ground, BGP will stop working @ around
2,500,000 routes - can't converge...  regardless of IPv4 or IPv6.
unless the CPU's change or the convergence algorithm changes.


That is a pretty big unless .

Cordially

Patrick Giagnocavo
[EMAIL PROTECTED]




Re: too many variables

2007-08-09 Thread Steve Atkins



On Aug 9, 2007, at 12:09 PM, Leigh Porter wrote:




Yes a very big unless. Multi-core processors are already available  
that would make very large BGP convergence possible. Change the  
algorithm as well and perhaps add some multi-threading to it and  
it's even better.


Anyone have a decent pointer to something that covers the
current state of the art in algorithms and (silicon) router
architecture, and maybe an analysis that shows the reasoning
to get from those to realistic estimates of routing table size limits?

Cheers,
  Steve




--
Leigh Porter


Patrick Giagnocavo wrote:



On Aug 9, 2007, at 12:21 PM, [EMAIL PROTECTED] wrote:


so putting a stake in the ground, BGP will stop working @ around
2,500,000 routes - can't converge...  regardless of IPv4 or  
IPv6.

unless the CPU's change or the convergence algorithm changes.


That is a pretty big unless .

Cordially

Patrick Giagnocavo
[EMAIL PROTECTED]






Re: too many variables

2007-08-09 Thread Patrick Giagnocavo



On Aug 9, 2007, at 3:47 PM, Tony Li wrote:



On Aug 9, 2007, at 12:09 PM, Leigh Porter wrote:




Yes a very big unless. Multi-core processors are already available  
that would make very large BGP convergence possible. Change the  
algorithm as well and perhaps add some multi-threading to it and  
it's even better.



Not necessarily.  BGP convergence is strongly dependent on memory  
bandwidth and multiple cores do not increase that.


Tony


Sun just released the T2 chip, claimed 60GB/s memory bandwidth, on- 
board 10GbE interface etc.


Pricing under $1000 for an 8-core chip with 64 threads.

Cordially

Patrick Giagnocavo
[EMAIL PROTECTED]





Re: [ppml] too many variables

2007-08-09 Thread Randy Bush

 the fib in a heavily peered dfz router does not often converge now.
 never?  or over some predefined period of time?

not often

 as there is non-trivial headroom in the algorithms,
 the BGP algorithm does not change  (BGP-5, BGP-6 etc anyone)

algorithm != protocol

randy


RE: too many variables

2007-08-09 Thread Lincoln Dale

  I asked this question to a couple of folks:
 
   at the current churn rate/ration, at what size doe the FIB need to
  be before it will not converge?
 
  and got these answers:
 
 - jabber log -
 a fine question, has been asked many times, and afaik noone has
 provided any empirically grounded answer.
 
 a few realities hinder our ability to answer this question.
 
 (1) there are technology factors we can't predict, e.g.,
 moore's law effects on hardware development

Moore's Law is only half of the equation. It is the part that deals with route
churn  the rate at which those can be processed (both peer notification and
control-plane programming data-plane in the form of FIB changes).

Moore's Law almost has zero relevance to FIB sizes. It doesn't map to growth in
SRAM or innovations/mechanisms for how to reduce the requirements for SRAM
while growing FIB sizes.


cheers,

lincoln.




Re: too many variables

2007-08-09 Thread Joel Jaeggli

Lincoln Dale wrote:
  I asked this question to a couple of folks:

  at the current churn rate/ration, at what size doe the FIB need to
  be before it will not converge?

  and got these answers:

 - jabber log -
 a fine question, has been asked many times, and afaik noone has
 provided any empirically grounded answer.

 a few realities hinder our ability to answer this question.

 (1) there are technology factors we can't predict, e.g.,
 moore's law effects on hardware development
 
 Moore's Law is only half of the equation. It is the part that deals with route
 churn  the rate at which those can be processed (both peer notification and
 control-plane programming data-plane in the form of FIB changes).

Moore's law just makes an observation that the transistor count feasible
for a minimum cost component doubles every 24 months. It actually says
nothing about the performance of those components or their speed.

 Moore's Law almost has zero relevance to FIB sizes. It doesn't map to growth 
 in
 SRAM or innovations/mechanisms for how to reduce the requirements for SRAM
 while growing FIB sizes.

sram components are following their own trajectory and you can fairly
easily at this point project how big a cam you'll be able to buy and
what it's power consumption will be out a couple years from the products
currently in your routers (which are for the most part not state of the
art). That said, not all forwarding engines in line cards  utilize
ternary cams or srams so assumptions that involve sram and sram-like
components being the only game in town for fib storage are dangerous.

 
 cheers,
 
 lincoln.