Re: [c-nsp] full routing table / provider-class chassis

2009-06-17 Thread Jo Rhett

 On Jun 15, 2009, at 11:29 AM, Kevin Graham wrote:
Given the 192 ports of 10/100/1000, presumably this is aggregating customers, in which case it'd be best to roll these up on 7600/RSP720 (along with their associated BGP, since most of them would probably be suitable for peer-groups). uRPF wouldn't be a problem, and hopefully ACLs would be uniform enough across customers to share most of the ACE entries.

With that compromise (namely losing customer-to-customer Netflow detail), the remaining requirements for full Netflow exports and the balance of the BGP workload are feasible on any of the ASR1k, GSR, or CRS-1.
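
Two of the suggestions above (peer-groups and shared ACEs) are easy to put rough numbers on. A minimal sketch, with every figure assumed for illustration rather than taken from a datasheet:

# Sketch of the peer-group / ACL-sharing savings (assumed numbers).
# Peer-groups let the router build one outbound update stream per
# group instead of one per neighbor; identical ACLs applied to many
# ports can be stored once in TCAM instead of once per port.

N_PEERS = 96          # customer BGP sessions mentioned in the thread
PREFIXES = 300_000    # assumed prefixes per outbound policy run

work_per_peer = N_PEERS * PREFIXES   # no peer-groups: format per neighbor
work_grouped = PREFIXES              # peer-group: one shared stream
print(f"update work: {work_per_peer:,} vs {work_grouped:,} "
      f"({work_per_peer // work_grouped}x reduction)")

ACES_PER_ACL = 50     # assumed entries per customer ACL
print(f"TCAM ACEs: {N_PEERS * ACES_PER_ACL:,} with unique ACLs "
      f"vs {ACES_PER_ACL} with one shared template")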


We don't have separate core and edge; our switches do both. Every port on the switch is either a BGP peer/uplink/downlink or a customer. Every port is layer-3 routed, with only a few handfuls of customers on dual links.


Purchasing one switch to be the edge and then another to handle BGP seems like overkill for our fairly small datacenters (the largest will have around 300 customers on ~360 ports). I'd prefer something that can handle both edge and core duties.


--
Jo Rhett
Net Consonance : consonant endings by net philanthropy, open source  
and other randomness




Re: [c-nsp] full routing table / provider-class chassis

2009-06-12 Thread Jo Rhett

On Jun 12, 2009, at 8:42 AM, Kevin Loch wrote:
Łukasz has already addressed this; suffice to say he's right, and the above is not correct. A TCAM lookup *is* the forwarding operation, and the DFC has all information required locally to switch the packet (via the fabric) to the output linecard, and does so.


I shouldn't have said PFC. The fabric is on the supervisor card itself, not the PFC. What I meant was that the packet is always sent to the centralized switch fabric on the active supervisor card, regardless of where the lookups/ACLs are done.


Just for information, I know very intimately how this stuff works and don't need you to explain it to me. I haven't objected yet because others might find this interesting. (And FYI, your last sentence is also wrong if DFCs exist on each card.)



The important point is that the lookup limitations (Mpps) are different from the fabric bandwidth limitations (Gbps) because of how these functions are separated on the CEF720/dCEF720 platform.

A 6509 should not fall over without DFCs unless you are doing more than 30 Mpps. That is about 15 Gbit/s of 64-byte packets or 360 Gbit/s of 1500-byte packets.
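
Kevin's split between lookup rate and fabric bandwidth is easy to sanity-check. A minimal sketch, using the 30 Mpps figure above and an assumed 40 Gbit/s per-slot fabric channel (the per-slot figure is my assumption, not from the thread):

# Lookup rate (pps) and fabric bandwidth (bps) are independent
# ceilings; throughput at a given packet size is the lower of the two.

def max_gbps(pps_limit, fabric_gbps, pkt_bytes):
    """Throughput ceiling in Gbit/s for one packet size."""
    lookup_gbps = pps_limit * pkt_bytes * 8 / 1e9
    return min(lookup_gbps, fabric_gbps)

PPS_LIMIT = 30e6      # ~30 Mpps centralized lookup, no DFCs
FABRIC_GBPS = 40.0    # assumed per-slot fabric capacity

for size in (64, 1500):
    print(f"{size}B packets: {max_gbps(PPS_LIMIT, FABRIC_GBPS, size):.2f} Gbit/s")
# 64B:   15.36 Gbit/s, lookup-limited (the ~15G quoted above)
# 1500B: 40.00 Gbit/s, fabric-limited long before the 360G lookup ceiling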



Sorry, let me back up and explain again. I've been dealing with Cisco for 20 years now, and I know very well Cisco's ability to inflate their packet-handling numbers. Specifically, I have run 6509 systems into the ground with a mere 500 Mbit/s of traffic.


Their whole Mpps statistics are based on perfect-world scenarios that don't exist. And honestly, on five different occasions I have had the opportunity to push Cisco to prove those numbers, and they failed to do so in a lab they designed for exactly that purpose.

So... yeah. Don't go believing those statistics.

Now let's talk about reality: 1:10 inbound/outbound ratios, 5% of received traffic being Internet cruft that requires (wasted) TCAM lookups, and so forth, all the things any provider peering router observes, and you're down to a much lower effective rate. Fail to install DFCs and you'll find your 6509s falling over with just a few gigabits of traffic. Comparing 30 Mpps to 48 Mpps gives the illusion that DFCs only buy you another 60%, but that's not reality on the ground: each DFC-equipped card does its own 48 Mpps of lookups, so aggregate capacity scales with the number of cards. Don't try to persuade me otherwise; I've seen this repeatedly in real-life environments.
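
For what it's worth, the shape of that derating argument can be sketched numerically. Every input below is an assumption chosen for illustration, not a measurement:

# Derating sketch: start from the headline figure and strip away the
# perfect-world assumptions one factor at a time.

DATASHEET_MPPS = 30.0    # no-DFC centralized figure quoted earlier

lookups_per_pkt = 3      # assume FIB + uRPF + ACL all contend for lookups
cruft = 0.05             # ~5% of traffic is junk wasting lookup capacity
avg_pkt_bytes = 150      # assume a small-packet-heavy provider mix

usable_mpps = DATASHEET_MPPS / lookups_per_pkt * (1 - cruft)
usable_gbps = usable_mpps * 1e6 * avg_pkt_bytes * 8 / 1e9
print(f"~{usable_mpps:.1f} Mpps usable, roughly {usable_gbps:.1f} Gbit/s")
# ~9.5 Mpps and ~11.4 Gbit/s: already far below the headline number,
# and unlucky traffic mixes or recirculating features push it lower.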


Now, let's stop talking about non-DFC cards and start talking about equipment that can handle uRPF on every port, full Netflow analysis of up to 8 ports at a time, every port layer 3, every port filtered: a colo facility core/peering role.
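
For readers following along: strict-mode uRPF is conceptually simple, it just costs an extra reverse-path lookup per packet. A toy sketch of the check, using a hypothetical FIB model rather than any vendor's implementation:

# Strict uRPF: accept a packet only if the best route back to its
# source address points out the interface the packet arrived on.

import ipaddress

FIB = {  # toy FIB: prefix -> egress interface
    ipaddress.ip_network("192.0.2.0/24"): "gi1/1",
    ipaddress.ip_network("198.51.100.0/24"): "gi1/2",
    ipaddress.ip_network("0.0.0.0/0"): "gi2/1",  # default toward transit
}

def lookup(addr):
    """Longest-prefix match against the toy FIB."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in FIB if ip in net]
    return FIB[max(matches, key=lambda net: net.prefixlen)]

def urpf_strict_pass(src, in_iface):
    return lookup(src) == in_iface

print(urpf_strict_pass("192.0.2.7", "gi1/1"))  # True: source routed here
print(urpf_strict_pass("192.0.2.7", "gi1/2"))  # False: spoofed or asymmetric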


--
Jo Rhett
Net Consonance : consonant endings by net philanthropy, open source  
and other randomness





Re: [c-nsp] full routing table / provider-class chassis

2009-06-12 Thread Jo Rhett

Now, let's stop talking about non-DFC cards and start talking about equipment that can handle uRPF on every port, full Netflow analysis of up to 8 ports at a time, every port layer 3, every port filtered: a colo facility core/peering role.


On Jun 12, 2009, at 3:03 PM, Peter Rathlev wrote:

If this is the target, then 6500/7600 isn't really the best tool, IMHO.



I suspected as much. Honestly, I'm aiming for an MX480 ;-) But I need to determine the comparable Cisco product(s) and get them listed on the comparison sheet.


--
Jo Rhett
Net Consonance : consonant endings by net philanthropy, open source  
and other randomness






[c-nsp] full routing table / provider-class chassis

2009-06-10 Thread Jo Rhett
I've been trying to spec Cisco for an upgrade of our Force10 backbone for nearly two months now. I'm just trying to clarify which platform Cisco recommends for full-routing-table, hardware-forwarding, provider-class environments.


Unfortunately, every time I get through to the supposedly right group, I mention our requirements and Cisco never follows up. It's almost like they realize they have nothing on Juniper and don't even bother. They are about to be eliminated from consideration for lack of an answer.


Until they decide to care, is there anyone on here willing to propose a basic platform for a provider-class environment? By which I mean:


* Full IPv4 & IPv6 routing tables (Cisco does 760k v4 / 260k v6, I know, with the SUP720-3CXL)
* ASIC-based line-rate forwarding (SUP720-3CXL and a DFC3CXL on each line card, right?)
* 196 ports of copper 10/100/1000
* 40 ports of 1G SFP (on two line cards, not one)
* 96+ BGP peers, 8-10 of them full-routing-table peers (rough sizing sketch below)
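
Putting rough numbers on that last line item; table size and per-path memory below are ballpark assumptions for 2009, not vendor specifications:

# RIB sizing sketch for 8-10 full feeds (all figures assumed).

V4_TABLE = 290_000     # assumed global IPv4 table size, mid-2009
FULL_FEEDS = 10        # take the high end of 8-10 full-table peers
BYTES_PER_PATH = 150   # assumed RIB memory per path, attributes amortized

paths = V4_TABLE * FULL_FEEDS
rib_mb = paths * BYTES_PER_PATH / 2**20
print(f"~{paths:,} paths, roughly {rib_mb:.0f} MB of RIB for v4 alone")
# ~2.9M paths and ~400+ MB of RIB: a route-processor CPU/DRAM problem,
# which is why the supervisor choice matters as much as TCAM capacity.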

Unfortunately, Cisco's partners are useless. They propose 6509s without the DFCs, which we know will fall over. And as I understand it, the 6509 even with the 3CXL cards can't handle 5 full peers, never mind 96 peers total. Most people suggest the 7600 platform, but at least two comments on this mailing list indicate it isn't much better.


What are people using today for this kind of environment?  Does it work?

--
Jo Rhett
Net Consonance : consonant endings by net philanthropy, open source  
and other randomness



