I am running 8 parallel full tables to the same provider on an RSP720
with no issues.  You can barely fit 6 full tables on a Sup720-3BXL.
The limitation is processor memory, not TCAM.

Here is what 6 looks like with 12.2SXF16:

                Head    Total(b)     Used(b)     Free(b)   Lowest(b)  Largest(b)
Processor   44B0D4B0   927902544   815304080   112598464    88132216    77785200


            FIB TCAM usage:                     Total        Used       %Used
                 72 bits (IPv4, MPLS, EoM)     524288      281823         54%

As you can see, memory is very tight with 6 parallel full tables,
but TCAM usage is normal. I would not expect any problems with 2,
however.
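
If you want to check the same counters on your own box before turning on
multipath, these should show both sides (command names from memory on
12.2SX):

show memory statistics
show platform hardware capacity forwarding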

- Kevin

Brad Hedlund (brhedlun) wrote:
Better to use 'ebgp-multihop' and peer to the provider router's loopback. Then have equal-cost static routes to the provider's loopback via the two physical interface next-hop IP addresses.
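
Rough sketch of that on the customer side, with made-up addressing
(provider loopback 192.0.2.1, physical next hops 198.51.100.1 and
198.51.100.5, AS numbers 65000/65001 are placeholders):

! two equal-cost statics to the provider's loopback, one per physical link
ip route 192.0.2.1 255.255.255.255 198.51.100.1
ip route 192.0.2.1 255.255.255.255 198.51.100.5
!
router bgp 65000
 ! peer to the loopback, sourced from our own loopback, two hops away
 neighbor 192.0.2.1 remote-as 65001
 neighbor 192.0.2.1 ebgp-multihop 2
 neighbor 192.0.2.1 update-source Loopback0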

Cheers,

Brad Hedlund
bhedl...@cisco.com
http://www.internetworkexpert.org


On May 20, 2009, at 9:47 PM, "Peter Kranz" <pkr...@unwiredltd.com> wrote:

Setup is as follows: 2 edge routers, each with a BGP session to the same provider router, receiving full routes. The provider is load balancing inbound traffic to our AS nicely, 50/50 between the edge routers. I would also like to load balance the outbound traffic. I've considered adding 'maximum-paths 2' to install the two equal paths, but am concerned about FIB TCAM impact. Will adding this command cause each equal-cost route to take one additional TCAM entry, i.e. full routing table x 2 > 524k TCAM limit = EPIC meltdown?
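
For reference, the change I'm considering is just this under BGP (our AS
number replaced with a placeholder):

router bgp 65000
 maximum-paths 2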



Current FIB TCAM:

L3 Forwarding Resources

            FIB TCAM usage:                     Total        Used       %Used
                 72 bits (IPv4, MPLS, EoM)     524288      285506         54%
                144 bits (IP mcast, IPv6)      262144           5          1%



Peter Kranz
www.UnwiredLtd.com
Desk: 510-868-1614 x100
Mobile: 510-207-0000
pkr...@unwiredltd.com



_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/