Dan Brisson wrote the following on 2/12/2014 9:06 PM:
My Cisco SE brought up an interesting alternative. This summer we're
replacing our 6513 Sup720 with a pair of 6807 with redundant Sup 2Ts.
It is where all our internal Fiber terminates and where internal
routing happens. He said we can add extra memory and terminate our BGP
sessions here.
On Thursday, February 13, 2014 05:08:02 AM Mikael
Abrahamsson wrote:
> A lot of people use SUP720-3BXL and RSP720-3CXL for full
> BGP table routing. This will work just fine until the
> IPv4 routing table reaches 800k entries or something (if
> you want to do IPv6 at the same time, you probably d
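That 800k-entry ceiling can be sanity-checked with a back-of-the-envelope TCAM budget. The figures below (roughly 1M FIB TCAM entries on a -3BXL/-3CXL, each IPv6 route consuming two entries) are assumptions used only to illustrate the arithmetic; verify them against Cisco's documentation for your exact supervisor.

```python
# Back-of-the-envelope FIB TCAM budget for a SUP720-3BXL-class card.
# ASSUMPTIONS: ~1M total TCAM entries, each IPv6 route costing two
# entries. Check the vendor datasheet before relying on these numbers.
TCAM_ENTRIES = 1_000_000

def tcam_headroom(ipv4_routes: int, ipv6_routes: int) -> int:
    """Entries left after installing both tables."""
    return TCAM_ENTRIES - (ipv4_routes + 2 * ipv6_routes)

# 2014-era global table sizes: ~490k IPv4 routes, ~17k IPv6 routes.
print(tcam_headroom(490_000, 17_000))  # 476000 entries of headroom
```

With these assumptions, 800k IPv4 routes plus 100k IPv6 routes would exhaust the TCAM exactly, which matches the ceiling quoted above.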
On Thursday, February 13, 2014 12:28:47 AM Vlade Ristevski
wrote:
> My Cisco SE brought up an interesting alternative. This
> summer we're replacing our 6513 Sup720 with a pair of
> 6807 with redundant Sup 2Ts. It is where all our
> internal Fiber terminates and where internal routing
> happens.
On Wed, 12 Feb 2014, Vlade Ristevski wrote:
My Cisco SE brought up an interesting alternative. This summer we're
replacing our 6513 Sup720 with a pair of 6807 with redundant Sup 2Ts. It
is where all our internal Fiber terminates and where internal routing
happens. He said we can add extra memory and terminate our BGP sessions
here.
Thanks for all the responses. It's been very helpful. Based on your
collective feedback, I'm definitely going to retire the 7206 this
summer. I'm looking at the ASR-1002-X and Juniper MX-5, MX-10. I may as
well go with something 10Gig capable.
My Cisco SE brought up an interesting alternative.
I generally spec the NPE-G1 as "up to 1Gbps" if you're using the onboard
ports. This assumes ISP-type loads with little upstream, lots of
downstream, and relatively large flows (mostly 1500-byte packets) on
Ethernet. It sounds like this fits your use case well. If one were to
throw in ATM or
Our G2 with a BGP full view and sampled NetFlow 1:100 is doing 1.2 Gbit/s
with about 88% load.
On 12.02.2014 1:03, Mark Walters wrote:
> Side note - our G2s at that same 800Mbps traffic rate run at approx 60%
> CPU.
We run 7206 NPE-G1s on some GigE peering points. At about 800Mbps of
aggregate Internet traffic (inbound + outbound, as measured from Cacti)
the CPU sits around 70%.
Setup:
- inbound and outbound Internet-facing ACLs (50 lines and 25 lines
respectively, turbo ACL)
- Inbound Internet-facing policy
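A crude way to read those numbers: if CPU scaled linearly with traffic (software forwarding usually degrades faster than that), 800 Mbps at ~70% CPU would project to roughly 1.1 Gbps at saturation. A throwaway sketch of that extrapolation:

```python
# Naive linear projection from one observed load point (800 Mbps at
# ~70% CPU, per the post above). Software routers rarely scale
# linearly, so treat the result as an optimistic upper bound, not a
# planning target.
def projected_ceiling_mbps(observed_mbps: float, cpu_fraction: float) -> float:
    return observed_mbps / cpu_fraction

print(round(projected_ceiling_mbps(800, 0.70)))  # ~1143 Mbps, best case
```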
Or, assuming you're using Ethernet of some sort for your upstream connections,
you could grab something like a CCR from MikroTik for < $1k and sleep easy
knowing you're only using 6% of its capacity.
> On 11/02/2014, at 3:52 pm, Octavio Alvarez wrote:
On 02/10/2014 06:05 PM, Vlade Ristevski wrote:
> Are you suggesting getting the default gateway from both providers or
> getting the full table from one and using the default as a backup on the
> other (7206)?
Whatever suits you best. Test and see. I'd just receive the full table
anyway but filter
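The "full table from one peer, default from the other" arrangement discussed here works because of longest-prefix match: specifics learned from the full-table session always win, and the default only catches what is left. A toy illustration (the prefixes are RFC 5737 documentation blocks, not real announcements):

```python
import ipaddress

# Toy RIB: a default route from peer B plus one specific from peer A.
RIB = {
    ipaddress.ip_network("0.0.0.0/0"): "peer-B (default only)",
    ipaddress.ip_network("203.0.113.0/24"): "peer-A (full table)",
}

def best_path(dst: str) -> str:
    """Longest-prefix match over the toy RIB."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in RIB if addr in net), key=lambda n: n.prefixlen)
    return RIB[best]

print(best_path("203.0.113.9"))   # specific wins: peer-A (full table)
print(best_path("198.51.100.1"))  # falls back to: peer-B (default only)
```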
Are you suggesting getting the default gateway from both providers or
getting the full table from one and using the default as a backup on the
other (7206)?
Thanks,
On 2/10/2014 1:27 PM, Octavio Alvarez wrote:
On 02/10/2014 08:05 AM, Vlade Ristevski wrote:
The ACL is a recent addition and we can probably do away with it.
Cisco once implemented and released this feature to use the second core of the
NPE-G1, most notably to manage the BRAS & en/decapsulations tasks for
LAC/LNS/PTA (PPPoE, L2TP...), effectively offering such 1.6 factor.
It was called MPF, and was released in a special 12.3-YM IOS (in 2004/2005,
I guess).
On Mon, 10 Feb 2014, Vlade Ristevski wrote:
Answers on and off list are appreciated.
At 700-800 megabit/s aggregated throughput (in+out), you're very close to
the max performance envelope of the G1. If you're going down this route,
be prepared to purchase new hardware at short notice in case
On Monday, February 10, 2014 07:58:16 PM Nick Hilliard
wrote:
> in fact, the npe-g1 uses a BCM1250 which is a dual CPU
> unit but vanilla IOS is not able to use the second CPU
> for packet forwarding. Unsubstantiated rumour claimed
> that modular IOS (QNX kernel) could push about 1.6x the
> throughput of vanilla IOS, as it was smp capable. Pity it
> was never released.
On Monday, February 10, 2014 06:08:42 PM Nicolas Chabbey
wrote:
> I do remember we were able to forward around ~700Mbps of
> 1500 bytes traffic with old IOS images and no ACLs.
The trick is some of those additional features are better
optimized in more modern IOS releases (SRE, 15S). Quagmire.
On 10/02/2014 19:44, Nikolay Shopik wrote:
> You mean IOS XR? That was never released for software-based routers,
> right, as it's QNX at the core.
no, I meant modular IOS, not XR. This was an attempt to run a non
bare-metal IOS. The kernel was based on QNX (http://goo.gl/9RSwHn), and
Cisco released
On Monday, February 10, 2014 05:43:04 PM Vlade Ristevski
wrote:
> We're still on the 12.4 train. I do use an ACL with less
> than 100 entries which handle BCP38 and block a few bad
> actors and private IPs on the Internet. I will be moving
> the BCP38 ACL closer to the hosts before the upgrade so
On Monday, February 10, 2014 05:40:04 PM Alain Hebert wrote:
> Also the entire platform is rated for 1.8Gbps
> aggregated, which means depending on which interface you
> have, and which bus they are connected to, 900Mbps might
> be its limit.
I've done 900Mbps on an NPE-G2 with 95% CPU utilization.
On Monday, February 10, 2014 05:17:09 PM Vlade Ristevski
wrote:
> This is the interface that connects to our provider. As
> you can see its almost all download traffic. Our ASR1002
> handles it without a sweat but I'm a little skeptical of
> whether the 7206 will hold up.
An NPE-G2 has a better
On 10.02.2014 21:58, Nick Hilliard wrote:
> Unsubstantiated
> rumour claimed that modular IOS (QNX kernel) could push about 1.6x the
> throughput of vanilla IOS, as it was smp capable. Pity it was never released.
You mean IOS XR? That was never released for software-based routers,
right, as it's QNX at the core.
On 02/10/2014 08:05 AM, Vlade Ristevski wrote:
> The ACL is a recent addition and we can probably do away with it. I
> didn't notice a significant increase in CPU or drops since adding it.
> But we usually peak at about 200Mbps on this link. The full routing
> table is a must since we're dual homed
On 10/02/2014 15:30, Remco Bressers wrote:
> This depends on multiple variables. The 7200 is a single-CPU platform
> where CPU can go sky-high when using features like ACLs, QoS, IPv6, and
> you name it. Also, changing from IOS 12.4 to 15 increased our CPU usage
> by another 10%+. Stick to the b
Subject: Re: 7206 VXR NPE-G1 throughput
On 02/10/2014 04:43 PM, Vlade Ristevski wrote:
> We're still on the 12.4 train. I do use an ACL with less than 100
> entries which handle BCP38 and block a few bad actors and private IPs
> on the Internet. I will be moving the BCP38 ACL closer to th
On 2/10/14, 7:57 AM, Vlade Ristevski wrote:
> Thanks for the link. When I looked at it, the PPS and bandwidth didn't
> really match what I see on my network so I'm curious to see what people
> are actually seeing. It looks like their test is done using very small
> packets (64 bytes). Our traffic is mos
On 2/10/14, 7:43 AM, Vlade Ristevski wrote:
> We're still on the 12.4 train. I do use an ACL with less than 100
> entries which handle BCP38 and block a few bad actors and private IPs on
> the Internet. I will be moving the BCP38 ACL closer to the hosts before
> the upgrade so the ACL will be a bit
On 02/10/2014 04:30 PM, Remco Bressers wrote:
On 02/10/2014 04:17 PM, Vlade Ristevski wrote:
We are looking to double the bandwidth on one of our circuits from 300Mbps to
600Mbps. We currently use a Cisco 7206VXR with an NPE-G1 card. These seem like
very popular routers so I'm hoping a few people on this list have them
deployed.
The ACL is a recent addition and we can probably do away with it. I
didn't notice a significant increase in CPU or drops since adding it.
But we usually peak at about 200Mbps on this link. The full routing
table is a must since we're dual homed.
On 2/10/2014 10:55 AM, Remco Bressers wrote:
Thanks for the link. When I looked at it, the PPS and bandwidth didn't
really match what I see on my network so I'm curious to see what people
are actually seeing. It looks like their test is done using very small
packets (64 bytes). Our traffic is mostly web with a lot of video (Netflix,
Hulu, YouTube
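The mismatch noted above is expected: packets-per-second at a given line rate depends heavily on frame size, so 64-byte benchmark numbers look nothing like mostly-1500-byte web and video traffic. A quick conversion (the 20-byte figure is standard Ethernet preamble plus inter-frame gap):

```python
# PPS for a given line rate and on-wire frame size. Each Ethernet frame
# also burns 20 bytes on the wire: 8B preamble + 12B inter-frame gap.
WIRE_OVERHEAD = 20  # bytes per frame

def pps(line_rate_bps: int, frame_bytes: int) -> int:
    return line_rate_bps // ((frame_bytes + WIRE_OVERHEAD) * 8)

print(pps(1_000_000_000, 64))    # 1488095 pps at minimum-size frames
print(pps(1_000_000_000, 1518))  # 81274 pps at full-size frames
```

Roughly an 18x difference in packet rate at the same gigabit line rate, which is why the vendor's small-packet test numbers are so much harsher than what this traffic mix will generate.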
On 02/10/2014 04:43 PM, Vlade Ristevski wrote:
> We're still on the 12.4 train. I do use an ACL with less than 100 entries
> which handle BCP38 and block a few bad actors and private IPs on the
> Internet. I will be moving the BCP38 ACL closer to the
> hosts before the upgrade so the ACL will be
Both the inside and outside interfaces are on the same NPE-G1 card.
Thanks,
On 2/10/2014 10:40 AM, Alain Hebert wrote:
I have one but I never ran that much BW thru mine.
But the CPU usage is what will kill you.
Also the entire platform is rated for 1.8Gbps aggregated, which means
depending on which interface you have, and which bus they are connected
to, 900Mbps might be its limit.
On 2/10/14, 7:17 AM, Vlade Ristevski wrote:
> We are looking to double the bandwidth on one of our circuits from
> 300Mbps to 600Mbps. We currently use a Cisco 7206VXR with an NPE-G1
> card. These seem like very popular routers so I'm hoping a few people on
> this list have them deployed. If you or
We're still on the 12.4 train. I do use an ACL with less than 100
entries which handle BCP38 and block a few bad actors and private IPs on
the Internet. I will be moving the BCP38 ACL closer to the hosts before
the upgrade so the ACL will be a bit shorter in the future. We won't be
doing any QoS
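Functionally, the BCP38 part of that ACL just asks whether a packet's source address belongs to a prefix the network actually originates. A minimal sketch of the check (the prefixes are RFC 5737 placeholders, not the poster's real allocations):

```python
import ipaddress

# Placeholder prefixes standing in for "addresses we originate";
# swap in your real allocations.
OUR_PREFIXES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def bcp38_permit(src: str) -> bool:
    """Permit only packets sourced from our own address space."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in OUR_PREFIXES)

print(bcp38_permit("198.51.100.7"))  # True: legitimate source
print(bcp38_permit("192.168.1.5"))   # False: spoofed/private source
```

On the router this is a handful of ACL lines rather than code, which is why moving it closer to the hosts (as planned above) shortens the Internet-facing ACL without losing the filtering.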
I have one but I never ran that much BW thru mine.
But the CPU usage is what will kill you.
Also the entire platform is rated for 1.8Gbps aggregated, which means
depending on which interface you have, and which bus they are connected
to, 900Mbps might be its limit.
-
Alain Hebert
On 02/10/2014 04:17 PM, Vlade Ristevski wrote:
> We are looking to double the bandwidth on one of our circuits from 300Mbps to
> 600Mbps. We currently use a Cisco 7206VXR with an NPE-G1 card. These seem
> like very popular routers so I'm hoping a few
> people on this list have them deployed. If y