On Tue, Apr 10, 2018 at 08:37:41AM -0700, mike+j...@willitsonline.com wrote:
> On 04/09/2018 08:07 PM, Chris via juniper-nsp wrote:
> > For the MX104 (and the MX80) the main limitation they have is that the
> > CPU on the routing engine is terribly slow. This can be a problem for
> > you if you are taking multiple full tables with BGP. Even without
> > taking full tables, the RE CPU on the MX104's I have is basically
> > always at 100%. Commits are pretty slow as well. This shouldn't be
> > such an issue with the MX240 as it has a wider range of routing
> > engines available with much better specs.
> I know it can be set up and run like a champ and do some (undefined)
> number of gigabits without issue. What concerns me is that there are
> performance limitations in these software-only platforms based on your
> processor/bus/card choices, and of course the performance of a software
> hash vs. a hardware CAM for forwarding table lookups. And usually
> (imho), you hit the platform limits like a truck running into a brick
> wall. However, if I knew I was only going to have just a few Gbps (3?),
> I likely would be very interested in doing a live deployment. That
> said, it certainly is interesting enough to investigate and I'd love
> to see your writeup. At a minimum it sounds very useful, and I may
> use vMX for pre-deployment testing purposes.
> On your MX104 you said CPU was pegged at 100% - operationally, does
> this cause you any grief? How long does it take for your routes to
> recover after a peer flaps? (e.g. you're sending traffic to a dead
> peer before forwarding is updated to remove those routes.) If you are
> logged in doing normal network stuff like looking up routes or making
> minor config updates, is the CLI sluggish, or can you move around and
> work?

The MX104 and MX80 routing engines use PowerPC CPUs, not Intel x86.
That is a big reason why they are slow.  I have problems like VRRP
flapping due to CPU starvation.  This is in a lab, so it doesn't
matter too much, but I wouldn't want to put them into a production
network.
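On the software-hash vs. hardware-CAM point raised above: a rough toy
model (my own illustration, not Juniper's implementation) of why
per-packet software lookups burn CPU is a software FIB that does
longest-prefix match by probing one hash table per prefix length --
up to 33 probes per packet -- where a TCAM matches every prefix in a
single parallel lookup.

```python
import ipaddress

class SoftwareFIB:
    """Toy software FIB: one hash table per IPv4 prefix length."""

    def __init__(self):
        # tables[plen] maps masked network address (int) -> next hop
        self.tables = {plen: {} for plen in range(33)}

    def add_route(self, prefix, next_hop):
        net = ipaddress.ip_network(prefix)
        self.tables[net.prefixlen][int(net.network_address)] = next_hop

    def lookup(self, addr):
        a = int(ipaddress.ip_address(addr))
        # Probe most-specific to least-specific: up to 33 hash lookups
        # per packet.  This sequential per-packet cost is what a TCAM
        # collapses into one parallel match across all entries.
        for plen in range(32, -1, -1):
            key = a & ~((1 << (32 - plen)) - 1) & 0xFFFFFFFF
            nh = self.tables[plen].get(key)
            if nh is not None:
                return nh
        return None

fib = SoftwareFIB()
fib.add_route("10.0.0.0/8", "peer-a")
fib.add_route("10.1.0.0/16", "peer-b")
print(fib.lookup("10.1.2.3"))   # most specific wins -> peer-b
print(fib.lookup("10.9.9.9"))   # falls back to the /8 -> peer-a
```

Real software dataplanes use smarter tries and caches than this, but
the basic trade-off stands: lookup cost scales with table structure
in software, while CAM/TCAM lookup is effectively constant time.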
juniper-nsp mailing list juniper-nsp@puck.nether.net
