Hi,
On 10/04/2018 11:37 PM, [email protected] wrote:
I know it can be set up and run like a champ and do some (undefined)
number of gigabits without issue. What concerns me is that there are
performance limitations in these software-only platforms based on your
processor/bus/card choices, and of course the performance of a software
hash vs a hardware CAM for forwarding table lookups. And usually
(imho), you hit the platform limits like a truck running into a brick
wall. However, if I knew I was only going to have just a few gbps (3?),
I likely would be very interested in doing a live deployment. With that
said, it certainly is interesting enough to investigate and I'd
love to see your writeup. At a minimum it sounds very useful and I may
use vMX for pre-deployment testing purposes.
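The software-hash-vs-CAM point above can be illustrated with a sketch (not Junos code, and the table contents are made up): a software FIB typically does longest-prefix match by probing one hash table per prefix length, longest first, while a TCAM matches every prefix in a single parallel lookup.

```python
# Illustrative sketch of software longest-prefix match (LPM): one hash
# probe per prefix length, longest first. A hardware CAM/TCAM does the
# same match in a single parallel lookup, which is the gap being
# discussed above. The FIB contents here are hypothetical.
import ipaddress

FIB = {
    "10.0.0.0/8": "peer-a",
    "10.1.0.0/16": "peer-b",
    "0.0.0.0/0": "upstream",
}

# Bucket prefixes by length so each length costs one hash probe.
TABLES = {}
for prefix, nexthop in FIB.items():
    net = ipaddress.ip_network(prefix)
    TABLES.setdefault(net.prefixlen, {})[int(net.network_address)] = nexthop

def lookup(dst: str) -> str:
    """Walk prefix lengths longest-first: up to 33 probes in software."""
    addr = int(ipaddress.ip_address(dst))
    for plen in sorted(TABLES, reverse=True):
        mask = (0xFFFFFFFF << (32 - plen)) & 0xFFFFFFFF
        key = addr & mask
        if key in TABLES[plen]:
            return TABLES[plen][key]
    raise LookupError("no route")
```

For example, `lookup("10.1.2.3")` hits the /16 before the covering /8, and anything else falls through to the default route.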
The write-up has been linked in one of the previous posts, so feel free
to take a look at it there (I published it on my website as I suspect it
will come in handy for other people trying to set this up).
As for the actual performance limitations of the vMX, I have not managed
to hit them at all, and I suspect I won't ever for the size of the
networks that they are deployed in. When I was testing I got up to 40G
of throughput through it, which was much more than enough (using various
packet sizes to simulate real traffic); I didn't get to test any
further though. The CPU difference between pushing 500M/60k PPS and
9G/7M PPS was non-existent - the graphs I have for the FPC on my vMXes
are steady regardless of the traffic going through.
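As a quick sanity check, the two data points quoted above imply very different traffic mixes (the helper below is just back-of-the-envelope arithmetic on the figures from the post):

```python
# Average packet size implied by throughput / packet rate, using the
# two data points quoted above (500M @ 60k PPS, 9G @ 7M PPS).
def avg_packet_bytes(bps: float, pps: float) -> float:
    """Average packet size in bytes for a given bit rate and packet rate."""
    return bps / pps / 8

low_load = avg_packet_bytes(500e6, 60e3)   # ~1042 bytes: large packets
high_load = avg_packet_bytes(9e9, 7e6)     # ~161 bytes: small-packet mix
```

So the flat CPU graph holds even as the mix shifts from large packets toward a far more demanding small-packet workload.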
On your MX104 you said CPU was pegged at 100% - operationally, does this
cause you any grief? How long does it take for your routes to recover
after a peer flaps? (e.g. you're sending traffic to a dead peer before
forwarding is updated to remove those routes). If you are logged in doing
normal network stuff like looking up routes or making minor config
updates, is the CLI sluggish or can you move around and work?
There are a few problems:
* I use Ansible to deploy the configurations for network devices. The
commit times are now quite bad (it can take almost 1.5 minutes); I had to
adjust the timeout, otherwise it would constantly fail.
* For the actual traffic going through the devices, I don't have any issues.
* Using the CLI to view routes, if you have a full table, can be a
painful process.
* Traffic going to dead peers is a problem; I have given up taking full
tables on all of my MX80/MX104 devices. In my case this could sometimes
mean an outage of over 30 minutes if BGP is flapping as well.
* Since I use Ansible to deploy the configs, I don't touch the CLI
directly a lot. When I do, it's responsive enough when making changes or
viewing the config; the slow parts are the commit process and viewing routes.
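For reference, the timeout bump mentioned in the first bullet can be done with Ansible's per-command timeout variable (a minimal sketch, assuming the netconf connection plugin; the file path and 120-second value here are illustrative, not taken from my actual setup):

```yaml
# group_vars/junos.yml -- illustrative values. Raises Ansible's
# per-command timeout (default 30s) so a slow MX104 commit doesn't
# make the task constantly fail.
ansible_connection: netconf
ansible_network_os: junos
ansible_command_timeout: 120
```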
Since the MX104 has user-replaceable REs, I really wish Juniper would at
least offer an option with a beefier CPU and more RAM, but I don't
think that will ever happen...
_______________________________________________
juniper-nsp mailing list [email protected]
https://puck.nether.net/mailman/listinfo/juniper-nsp