For better or worse I do remember those days, though I was referring to recent hardware switches/bridges (I should have clarified that).  To my knowledge that only applies to things like the STP family of protocols, but I could be wrong; I would need to read through the specifications again to be sure, as that is not a use case I have needed thus far.

Scaling the vMX for testing/lab/PoC deployments can be challenging, but I have been able to get large topologies off of a single older model dual Xeon E5-2670 server using logical systems: seven vMX instances and 84 routers in total.  At that point I run into thermal limits because of the VFP CPU usage (even in lite-mode, which only appears to affect the number of interfaces rather than the packet processing; that seems to be the same in lite and performance modes, hence the CPU usage).  Some months back I tried to see how large of a topology I could build with five servers I had access to; I was able to get to seven vMX instances per server with 12 logical systems per vMX instance, which gave me a 420 router topology using the trial license, so you can scale a lab/PoC setup quite nicely.  The only downside to using logical systems is that they do not support everything a non-LS deployment would, the biggest missing feature in my case being EVPN.
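In case it is useful to anyone, here is a minimal sketch of the logical-system plumbing I use inside a single vMX (interface numbers, names and addresses are just illustrative, and the lt- interface position depends on where you enable tunnel-services):

  set chassis fpc 0 lite-mode
  set chassis fpc 0 pic 0 tunnel-services bandwidth 1g

  # back-to-back lt- units stitch two logical systems together inside one vMX
  set logical-systems R1 interfaces lt-0/0/10 unit 1 encapsulation ethernet peer-unit 2
  set logical-systems R1 interfaces lt-0/0/10 unit 1 family inet address 10.0.12.1/30
  set logical-systems R2 interfaces lt-0/0/10 unit 2 encapsulation ethernet peer-unit 1
  set logical-systems R2 interfaces lt-0/0/10 unit 2 family inet address 10.0.12.2/30

  # each logical system then runs its own routing protocols
  set logical-systems R1 protocols ospf area 0.0.0.0 interface lt-0/0/10.1
  set logical-systems R2 protocols ospf area 0.0.0.0 interface lt-0/0/10.2

Repeat the lt- pairing for whatever topology you want between the logical systems; physical ge- interfaces can be handed to a logical system the same way when a link needs to leave the vMX.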

Another item I have been testing is the vQFX, which has much lower CPU demand since the interfaces are bound to the VCP instead of the VFP, but I have run into many other issues with it and have not tested it as thoroughly; it is also just an alpha/beta release from Juniper at the moment.

-C


On 12/02/2017 03:40 AM, adamv0...@netconsultings.com wrote:
Hey,

local link and not forwarded by the soft bridge by default (I do not know of
any hardware bridges that allow you to disable this restriction, if you know
of any I would be interested).

My understanding is that Carrier-Ethernet grade switches/routers should allow 
you to peer/drop/tunnel/forward L2 protocols.
If you're in the business long enough you may remember migrations from 
leased-lines to frame-relay, then from FR to MPLS, and then from L3VPNs to 
L2VPNs to complete the circle.
These L2 services, especially the point-to-point ones, are where customers 
pretty much expect the same properties as they used to have with leased-lines or 
FR services: basically just a pipe where MTU is not an issue and which can transport 
anything from L2 up, so they can run their own MPLS/DC networks over these pipes.
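To give a concrete example of what I mean by a pipe, a port-based pseudowire on Junos gives you exactly that transparency; a rough sketch (assuming an existing MPLS/LDP core between the PEs, names and addresses are made up):

  set interfaces ge-0/0/1 mtu 9192
  set interfaces ge-0/0/1 encapsulation ethernet-ccc
  set interfaces ge-0/0/1 unit 0
  set protocols l2circuit neighbor 192.0.2.2 interface ge-0/0/1.0 virtual-circuit-id 100

Since the whole port is cross-connected, the customer's STP/LACP/MPLS frames are just payload to the provider, which is the leased-line-like behaviour customers expect.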
Out of curiosity what is your use case that you need to use LACP to
communicate with VMs?

Large scale ISP network simulations (for proof of concept testing of various 
designs/migrations/etc).
This allows me to verify my designs, how the technology works on selected code 
versions and whether there are any bugs, and interworking between vendors.
And then there are the provisioning and network monitoring systems, new SDN 
approaches that can be tested in this virtual environment, you name it.
Since it's all virtual one can simulate complete networks rather than the scaled-down 
slices used in physical labs, so I can see the effects of topology-based 
route-reflection in terms of route distribution, and the effects of node or link 
failures on traffic-engineering and any resulting congestion across the 
whole backbone, all in 1:1 scale.  But the important point is to make the 
simulated control-plane as close to reality as possible, hence the need for LACP.
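For what it's worth, the bundles themselves are nothing exotic, just a plain LACP ae between two virtual routers whose member links happen to ride the hypervisor's soft bridge, which is why the LACPDUs being dropped there is such a problem; an illustrative snippet (interface numbers and addressing made up):

  set chassis aggregated-devices ethernet device-count 4
  set interfaces ge-0/0/2 gigether-options 802.3ad ae0
  set interfaces ge-0/0/3 gigether-options 802.3ad ae0
  set interfaces ae0 aggregated-ether-options lacp active
  set interfaces ae0 aggregated-ether-options lacp periodic fast
  set interfaces ae0 unit 0 family inet address 10.0.0.1/30

If the bridge eats the slow-protocol frames the ae never comes up, and the simulated control-plane no longer matches what the real boxes would do.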
Speaking of scale, the fact that the VFP is always at 100% CPU is not helping; it reminds me of the good old Dynamips, but at least there you could fix it with an idle value.
Having hundreds of these VNFs running is not very green.
adam



