> From: Saku Ytti [mailto:s...@ytti.fi]
> Sent: Saturday, August 18, 2018 12:15 PM
> 
> On Sat, 18 Aug 2018 at 14:02, <adamv0...@netconsultings.com> wrote:
> 
> > Really? Interesting, didn't know that, are these features documented
> > anywhere? I could not find anything when looking for multi-instance RPD.
> > Are the RPD instances like ships in the night, each maintaining its own
> > set of tables and protocols?
> 
> Yes. It's an old, old feature:
> 
> https://www.juniper.net/documentation/en_US/junos/topics/concept/logical-systems-overview-solutions.html
> 
> The ability to run multiple FreeBSD KVM guests on a Linux host is very new. It
> also allows you to connect the separate instances via virtual fabric interfaces,
> so you don't need to eat physical interfaces to connect them. These really are
> 100% separate JunOS instances; they only share the Linux hypervisor from a
> software POV.
> 
> https://www.juniper.net/documentation/en_US/junos/information-products/pathway-pages/junos-node-slicing/junos-node-slicing.html
> 
> One use case might be: buy a 128GB RE and collapse edge + pure core without
> having separate hardware.
> Or Megaport on steroids: drop a Calient optical switch and an MX in your PoPs,
> and have customers build their entire global backbone via API in seconds, with
> active devices and an optical network to connect them.
> 
Aah these two!  I feel embarrassed now haha :) 
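
Just to make the first one concrete for the archives: a logical system is really 
just one extra configuration hierarchy, each one getting its own slice of rpd with 
its own tables and protocols. A minimal sketch, carving two instances out of one 
box (names and addressing are made up for illustration):

  set logical-systems CORE interfaces ge-0/0/1 unit 0 family inet address 192.0.2.1/30
  set logical-systems CORE protocols ospf area 0.0.0.0 interface ge-0/0/1.0
  set logical-systems EDGE interfaces ge-0/0/2 unit 0 family inet address 192.0.2.5/30
  set logical-systems EDGE routing-options autonomous-system 64500
  set logical-systems EDGE protocols bgp group PEERS type external
  set logical-systems EDGE protocols bgp group PEERS peer-as 64510
  set logical-systems EDGE protocols bgp group PEERS neighbor 192.0.2.6

Then something like "set cli logical-system EDGE" or "show route logical-system 
EDGE" to operate within a single instance.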

I actually have high hopes for the node-slicing thing, in particular the ability 
to run the CP externally on x86-based COTS HW. Hopefully it gets adopted by all 
the vendors soon (and if the API between CP and DP gets standardized, we'll be 
living in an SDN utopia).
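
And the virtual fabric bit Saku mentions: as far as I can tell from the 
node-slicing docs, inside a slice the slice-to-slice links show up as abstracted 
fabric (afN) interfaces that you configure like any other core-facing interface. 
Roughly something like the below (numbering and addressing made up, and the actual 
slice/line-card carve-up is done under the chassis hierarchy on the base system, 
so check the docs above for the exact knobs):

  set interfaces af1 unit 0 description "to the peer slice across the fabric"
  set interfaces af1 unit 0 family inet address 198.51.100.0/31
  set protocols ospf area 0.0.0.0 interface af1.0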

I'm actually having an "architect personality crisis" because I can't make up my 
mind between centralizing and decentralizing the PE edge, i.e. whether to have a 
couple of big monolithic chassis sized to fit all service types (thus scale 
up/vertically), or rather to decentralize into many smaller, maybe even 
specialized, node types and scale by adding more types or more nodes of each type 
(thus scale out/horizontally).
And I'm very aware of the whole industry swinging back and forth between the two 
extremes over the years.

There is a finite limit to how much CP state one can fit onto these huge 
monolithic chassis: you can have a big N-slot chassis that is effectively full 
with just a few line cards in it, simply because you've run out of CP resources 
due to too many routes/VRFs/BGP sessions (and the problem gets even more 
pronounced with multi-chassis systems). This is the problem we're facing 
currently.
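
(For anyone curious what that looks like from the RE's point of view, on the 
Juniper side it's the usual suspects we keep an eye on, nothing exotic:

  show chassis routing-engine                   - RE CPU/memory utilisation
  show system processes extensive | match rpd   - how big rpd has grown
  show task memory detail                       - what inside rpd is eating it
  show bgp summary                              - session count and RIB scale )
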
This idea of additional CP hardware is not new of course; it's been around since 
the Juniper T Matrix and the Cisco CRS-1.
The CRS-1 introduced Distributed Route Processors (DRPs), onto which one could 
offload, say, an additional instance of the BGP process (but one could have only 
one additional instance per main system, or just two DRPs per SDR).
Another example from the past was the T Matrix with the JCS1200, which let you 
have a separate CP for each logical system (but again only two REs per logical 
system).
And then with the advent of the MX and ASR9k we kind of lost the ability to scale 
out the CP.

So I really welcome the flexibility of being able to scale the CP up (assign more 
CPUs to a CP VM) or scale it out (add more CP VMs), while maintaining a single 
monolithic DP.
It would also help me address one of the biggest drawbacks of decentralization 
(many smaller PEs): separate cooling/power/chassis for each PE really eats up 
rack space and is power inefficient.
On the other hand, I'm mindful of the added complexity of maintaining resiliency 
in this external-CP environment.

The biggest problem, though, is that this node-slicing/external x86-based CP is 
very fresh. I'm not aware of the same thing being available from Cisco or other 
vendors, and unfortunately I need to provide solutions now.



adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

