Go read about HSRP and VRRP. What you propose is akin to turning off one physical switch port and turning on another when you want to switch from an active physical server to a standby, and this is not how it's done in practice; instead, you connect the two VMs to the same network and let them decide which gets the primary address.
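For what it's worth, this is exactly the model VRRP implements: both VMs attach to the same network and elect which of them owns a shared virtual address, with no port flipping in the infrastructure. A minimal keepalived sketch of the idea (interface name, router ID, and addresses are illustrative placeholders, not taken from this thread):

```
vrrp_instance VI_1 {
    state MASTER            # the peer VM is configured with state BACKUP
    interface eth0          # placeholder interface name
    virtual_router_id 51
    priority 150            # peer uses a lower priority, e.g. 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.10/24        # the shared service address
    }
}
```

Whichever instance wins the election answers for 10.0.0.10; on failure the backup takes over the address without any change to the ports underneath.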
On 28 October 2014 10:27, A, Keshava <keshav...@hp.com> wrote:

> Hi Alan and Salvatore,
>
> Thanks for the response, and I also agree we need to take small steps.
> However, I have the points below to make.
>
> It is very important how the Service VM will be deployed with respect to
> HA. As per the current discussion, you are proposing something like the
> deployment below for carrier-grade HA.
>
> [inline deployment diagram not preserved]
>
> Since there is a separate port for the standby VM, the corresponding
> standby-VM interface address would have to be globally routable as well.
> That means the standby routing protocols may need to advertise their
> interface as the next hop for the prefixes they route. However, the
> external world should not be aware of the standby routing running in the
> network.
>
> Instead, if we can run the standby on the same stack with a passive port
> (as shown below), the external world will be unaware of the standby
> service routing.
>
> [inline deployment diagram not preserved]
>
> This may be a very basic requirement for a Service VM (from an NFV HA
> perspective) in the routing/MPLS/packet-processing domain. I am bringing
> this issue up now because you are proposing to change the basic framework
> of packet delivery to VMs. (Of course there may be other mechanisms for
> supporting redundancy, but they will not be as efficient as handling it
> at the packet level.)
>
> Thanks & regards,
> Keshava
>
> *From:* Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
> *Sent:* Tuesday, October 28, 2014 6:48 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
>
> Hi Salvatore,
>
> Inline below.
> *From:* Salvatore Orlando [mailto:sorla...@nicira.com]
> *Sent:* October-28-14 12:37 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
>
> Keshava,
>
> I think the thread is now going a bit off its stated topic - which is to
> discuss the various proposed approaches to VLAN trunking.
>
> Regarding your last post, I'm not sure I saw either spec implying that at
> the data plane level every instance attached to a trunk will be
> implemented as a different network stack.
>
> AK→ Agree.
>
> Also, quoting the principle cited earlier in this thread - "make the easy
> stuff easy and the hard stuff possible" - I would say that unless five 9s
> is a minimum requirement for an NFV application, we might start worrying
> about it once we have the bare minimum set of tools for allowing an NFV
> application over a Neutron network.
>
> AK→ Five 9s is a 100% must requirement for NFV, but let's ensure we don't
> mix up what the underlay service needs to guarantee and what OpenStack
> needs to do to ensure this type of service. I would agree we should focus
> on having the right configuration sets for onboarding NFV - which is what
> OpenStack needs to ensure is exposed - while what is used underneath to
> guarantee the five 9s is a separate matter.
>
> I think Ian has done a good job in explaining that while both approaches
> considered here address trunking for NFV use cases, they propose
> alternative implementations which can be leveraged in different ways by
> NFV applications. I do not see a reason why we should not allow NFV apps
> to leverage a trunk network or create port-aware VLANs (or maybe you can
> even have VLAN-aware ports which tap into a trunk network?).
>
> AK→ Agree. I think we can hammer this out once and for all in Paris -
> this feature has been lingering too long.
> We may continue discussing the pros and cons of each approach - but to me
> it's now just a matter of choosing the best solution for exposing them at
> the API layer. At the control/data plane layer, it seems to me that trunk
> networks are pretty much straightforward. VLAN-aware ports are instead a
> bit more convoluted, but not excessively complicated in my opinion.
>
> AK→ My thinking too, Salvatore. Let's ensure the right elements are
> exposed at the API layer; I would also go a little further and ensure we
> get those feature sets supported in the core API (another can-of-worms
> discussion, but we need to have it).
>
> Salvatore
>
> On 28 October 2014 11:55, A, Keshava <keshav...@hp.com> wrote:
>
> Hi,
>
> Please find my reply below.
>
> Regards,
> keshava
>
> *From:* Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
> *Sent:* Tuesday, October 28, 2014 3:35 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
>
> Hi,
>
> Please find some additions to Ian's points and responses below.
>
> /Alan
>
> *From:* A, Keshava [mailto:keshav...@hp.com]
> *Sent:* October-28-14 9:57 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
>
> Hi,
>
> Please find the reply inline.
>
> Regards,
> keshava
>
> *From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
> *Sent:* Tuesday, October 28, 2014 1:11 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
>
> This all appears to be referring to trunking ports, rather than anything
> else, so I've addressed the points in that respect.
>
> On 28 October 2014 00:03, A, Keshava <keshav...@hp.com> wrote:
>
> Hi,
>
> 1.
How many trunk ports can be created?
>
> Why would there be a limit?
>
> Will there be any active-standby concepts?
>
> I don't believe active-standby, or any HA concept, is directly relevant.
> Did you have something in mind?
>
> Keshava: For the NFV kind of scenario, it is very much required to run
> the Service VM in active and standby mode.
>
> AK→ We have a different view on this: the application runs as a pair,
> either active-active or active-standby. This has nothing to do with HA;
> it's down to the application and how it's provisioned and configured via
> OpenStack. So I agree with Ian on this.
>
> Keshava: The standby is a more passive entity and will not take any
> action toward the external network. It will be a passive consumer of
> packets/information.
>
> AK→ Why would we need to care?
>
> Keshava: In that scenario it would be very meaningful to have an "active
> port" connected to the active Service VM, and a "standby port" connected
> to the standby Service VM, which turns active when the old active VM goes
> down. Let us know others' opinions about this concept.
>
> AK→ Can't you just have two VMs and then, via a controller, decide how to
> handle MAC + IP address control? FYI, most NFV apps have that built in
> today.
>
> AK→ Perhaps I am misreading this, but I don't understand what this would
> provide as opposed to having two VMs instantiated and running. Why does
> Neutron need to care about the port state between these two VMs?
> Similarly, it's better to just have two or more VMs up; the application
> will be able to handle failover when it occurs or is required.
Let's keep it simple
> and not mix up what the apps do inside the containment.
>
> Keshava: Since this solution is aimed at carrier-grade NFV Service VMs, I
> have the points below.
>
> Let's say the Service VM is running BGP, BGP-VPN, or MPLS + LDP +
> BGP-VPN. When such carrier-grade services are running, how do we provide
> five-9s HA?
>
> In my opinion, both the active and standby Service VMs should hook into
> the same underlying OpenStack infrastructure stack
> (br-ext -> br-int -> qxx -> VM). However, the active VM hooks to the
> active port and the standby VM hooks to the passive port within the same
> stack.
>
> If instead the active and standby VMs hook into two different stacks
> (br-ext1 -> br-int1 -> qxx1 -> VM-active and
> br-ext2 -> br-int2 -> qxx2 -> VM-standby), can those Service VMs achieve
> 99.999% reliability?
>
> Yes, I may be thinking about this in a rather complicated way from the
> OpenStack perspective.
>
> 2. Is it possible to have multiple IP addresses configured on these
> ports?
>
> Yes, in the sense that you can have addresses per port. The usual
> restrictions on ports would apply, and they don't currently allow
> multiple IP addresses (with the exception of the address-pairs
> extension).
>
> In the case of IPv6 there can be multiple primary addresses configured;
> will this be supported?
>
> No reason why not - we're expecting to re-use the usual port, so you'd
> expect the features there to apply (in addition to having multiple sets
> of subnets on a trunking port).
>
> 3. If required, can these ports be aggregated into a single one
> dynamically?
>
> That's not really relevant to trunk ports or networks.
>
> 4. Will there be a requirement to handle nested tagged packets on such
> interfaces?
>
> For trunking ports, I don't believe anyone was considering it.
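As an aside on the address-pairs extension mentioned above: it is the piece that already lets a VRRP-style failover work without any active/standby port concept in Neutron - the shared virtual IP is simply permitted on both VMs' ports, and port security stops blocking whichever VM currently answers for it. A hedged sketch using the OpenStack CLI (port names and the VIP are placeholders, not from this thread):

```
# Allow the shared VIP 10.0.0.10 on both service-VM ports, so either
# VM may answer for it after a VRRP failover (port names are placeholders).
openstack port set vm-a-port --allowed-address ip-address=10.0.0.10
openstack port set vm-b-port --allowed-address ip-address=10.0.0.10
```

Neutron itself never needs to know which VM is active; the VRRP election inside the pair decides.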
> Thanks & Regards,
> Keshava
>
> *From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
> *Sent:* Monday, October 27, 2014 9:45 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
>
> On 25 October 2014 15:36, Erik Moe <erik....@ericsson.com> wrote:
>
> Then I tried to just use the trunk network as a plain pipe to the
> L2 gateway and connect to normal Neutron networks. One issue is that the
> L2 gateway will bridge the networks, but the services in the network you
> bridge to are unaware of your existence. This IMO is OK when bridging a
> Neutron network to some remote network, but if you have a Neutron VM and
> want to utilize various resources in another Neutron network (since the
> one you sit on does not have any resources), things get, let's say,
> non-streamlined.
>
> Indeed. However, non-streamlined is not the end of the world, and I
> wouldn't want to have to tag all the VLANs a port is using on the port in
> advance of using it (this works for some use cases, and makes others
> difficult, particularly if you just want a native trunk and are happy for
> OpenStack not to have insight into what's going on on the wire).
>
> Another issue with trunk networks is that they put new requirements on
> the infrastructure: it needs to be able to handle VLAN tagged frames. For
> a VLAN-based network it would be QinQ.
>
> Yes, and that's the point of the VLAN trunk spec, where we flag a network
> as passing VLAN tagged packets; if the operator-chosen network
> implementation doesn't support trunks, the API can refuse to make a trunk
> network. Without it we're still in the situation that on some clouds
> passing VLANs works and on others it doesn't, and that the tenant can't
> actually tell in advance which sort of cloud they're working on.
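To make concrete what "handle VLAN tagged frames" and QinQ mean at the frame level, here is a small illustrative sketch (plain Python with synthetic frame bytes, not from the thread) that peels stacked 802.1Q/802.1ad tags off an Ethernet header - the operation the underlay must perform, or at least tolerate, for a trunk network:

```python
import struct

# Common TPIDs: 0x8100 (802.1Q), 0x88a8 (802.1ad outer "QinQ" tag).
VLAN_TPIDS = {0x8100, 0x88A8, 0x9100}

def pop_vlan_tags(frame: bytes):
    """Return (VLAN IDs outer-to-inner, inner ethertype, payload)."""
    offset = 12                        # skip dst MAC (6) + src MAC (6)
    tags = []
    (ethertype,) = struct.unpack_from("!H", frame, offset)
    while ethertype in VLAN_TPIDS:
        (tci,) = struct.unpack_from("!H", frame, offset + 2)
        tags.append(tci & 0x0FFF)      # low 12 bits of the TCI = VLAN ID
        offset += 4                    # each tag adds 4 bytes
        (ethertype,) = struct.unpack_from("!H", frame, offset)
    return tags, ethertype, frame[offset + 2:]

# Synthetic QinQ frame: outer S-VLAN 100, inner C-VLAN 42, IPv4 payload.
frame = (bytes(12)                                  # dummy MACs
         + struct.pack("!HH", 0x88A8, 100)          # outer tag
         + struct.pack("!HH", 0x8100, 42)           # inner tag
         + struct.pack("!H", 0x0800)                # IPv4 ethertype
         + b"payload")
print(pop_vlan_tags(frame))  # → ([100, 42], 2048, b'payload')
```

A tenant-facing VLAN trunk over a VLAN-based provider network forces exactly this double tagging, which is why the spec lets the API refuse a trunk network when the underlay cannot carry it.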
> Trunk networks are a requirement for some use cases independent of the
> port awareness of VLANs. Based on the maxim "make the easy stuff easy and
> the hard stuff possible", we can't just say "no Neutron network passes
> VLAN tagged packets". And even if we did, we'd be evading a problem that
> exists with exactly one sort of network infrastructure - VLAN tagging for
> network separation - while making it hard to use for all of the many
> other cases in which it would work just fine.
>
> In summary, if we did port-based VLAN knowledge I would want to be able
> to use VLANs without having to use it (in much the same way that I would
> like, in certain circumstances, not to have to use OpenStack's address
> allocation and DHCP - it's nice that I can, but I shouldn't be forced
> to).
>
> My requirements were to have low/no extra cost for VMs using VLAN trunks
> compared to normal ports, and no new bottlenecks or single points of
> failure. Due to this and the previous issues, I implemented the
> L2 gateway in a distributed fashion, and since trunk networks could not
> be realized in reality I only had them in the model and optimized them
> away.
>
> Again, this is down to your choice of VLAN tagged networking and/or the
> OVS ML2 driver; it doesn't apply to all deployments.
>
> But the L2 gateway + trunk network has a flexible API; what if someone
> connects two VMs to one trunk network? Well, that's hard to optimize
> away.
>
> That's certainly true, but it wasn't really intended to be optimized
> away.
>
> Anyway, due to these and other issues, I limited my scope and switched to
> the current trunk port/subport model.
>
> The code that is up for review is functional: you can boot a VM with a
> trunk port + subports (each subport maps to a VLAN). The VM can
> send/receive VLAN traffic. You can add/remove subports on a running VM.
> You can specify an IP address per subport and use DHCP to retrieve them,
> etc.
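For concreteness, the trunk port + subport workflow described here looks roughly like this in CLI terms (a hedged sketch against the trunk API as it later shipped in Neutron; network, port, image, and flavor names and the VLAN ID are placeholders):

```
# Parent port: attaches to the VM and carries untagged traffic.
openstack port create --network tenant-net parent-port

# Subport: VLAN 100 on the parent maps to a second network.
openstack port create --network other-net sub-port-100
openstack network trunk create trunk0 \
    --parent-port parent-port \
    --subport port=sub-port-100,segmentation-type=vlan,segmentation-id=100

# Boot the VM on the parent port; VLAN 100 tagged frames in the guest
# then belong to other-net, with the subport's own IP address and DHCP.
openstack server create --image <image> --flavor <flavor> \
    --port parent-port my-vm
```

Subports can be added to or removed from the trunk while the VM runs, matching the behaviour described above.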
> I'm coming to realise that the two solutions address different needs -
> the VLAN-aware-port one is much more useful for cases where you know
> what's going on in the network and you want OpenStack to help, but it's
> just not broad enough to solve every problem. It may well be that we want
> both solutions, in which case we just need to agree that "we shouldn't do
> trunk networking because VLAN-aware ports solve this problem" is not a
> valid argument during spec review.
> --
> Ian.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStackemail@example.com
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev