Importantly, if you're in a multi-hypervisor setup, you need to account for what will happen when all clusters of a given hypervisor type go down. In your case, if the KVM cluster goes down and you've pinned the system VMs to the KVM cluster, then CloudStack won't be able to restart them on either XenServer cluster. The net result will be an outage that may be difficult to recover from.
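As Rafael points out in the quoted thread below, if “system.vm.default.hypervisor” isn't set, com.cloud.resource.ResourceManagerImpl.getAvailableHypervisor(long) effectively picks a hypervisor for the system VMs at random from whatever is available. A minimal sketch of that behaviour (illustrative only, not the actual CloudStack source; the class, method and variable names below are mine):

    import java.util.List;
    import java.util.Random;

    // Sketch only: models how the hypervisor type for system VMs appears
    // to be chosen. Names and types here are illustrative, not CloudStack's.
    public class SystemVmHypervisorSketch {

        private static final Random RANDOM = new Random();

        // 'configuredDefault' stands in for system.vm.default.hypervisor;
        // 'availableTypes' for the hypervisor types present in the zone.
        static String pickHypervisor(String configuredDefault, List<String> availableTypes) {
            if (configuredDefault != null && availableTypes.contains(configuredDefault)) {
                return configuredDefault; // the configured default wins
            }
            // Nothing configured: a random pick among what is available
            return availableTypes.get(RANDOM.nextInt(availableTypes.size()));
        }

        public static void main(String[] args) {
            List<String> zone = List.of("KVM", "XenServer");
            System.out.println(pickHypervisor(null, zone));  // KVM or XenServer, randomly
            System.out.println(pickHypervisor("KVM", zone)); // always KVM
        }
    }

The point of the sketch: the selection is about hypervisor type only. Nothing in it checks whether that type currently has healthy hosts, so whether you pin the type or leave it random, you still have to plan for a whole type going down.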
Trust me, that was one of the first outages I needed to attend to when I
started with CloudStack a few years ago. At the time I was running clusters
of vSphere, XenServer and KVM managed by the same management server, and
there was a bug where the system VMs could get stuck on a hypervisor. That
bug has long since been fixed, but it highlighted that in a multi-hypervisor
setup you need to plan for outages of a given hypervisor type in addition to
planning for the "normal" outages.

-ti

On Mon, Feb 20, 2017 at 11:48 AM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> [UPDATE]
>
> According to this method
> "com.cloud.resource.ResourceManagerImpl.getAvailableHypervisor(long)", if
> you do not configure the “system.vm.default.hypervisor” parameter, the
> hypervisor selected for system VMs is random (among the ones available).
>
> On Mon, Feb 20, 2017 at 11:37 AM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> > Hi Engelmann,
> > I still don't quite understand your problem. I see that you have three
> > different clusters: one KVM, one XenServer 6.5, and one XenServer 7.0.
> > You want to use KVM as the hypervisor to host your system VMs, right?
> >
> > Have you configured the parameter “system.vm.default.hypervisor”?
> >
> > On Mon, Feb 20, 2017 at 11:25 AM, Engelmann Florian <
> > florian.engelm...@everyware.ch> wrote:
> >
> >> Hi Rafael,
> >>
> >> We use the following setup (currently a test environment):
> >>
> >> ACS 4.9.2
> >> 1x XenServer 6.5 cluster (3 nodes)
> >> 1x XenServer 7.0 cluster (1 node)
> >> 1x Ubuntu 16.04 KVM cluster (3 nodes)
> >>
> >> Networking = Advanced zone, VPC and virtual router
> >>
> >> I noticed the error message was just informational and not the real
> >> problem. The problem we have is:
> >>
> >> [...]
> >> Allocating the VR with id=4185 in datacenter
> >> com.cloud.dc.DataCenterVO$$EnhancerByCGLIB$$caa6c375@2 with the
> >> hypervisor type XenServer
> >> [...]
> >> Cluster: 6 has HyperVisorType that does not match the VM, skipping this
> >> cluster
> >> [...]
> >>
> >> We tried to force ACS to use the KVM system VM template, but for some
> >> reason ACS refuses to use that system offering.
> >>
> >> Hosts:
> >> Name ewcstack-vh023-test
> >> Host Tags="kvm"
> >>
> >> Primary storage:
> >> Name: ewcstack-vh023-test, Local Storage: Storage Tags="vol-local-kvm"
> >>
> >> System offering:
> >> Name custom-local-sm-kvm
> >> Storage Tags="vol-local-kvm"
> >> Host Tags="kvm"
> >>
> >> Network offering:
> >> Name custom local kvm
> >> System offering="custom-local-sm-kvm"
> >>
> >> Disk offering:
> >> Name custom local kvm
> >> Storage Tags="vol-local-kvm"
> >>
> >>
> >> Creating an instance with a network offering "custom-local-sm-kvm"
> >> doesn't stop ACS from using a XenServer system VM template. Why?
> >>
> >> All the best,
> >> Florian
> >>
> >>
> >> ________________________________________
> >> From: Rafael Weingärtner <rafaelweingart...@gmail.com>
> >> Sent: Friday, February 17, 2017 4:08 PM
> >> To: users@cloudstack.apache.org
> >> Subject: Re: Ubuntu 16.04, Openvswitch networking issue
> >>
> >> I think we may need more information. ACS version, network deployment
> >> type, and hypervisors?
> >>
> >> On Fri, Feb 17, 2017 at 10:02 AM, Engelmann Florian <
> >> florian.engelm...@everyware.ch> wrote:
> >>
> >> > Hi,
> >> >
> >> > sorry, I meant "I am NOT able to solve"....
> >> >
> >> > ________________________________________
> >> > From: Engelmann Florian <florian.engelm...@everyware.ch>
> >> > Sent: Friday, February 17, 2017 3:36 PM
> >> > To: users@cloudstack.apache.org
> >> > Subject: Ubuntu 16.04, Openvswitch networking issue
> >> >
> >> > Hi,
> >> >
> >> > another error I am able to solve:
> >> >
> >> > 2017-02-17 15:24:36,097 DEBUG [c.c.a.ApiServlet]
> >> > (catalina-exec-26:ctx-30020483) (logid:d303f8ef) ===START===
> >> > 192.168.252.76 -- GET
> >> > command=createNetwork&response=json&zoneId=e683eeaa-92c9-4651-91b9-165939f9000c&name=net-kvm008&displayText=net-kvm008&networkOf
> >> > 2017-02-17 15:24:36,135 DEBUG [c.c.n.g.BigSwitchBcfGuestNetworkGuru]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing
> >> > to design this network, the physical isolation type is not BCF_SEGMENT
> >> > 2017-02-17 15:24:36,136 DEBUG [o.a.c.n.c.m.ContrailGuru]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing
> >> > to design this network
> >> > 2017-02-17 15:24:36,137 DEBUG [c.c.a.m.DirectAgentAttache]
> >> > (DirectAgent-144:ctx-b2cdad73) (logid:eb129204) Seq
> >> > 179-6955246674520311671: Response Received:
> >> > 2017-02-17 15:24:36,137 DEBUG [c.c.a.t.Request]
> >> > (StatsCollector-5:ctx-4298a591) (logid:eb129204) Seq
> >> > 179-6955246674520311671: Received: { Ans: , MgmtId: 345049101620,
> >> > via: 179(ewcstack-vh003-test), Ver: v1, Flags: 10, {
> >> > GetStorageStatsAnswer } }
> >> > 2017-02-17 15:24:36,137 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) design
> >> > called
> >> > 2017-02-17 15:24:36,138 DEBUG [c.c.h.o.r.Ovm3HypervisorGuru]
> >> > (StatsCollector-5:ctx-4298a591) (logid:eb129204)
> >> > getCommandHostDelegation: class
> >> > com.cloud.agent.api.GetStorageStatsCommand
> >> > 2017-02-17 15:24:36,138 DEBUG [c.c.h.XenServerGuru]
> >> > (StatsCollector-5:ctx-4298a591) (logid:eb129204)
> >> > getCommandHostDelegation: class
> >> > com.cloud.agent.api.GetStorageStatsCommand
> >> > 2017-02-17 15:24:36,139 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing
> >> > to design this network, the physical isolation type is not MIDO
> >> > 2017-02-17 15:24:36,139 DEBUG [c.c.a.m.DirectAgentAttache]
> >> > (DirectAgent-72:ctx-656a03ae) (logid:dd7ada9e) Seq
> >> > 217-8596245788743434945: Executing request
> >> > 2017-02-17 15:24:36,141 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing
> >> > to design this network
> >> > 2017-02-17 15:24:36,142 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing
> >> > to design this network
> >> > 2017-02-17 15:24:36,144 DEBUG [c.c.n.g.OvsGuestNetworkGuru]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing
> >> > to design this network
> >> > 2017-02-17 15:24:36,163 DEBUG [o.a.c.n.g.SspGuestNetworkGuru]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) SSP not
> >> > configured to be active
> >> > 2017-02-17 15:24:36,164 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing
> >> > to design this network
> >> > 2017-02-17 15:24:36,165 DEBUG [c.c.n.g.NuageVspGuestNetworkGuru]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Refusing
> >> > to design network using network offering 54 on physical network 200
> >> > 2017-02-17 15:24:36,166 DEBUG [o.a.c.e.o.NetworkOrchestrator]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) Releasing
> >> > lock for Acct[3426fb73-70ad-47d9-9c5d-355f34891438-fen]
> >> > 2017-02-17 15:24:36,188 DEBUG [c.c.a.ApiServlet]
> >> > (catalina-exec-26:ctx-30020483 ctx-430b6ae1) (logid:d303f8ef) ===END===
> >> > 192.168.252.76 -- GET
> >> > command=createNetwork&response=json&zoneId=e683eeaa-92c9-4651-91b9-165939f9000c&name=net-kvm008&displayText=net-kvm00
> >> >
> >> >
> >> > We do not use BigSwitch or anything like this, just plain Open vSwitch
> >> > with Ubuntu 16.04. Any idea what's going on?
> >> >
> >> > All the best,
> >> > Florian
> >> >
> >> > EveryWare AG
> >> > Florian Engelmann
> >> > Systems Engineer
> >> > Zurlindenstrasse 52a
> >> > CH-8003 Zürich
> >> >
> >> > T +41 44 466 60 00
> >> > F +41 44 466 60 10
> >> >
> >> > florian.engelm...@everyware.ch
> >> > www.everyware.ch
> >> >
> >>
> >>
> >>
> >> --
> >> Rafael Weingärtner
> >>
> >
> >
> >
> > --
> > Rafael Weingärtner
> >
>
>
> --
> Rafael Weingärtner
>
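P.S. On Florian's question in the quoted thread (why does ACS still use a
XenServer system VM template despite the KVM-tagged offerings?): the log
lines "Allocating the VR ... with the hypervisor type XenServer" followed by
"Cluster: 6 has HyperVisorType that does not match the VM, skipping this
cluster" suggest that the VR's hypervisor type is fixed before clusters are
filtered, so host and storage tags never get a chance to steer the VR to
KVM. A rough sketch of that ordering (my names, not the actual
deployment-planner code):

    import java.util.List;
    import java.util.stream.Collectors;

    // Illustrative sketch of the cluster-filtering order implied by the
    // logs; names are made up, this is not CloudStack's planner code.
    public class VrPlacementSketch {

        record Cluster(long id, String hypervisorType, List<String> hostTags) {}

        static List<Cluster> candidateClusters(String vrHypervisorType,
                                               List<String> requiredHostTags,
                                               List<Cluster> clusters) {
            return clusters.stream()
                    // Step 1: the cluster's hypervisor type must match the
                    // VR's already-decided type, otherwise it is skipped
                    // ("HyperVisorType that does not match the VM").
                    .filter(c -> c.hypervisorType().equals(vrHypervisorType))
                    // Step 2: only the survivors are checked against tags.
                    .filter(c -> c.hostTags().containsAll(requiredHostTags))
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<Cluster> clusters = List.of(
                    new Cluster(5, "XenServer", List.of()),
                    new Cluster(6, "KVM", List.of("kvm")));
            // The VR was typed as XenServer and the offering demands the
            // "kvm" host tag: cluster 6 is skipped in step 1, cluster 5
            // in step 2.
            System.out.println(candidateClusters("XenServer", List.of("kvm"), clusters));
            // prints [] -- no placement possible, matching the observed failure
        }
    }

In that model the tags can only narrow the candidates within the
already-chosen hypervisor type; they cannot change the type itself, which is
why Rafael's suggestion to set "system.vm.default.hypervisor" attacks the
right step.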