Until we use "xe pool-designate-new-master", the master of the pool will
not change.

The exception is if you have configured the HA features in your XenServer
pool/cluster; in that case, a reboot of the master may trigger a change of
master server.

Bottom line: today, the code does not change the master server of a
XenServer pool. The maintenance process of ACS does not change the master
of the cluster/XenServer pool.
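For reference, changing the pool master is an explicit xe CLI operation. A
minimal sketch (the uuid is a placeholder; run this on a pool member):

```shell
# List the hosts in the pool to find the uuid of the intended new master
xe host-list params=uuid,name-label

# Explicitly hand over the master role; with HA disabled, this is the only
# way the pool master changes
xe pool-designate-new-master host-uuid=<new-master-uuid>
```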

On Sat, Jan 2, 2016 at 9:36 PM, Davide Pala <davide.p...@gesca.it> wrote:

> No. The upgrade must be done with a cold reboot, without maintenance mode.
> The XS pool master must be the master again when it boots.
>
>
>
> Sent from my Samsung device
>
>
> -------- Original message --------
> From: Rafael Weingärtner <rafaelweingart...@gmail.com>
> Date: 03/01/2016 00:25 (GMT+01:00)
> To: users@cloudstack.apache.org
> Subject: Re: A Story of a Failed XenServer Upgrade
>
> That is true; it is not putting the host in maintenance mode. Not just the
> master, but any XenServer host.
>
> The question is, should it? If so, we should open a Jira ticket.
>
> On Sat, Jan 2, 2016 at 9:18 PM, Davide Pala <davide.p...@gesca.it> wrote:
>
> > So I don't know anything about CloudStack on XenServer, and for this
> > reason I think CloudStack doesn't put the XenServer pool master into
> > maintenance mode
> >
> >
> >
> > Sent from my Samsung device
> >
> >
> > -------- Original message --------
> > From: Rafael Weingärtner <rafaelweingart...@gmail.com>
> > Date: 02/01/2016 22:48 (GMT+01:00)
> > To: users@cloudstack.apache.org
> > Subject: Re: A Story of a Failed XenServer Upgrade
> >
> > Hi all,
> >
> > There is nothing better than looking at the source code.
> >
> > After the VM migration (or restart, for LXC?!), when the host is put in
> > maintenance mode, for XenServer it will remove a tag called “cloud”.
> >
> > On Sat, Jan 2, 2016 at 7:38 PM, Davide Pala <davide.p...@gesca.it> wrote:
> >
> > > I don't know what maintenance mode does in CS, but if it also puts the
> > > pool master into maintenance mode, this article is wrong!
> > >
> > >
> > > Davide Pala
> > > Infrastructure Specialist
> > > Gesca srl
> > >
> > > Via degli Olmetti, 18
> > > 00060 Formello (Roma)
> > > Office:  +39 06 9040661
> > > Fax:     +39 06 90406666
> > > E-mail: davide.p...@gesca.it
> > > Web:    www.gesca.it
> > >
> > >
> > >
> > >
> > > ________________________________________
> > > From: Alessandro Caviglione [c.alessan...@gmail.com]
> > > Sent: Saturday, 2 January 2016 22:27
> > > To: users@cloudstack.apache.org
> > > Subject: Re: A Story of a Failed XenServer Upgrade
> > >
> > > No guys, as the article says, my first action was to put the Pool Master
> > > into Maintenance Mode INSIDE CS: "It is vital that you upgrade the
> > > XenServer Pool Master first before any of the Slaves.  To do so you need
> > > to empty the Pool Master of all CloudStack VMs, and you do this by
> > > putting the Host into Maintenance Mode within CloudStack to trigger a
> > > live migration of all VMs to alternate Hosts"
> > >
> > > This is exactly what I did, and after the XS upgrade no host was able to
> > > communicate with CS, nor with the upgraded host.
> > >
> > > Does putting a host in Maintenance Mode within CS trigger Maintenance
> > > Mode on the XenServer host as well, or does it just move the VMs to
> > > other hosts?
> > >
> > > And again... what are the best practices to upgrade a XS cluster?
> > >
> > > On Sat, Jan 2, 2016 at 7:11 PM, Remi Bergsma <rberg...@schubergphilis.com>
> > > wrote:
> > >
> > > > CloudStack should always do the migration of VMs, not the Hypervisor.
> > > >
> > > > That's not true. You can safely migrate outside of CloudStack, as the
> > > > power report will tell CloudStack where the VMs live and the DB gets
> > > > updated accordingly. I do this a lot while patching, and it works fine
> > > > on 6.2 and 6.5. I use both CloudStack 4.4.4 and 4.7.0.
> > > >
> > > > Regards, Remi
> > > >
> > > >
> > > > Sent from my iPhone
> > > >
> > > > On 02 Jan 2016, at 16:26, Jeremy Peterson <jpeter...@acentek.net> wrote:
> > > >
> > > > I don't use XenServer maintenance mode until after CloudStack has put
> > > > the Host in maintenance mode.
> > > >
> > > > When you initiate maintenance mode from the host rather than from
> > > > CloudStack, the DB does not know where the VMs are and your UUIDs get
> > > > jacked.
> > > >
> > > > CS is your brains, not the hypervisor.
> > > >
> > > > Maintenance in CS.  All VMs will migrate.  Maintenance in XenCenter.
> > > > Upgrade.  Reboot.  Join pool.  Remove maintenance, starting at the
> > > > hypervisor if needed and then CS, and move on to the next host.
> > > >
> > > > CloudStack should always do the migration of VMs, not the Hypervisor.
> > > >
> > > > Jeremy
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: Davide Pala [mailto:davide.p...@gesca.it]
> > > > Sent: Friday, January 1, 2016 5:18 PM
> > > > To: users@cloudstack.apache.org
> > > > Subject: Re: A Story of a Failed XenServer Upgrade
> > > >
> > > > Hi Alessandro. If you put the master in maintenance mode, you force the
> > > > election of a new pool master. So when you saw the upgraded host as
> > > > disconnected, you were connected to the new pool master, and the host
> > > > (as a pool member) cannot communicate with a pool master of an earlier
> > > > version. The solution? Launch the upgrade on the pool master without
> > > > entering maintenance mode. And remember a consistent backup!!!
> > > >
> > > >
> > > >
> > > > Sent from my Samsung device
> > > >
> > > >
> > > > -------- Original message --------
> > > > From: Alessandro Caviglione <c.alessan...@gmail.com>
> > > > Date: 01/01/2016 23:23 (GMT+01:00)
> > > > To: users@cloudstack.apache.org
> > > > Subject: A Story of a Failed XenServer Upgrade
> > > >
> > > > Hi guys,
> > > > I want to share my XenServer upgrade adventure, to understand if I did
> > > > something wrong.
> > > > I upgraded CS from 4.4.4 to 4.5.2 without any issues; after all the VRs
> > > > had been upgraded, I started the upgrade process of my XenServer hosts
> > > > from 6.2 to 6.5.
> > > > I did not have Pool HA enabled, so I followed this article:
> > > >
> > > > http://www.shapeblue.com/how-to-upgrade-an-apache-cloudstack-citrix-xenserver-cluster/
> > > >
> > > > The cluster consists of 3 XenServer hosts.
> > > >
> > > > First of all, I added manage.xenserver.pool.master=false to the
> > > > environment.properties file and restarted the cloudstack-management
> > > > service.
> > > >
> > > > After that I put the Pool Master host into Maintenance Mode and, after
> > > > all VMs had been migrated, I Unmanaged the cluster.
> > > > At this point all hosts appeared as "Disconnected" in the CS interface,
> > > > which should be right.
> > > > Then I put the XenServer 6.5 CD in the host in Maintenance Mode and
> > > > started an in-place upgrade.
> > > > After XS 6.5 had been installed, I installed 6.5 SP1 and rebooted again.
> > > > At this point I expected that, after clicking Manage Cluster in CS, all
> > > > the hosts would come back to "UP" and I could go ahead upgrading the
> > > > other hosts....
> > > >
> > > > But instead, all the hosts still appeared as "Disconnected"; I tried a
> > > > couple of cloudstack-management service restarts without success.
> > > >
> > > > So I opened XenCenter and connected to the Pool Master I had upgraded
> > > > to 6.5. It appeared to be in Maintenance Mode, so I tried to exit
> > > > Maintenance Mode, but I got the error: The server is still booting
> > > >
> > > > After some investigation, I ran the command "xe task-list" and this is
> > > > the result:
> > > >
> > > > uuid ( RO)             : 72f48a56-1d24-1ca3-aade-091f1830e2f1
> > > > name-label ( RO)       : VM.set_memory_dynamic_range
> > > > name-description ( RO) :
> > > > status ( RO)           : pending
> > > > progress ( RO)         : 0.000
> > > >
> > > > I tried a couple of reboots but nothing changed.... so I decided to
> > > > shut down the server, force a slave host to become master with
> > > > emergency mode, remove the old server from CS and restart CS.
> > > >
> > > > After that, I saw my cluster up and running again, so I installed XS
> > > > 6.2 SP1 on the "upgraded" host and added it back to the cluster....
> > > >
> > > > So after an entire day of work, I'm in the same situation! :D
> > > >
> > > > Can anyone tell me if I did something wrong??
> > > >
> > > > Thank you very much!
> > > >
> > >
> >
> >
> >
> > --
> > Rafael Weingärtner
> >
>
>
>
> --
> Rafael Weingärtner
>



-- 
Rafael Weingärtner
