So the proper way to do this would be:

xe pool-emergency-transition-to-master
xe pool-recover-slaves
xe host-list params=uuid,name-label,host-metrics-live
xe vm-reset-powerstate resident-on=UUID-OF-FAILED-MASTER --force --multiple
/opt/xensource/sm/resetvdis.py NEW-POOL-MASTER-UUID SR-UUID master

?

On Fri, Apr 26, 2013 at 3:43 PM, Jonathan Ludlam <[email protected]> wrote:

> No, the fix is to run the script 'resetvdis.py' - see this bit of the
> XenServer docs:
> http://docs.vmd.citrix.com/XenServer/6.1.0/1.0/en_gb/reference.html#pool_failures
>
> This is not obvious, but has been made better in the version of xapi under
> development at the moment, so in the next release it ought to be a bit
> smoother.
>
> Jon
>
> Sent from my iPad
>
> On 26 Apr 2013, at 19:16, "[email protected]" <[email protected]> wrote:
>
> Hello,
>
> I had the master of the pool fail. I did the following on the slave in the
> pool:
>
> xe pool-emergency-transition-to-master
> xe pool-recover-slaves
>
> xe host-list params=uuid,name-label,host-metrics-live
>
> xe vm-reset-powerstate resident-on=UUID-OF-FAILED-MASTER --force --multiple
>
> Now when I try to start a VM on the old slave that is now the master, I
> get the following:
>
> Error code: SR_BACKEND_FAILURE_46
> Error parameters: , The VDI is not available [opterr=VDI
> b4354ed7-3042-4874-93b4-59a392c43027 already attached RW],
>
> I've tried everything I can find online to fix this; the only way I've been
> able to fix the important VMs was to vdi-forget them, then relabel/re-add
> them to the VMs, and start them. That works, but is that REALLY the only
> way to fix this problem?
>
> Also, the VDIs were created shared; however, they are showing false under
> xe commands.
>
> Any insight?
>
> _______________________________________________
> Xen-api mailing list
> [email protected]
> http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api
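The sequence above needs the UUID of the failed master for the `vm-reset-powerstate` step; `xe host-list params=uuid,name-label,host-metrics-live` shows it, but you have to pick out the host whose `host-metrics-live` is `false` by eye. A minimal sketch of scripting that step is below. It assumes the usual `xe` minimal-listing record layout (`field ( RO): value` lines, blank line between hosts); the `find_dead_hosts` helper name is made up for illustration, and you should still verify the reported host really is the dead master before forcing power states.

```shell
#!/bin/sh
# Sketch: extract the uuid of each host whose metrics are dead
# (host-metrics-live = false) from `xe host-list` output.
# Assumes the standard xe record layout:
#   uuid ( RO)                : <uuid>
#             name-label ( RW): <name>
#      host-metrics-live ( RO): true|false
# with a blank line between host records.

find_dead_hosts() {
    # Reads xe host-list output on stdin; prints one uuid per dead host.
    awk '
        /^uuid/             { uuid = $NF }          # remember current record uuid
        /host-metrics-live/ { if ($NF == "false") print uuid }
    '
}
```

Usage, on the surviving host (hypothetical pipeline, not from the thread):

    FAILED=$(xe host-list params=uuid,name-label,host-metrics-live | find_dead_hosts)
    xe vm-reset-powerstate resident-on="$FAILED" --force --multiple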
