Hi,
I have been running an instance of ONE 3.8.3 with two main nodes providing HA. Today I was updating VNETs (adding one new bridge). Unfortunately, this bridge sits on top of the management interface, so when I changed the configuration, HA reacted and failed over. So far so good. But when everything was done, I saw that some VMs were in the FAILED state, even though checking them with virsh list showed they were installed and running properly. I left ONE alone for a while, but all the FAILED VMs stayed in the FAILED state. The worst part came when I tried to resubmit them: ONE left the original instance running on the original node and started a new instance on a different node. Since I don't have CLVM and my datastore uses shared_lvm, nothing prevents a VM from starting on multiple hosts at the same time. Thankfully I stopped one of the instances before it could damage the filesystem.

Is there any way to make the hypervisor announce the proper state, since the VMs are running but ONE assumes they are FAILED? Or to avoid this situation altogether? As far as I can tell, ONE does not update the state of a VM once it marks it as FAILED.
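For what it's worth, before resubmitting anything I now cross-check ONE's view against libvirt's, so I only touch VMs that are really dead. This is just a sketch: the VM names (one-12 etc.) are stubbed in, and the awk filter on onevm list output is an assumption about the state column, not something from the docs.

```shell
#!/bin/sh
# Sketch: find VMs that ONE marks FAILED but libvirt is still running.
# In real use, replace the stub lists with live output, e.g.:
#   one_failed=$(onevm list | awk '$NF=="fail"{print "one-"$1}')   # assumed column layout
#   virsh_running=$(virsh list --name)
one_failed="one-12
one-15"
virsh_running="one-12
one-20"

# Any VM in both lists is FAILED in ONE but alive in libvirt:
# do NOT resubmit it, or it may end up running on two hosts.
echo "$one_failed" | while read vm; do
  if echo "$virsh_running" | grep -qx "$vm"; then
    echo "$vm"
  fi
done
```

With the stub data above, this prints only one-12, i.e. the one VM that is safe to recover manually rather than resubmit.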

Thanks, Milos
_______________________________________________
Users mailing list
[email protected]
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org