Just a follow-up to my original post.

There are a couple of ways to solve this issue, I think.

I manually modified the (MySQL) database to place the VMs back into the 'running' state, as follows:

mysql -B -e "update opennebula.vm_pool set etime='0' where oid='<vmid>'"
mysql -B -e "update opennebula.vm_pool set lcm_state='3' where oid='<vmid>'"
mysql -B -e "update opennebula.vm_pool set state='3' where oid='<vmid>'"

where <vmid> is the ID of the VM as determined by using 'onevm list'.
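The three statements above can also be combined into a single UPDATE. Here's a minimal sketch, assuming the same opennebula.vm_pool schema and a passwordless mysql client; the function name force_running_sql is my own, and the actual mysql call is left commented out so nothing changes until you've reviewed the SQL (it's also generally safer to stop oned while editing its database by hand):

```shell
# Sketch: build one UPDATE equivalent to the three statements above.
# Assumes the opennebula.vm_pool schema shown in the post.
force_running_sql() {
    vmid="$1"
    printf "UPDATE opennebula.vm_pool SET etime='0', lcm_state='3', state='3' WHERE oid='%s';\n" "$vmid"
}

# Print the SQL for VM 42 so it can be inspected first:
force_running_sql 42

# mysql -B -e "$(force_running_sql 42)"   # uncomment to actually apply it
```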


The second way to resolve the issue is again to modify the database, but this time set the VM's status to 'unknown' and then restart the VM:

mysql -B -e "update opennebula.vm_pool set lcm_state='3' where oid='<vmid>'"
mysql -B -e "update opennebula.vm_pool set state='16' where oid='<vmid>'"
onevm restart <vmid>
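Before applying either method, it's worth checking what oned currently has recorded for the VM. A small sketch, again assuming the same opennebula.vm_pool schema; the helper name show_vm_state is my own, and the mysql call is commented out so you can inspect the query first:

```shell
# Sketch: look at the recorded state columns before touching them.
# Assumes the same opennebula.vm_pool schema as the UPDATEs above.
show_vm_state() {
    vmid="$1"
    printf "SELECT oid, state, lcm_state FROM opennebula.vm_pool WHERE oid='%s';\n" "$vmid"
}

# Print the SELECT for VM 42 for review:
show_vm_state 42

# mysql -B -e "$(show_vm_state 42)"   # uncomment to run it against the DB
```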

Cheers,
Dan

On 08/15/2011 04:08 PM, Dan Yocum wrote:
Hi,

We've encountered a new "failure" mode which we don't know how to
recover from. Help!

The libvirtd daemon dies on a host node. oned can't successfully query the
libvirtd daemon for the state of the VMs, so all VMs enter the "unknown"
state. The user doesn't realize that libvirtd is the problem, so they
attempt 'onevm restart <vmid>', which results in a state of 'fail.'
A sysadmin comes along and restarts libvirtd on the host machine; all
VMs are now visible again and oned can successfully query their state.
However, oned still thinks the VMs are in the 'fail' state, so the user
can't restart, stop, or do anything else to the VMs (which are in fact
still running).

Is there a way to force oned to rescan all VMs, even if they're in a
'fail' state?

Thanks,
Dan


--
Dan Yocum
Fermilab  630.840.6509
[email protected], http://fermigrid.fnal.gov
"I fly because it releases my mind from the tyranny of petty things."
_______________________________________________
Users mailing list
[email protected]
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
