Hello guys, I have the following problem: I had two cluster PCs and a host PC where OpenNebula was already installed and working as expected. However, I was told to replace the hard disk in one of the cluster nodes, which meant installing a fresh Ubuntu on the new disk and reinstalling the cluster part of ONE on it.
I thought this should cause no problems as long as I configured this node correctly, which I did. However, now when I try to migrate or live-migrate any deployed VM, the VM stays in SAVE or MIGRATE respectively, and nothing else happens. After issuing "virsh list" on both nodes, I can see that the source node still shows the VM as "running" and the destination node shows it as "paused", which is as expected, but the migration just never finishes. Another interesting thing: if I deploy a new VM, it stays in BOOT forever, without failing or anything. The vm.log files don't show anything suspicious.

Do you have any idea where I could keep searching for the error? Could it be that the known_hosts file no longer matches because of the reinstall? Then again, the SSH option StrictHostKeyChecking is set to "no", so this shouldn't produce the problem either, right??? Please help!
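In case it helps, this is roughly how I checked the SSH side after the reinstall (just a sketch; the user oneadmin, the default known_hosts location, and the hostnames cluster1/cluster2 are my setup, yours may differ):

    # As the oneadmin user on the front-end, remove the stale host key
    # of the reinstalled node from known_hosts
    su - oneadmin
    ssh-keygen -R cluster2

    # Verify passwordless SSH to both nodes, since ONE drives the
    # hosts over SSH as oneadmin
    ssh oneadmin@cluster2 hostname
    ssh oneadmin@cluster1 hostname

    # Migration also involves the nodes talking to each other directly,
    # so I tested node-to-node SSH as oneadmin as well
    ssh oneadmin@cluster1 'ssh oneadmin@cluster2 hostname'

As far as I understand, live migration goes over SSH between the source and destination node, so if the node-to-node hop hangs on a host-key prompt or a password, that could explain a migration that never finishes. But as I said, with StrictHostKeyChecking disabled I'm not sure this is really the cause.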
