Dear all,

                While testing different setups and then deleting them, I 
noticed that orphaned system VMs or hosts are quite often left behind in the 
database.

                For example, during removal of a zone (system VMs, secondary 
storages, hosts, cluster, pod, networks and then the zone itself), if the steps 
are not executed in the correct order, system VMs are left orphaned in the DB 
(not visible in the GUI), which then prevents deletion of the pod. The error 
(quoted from memory) says "there are existing hosts so operation cannot 
proceed". Other times the orphaned VMs keep public IPs allocated, preventing 
deletion of the zone networks.
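
                For reference, this is roughly how I spot the leftover rows (a 
from-memory sketch against the "cloud" database; column names may differ 
between versions, so please check against your schema):

    -- live rows should have removed IS NULL; anything non-User that
    -- survives a zone teardown is a candidate orphan
    SELECT id, name, type, state, removed
    FROM cloud.vm_instance
    WHERE type <> 'User' AND removed IS NULL;

    SELECT id, name, type, status, removed
    FROM cloud.host
    WHERE removed IS NULL;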

                What I did to work around the issue is go into the DB and 
tweak the rows in the cloud database's vm_instance and host tables for the 
particular system instance so that they mimic already-removed instances 
(changing the state to "Removed", setting the removal date, etc.).
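
                Concretely, the tweak was along these lines (untested as 
written; I copied the exact state/status values from rows of instances that 
CloudStack itself had already removed, so treat the values and column names 
below as placeholders):

    -- mark the orphaned system VM row as gone, mimicking a removed instance
    UPDATE cloud.vm_instance
    SET state = 'Expunging', removed = NOW(), update_time = NOW()
    WHERE id = <orphaned_vm_id>;

    -- same idea for the matching row in the host table
    UPDATE cloud.host
    SET status = 'Removed', removed = NOW()
    WHERE id = <orphaned_host_row_id>;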

                What is the best way to approach such an issue in production?
                Also, what is the reasoning behind system VMs being present in 
both the vm_instance table and the host table at the same time? It feels 
counter-intuitive to look for/insert VMs in the host table.
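
                To illustrate the duplication I mean, the same system VM shows 
up once as a VM row and once as a host row (again a from-memory sketch; I am 
assuming the host row's name matches the VM's instance_name):

    SELECT v.id AS vm_id, v.instance_name, v.state,
           h.id AS host_row_id, h.type, h.status
    FROM cloud.vm_instance v
    JOIN cloud.host h ON h.name = v.instance_name
    WHERE v.type IN ('ConsoleProxy', 'SecondaryStorageVm');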

Best regards,
Jordan Kostov
