Hi all,

I have a dual-node self-hosted cluster (v4.3) using gluster as storage, set up
to test a scenario that will later need to be followed in production. The
purpose is to rename the cluster FQDN to a new one, wiping out any reference
to the previous FQDN. I was not successful with the engine-rename tool or
other means, as there are leftovers from the previous FQDN that cause issues.
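
For reference, the rename attempt was with the tool shipped with the engine,
invoked along these lines (standard install path assumed):

/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename --newname=engine.lab.local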

The cluster has a data storage domain with one guest VM running on it, which
has one snapshot.
I am testing the destructive scenario below, and I found that when importing
the storage domain into the newly configured cluster, the guest VM is
imported fine, but I do not see the guest VM's disk snapshots.

Steps that I follow for this scenario:

*Initial status:*
I have an ovirt cluster with two hosts named v0 and v1.
The gluster storage is configured on a separate network, where the hosts are
named gluster0 and gluster1.
The cluster has an engine and data storage domain named "engine" and "vms"
respectively.
The "vms" storage domain hosts one guest VM with one guest VM disk
snapshot.
All are configured with the FQDN *localdomain.local*.

*# Steps to rename the whole cluster to the new FQDN lab.local and import the
"vms" storage domain*
1. Set the v1 ovirt host to maintenance, then remove it from the GUI.
2. On v1, install a fresh CentOS 7 using the new FQDN lab.local.
3. On v0, set global maintenance and shut down the engine. Remove the engine
storage data (a complete wipe of any engine-related data; what matters is
only the guest VMs and their snapshots).
4. On v0, remove the bricks belonging to v1 from the "engine" and "vms"
gluster volumes, and detach gluster peer v1:

gluster volume remove-brick engine replica 1 gluster1:/gluster/engine/brick force
gluster volume remove-brick vms replica 1 gluster1:/gluster/vms/brick force
gluster peer detach gluster1
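
Before moving on it is worth verifying the detach took effect; these are
plain gluster status queries, nothing specific to this setup:

gluster peer status
gluster volume info engine
gluster volume info vms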

5. On v1, prepare the gluster service; then, from v0, re-probe the peer and
add v1's bricks back. At this phase all data from the vms gluster volume will
be synced to the new host.
From v0 run:

gluster peer probe gluster1
gluster volume add-brick engine replica 2 gluster1:/gluster/engine/brick
gluster volume add-brick vms replica 2 gluster1:/gluster/vms/brick

At this point all gluster volumes are up and in sync. Confirm the "vms" sync
with:

gluster volume heal vms info
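
If the volume is large the heal can take a while; a small illustrative loop
like the one below (keyed on the "Number of entries:" lines of the heal info
output) waits until nothing is pending:

while gluster volume heal vms info | grep -q '^Number of entries: [1-9]'; do
    sleep 10   # poll until every brick reports "Number of entries: 0"
done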

6. On the freshly installed v1, deploy the engine using the same, now clean,
gluster engine volume (use the new FQDN!):

hosted-engine --deploy --config-append=/root/storage.conf --config-append=answers.conf
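
The answers.conf here is the answer file kept from the previous deployment,
edited for the new name; if I read the otopi answer-file format right, the
relevant line should look roughly like:

[environment:default]
OVEHOSTED_NETWORK/fqdn=str:engine.lab.local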

7. Upon completion of the engine deployment, and after having ensured the vms
gluster volume is synced (step 5), remove the bricks of host v0 (v0 should
now not be visible in the ovirt GUI) and detach gluster peer v0.
On v1 run:

gluster volume remove-brick engine replica 1 gluster0:/gluster/engine/brick force
gluster volume remove-brick vms replica 1 gluster0:/gluster/vms/brick force
gluster peer detach gluster0
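
A quick sanity check that v0's bricks are really gone (standard gluster
queries again):

gluster volume info vms | grep 'Number of Bricks'
gluster peer status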

8. Install a fresh CentOS 7 on v0 and prepare it with the ovirt node
packages, networking and gluster.
9. Add v0's gluster bricks back into the volumes and confirm the sync (see
the heal check below the commands).
On v1 run:
gluster peer probe gluster0
gluster volume add-brick engine replica 2 gluster0:/gluster/engine/brick
gluster volume add-brick vms replica 2 gluster0:/gluster/vms/brick
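
As in step 5, let both volumes finish healing before proceeding, e.g. by
watching the heal output:

watch -n 10 'gluster volume heal engine info; gluster volume heal vms info'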

10. On the engine, add an entry for host v0 to /etc/hosts; then, in the ovirt
GUI, add v0.
/etc/hosts:
10.10.10.220 node0 v0.lab.local
10.10.10.221 node1 v1.lab.local
10.10.10.222 engine.lab.local engine

10.100.100.1 gluster0
10.100.100.2 gluster1
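
A quick way to confirm the engine resolves the new names before adding the
host (plain resolver lookups):

getent hosts v0.lab.local
getent hosts gluster0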

11. In the ovirt GUI, import the vms gluster volume as the "vms" storage
domain. At this step I have to approve the operation in the confirmation
dialog.

12. In the ovirt GUI, import the VMs from the "vms" storage domain.
At this step the VM is found and imported from the imported storage domain,
but it does not show the previously available disk snapshot.

The import of the storage domain should have retained the guest VM snapshot.
How can I troubleshoot this? Do I have to keep some kind of engine DB backup
so as to make the snapshots visible? If yes, is it possible to restore such a
backup to a fresh engine that has a new FQDN?
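
If a DB backup is the way, I assume it would be taken with the standard
engine-backup tool, e.g.:

engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=backup.log

and restored on the new engine with something like:

engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log --provision-db --restore-permissions

though I do not know whether such a restore is valid when the engine FQDN has
changed.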
Thanks very much for any advice and hints.

Alex