Greetings all,

 

Full disclosure: I'm a complete oVirt novice. I inherited an oVirt system and
had a complete ovirt-engine failure back in December-January. Because of time
constraints and my inexperience with oVirt, I had to resort to hiring
consultants to rebuild my oVirt engine from backups. That's a situation I
never want to repeat.

 

Anyway, we were able to piece it together and at least get most
functionality back. The previous setup had an ISO storage domain called
'ISO-COLO' that seems to have been hosted on the engine server itself. The
engine hostname is 'mydesktop'. We restored the engine from backups I had
taken of the SQL DB and various support files using the built-in oVirt
backup tool.
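
For reference, the restore was done along the lines of the standard
engine-backup procedure, roughly like this (the file names below are just
placeholders and the exact flags depend on the oVirt version, so treat this
as a sketch rather than exactly what was run):

# restore the engine DB and config files from the backup archive
engine-backup --mode=restore --file=/root/engine-backup.tar.gz \
              --log=/root/engine-restore.log \
              --provision-db --restore-permissions
# then re-run setup so the restored configuration is applied
engine-setup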

 

So now, looking in the oVirt console, I see the storage domain listed. It
has a status of 'Inactive' in the list of storage domains we have set up.
We tried to activate it and it fails activation. The path listed for the
domain is mydesktop:/gluster/colo-iso. On the host, however, there is no
mountpoint that corresponds to that path:

 

[root@mydesktop ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                  47G     0   47G   0% /dev
tmpfs                     47G   12K   47G   1% /dev/shm
tmpfs                     47G  131M   47G   1% /run
tmpfs                     47G     0   47G   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  5.4G   45G  11% /
/dev/sda2               1014M  185M  830M  19% /boot
/dev/sda1                200M   12M  189M   6% /boot/efi
/dev/mapper/centos-home  224G   15G  210G   7% /home
tmpfs                    9.3G     0  9.3G   0% /run/user/0

 

The original layout looked like this on the broken engine:

 

[root@mydesktop ~]# df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/centos_mydesktop-root      50G   27G   20G  58% /
devtmpfs                               24G     0   24G   0% /dev
tmpfs                                  24G   28K   24G   1% /dev/shm
tmpfs                                  24G   42M   24G   1% /run
tmpfs                                  24G     0   24G   0% /sys/fs/cgroup
/dev/mapper/centos_mydesktop-home      25G   45M   24G   1% /home
/dev/sdc1                            1014M  307M  708M  31% /boot
/dev/mapper/centos_mydesktop-gluster  177G  127G   42G  76% /gluster
tmpfs                                 4.7G     0  4.7G   0% /run/user/0

 

So it seems the orphaned storage domain is just pointing to a path that does
not exist on the new engine host.
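
Is checking along these lines on the engine host the right idea? (Rough
sketch only; I am not sure whether the old domain was served over NFS or as
a Gluster volume, so both checks are shown.)

# anything still exporting that path over NFS?
exportfs -v
cat /etc/exports

# or was it a Gluster volume? (only applies if glusterfs-server is installed)
gluster volume info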

 

I also noticed some of the hosts are trying to access this storage domain
and getting errors:

 

The error message for connection mydesktop:/gluster/colo-iso returned by
VDSM was: Problem while trying to mount target

3/17/20 10:47:05 AM

 

Failed to connect Host vm-host-colo-2 to the Storage Domains ISO-Colo.

3/17/20 10:47:05 AM

 

So it seems the hosts are trying to connect to this storage domain but
cannot because it's not there. None of the files from the original path are
available, so I am not even sure what we are missing, if anything.
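
Would manually reproducing the mount from one of the hosts tell me anything
more? Something like the following is what I had in mind (a sketch only; the
filesystem type is a guess since I do not know whether the export was NFS or
GlusterFS, and /mnt/test is just a scratch directory):

# on vm-host-colo-2, attempt the same mount VDSM is trying
mkdir -p /mnt/test
mount -t nfs mydesktop:/gluster/colo-iso /mnt/test
# or, if it was a Gluster volume:
# mount -t glusterfs mydesktop:/gluster/colo-iso /mnt/test

# the detailed mount error should also appear in VDSM's log on the host
tail -f /var/log/vdsm/vdsm.log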

 

So what are my options here? Destroy the current ISO domain and recreate it,
or somehow provide the correct path on the engine server? Currently the
storage space I can use is mounted at /home, which is a different path than
the original one, and I'm not sure anything can be done with the disk layout
on the engine server at this point to get the original gluster path back.
Right now we cannot attach CDs to VMs for booting: no choices show up when
doing a 'Run Once' on an existing VM, so I would like to get this working so
I can fix a broken VM that I need to boot from ISO media.
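
If recreating is the saner route, my rough plan was to carve out a directory
under /home on the engine host, export it over NFS, destroy the dead
ISO-COLO domain in the portal, and add a new ISO domain pointing at the new
export, something like the sketch below (the path is just an example; the
36:36 ownership is the vdsm:kvm user/group oVirt expects on storage
exports). Does that sound reasonable, or is there a cleaner way to repoint
the existing domain?

# on the engine host: prepare a directory for a new ISO export
mkdir -p /home/exports/iso
chown 36:36 /home/exports/iso
chmod 0755 /home/exports/iso

# export it over NFS (example /etc/exports entry, then reload)
echo '/home/exports/iso *(rw)' >> /etc/exports
exportfs -ra
systemctl enable --now nfs-server

The new domain would then be added in the portal with a path of
mydesktop:/home/exports/iso.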

 

Thanks in advance for any help you can provide.
