Hi Roger,

you cannot import the snapshot because you already have an active storage domain that contains the same (meta)data. Try detaching the storage domain that you take snapshots from and removing it (do not select the option to wipe the data), and then try to import the snapshot. You will see a warning that the domain is already registered within the oVirt engine, and you can force the import to continue. After that, you should see the domain registered in the oVirt webadmin. Before detaching the domain, make sure you have another domain active so it can become the master domain, and create exports of the VMs that are part of that domain, just in case.

I had the same problem while trying to import a replicated storage domain: you can see that oVirt tries to import the domain, but it just returns to the Import Domain dialog. It actually mounts the domain for a few seconds, then disconnects and removes the mount point under /rhev/data-center/, and then it tries to unmount it and fails because the mount point doesn't exist anymore.
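The rc 32 in the traceback below is umount(8)'s generic "mount failure" code; once the mount point under /rhev/data-center/ has already been removed, the later umount call has nothing left to operate on. A minimal Python sketch of that failure mode (the MountError class mirrors the one in the vdsm traceback; the path and the ismount guard are my own illustration, not vdsm code):

```python
import os


class MountError(Exception):
    """Mirrors vdsm's MountError: carries the umount return code."""
    def __init__(self, rc, msg):
        self.rc = rc
        super().__init__(rc, msg)


def safe_umount(path):
    # umount(8) exits with rc 32 when the target is not mounted or
    # no longer exists -- exactly what the vdsm log shows after the
    # mount point was already cleaned up.
    if not os.path.ismount(path):
        raise MountError(32, "umount: %s mountpoint not found" % path)
    # a real unmount would happen here


try:
    safe_umount("/rhev/data-center/mnt/example")  # hypothetical path
except MountError as e:
    print("umount failed, rc=%d" % e.rc)
```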

Mentioned it here recently: http://lists.ovirt.org/pipermail/users/2015-November/036027.html

Maybe there is a workaround for importing the clone/snap of the domain while the source is still active (messing with the metadata), but I haven't tried it so far (there are several implications that need to be taken into account). However, I'm also interested in whether there is a way to do such a thing, especially when you have two separate datacenters registered within the same engine. It would be great for us to be able to import snaps/replicated storage domains and/or the VMs that reside on them while still having the original VMs active and running.
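For reference, the collision comes from the domain metadata: the replicated copy carries the identical SDUUID in its dom_md/metadata file, so the engine sees a duplicate of an already-active domain. A sketch of how one could inspect and compare the two (the metadata strings here are made-up examples; the KEY=value format and the SDUUID key are what vdsm uses, and note that vdsm also stores a _SHA_CKSUM over the metadata, so any manual edit would need the checksum regenerated as well):

```python
def parse_metadata(text):
    """Parse vdsm's KEY=value domain metadata format into a dict."""
    md = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "=" in line:
            key, _, value = line.partition("=")
            md[key] = value
    return md


# Made-up example contents of dom_md/metadata on the active domain
# and on its replicated copy -- note the identical SDUUID.
active = parse_metadata("SDUUID=01234567-89ab-cdef-0123-456789abcdef\nROLE=Master")
replica = parse_metadata("SDUUID=01234567-89ab-cdef-0123-456789abcdef\nROLE=Regular")

if active["SDUUID"] == replica["SDUUID"]:
    print("duplicate SDUUID -- the engine will refuse the import")
```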

Something similar to the third RFE here (this is only for VM's): http://www.ovirt.org/Features/ImportUnregisteredEntities#RFEs

In any case, I'll try this ASAP; it's always an interesting topic. Any insights on this are highly appreciated.


On 11/24/2015 12:40 PM, Roger Meier wrote:
Hi All,

I don't know if this is a Bug or an error on my side.

At the moment, I have an oVirt 3.6 installation with two nodes and two storage servers, which are configured as master/slave (a Solaris ZFS snapshot is copied from master to slave every 2 hours).

Now I'm trying to test some failure scenarios, e.g. the master storage isn't available anymore, or
one of the virtual machines must be restored from the snapshot.

Because the data on the slave is a snapshot copy, all data on the Data Domain NFS storage
is also on the slave NFS storage.

I tried to add it via the WebUI using the option "Import Domain" (Import Pre-Configured Domain) with both domain functions (Data and Export), but nothing happens, except some errors in the vdsm.log logfile.

Something like this:

Thread-253746::ERROR::2015-11-24 11:44:41,758::hsm::2549::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2545, in disconnectStorageServer
  File "/usr/share/vdsm/storage/storageServer.py", line 425, in disconnect
    return self._mountCon.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 254, in disconnect
    self._mount.umount(True, True)
  File "/usr/share/vdsm/storage/mount.py", line 256, in umount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';umount: /rhev/data-center/mnt/ mountpoint not found\n')

I checked with nfs-check.py whether all permissions are OK; the tool says this:

[root@lin-ovirt1 contrib]# python ./nfs-check.py
Current hostname: lin-ovirt1 - IP addr
Trying to /bin/mount -t nfs
Executing NFS tests..
Removing vdsmTest file..
Status of tests [OK]
Disconnecting from NFS Server..

Roger Meier

Users mailing list