Hi,

I just rebooted my NFS server, which hosts an ISO domain used by both
RHEV 3.1 and oVirt 3.2. In both environments the ISO domain became
inactive, and I tried to activate it again.
In RHEV this worked fine, but in oVirt it didn't.

Just for your information: both RHEV and oVirt use "Local on host" as
the datacenter storage type.
I normally use a local NFS server and mount the share locally (a
datacenter with storage type NFS), as "Local on host" causes trouble in
most of my setups (in both oVirt and RHEV); the issues are mainly
missing (sometimes lost) volume groups.
As my oVirt 3.2 setup is a testing environment and my test VMs are
still running, I'll keep it in this state while looking for a solution.
As said, I have seen issues like this more than once before, so maybe
this is a bug that needs some attention (I can also open a bug report
if needed).


engine.log tells me that the storage domain doesn't exist (as reported
by the vdsm daemon):

2013-05-07 16:44:18,493 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] 
(pool-3-thread-47) [210a0bcb] START, ActivateStorageDomainVDSCommand( 
storagePoolId = 484e62d7-7a01-4b5e-aec8-59d366100281, ignoreFailoverLimit = 
false, compatabilityVersion = null, storageDomainId = 
a4c43175-ce34-49a5-8608-cac573bf7647), log id: 33c54af7
2013-05-07 16:44:20,770 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-47) [210a0bcb] Failed in ActivateStorageDomainVDS method
2013-05-07 16:44:20,781 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-47) [210a0bcb] Error code StorageDomainDoesNotExist and
error message IRSGenericException: IRSErrorException: Failed to
ActivateStorageDomainVDS, error = Storage domain does not exist:
('a4c43175-ce34-49a5-8608-cac573bf7647',)
2013-05-07 16:44:20,791 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(pool-3-thread-47) [210a0bcb]
IrsBroker::Failed::ActivateStorageDomainVDS due to: IRSErrorException:
IRSGenericException: IRSErrorException: Failed to
ActivateStorageDomainVDS, error = Storage domain does not exist:
('a4c43175-ce34-49a5-8608-cac573bf7647',)


More interesting is the behavior of my host (CentOS 6.4), which tries
to find a volume group and performs some multipath/LVM scans
(from vdsm.log):

Thread-224450::DEBUG::2013-05-07
16:44:19,333::task::957::TaskManager.Task::(_decref)
Task=`ab00697e-32df-43b1-822b-a94bef55909d`::ref 0 aborting False
Thread-1129392::DEBUG::2013-05-07
16:44:20,555::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo
-n /sbin/multipath' (cwd None)
Thread-1129392::DEBUG::2013-05-07
16:44:20,608::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
Thread-1129392::DEBUG::2013-05-07
16:44:20,609::lvm::477::OperationMutex::(_invalidateAllPvs) Operation
'lvm invalidate operation' got the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,609::lvm::479::OperationMutex::(_invalidateAllPvs) Operation
'lvm invalidate operation' released the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,609::lvm::488::OperationMutex::(_invalidateAllVgs) Operation
'lvm invalidate operation' got the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,610::lvm::490::OperationMutex::(_invalidateAllVgs) Operation
'lvm invalidate operation' released the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,610::lvm::508::OperationMutex::(_invalidateAllLvs) Operation
'lvm invalidate operation' got the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,610::lvm::510::OperationMutex::(_invalidateAllLvs) Operation
'lvm invalidate operation' released the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,610::misc::1064::SamplingMethod::(__call__) Returning last
result
Thread-1129392::DEBUG::2013-05-07
16:44:20,610::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' got the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,612::misc::84::Storage.Misc.excCmd::(&lt;lambda&gt;) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [ \\"^/dev/mapper/\\" ] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free a4c43175-ce34-49a5-8608-cac573bf7647' (cwd None)
Thread-1129392::DEBUG::2013-05-07
16:44:20,759::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> =
'  Volume group "a4c43175-ce34-49a5-8608-cac573bf7647" not found\n';
&lt;rc&gt; = 5
Thread-1129392::WARNING::2013-05-07
16:44:20,760::lvm::373::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 []
['  Volume group "a4c43175-ce34-49a5-8608-cac573bf7647" not found']
Thread-1129392::DEBUG::2013-05-07
16:44:20,760::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' released the operation mutex
Thread-1129392::ERROR::2013-05-07
16:44:20,767::task::833::TaskManager.Task::(_setError)
Task=`589ebcc8-2255-4a29-b995-5e76f3697ec8`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1144, in
activateStorageDomain
    pool.activateSD(sdUUID)
  File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1042, in activateSD
    dom = sdCache.produce(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 97, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 121, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 152, in _findDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
('a4c43175-ce34-49a5-8608-cac573bf7647',)
Thread-1129392::DEBUG::2013-05-07
16:44:20,768::task::852::TaskManager.Task::(_run)
Task=`589ebcc8-2255-4a29-b995-5e76f3697ec8`::Task._run:
589ebcc8-2255-4a29-b995-5e76f3697ec8
('a4c43175-ce34-49a5-8608-cac573bf7647',
'484e62d7-7a01-4b5e-aec8-59d366100281') {} failed - stopping task
Thread-1129392::DEBUG::2013-05-07
16:44:20,768::task::1177::TaskManager.Task::(stop)
Task=`589ebcc8-2255-4a29-b995-5e76f3697ec8`::stopping in state preparing
(force False)
Thread-1129392::DEBUG::2013-05-07
16:44:20,768::task::957::TaskManager.Task::(_decref)
Task=`589ebcc8-2255-4a29-b995-5e76f3697ec8`::ref 1 aborting True
Thread-1129392::INFO::2013-05-07
16:44:20,769::task::1134::TaskManager.Task::(prepare)
Task=`589ebcc8-2255-4a29-b995-5e76f3697ec8`::aborting: Task is aborted:
'Storage domain does not exist' - code 358


It's clear to me that my host can't find a volume group named
a4c43175-ce34-49a5-8608-cac573bf7647, because this is the ID of my ISO
domain, which resides on NFS and not on iSCSI, FC, or local storage.
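Judging from the traceback (sdc.py's `_findDomain`), vdsm seems to search its file-based (NFS/local) domain back-ends first and only then falls back to the block-based (LVM) back-end, so a missed NFS mount ends up as a futile `vgs` lookup before the error is raised. Here is a hypothetical, heavily simplified sketch of that lookup order; the names and structure are invented for illustration and are not vdsm's actual code:

```python
# Hypothetical sketch of the domain-lookup order seen in the traceback
# (sdc.py _findDomain). Names are invented; this is not vdsm code.

class StorageDomainDoesNotExist(Exception):
    """Raised when no back-end can produce the requested domain UUID."""

def find_domain(sd_uuid, file_domains, block_vgs):
    # 1. File-based back-ends (NFS, local): look for the UUID among
    #    mounted storage-domain directories.
    if sd_uuid in file_domains:
        return ("file", sd_uuid)
    # 2. Block-based back-end: ask LVM whether a VG named after the
    #    UUID exists (this is the 'vgs <uuid>' call in the log above).
    if sd_uuid in block_vgs:
        return ("block", sd_uuid)
    # 3. Nothing matched: the engine ends up seeing
    #    StorageDomainDoesNotExist.
    raise StorageDomainDoesNotExist(sd_uuid)
```

If the NFS share isn't (re)mounted after the server reboot, step 1 misses, step 2 runs a pointless VG scan, and step 3 raises, which would match the "Volume group ... not found" warning followed by the traceback.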


I use the EL6 RPMs from the ovirt-stable repository on my CentOS 6.4 host:

[root@centos-hyp01 ~]# yum list vdsm* | grep '@'
vdsm.x86_64                                  4.10.3-10.el6
@ovirt-stable
vdsm-cli.noarch                              4.10.3-10.el6
@ovirt-stable
vdsm-gluster.noarch                          4.10.3-10.el6
@ovirt-stable
vdsm-python.x86_64                           4.10.3-10.el6
@ovirt-stable
vdsm-xmlrpc.noarch                           4.10.3-10.el6
@ovirt-stable
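
In case it helps with debugging, a quick host-side sanity check is whether the ISO domain's UUID directory is visible under vdsm's NFS mount root at all. This is only a sketch: the /rhev/data-center/mnt path is the usual vdsm mount layout, but treat it as an assumption and adjust it for your setup.

```python
# Sanity check: look for the ISO domain's UUID directory below vdsm's
# NFS mount root. The mount-root path is the usual vdsm layout but is
# an assumption here - adjust for your setup.
import glob
import os

def find_domain_dirs(mount_root, sd_uuid):
    """Return any storage-domain directories matching sd_uuid below mount_root."""
    return glob.glob(os.path.join(mount_root, "*", sd_uuid))

if __name__ == "__main__":
    hits = find_domain_dirs("/rhev/data-center/mnt",
                            "a4c43175-ce34-49a5-8608-cac573bf7647")
    if hits:
        print("ISO domain directory found:", hits)
    else:
        print("ISO domain directory not found - the share is probably "
              "not mounted on this host")
```

If nothing is found while the engine still thinks the domain should be there, that would point at a mount/refresh problem on the host rather than at the domain itself.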


Please let me know if you need further information.
Thanks a lot!


-- 
Best Regards,

René Koch
Senior Solution Architect

============================================
ovido gmbh - "Das Linux Systemhaus"
Brünner Straße 163, A-1210 Wien

Phone:   +43 720 / 530 670
Mobile:  +43 660 / 512 21 31
E-Mail:  r.k...@ovido.at
============================================
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
