----- Original Message -----
> From: "Royce Lv" <lvroyce0...@gmail.com>
> To: vdsm-devel@lists.fedorahosted.org
> Sent: Monday, February 27, 2012 10:05:28 AM
> Subject: [vdsm] create StorageDomain failed of LOCALFS in vdsm-4.9.4-0.18
> 
> 
> Guys,
> 
> Tried creating a StorageDomain in vdsm-4.9.4-0.18 with the following
> parameters:
> sddef = '{ "id": "1ef32ac7-1e12-4823-8e8c-8c887333fe46",
> "type": "LOCALFS",
> "class":"Data",
> "version":"0",
> "name": "Test Domain",
> "remotePath": "/var/lib/vdsm/storagetmp1" }'
> 
> and the path "/var/lib/vdsm/storagetmp1" is newly created.
> 
> Things go well in vdsm-4.9.0 but don't work in vdsm-4.9.4. The
> error message is:
> StorageServerAccessPermissionError: Permission settings on the
> specified path do not allow access to the storage. Verify permission
> settings on the specified storage path.: 'path =
> /rhev/data-center/mnt/_var_lib_vdsm_storagetmp1'
> 
> I verified all access permissions and made sure they are right. The
> trace is below:
> File "/usr/share/vdsm/storage/task.py", line 863, in _run
> return fn(*args, **kargs)
> File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
> res = f(*args, **kwargs)
> File "/usr/share/vdsm/storage/hsm.py", line 1929, in
> createStorageDomain
> domClass, typeSpecificArg, storageType, domVersion)
> File "/usr/share/vdsm/storage/localFsSD.py", line 66, in create
> cls._preCreateValidation(sdUUID, mntPoint, remotePath, version)
> File "/usr/share/vdsm/storage/localFsSD.py", line 39, in
> _preCreateValidation
> validateDirAccess(domPath)
> File "/usr/share/vdsm/storage/storage_connection.py", line 40, in
> validateDirAccess
> getProcPool().fileUtils.validateAccess(dirPath)
> File "/usr/share/vdsm/storage/processPool.py", line 53, in wrapper
> return self.runExternally(func, *args, **kwds)
> File "/usr/share/vdsm/storage/processPool.py", line 64, in
> runExternally
> return self._procPool.runExternally(*args, **kwargs)
> 
> The following command caused this failure, according to my log:
> CP Server Thread-18::DEBUG::2012-02-27
> 10:51:58,788::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm
> reload operation' got the operation mutex
> CP Server Thread-18::DEBUG::2012-02-27
> 10:51:58,789::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
> /sbin/lvm vgs --config " devices { preferred_names =
> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [ \\"a%35000c500382d9c53%\\",
> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
> --noheadings --units b --nosuffix --separator | -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
> 1ef32ac7-1e12-4823-8e8c-8c887333fe46' (cwd None)
> CP Server Thread-18::DEBUG::2012-02-27
> 10:51:59,178::lvm::287::Storage.Misc.excCmd::(cmd) FAILED: <err> = '
> Volume group "1ef32ac7-1e12-4823-8e8c-8c887333fe46" not found\n';
> <rc> = 5
> CP Server Thread-18::WARNING::2012-02-27
> 10:51:59,182::lvm::356::Storage.LVM::(_reloadvgs) lvm vgs failed: 5
> [] [' Volume group "1ef32ac7-1e12-4823-8e8c-8c887333fe46" not
> found']
> 
> Since I use "LOCALFS", I think it should have nothing to do with LVM,
> and I have no VG in my environment. This log confuses me. Will somebody
> please take a look and help with it?
> Thank you in advance!

You are right, it's unrelated to LVM. The failing 'lvm vgs' call is most likely just
vdsm checking whether a domain with that UUID already exists, so the "not found"
message is expected and harmless. Your problem is probably the ownership and/or
permissions of your target folder "/var/lib/vdsm/storagetmp1".
The target should be owned by vdsm:kvm with 'rwx' permissions.
Actually, it is a bit odd to create a storage domain under /var/lib; it's not
a proper place for this.
Try something like /data/... instead.
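For example, something along these lines should work (a rough sketch; the path
/data/storagetmp1 is just a placeholder, and 0775 is one way to grant the
required 'rwx' bits to vdsm:kvm):

    # placeholder path -- substitute whatever directory you actually use
    mkdir -p /data/storagetmp1
    chown vdsm:kvm /data/storagetmp1
    chmod 0775 /data/storagetmp1
    # verify the vdsm user itself can read, write and traverse the directory,
    # which is roughly what the validateDirAccess() check in the trace requires
    sudo -u vdsm sh -c 'test -r /data/storagetmp1 && test -w /data/storagetmp1 && test -x /data/storagetmp1' && echo access ok

Note that checking the permissions as root is not enough: vdsm performs the
access check as the vdsm user, so the directory, and every parent directory
along the path, has to be accessible to that user.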

Regards,
    Igor
 
_______________________________________________
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel
