On 08/23/2012 06:20 PM, Scotto Alberto wrote:
Here you are

thanks, can you run the following?

- ls -l /sys/block/
- dmsetup table
- lsblk (if exists)

it appears that vdsm fails to handle a device with '!' in its name (cciss!c0d1), but 
let's make sure it's indeed the case.
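For context on the '!' in the name: sysfs cannot use '/' inside a directory entry, so the kernel substitutes '!' when exporting such device names under /sys/block, which is why cciss/c0d1 appears as cciss!c0d1. A minimal sketch of the translation back to a device path (the helper name is illustrative, not vdsm's actual code):

```python
def sysfs_name_to_dev_path(sysfs_name):
    # sysfs replaces '/' in a kernel device name with '!' when creating
    # the directory entry under /sys/block; undo that substitution to
    # recover the /dev path, e.g. cciss!c0d1 -> /dev/cciss/c0d1.
    return "/dev/" + sysfs_name.replace("!", "/")

print(sysfs_name_to_dev_path("cciss!c0d1"))  # /dev/cciss/c0d1
```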



Thread-47346::DEBUG::2007-06-30 
00:37:10,268::clientIF::239::Storage.Dispatcher.Protect::(wrapper) 
[10.16.250.216]
Thread-47346::INFO::2007-06-30 
00:37:10,269::dispatcher::94::Storage.Dispatcher.Protect::(run) Run and 
protect: getDeviceList, args: ()
Thread-47346::DEBUG::2007-06-30 
00:37:10,269::task::495::TaskManager.Task::(_debug) Task 
0be1d461-f8fa-4c20-861d-27fde8124408: moving from state init -> state preparing
Thread-47346::DEBUG::2007-06-30 
00:37:10,269::misc::1010::SamplingMethod::(__call__) Trying to enter sampling 
method (storage.sdc.refreshStorage)
Thread-47346::DEBUG::2007-06-30 
00:37:10,270::misc::1012::SamplingMethod::(__call__) Got in to sampling method
Thread-47346::DEBUG::2007-06-30 
00:37:10,270::misc::1010::SamplingMethod::(__call__) Trying to enter sampling 
method (storage.iscsi.rescan)
Thread-47346::DEBUG::2007-06-30 
00:37:10,270::misc::1012::SamplingMethod::(__call__) Got in to sampling method
Thread-47346::DEBUG::2007-06-30 
00:37:10,271::iscsi::699::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n 
/sbin/iscsiadm -m session -R' (cwd None)
Thread-47346::DEBUG::2007-06-30 00:37:10,300::iscsi::699::Storage.Misc.excCmd::(rescan) 
FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
Thread-47346::DEBUG::2007-06-30 
00:37:10,301::misc::1020::SamplingMethod::(__call__) Returning last result
Thread-47346::DEBUG::2007-06-30 
00:37:10,661::multipath::61::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n 
/sbin/multipath' (cwd None)
Thread-47346::DEBUG::2007-06-30 00:37:10,785::multipath::61::Storage.Misc.excCmd::(rescan) 
SUCCESS: <err> = ''; <rc> = 0
Thread-47346::DEBUG::2007-06-30 
00:37:10,786::lvm::547::OperationMutex::(_invalidateAllPvs) Operation 'lvm 
invalidate operation' got the operation mutex
Thread-47346::DEBUG::2007-06-30 
00:37:10,786::lvm::549::OperationMutex::(_invalidateAllPvs) Operation 'lvm 
invalidate operation' released the operation mutex
Thread-47346::DEBUG::2007-06-30 
00:37:10,786::lvm::559::OperationMutex::(_invalidateAllVgs) Operation 'lvm 
invalidate operation' got the operation mutex
Thread-47346::DEBUG::2007-06-30 
00:37:10,787::lvm::561::OperationMutex::(_invalidateAllVgs) Operation 'lvm 
invalidate operation' released the operation mutex
Thread-47346::DEBUG::2007-06-30 
00:37:10,787::lvm::580::OperationMutex::(_invalidateAllLvs) Operation 'lvm 
invalidate operation' got the operation mutex
Thread-47346::DEBUG::2007-06-30 
00:37:10,788::lvm::582::OperationMutex::(_invalidateAllLvs) Operation 'lvm 
invalidate operation' released the operation mutex
Thread-47346::DEBUG::2007-06-30 
00:37:10,788::misc::1020::SamplingMethod::(__call__) Returning last result
Thread-47346::DEBUG::2007-06-30 
00:37:10,788::lvm::406::OperationMutex::(_reloadpvs) Operation 'lvm reload 
operation' got the operation mutex
Thread-47346::DEBUG::2007-06-30 00:37:10,791::lvm::374::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs 
--config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 
write_cache_state=0 disable_after_error_count=3 filter = [ 
\\"a%3600508b1001035333920202020200005|3600601601cde1d0066b2fb054dece111%\\", \\"r%.*%\\" ] }  
global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } 
" --noheadings --units b --nosuffix --separator | -o 
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
Thread-47346::DEBUG::2007-06-30 00:37:10,997::lvm::374::Storage.Misc.excCmd::(cmd) SUCCESS: 
<err> = '  /dev/sdh: read failed after 0 of 4096 at 0: Input/output error\n  
/dev/sdh: read failed after 0 of 4096 at 697932120064: Input/output error\n  /dev/sdh: read 
failed after 0 of 4096 at 697932177408: Input/output error\n  WARNING: Error counts reached 
a limit of 3. Device /dev/sdh was disabled\n'; <rc> = 0
Thread-47346::DEBUG::2007-06-30 
00:37:10,998::lvm::429::OperationMutex::(_reloadpvs) Operation 'lvm reload 
operation' released the operation mutex
MainProcess|Thread-47346::DEBUG::2007-06-30 
00:37:11,005::devicemapper::144::Storage.Misc.excCmd::(_getPathsStatus) 
'/sbin/dmsetup status' (cwd None)
MainProcess|Thread-47346::DEBUG::2007-06-30 
00:37:11,014::devicemapper::144::Storage.Misc.excCmd::(_getPathsStatus) SUCCESS: 
<err> = ''; <rc> = 0
MainProcess|Thread-47346::DEBUG::2007-06-30 
00:37:11,019::multipath::159::Storage.Misc.excCmd::(getScsiSerial) 
'/sbin/scsi_id --page=0x80 --whitelisted --export --replace-whitespace 
--device=/dev/dm-0' (cwd None)
MainProcess|Thread-47346::DEBUG::2007-06-30 
00:37:11,026::multipath::159::Storage.Misc.excCmd::(getScsiSerial) SUCCESS: <err> = 
''; <rc> = 0
Thread-47346::WARNING::2007-06-30 
00:37:11,027::multipath::261::Storage.Multipath::(pathListIter) Problem getting 
hbtl from device `cciss!c0d1`
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/multipath.py", line 259, in pathListIter
   File "/usr/share/vdsm/storage/multipath.py", line 182, in getHBTL
OSError: [Errno 2] No such file or directory: 
'/sys/block/cciss!c0d1/device/scsi_disk/'
Thread-47346::ERROR::2007-06-30 
00:37:11,029::task::868::TaskManager.Task::(_setError) Unexpected error
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 876, in _run
   File "/usr/share/vdsm/storage/hsm.py", line 696, in public_getDeviceList
   File "/usr/share/vdsm/storage/hsm.py", line 759, in _getDeviceList
KeyError: 'hbtl'
Thread-47346::DEBUG::2007-06-30 
00:37:11,030::task::495::TaskManager.Task::(_debug) Task 
0be1d461-f8fa-4c20-861d-27fde8124408: Task._run: 
0be1d461-f8fa-4c20-861d-27fde8124408 () {} failed - stopping task
Thread-47346::DEBUG::2007-06-30 
00:37:11,030::task::495::TaskManager.Task::(_debug) Task 
0be1d461-f8fa-4c20-861d-27fde8124408: stopping in state preparing (force False)
Thread-47346::DEBUG::2007-06-30 
00:37:11,030::task::495::TaskManager.Task::(_debug) Task 
0be1d461-f8fa-4c20-861d-27fde8124408: ref 1 aborting True
Thread-47346::INFO::2007-06-30 00:37:11,031::task::1171::TaskManager.Task::(prepare) 
aborting: Task is aborted: "'hbtl'" - code 100
Thread-47346::DEBUG::2007-06-30 
00:37:11,031::task::495::TaskManager.Task::(_debug) Task 
0be1d461-f8fa-4c20-861d-27fde8124408: Prepare: aborted: 'hbtl'
Thread-47346::DEBUG::2007-06-30 
00:37:11,031::task::495::TaskManager.Task::(_debug) Task 
0be1d461-f8fa-4c20-861d-27fde8124408: ref 0 aborting True
Thread-47346::DEBUG::2007-06-30 
00:37:11,032::task::495::TaskManager.Task::(_debug) Task 
0be1d461-f8fa-4c20-861d-27fde8124408: Task._doAbort: force False
Thread-47346::DEBUG::2007-06-30 
00:37:11,032::resourceManager::821::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-47346::DEBUG::2007-06-30 
00:37:11,032::task::495::TaskManager.Task::(_debug) Task 
0be1d461-f8fa-4c20-861d-27fde8124408: moving from state preparing -> state 
aborting
Thread-47346::DEBUG::2007-06-30 
00:37:11,033::task::495::TaskManager.Task::(_debug) Task 
0be1d461-f8fa-4c20-861d-27fde8124408: _aborting: recover policy none
Thread-47346::DEBUG::2007-06-30 
00:37:11,033::task::495::TaskManager.Task::(_debug) Task 
0be1d461-f8fa-4c20-861d-27fde8124408: moving from state aborting -> state failed
Thread-47346::DEBUG::2007-06-30 
00:37:11,033::resourceManager::786::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-47346::DEBUG::2007-06-30 
00:37:11,034::resourceManager::821::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-47346::ERROR::2007-06-30 
00:37:11,034::dispatcher::106::Storage.Dispatcher.Protect::(run) 'hbtl'
Thread-47346::ERROR::2007-06-30 
00:37:11,034::dispatcher::107::Storage.Dispatcher.Protect::(run) Traceback 
(most recent call last):
   File "/usr/share/vdsm/storage/dispatcher.py", line 96, in run
   File "/usr/share/vdsm/storage/task.py", line 1178, in prepare
KeyError: 'hbtl'
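The OSError in the traceback above comes from getHBTL assuming every block device has a scsi_disk entry under /sys/block/<name>/device/, which cciss controller devices lack; the missing 'hbtl' key then surfaces as the KeyError that aborts the task. A hypothetical guarded lookup (this is an illustrative sketch, not the actual vdsm fix) would return None for such devices so the caller can skip them:

```python
import errno
import os

def get_hbtl(dev_name):
    # Hypothetical guarded lookup: return the host:bus:target:lun
    # fields for a SCSI disk, or None when the sysfs entry is missing,
    # as it is for cciss devices such as cciss!c0d1.
    path = "/sys/block/%s/device/scsi_disk/" % dev_name
    try:
        entries = os.listdir(path)
    except OSError as e:
        if e.errno == errno.ENOENT:
            return None  # not a SCSI disk; let the caller skip it
        raise
    return tuple(entries[0].split(":")) if entries else None
```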







Alberto Scotto

Blue Reply
Via Cardinal Massaia, 83
10147 - Torino - ITALY
phone: +39 011 29100
[email protected]
www.reply.it

-----Original Message-----
From: Haim [mailto:[email protected]]
Sent: giovedì 23 agosto 2012 17:00
To: Scotto Alberto
Cc: [email protected]
Subject: Re: [Users] [rhev 3] add new domain fails: Could not retrieve LUNs

hi,

can you attach the full vdsm log covering the execution of the getDeviceList command?

On 08/23/2012 05:54 PM, Scotto Alberto wrote:

Hi all,

I'm trying to configure an FCP storage domain on RHEV 3.

When I try to add a new domain from the console, it cannot find any
LUNs: "Could not retrieve LUNs, please check your storage"

Here is the output from /var/log/rhevm/rhevm.log:

------------------------------------

2007-06-29 21:50:07,811 WARN
[org.ovirt.engine.core.bll.GetConfigurationValueQuery]
(http-0.0.0.0-8443-1) calling GetConfigurationValueQuery with null
version, using default general for version
2007-06-29 21:50:07,911 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
(http-0.0.0.0-8443-1) START, GetDeviceListVDSCommand(vdsId =
7e077f4c-25d8-11dc-bbcb-001cc4c2469a, storageType=FCP), log id:
60bdafe6
2007-06-29 21:50:08,726 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(http-0.0.0.0-8443-1) Failed in GetDeviceListVDS method
2007-06-29 21:50:08,727 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(http-0.0.0.0-8443-1) Error code BlockDeviceActionError and error
message VDSGenericException: VDSErrorException: Failed to
GetDeviceListVDS, error = Error block device action: ()
2007-06-29 21:50:08,727 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(http-0.0.0.0-8443-1) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand
return value

Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.LUNListReturnForXmlRpc
lunList Null
mStatus Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 600
mMessage Error block device action: ()

2007-06-29 21:50:08,727 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(http-0.0.0.0-8443-1) Vds: pittor06vhxd020
2007-06-29 21:50:08,727 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (http-0.0.0.0-8443-1)
Command GetDeviceListVDS execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
GetDeviceListVDS, error = Error block device action: ()
2007-06-29 21:50:08,727 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
(http-0.0.0.0-8443-1) FINISH, GetDeviceListVDSCommand, log id:
60bdafe6
2007-06-29 21:50:08,727 ERROR
[org.ovirt.engine.core.bll.storage.GetDeviceListQuery]
(http-0.0.0.0-8443-1) Query GetDeviceListQuery failed. Exception
message is VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to GetDeviceListVDS,
error = Error block device action: ()

----------------------------------------------

First question: do LUNs have to be visible from RHEV-H or RHEV-M?

Currently they are visible only from the hypervisor.

----------------------------------------

[root@pittor06vhxd020 log]# multipath -ll
3600601601cde1d0066b2fb054dece111 dm-2 DGC,RAID 5 size=650G
features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| |- 2:0:0:0 sda 8:0 active ready running
| |- 2:0:1:0 sdd 8:48 active ready running
| |- 3:0:0:0 sde 8:64 active ready running
| `- 3:0:1:0 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 2:0:2:0 sdb 8:16 active ready running
|- 2:0:3:0 sdc 8:32 active ready running
|- 3:0:2:0 sdg 8:96 active ready running
`- 3:0:3:0 sdh 8:112 active ready running
3600508b1001035333920202020200005 dm-0 HP,LOGICAL VOLUME size=205G
features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 0:0:1:0 cciss!c0d1 104:16 active ready running
------------------------------------------------------

Our SAN device is a Clariion AX150. Is it compatible with oVirt?

vdsClient -s 0 getDeviceList gives me:

Error block device action: ()

Could it be due to the SPM being turned off? (I have only one host)

[root@pittor06vhxd020 log]# ps axu | grep -i spm

root 16068 0.0 0.0 7888 868 pts/1 R+ 00:04 0:00 grep -i spm

How can I turn it on? I know the command, but I don't know which
parameters to append:

spmStart

<spUUID> <prevID> <prevLVER> <recoveryMode> <scsiFencing> <maxHostID>
<version>
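As a rough illustration only (the all-zero UUID and the -1/-1 previous-ID values below are placeholders; check `vdsClient` help on your host before running anything), the spmStart invocation assembles its arguments in exactly the order the usage text gives:

```python
def spm_start_cmdline(sp_uuid, prev_id="-1", prev_lver="-1",
                      recovery_mode="0", scsi_fencing="false",
                      max_host_id="250", version="3"):
    # Build the vdsClient spmStart argument list in the order shown by
    # the usage text: <spUUID> <prevID> <prevLVER> <recoveryMode>
    # <scsiFencing> <maxHostID> <version>. All defaults here are
    # illustrative placeholders, not recommended production values.
    return ["vdsClient", "-s", "0", "spmStart", sp_uuid, prev_id,
            prev_lver, recovery_mode, scsi_fencing, max_host_id, version]

print(" ".join(spm_start_cmdline("00000000-0000-0000-0000-000000000000")))
```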

Thank you very much for any hints.

AS






--
The information transmitted is intended for the person or entity to
which it is addressed and may contain confidential and/or privileged
material. Any review, retransmission, dissemination or other use of,
or taking of any action in reliance upon, this information by persons
or entities other than the intended recipient is prohibited. If you
received this in error, please contact the sender and delete the
material from any computer.


_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users



