Re: [Users] Fail to connect to an iSCSI Eq. LUN

2013-01-17 Thread Nicolas Ecarnot
Too bad: I did not change anything, and as time passed, it all came to
work...


Damn, this is the worst sysadmin situation: it is working, and you
don't know why!



Seriously: it indeed appears that the choice of shared access is the
right one (until someone proves it wrong).


More seriously: I shut down the second node and made the connection
with only the first node alive.

In that case, no possible conflict.
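
For the record, a quick way to confirm that only one initiator is logged
in is to list the iSCSI sessions on each node (a sketch using the standard
open-iscsi tooling):

# iscsiadm -m session

A node showing no session to the EqualLogic target cannot be holding the
LUN open.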

--
Nicolas Ecarnot

On 16/01/2013 15:12, Nicolas Ecarnot wrote:

Hi,

As a beginner, I'm reading again and again, but I'm not sure of the best
way to proceed:

Through the oVirt web manager, I'm trying to create an iSCSI storage
domain.
On my EqualLogic SAN, I've created a volume with no access restriction
(for the time being).
I have two hypervisors on which I'm quite sure my network config is good
enough for now (two NICs bonded for management, and two NICs bonded for
iSCSI). Everything pings OK. Networking is not an issue.
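
For the record, the target is also reachable at the iSCSI level with the
standard open-iscsi tool (a sketch; the portal address is made up):

# iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260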

In the oVirt web manager, I try to create the very first storage domain,
of iSCSI type of course.
I choose one of the nodes, and the iSCSI discovery + login works fine.
I can see my EqualLogic volume; I check it and save with the OK button,
and I get the following error:


"Error while executing action New SAN Storage Domain: Physical device
initialization failed. Check that the device is empty. Please remove
all files and partitions from the device."


Not very interesting, but the node log file is more instructive:


Thread-2767::INFO::2013-01-16
13:35:57,064::logUtils::37::dispatcher::(wrapper) Run and protect:
createVG(vgname='7becc578-a94b-41f4-bbec-8df5fe9f46c0',
devlist=['364ed2ad5297bb022fd0ee5ba36ad91a0'], options=None)

Thread-2767::DEBUG::2013-01-16
13:35:57,066::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo
-n /sbin/lvm pvcreate --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [
\\"a%3600508e0ec7b6d8dea602b0e|364ed2ad5297bb022fd0ee5ba36ad91a0%\\",

\\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1
wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } "
--metadatasize 128m --metadatacopies 2 --metadataignore y
/dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0' (cwd None)

Thread-2767::DEBUG::2013-01-16
13:35:57,147::__init__::1249::Storage.Misc.excCmd::(_log) FAILED: <err>
= "  Can't open /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0
exclusively.  Mounted filesystem?\n"; <rc> = 5

Thread-2767::DEBUG::2013-01-16
13:35:57,149::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo
-n /sbin/lvm pvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [
\\"a%3600508e0ec7b6d8dea602b0e|364ed2ad5297bb022fd0ee5ba36ad91a0%\\",

\\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1
wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " -o
vg_name,pv_name --noheading
/dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0' (cwd None)

Thread-2767::DEBUG::2013-01-16
13:35:57,224::__init__::1249::Storage.Misc.excCmd::(_log) FAILED: <err>
= '  No physical volume label read from
/dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0\n  Failed to read physical
volume "/dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0"\n'; <rc> = 5

Thread-2767::ERROR::2013-01-16
13:35:57,226::task::853::TaskManager.Task::(_setError)
Task=`1c5b8931-0085-489c-8962-ff5cc1323dc7`::Unexpected error

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
  File "/usr/share/vdsm/storage/hsm.py", line 1680, in createVG
  File "/usr/share/vdsm/storage/lvm.py", line 788, in createVG
  File "/usr/share/vdsm/storage/lvm.py", line 631, in _initpvs
PhysDevInitializationError: Failed to initialize physical device:
("found: {} notFound:
('/dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0',)",)



I guess the interesting part is:


Can't open /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0 exclusively.
Mounted filesystem?
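
When lvm cannot open a device exclusively, something else usually holds it
open. A few standard checks (a sketch, using the device name from the logs
above):

# fuser -v /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0
# lsblk /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0
# dmsetup info 364ed2ad5297bb022fd0ee5ba36ad91a0

lsblk in particular shows whether a partition or another device-mapper
layer sits on top of the LUN, and dmsetup info reports the open count.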


This partition is not mounted on any node.
But I have set up access to this volume as "shared" on the EqualLogic
SAN, not knowing whether I should; I can see both nodes connected to the
same volume.
I tried removing this permission, but it didn't help.
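
The multipath state of the LUN can also be inspected (a sketch; the map
name comes from the logs above):

# multipath -ll 364ed2ad5297bb022fd0ee5ba36ad91a0

This lists the paths to the volume and their status.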

I also found Red Hat answering such a question:


There are two ways to clear this error:

- Remove the VG from the storage LUN using the vgremove command. Run this
command from one of the hypervisors.
- Clear the first 4K of the disk. Execute the following from
one of the hypervisors:

# dd if=/dev/zero of=/dev/mapper/diskname bs=4k count=1


I did try that, but with no luck.
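
One way to verify the wipe actually took is to read back the start of the
device and list any remaining signatures (a sketch; wipefs without options
only reports, it does not erase):

# dd if=/dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0 bs=4k count=1 | hexdump -C
# wipefs /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0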



Now, two things:

- Do I have to keep access to this volume shared/allowed for all the
hypervisors dedicated to this volume?
- What is the problem with the pvcreate command? (A manual test is
sketched below.)
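
To reproduce this outside of VDSM, the failing step can be run by hand (a
sketch; note that a successful run would really initialize the device):

# pvcreate -v /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0

If it fails with the same "Can't open ... exclusively" message, something
still holds the device open; if it succeeds, the problem is more likely in
the filter and config options that VDSM passes to lvm.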




--
Nicolas Ecarnot
