[ovirt-users] Re: unable to create iSCSI storage domain

2018-06-25 Thread Bernhard Dick

Hi,

On 22.06.2018 at 18:12, Nir Soffer wrote:
On Fri, Jun 22, 2018 at 6:48 PM Bernhard Dick wrote:


On 22.06.2018 at 17:38, Nir Soffer wrote:
 > On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick <bernh...@bdick.de> wrote:
[...]

Is sdc your LUN?

here sdc is from the storage, sdd is from the linux based target.

 > multipath -ll
No Output


You don't have any multipath devices. oVirt block storage uses
only multipath devices. This means that you will not see any devices
on the engine side.

 > cat /etc/multipath.conf
# VDSM REVISION 1.3
# VDSM PRIVATE
# VDSM REVISION 1.5


You are mixing several versions here. Is this a 1.3 or a 1.5 file?
Hm, I didn't touch the file. Maybe something went wrong during the
update procedures.




# This file is managed by vdsm.
# [...]
defaults {
      # [...]
      polling_interval            5
      # [...]
      no_path_retry               4


According to this, it is a 1.5 version.

      # [...]
      user_friendly_names         no
      # [...]
      flush_on_last_del           yes
      # [...]
      fast_io_fail_tmo            5
      # [...]
      dev_loss_tmo                30
      # [...]
      max_fds                     4096
}
# Remove devices entries when overrides section is available.
devices {
      device {
          # [...]
          all_devs                yes
          no_path_retry           4
      }
}
# [...]
# inserted by blacklist_all_disks.sh

blacklist {
          devnode "*"
}


This is your issue: why do you blacklist all devices?

From the lsblk output I think you are running a hyperconverged setup,
which wrongly blacklisted all multipath devices instead of only the
local devices used by gluster.

To fix this:

1. Remove the wrong multipath blacklist.
2. Find the WWIDs of the local devices used by gluster
    (these are /dev/sda and /dev/sdb); see the example below.
3. Add a blacklist for these specific devices:

blacklist {
     wwid XXX-YYY
     wwid YYY-ZZZ
}
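
For example, one way to find those WWIDs (a rough sketch, assuming an
EL7 host where scsi_id is installed under /usr/lib/udev) could be:

    /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda
    /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb

or, alternatively, listing the WWN column with lsblk:

    lsblk --nodeps -o NAME,WWN /dev/sda /dev/sdb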

With this you should be able to access all LUNs from the storage
server (assuming you configured the storage so the host can see them).
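
To verify (a rough sketch, not a required step), you could rescan the
iSCSI sessions and check that multipath now maps the LUNs:

    iscsiadm -m session --rescan
    multipath -r
    multipath -ll
    vdsm-client Host getDeviceList
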
Finally, it is recommended to use a drop-in configuration file for
local changes, and *never* touch /etc/multipath.conf, so vdsm is
able to manage this file.

This is done by putting your changes in:
/etc/multipath/conf.d/local.conf

Example:

$ cat /etc/multipath/conf.d/local.conf
# Local multipath configuration for host XXX
# blacklist boot device and device used for gluster storage.
blacklist {
     wwid XXX-YYY
     wwid YYY-ZZZ
}

You probably want to back up these files and have a script to
deploy them to the hosts if you need to restore the setup.
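
A minimal sketch of such a deploy script (the host names below are
hypothetical, adjust them to your environment):

    #!/bin/bash
    # Copy the local multipath drop-in configuration to every host.
    for host in host1.example.com host2.example.com host3.example.com; do
        scp /etc/multipath/conf.d/local.conf "root@${host}:/etc/multipath/conf.d/"
    done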

Once you have a proper drop-in configuration, you can use
the standard vdsm multipath configuration by removing the line

# VDSM PRIVATE

And running:

     vdsm-tool configure --force --module multipath
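
Put together, the switch to a vdsm-managed file might look roughly like
this on a host (a sketch only; the backup path is just a suggestion):

    cp /etc/multipath.conf /etc/multipath.conf.bak
    sed -i '/^# VDSM PRIVATE$/d' /etc/multipath.conf
    vdsm-tool configure --force --module multipath
    systemctl restart multipathd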

That solved it. Blacklisting the local drives, however, does not really
seem to work. I assume that is because the local drives are virtio
storage drives in my case (it is a testing environment based on virtual
hosts) and they have type 0x80 WWIDs of the form
"0QEMU QEMU HARDDISK   drive-scsi1".
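
A possible workaround in such a virtual setup (an untested idea) might
be to blacklist the local disks by vendor/product instead of by WWID in
the drop-in file, e.g.:

    blacklist {
        device {
            vendor  "QEMU"
            product "QEMU HARDDISK"
        }
    }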


Thanks for your help!

  Regards
Bernhard


In EL7.6 we expect to have a fix for this issue, automatically
blacklisting local devices.
See https://bugzilla.redhat.com/1593459


 > vdsm-client Host getDeviceList
[]


Expected in this configuration.



 > Nir
 >
 >     When I logon to the ovirt hosts I see that they are connected with the
 >     target LUNs (dmesg is telling that there are iscsi devices being found
 >     and they are getting assigned to devices in /dev/sdX ). Writing and
 >     reading from the devices (also accros hosts) works. Do you have some
 >     advice how to troubleshoot this?
 >
 >         Regards
 >           Bernhard


-- 
Dipl.-Inf. Bernhard Dick

Auf dem Anger 24
DE-46485 Wesel
www.BernhardDick.de 

jabber: bernh...@jabber.bdick.de 

Tel : +49.2812

[ovirt-users] Re: unable to create iSCSI storage domain

2018-06-22 Thread Nir Soffer
On Fri, Jun 22, 2018 at 6:48 PM Bernhard Dick  wrote:

> On 22.06.2018 at 17:38, Nir Soffer wrote:
> > On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick wrote:
> > I've a problem creating an iSCSI storage domain. My hosts are running
> > the current ovirt 4.2 engine-ng
> >
> >
> > What is engine-ng?
> sorry, I mixed it up. It is ovirt node-ng.
>
> >
> > version. I can detect and login to the
> > iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets
> page).
> > That happens with our storage and with a linux based iSCSI target
> which
> > I created for testing purposes.
> >
> >
> > Linux based iscsi based target works fine, we use it a lot for testing
> > environment.
> >
> > Can you share the output of these commands on the the host connected
> > to the storage server?
> >
> > lsblk
> NAME                                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> sda                                              8:0    0   64G  0 disk
>   sda1                                           8:1    0    1G  0 part /boot
>   sda2                                           8:2    0   63G  0 part
>     onn-pool00_tmeta                           253:0    0    1G  0 lvm
>       onn-pool00-tpool                         253:2    0   44G  0 lvm
>         onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:3 0   17G  0 lvm  /
>         onn-pool00                             253:12   0   44G  0 lvm
>         onn-var_log_audit                      253:13   0    2G  0 lvm  /var/log/audit
>         onn-var_log                            253:14   0    8G  0 lvm  /var/log
>         onn-var                                253:15   0   15G  0 lvm  /var
>         onn-tmp                                253:16   0    1G  0 lvm  /tmp
>         onn-home                               253:17   0    1G  0 lvm  /home
>         onn-root                               253:18   0   17G  0 lvm
>         onn-ovirt--node--ng--4.2.2--0.20180430.0+1 253:19 0   17G  0 lvm
>         onn-var_crash                          253:20   0   10G  0 lvm
>     onn-pool00_tdata                           253:1    0   44G  0 lvm
>       onn-pool00-tpool                         253:2    0   44G  0 lvm
>         onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:3 0   17G  0 lvm  /
>         onn-pool00                             253:12   0   44G  0 lvm
>         onn-var_log_audit                      253:13   0    2G  0 lvm  /var/log/audit
>         onn-var_log                            253:14   0    8G  0 lvm  /var/log
>         onn-var                                253:15   0   15G  0 lvm  /var
>         onn-tmp                                253:16   0    1G  0 lvm  /tmp
>         onn-home                               253:17   0    1G  0 lvm  /home
>         onn-root                               253:18   0   17G  0 lvm
>         onn-ovirt--node--ng--4.2.2--0.20180430.0+1 253:19 0   17G  0 lvm
>         onn-var_crash                          253:20   0   10G  0 lvm
>     onn-swap                                   253:4    0  6.4G  0 lvm  [SWAP]
> sdb                                              8:16   0  256G  0 disk
>   gluster_vg_sdb-gluster_thinpool_sdb_tmeta    253:5    0    1G  0 lvm
>     gluster_vg_sdb-gluster_thinpool_sdb-tpool  253:7    0  129G  0 lvm
>       gluster_vg_sdb-gluster_thinpool_sdb      253:8    0  129G  0 lvm
>       gluster_vg_sdb-gluster_lv_data           253:10   0   64G  0 lvm  /gluster_bricks/data
>       gluster_vg_sdb-gluster_lv_vmstore        253:11   0   64G  0 lvm  /gluster_bricks/vmstore
>   gluster_vg_sdb-gluster_thinpool_sdb_tdata    253:6    0  129G  0 lvm
>     gluster_vg_sdb-gluster_thinpool_sdb-tpool  253:7    0  129G  0 lvm
>       gluster_vg_sdb-gluster_thinpool_sdb      253:8    0  129G  0 lvm
>       gluster_vg_sdb-gluster_lv_data           253:10   0   64G  0 lvm  /gluster_bricks/data
>       gluster_vg_sdb-gluster_lv_vmstore        253:11   0   64G  0 lvm  /gluster_bricks/vmstore
>   gluster_vg_sdb-gluster_lv_engine             253:9    0  100G  0 lvm  /gluster_bricks/engine
> sdc                                              8:32   0  500G  0 disk
> sdd                                              8:48   0    1G  0 disk
> sr0                                             11:0    1  1.1G  0 rom
>

Is sdc your LUN?


> here sdc is from the storage, sdd is from the linux based target.
>
> > multipath -ll
> No Output
>

You don't have any multipath devices. oVirt block storage uses
only multipath devices. This means that you will not see any devices
on the engine side.


> > cat /etc/multipath.conf
> # VDSM REVISION 1.3
> # VDSM PRIVATE
> # VDSM REVISION 1.5
>

You are mixing several versions here. Is this a 1.3 or a 1.5 file?


>
> # 

[ovirt-users] Re: unable to create iSCSI storage domain

2018-06-22 Thread Bernhard Dick

On 22.06.2018 at 17:38, Nir Soffer wrote:
On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick wrote:

I've a problem creating an iSCSI storage domain. My hosts are running
the current ovirt 4.2 engine-ng 



What is engine-ng?

sorry, I mixed it up. It is ovirt node-ng.



version. I can detect and login to the
iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page).
That happens with our storage and with a linux based iSCSI target which
I created for testing purposes.


A Linux-based iSCSI target works fine; we use it a lot for testing
environments.

Can you share the output of these commands on the host connected
to the storage server?

lsblk
NAME                                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                              8:0    0   64G  0 disk
  sda1                                           8:1    0    1G  0 part /boot
  sda2                                           8:2    0   63G  0 part
    onn-pool00_tmeta                           253:0    0    1G  0 lvm
      onn-pool00-tpool                         253:2    0   44G  0 lvm
        onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:3 0   17G  0 lvm  /
        onn-pool00                             253:12   0   44G  0 lvm
        onn-var_log_audit                      253:13   0    2G  0 lvm  /var/log/audit
        onn-var_log                            253:14   0    8G  0 lvm  /var/log
        onn-var                                253:15   0   15G  0 lvm  /var
        onn-tmp                                253:16   0    1G  0 lvm  /tmp
        onn-home                               253:17   0    1G  0 lvm  /home
        onn-root                               253:18   0   17G  0 lvm
        onn-ovirt--node--ng--4.2.2--0.20180430.0+1 253:19 0   17G  0 lvm
        onn-var_crash                          253:20   0   10G  0 lvm
    onn-pool00_tdata                           253:1    0   44G  0 lvm
      onn-pool00-tpool                         253:2    0   44G  0 lvm
        onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:3 0   17G  0 lvm  /
        onn-pool00                             253:12   0   44G  0 lvm
        onn-var_log_audit                      253:13   0    2G  0 lvm  /var/log/audit
        onn-var_log                            253:14   0    8G  0 lvm  /var/log
        onn-var                                253:15   0   15G  0 lvm  /var
        onn-tmp                                253:16   0    1G  0 lvm  /tmp
        onn-home                               253:17   0    1G  0 lvm  /home
        onn-root                               253:18   0   17G  0 lvm
        onn-ovirt--node--ng--4.2.2--0.20180430.0+1 253:19 0   17G  0 lvm
        onn-var_crash                          253:20   0   10G  0 lvm
    onn-swap                                   253:4    0  6.4G  0 lvm  [SWAP]
sdb                                              8:16   0  256G  0 disk
  gluster_vg_sdb-gluster_thinpool_sdb_tmeta    253:5    0    1G  0 lvm
    gluster_vg_sdb-gluster_thinpool_sdb-tpool  253:7    0  129G  0 lvm
      gluster_vg_sdb-gluster_thinpool_sdb      253:8    0  129G  0 lvm
      gluster_vg_sdb-gluster_lv_data           253:10   0   64G  0 lvm  /gluster_bricks/data
      gluster_vg_sdb-gluster_lv_vmstore        253:11   0   64G  0 lvm  /gluster_bricks/vmstore
  gluster_vg_sdb-gluster_thinpool_sdb_tdata    253:6    0  129G  0 lvm
    gluster_vg_sdb-gluster_thinpool_sdb-tpool  253:7    0  129G  0 lvm
      gluster_vg_sdb-gluster_thinpool_sdb      253:8    0  129G  0 lvm
      gluster_vg_sdb-gluster_lv_data           253:10   0   64G  0 lvm  /gluster_bricks/data
      gluster_vg_sdb-gluster_lv_vmstore        253:11   0   64G  0 lvm  /gluster_bricks/vmstore
  gluster_vg_sdb-gluster_lv_engine             253:9    0  100G  0 lvm  /gluster_bricks/engine
sdc                                              8:32   0  500G  0 disk
sdd                                              8:48   0    1G  0 disk
sr0                                             11:0    1  1.1G  0 rom


here sdc is from the storage, sdd is from the linux based target.


multipath -ll

No Output

cat /etc/multipath.conf

# VDSM REVISION 1.3
# VDSM PRIVATE
# VDSM REVISION 1.5

# This file is managed by vdsm.
# [...]
defaults {
      # [...]
      polling_interval            5
      # [...]
      no_path_retry               4
      # [...]
      user_friendly_names         no
      # [...]
      flush_on_last_del           yes
      # [...]
      fast_io_fail_tmo            5
      # [...]
      dev_loss_tmo                30
      # [...]
      max_fds                     4096
}
# Remove devices entries when overrides section is available.
devices {
      device {
          # [...]
          all_devs                yes
          no_path_r

[ovirt-users] Re: unable to create iSCSI storage domain

2018-06-22 Thread Nir Soffer
On Fri, Jun 22, 2018 at 6:22 PM Bernhard Dick  wrote:

> Hi,
>
> I've a problem creating an iSCSI storage domain. My hosts are running
> the current ovirt 4.2 engine-ng


What is engine-ng?


> version. I can detect and login to the
> iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page).
> That happens with our storage and with a linux based iSCSI target which
> I created for testing purposes.
>

A Linux-based iSCSI target works fine; we use it a lot for testing
environments.

Can you share the output of these commands on the host connected
to the storage server?

lsblk
multipath -ll
cat /etc/multipath.conf
vdsm-client Host getDeviceList

Nir

When I logon to the ovirt hosts I see that they are connected with the
> target LUNs (dmesg is telling that there are iscsi devices being found
> and they are getting assigned to devices in /dev/sdX ). Writing and
> reading from the devices (also accros hosts) works. Do you have some
> advice how to troubleshoot this?
>
>Regards
>  Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EJN7HHYT3B4CZQK4SOAJIHTHHETMHYKB/


[ovirt-users] Re: unable to create iSCSI storage domain

2018-06-22 Thread Christopher Cox

On 06/22/2018 10:20 AM, Bernhard Dick wrote:

Hi,

I've a problem creating an iSCSI storage domain. My hosts are running 
the current ovirt 4.2 engine-ng version. I can detect and login to the 
iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page).
That happens with our storage and with a linux based iSCSI target which 
I created for testing purposes.
When I logon to the ovirt hosts I see that they are connected with the 
target LUNs (dmesg is telling that there are iscsi devices being found 
and they are getting assigned to devices in /dev/sdX ). Writing and 
reading from the devices (also across hosts) works. Do you have some
advice on how to troubleshoot this?


Stating the obvious... you're not LUN masking them out? Normally, you'd
create an access mask that allows the oVirt hypervisors to see the LUNs.
But without that, maybe your default security policy is to deny all (?).
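
If the Linux based test target is LIO based, for instance, ACLs (LUN
masking) could be granted per initiator with targetcli; a rough sketch,
with hypothetical IQNs that you would replace with your real target and
host initiator names:

    # allow the initiator of one oVirt host to see this target's LUNs
    targetcli /iscsi/iqn.2018-06.de.example:target1/tpg1/acls \
        create iqn.1994-05.com.redhat:ovirt-host1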

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VQWW4QVPFDYN4VIO2NTWBWI5XKKECATF/