Re: [ovirt-users] iSCSI Discovery cannot detect LUN

2017-04-04 Thread Lukáš Kaplan
Yes, I agree. As I wrote in a previous message, scsi_id helped me.

For example:
# cat /etc/tgt/targets.conf

<target iqn...>
    <backing-store ...>
        scsi_id 00020001
    </backing-store>
    <backing-store ...>
        scsi_id 00020002
    </backing-store>
    initiator-address 192.168.1.0/24
</target>
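One way to confirm the fix from an initiator is to collect the WWID each iSCSI disk reports and look for duplicates. The sketch below runs the check on a hypothetical list of WWIDs (the values are stand-ins, not taken from this thread); on a real oVirt node they would come from something like `/lib/udev/scsi_id -g -u /dev/sdX` per disk.

```shell
# Hypothetical WWIDs as an initiator might report them; on a real node
# they would be gathered with e.g.:
#   for d in /dev/sd[b-z]; do /lib/udev/scsi_id -g -u "$d"; done
wwids='360000000000000000e00000000020001
360000000000000000e00000000020002
360000000000000000e00000000020001'

# A WWID listed more than once means two LUNs collide and oVirt
# cannot tell them apart.
dups=$(printf '%s\n' "$wwids" | sort | uniq -d)
if [ -n "$dups" ]; then
    echo "duplicate LUN IDs found: $dups"
else
    echo "all LUN IDs unique"
fi
```

With the sample above, the third WWID repeats the first, so the check reports a duplicate.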



--
Lukas Kaplan



2017-04-04 12:05 GMT+02:00 Yaniv Kaul <yk...@redhat.com>:

>
>
> On Sun, Apr 2, 2017 at 10:17 PM, Lukáš Kaplan <lkap...@dragon.cz> wrote:
>
>> I am using CentOS 7 and tgtd (scsi-target-utils)
>>
>
> Interesting - still doesn't explain it though. Certainly looks like some
> tgtd issue (see http://www.spinics.net/lists/linux-stgt/msg04392.html for
> example). Try changing the scsi_id in targets.conf perhaps?
>
> I recommend, btw, targetcli (LIO) instead. Fairly simple to set up.
> Y.
>
>
>>
>> # cat /etc/centos-release
>> CentOS Linux release 7.3.1611 (Core)
>>
>> # tgtd -V
>> 1.0.55
>>
>> # rpm -qi scsi-target-utils
>> Name: scsi-target-utils
>> Version : 1.0.55
>> Release : 4.el7
>> Architecture: x86_64
>> ... etc
>>
>> --
>> Lukas Kaplan
>>
>>
>>
>> 2017-03-31 21:59 GMT+02:00 Yaniv Kaul <yk...@redhat.com>:
>>
>>>
>>>
>>> On Fri, Mar 31, 2017 at 3:43 PM, Lukáš Kaplan <lkap...@dragon.cz> wrote:
>>>
>>>> I solved this issue now.
>>>>
>>>> I thought until today that an iSCSI LUN ID (WWN or WWID) is globally
>>>> unique. It is not true!
>>>> If you power on two identical Linux machines and create an iSCSI target on
>>>> them, their LUN IDs will be the same...
>>>>
>>>> 36e010001 - for the first LUN
>>>> 36e010002 - for the second LUN, etc.
>>>>
>>>
>>> How did you get to such a number with so many zeros? Usually, there's
>>> some vendor ID and so on there...
>>> What target are you using?
>>> Y.
>>>
>>>
>>>>
>>>> You have to change the LUN ID manually (and take care of its uniqueness in
>>>> your domain) in /etc/tgt/targets.conf, for example:
>>>>
>>>> 
>>>> 
>>>> <target iqn...>
>>>>     <backing-store ...>
>>>>         scsi_id 00020001
>>>>     </backing-store>
>>>>     <backing-store ...>
>>>>         scsi_id 00020002
>>>>     </backing-store>
>>>>     initiator-address 192.168.1.0/24
>>>> </target>
>>>> 
>>>>
>>>>
>>>>
>>>> --
>>>> Lukas Kaplan
>>>>
>>>>
>>>>
>>>> 2017-03-31 9:06 GMT+02:00 Lukáš Kaplan <lkap...@dragon.cz>:
>>>>
>>>>> Is it possible that the problem is conflicting LUN IDs?
>>>>> I see that the first LUN from both storage servers has the same LUN ID
>>>>>  36e010001. One storage server is connected
>>>>> to oVirt and the second is not connected, because of the described problem
>>>>> (oVirt doesn't show the LUN after login and discovery).
>>>>>
>>>>> I am using tgtd as the iSCSI target server on both servers. Both have the
>>>>> same configuration (same disks, md RAID6), but different IQNs and IP
>>>>> addresses...
>>>>>
>>>>> --
>>>>> Lukas Kaplan
>>>>>
>>>>> Dragon Internet a.s.
>>>>>
>>>>
>>>>
>>>>> 2017-03-29 12:12 GMT+02:00 Liron Aravot <lara...@redhat.com>:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Mar 29, 2017 at 12:59 PM, Eduardo Mayoral <emayo...@arsys.es>
>>>>>> wrote:
>>>>>>
>>>>>>> I had a similar problem. In my case it was related to multipath:
>>>>>>> it was not masking the LUNs correctly, so each LUN was seen multiple
>>>>>>> times (one per path), and I could not select the LUNs in the oVirt
>>>>>>> interface.
>>>>>>>
>>>>>>> Once I configured multipath correctly, everything worked like a
>>>>>>> charm.
>>>>>>>
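"Configuring multipath correctly" here usually means letting multipathd claim the iSCSI disks and merge all paths that share a WWID into one device-mapper map. A minimal sketch of the relevant `/etc/multipath.conf` defaults (generic settings, not taken from this thread):

```conf
defaults {
    # merge all paths that share a WWID into a single dm- device
    find_multipaths yes
    # keep WWID-based map names so duplicate IDs are easy to spot
    user_friendly_names no
}
```

After editing, restarting multipathd and running `multipath -ll` should show one map per LUN with all of its paths grouped under it.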
>>>>>>> Best regards,
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> Eduardo Mayoral.
>>>>>>>
>>>>>>> On 29/03/17 11:30, Lukáš Kaplan wrote:
>>>>>>>
>>>>>>> Hello all,
>>>>>>>
>>>>>>> I did all steps as I described in previo

Re: [ovirt-users] iSCSI Discovery cannot detect LUN

2017-04-02 Thread Lukáš Kaplan
I am using CentOS 7 and tgtd (scsi-target-utils)

# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)

# tgtd -V
1.0.55

# rpm -qi scsi-target-utils
Name: scsi-target-utils
Version : 1.0.55
Release : 4.el7
Architecture: x86_64
... etc

--
Lukas Kaplan



2017-03-31 21:59 GMT+02:00 Yaniv Kaul <yk...@redhat.com>:

>
>
> On Fri, Mar 31, 2017 at 3:43 PM, Lukáš Kaplan <lkap...@dragon.cz> wrote:
>
>> I solved this issue now.
>>
>> I thought until today that an iSCSI LUN ID (WWN or WWID) is globally unique.
>> It is not true!
>> If you power on two identical Linux machines and create an iSCSI target on
>> them, their LUN IDs will be the same...
>>
>> 36e010001 - for the first LUN
>> 36e010002 - for the second LUN, etc.
>>
>
> How did you get to such a number with so many zeros? Usually, there's
> some vendor ID and so on there...
> What target are you using?
> Y.
>
>
>>
>> You have to change the LUN ID manually (and take care of its uniqueness in
>> your domain) in /etc/tgt/targets.conf, for example:
>>
>> 
>> 
>> <target iqn...>
>>     <backing-store ...>
>>         scsi_id 00020001
>>     </backing-store>
>>     <backing-store ...>
>>         scsi_id 00020002
>>     </backing-store>
>>     initiator-address 192.168.1.0/24
>> </target>
>> 
>>
>>
>>
>> --
>> Lukas Kaplan
>>
>>
>>
>> 2017-03-31 9:06 GMT+02:00 Lukáš Kaplan <lkap...@dragon.cz>:
>>
>>> Is it possible that the problem is conflicting LUN IDs?
>>> I see that the first LUN from both storage servers has the same LUN ID
>>>  36e010001. One storage server is connected to
>>> oVirt and the second is not connected, because of the described problem
>>> (oVirt doesn't show the LUN after login and discovery).
>>>
>>> I am using tgtd as the iSCSI target server on both servers. Both have the
>>> same configuration (same disks, md RAID6), but different IQNs and IP
>>> addresses...
>>>
>>> --
>>> Lukas Kaplan
>>>
>>> Dragon Internet a.s.
>>>
>>
>>
>>> 2017-03-29 12:12 GMT+02:00 Liron Aravot <lara...@redhat.com>:
>>>
>>>>
>>>>
>>>> On Wed, Mar 29, 2017 at 12:59 PM, Eduardo Mayoral <emayo...@arsys.es>
>>>> wrote:
>>>>
>>>>> I had a similar problem. In my case it was related to multipath: it
>>>>> was not masking the LUNs correctly, so each LUN was seen multiple times
>>>>> (one per path), and I could not select the LUNs in the oVirt interface.
>>>>>
>>>>> Once I configured multipath correctly, everything worked like a charm.
>>>>>
>>>>> Best regards,
>>>>>
>>>>> --
>>>>>
>>>>> Eduardo Mayoral.
>>>>>
>>>>> On 29/03/17 11:30, Lukáš Kaplan wrote:
>>>>>
>>>>> Hello all,
>>>>>
>>>>> I did all the steps I described in my previous email, but no change. I
>>>>> can't see any LUN after discovery and login on the new iSCSI storage.
>>>>> (That storage is OK; if I connect it to another, older oVirt
>>>>> domain, it works...)
>>>>>
>>>>> I tried it on 3 new iSCSI targets already; all have the same problem...
>>>>>
>>>>> Can somebody help me, please?
>>>>>
>>>>> --
>>>>> Lukas Kaplan
>>>>>
>>>>>
>>>> Hi Lukas,
>>>> If you try to perform the discovery yourself, do you see the LUNs?
>>>>
>>>>>
>>>>>
>>>>> 2017-03-27 16:22 GMT+02:00 Lukáš Kaplan <lkap...@dragon.cz>:
>>>>>
>>>>>> I did the following steps:
>>>>>>
>>>>>>  - delete the target on all initiators (oVirt nodes):
>>>>>>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
>>>>>> 10.53.1.201:3260 -u
>>>>>>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
>>>>>> 10.53.1.201:3260 -o delete
>>>>>>
>>>>>>  - stop tgtd on the target
>>>>>>  - fill the storage with zeroes (dd if=/dev/zero of=/dev/md125 bs=4096
>>>>>> status=progress)
>>>>>>  - start tgtd
>>>>>>  - tried to connect it to oVirt (discovery=OK, login=OK, but cannot see
>>>>>> any LUN).
>>>>>>
>>>>>> ==

Re: [ovirt-users] iSCSI Discovery cannot detect LUN

2017-03-31 Thread Lukáš Kaplan
I solved this issue now.

I thought until today that an iSCSI LUN ID (WWN or WWID) is globally unique. It
is not true!
If you power on two identical Linux machines and create an iSCSI target on
them, their LUN IDs will be the same...

36e010001 - for the first LUN
36e010002 - for the second LUN, etc.

You have to change the LUN ID manually (and take care of its uniqueness in your
domain) in /etc/tgt/targets.conf, for example:



<target iqn...>
    <backing-store ...>
        scsi_id 00020001
    </backing-store>
    <backing-store ...>
        scsi_id 00020002
    </backing-store>
    initiator-address 192.168.1.0/24
</target>




--
Lukas Kaplan



2017-03-31 9:06 GMT+02:00 Lukáš Kaplan <lkap...@dragon.cz>:

> Is it possible that the problem is conflicting LUN IDs?
> I see that the first LUN from both storage servers has the same LUN ID
> 36e010001. One storage server is connected to
> oVirt and the second is not connected, because of the described problem
> (oVirt doesn't show the LUN after login and discovery).
>
> I am using tgtd as the iSCSI target server on both servers. Both have the
> same configuration (same disks, md RAID6), but different IQNs and IP
> addresses...
>
> --
> Lukas Kaplan
>
> Dragon Internet a.s.
>


> 2017-03-29 12:12 GMT+02:00 Liron Aravot <lara...@redhat.com>:
>
>>
>>
>> On Wed, Mar 29, 2017 at 12:59 PM, Eduardo Mayoral <emayo...@arsys.es>
>> wrote:
>>
>>> I had a similar problem. In my case it was related to multipath: it
>>> was not masking the LUNs correctly, so each LUN was seen multiple times
>>> (one per path), and I could not select the LUNs in the oVirt interface.
>>>
>>> Once I configured multipath correctly, everything worked like a charm.
>>>
>>> Best regards,
>>>
>>> --
>>>
>>> Eduardo Mayoral.
>>>
>>> On 29/03/17 11:30, Lukáš Kaplan wrote:
>>>
>>> Hello all,
>>>
>>> I did all the steps I described in my previous email, but no change. I
>>> can't see any LUN after discovery and login on the new iSCSI storage.
>>> (That storage is OK; if I connect it to another, older oVirt
>>> domain, it works...)
>>>
>>> I tried it on 3 new iSCSI targets already; all have the same problem...
>>>
>>> Can somebody help me, please?
>>>
>>> --
>>> Lukas Kaplan
>>>
>>>
>> Hi Lukas,
>> If you try to perform the discovery yourself, do you see the LUNs?
>>
>>>
>>>
>>> 2017-03-27 16:22 GMT+02:00 Lukáš Kaplan <lkap...@dragon.cz>:
>>>
>>>> I did the following steps:
>>>>
>>>>  - delete the target on all initiators (oVirt nodes):
>>>>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
>>>> 10.53.1.201:3260 -u
>>>>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
>>>> 10.53.1.201:3260 -o delete
>>>>
>>>>  - stop tgtd on the target
>>>>  - fill the storage with zeroes (dd if=/dev/zero of=/dev/md125 bs=4096
>>>> status=progress)
>>>>  - start tgtd
>>>>  - tried to connect it to oVirt (discovery=OK, login=OK, but cannot see
>>>> any LUN).
>>>>
>>>> === After that I ran these commands on one node: ===
>>>>
>>>> [root@fudi-cn1 ~]# iscsiadm -m session -o show
>>>> tcp: [1] 10.53.0.10:3260,1 iqn.2017-03.cz.dragon.ovirt:ovirtengine
>>>> (non-flash)
>>>> tcp: [11] 10.53.0.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
>>>> (non-flash)
>>>> tcp: [12] 10.53.1.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
>>>> (non-flash)
>>>>
>>>> [root@fudi-cn1 ~]# iscsiadm -m discoverydb -P1
>>>> SENDTARGETS:
>>>> DiscoveryAddress: 10.53.0.201,3260
>>>> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
>>>> Portal: 10.53.0.201:3260,1
>>>> Iface Name: default
>>>> iSNS:
>>>> No targets found.
>>>> STATIC:
>>>> Target: iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
>>>> Portal: 10.53.1.201:3260,1
>>>> Iface Name: default
>>>> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
>>>> Portal: 10.53.0.10:3260,1
>>>> Iface Name: default
>>>> Target: iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
>>>> Portal: 10.53.0.201:3260,1
>>>> Iface Name: default
>>>> FIRMWARE:
>>>> No targets found.
>>>>
>>>

Re: [ovirt-users] iSCSI Discovery cannot detect LUN

2017-03-31 Thread Lukáš Kaplan
Is it possible that the problem is conflicting LUN IDs?
I see that the first LUN from both storage servers has the same LUN ID
 36e010001. One storage server is connected to
oVirt and the second is not connected, because of the described problem
(oVirt doesn't show the LUN after login and discovery).

I am using tgtd as the iSCSI target server on both servers. Both have the
same configuration (same disks, md RAID6), but different IQNs and IP
addresses...

--
Lukáš Kaplan

Dragon Internet a.s.
Pod Loretou 883
293 06 Kosmonosy
tel: +420 326 706 166
fax: +420 326 706 154
gsm: +420 736 736 346
web: http://www.dragon.cz
e-mail: lkap...@dragon.cz

2017-03-29 12:12 GMT+02:00 Liron Aravot <lara...@redhat.com>:

>
>
> On Wed, Mar 29, 2017 at 12:59 PM, Eduardo Mayoral <emayo...@arsys.es>
> wrote:
>
>> I had a similar problem. In my case it was related to multipath: it was
>> not masking the LUNs correctly, so each LUN was seen multiple times (one per
>> path), and I could not select the LUNs in the oVirt interface.
>>
>> Once I configured multipath correctly, everything worked like a charm.
>>
>> Best regards,
>>
>> --
>>
>> Eduardo Mayoral.
>>
>> On 29/03/17 11:30, Lukáš Kaplan wrote:
>>
>> Hello all,
>>
>> I did all the steps I described in my previous email, but no change. I
>> can't see any LUN after discovery and login on the new iSCSI storage.
>> (That storage is OK; if I connect it to another, older oVirt
>> domain, it works...)
>>
>> I tried it on 3 new iSCSI targets already; all have the same problem...
>>
>> Can somebody help me, please?
>>
>> --
>> Lukas Kaplan
>>
>>
> Hi Lukas,
> If you try to perform the discovery yourself, do you see the LUNs?
>
>>
>>
>> 2017-03-27 16:22 GMT+02:00 Lukáš Kaplan <lkap...@dragon.cz>:
>>
>>> I did the following steps:
>>>
>>>  - delete the target on all initiators (oVirt nodes):
>>>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
>>> 10.53.1.201:3260 -u
>>>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
>>> 10.53.1.201:3260 -o delete
>>>
>>>  - stop tgtd on the target
>>>  - fill the storage with zeroes (dd if=/dev/zero of=/dev/md125 bs=4096
>>> status=progress)
>>>  - start tgtd
>>>  - tried to connect it to oVirt (discovery=OK, login=OK, but cannot see
>>> any LUN).
>>>
>>> === After that I ran these commands on one node: ===
>>>
>>> [root@fudi-cn1 ~]# iscsiadm -m session -o show
>>> tcp: [1] 10.53.0.10:3260,1 iqn.2017-03.cz.dragon.ovirt:ovirtengine
>>> (non-flash)
>>> tcp: [11] 10.53.0.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
>>> (non-flash)
>>> tcp: [12] 10.53.1.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
>>> (non-flash)
>>>
>>> [root@fudi-cn1 ~]# iscsiadm -m discoverydb -P1
>>> SENDTARGETS:
>>> DiscoveryAddress: 10.53.0.201,3260
>>> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
>>> Portal: 10.53.0.201:3260,1
>>> Iface Name: default
>>> iSNS:
>>> No targets found.
>>> STATIC:
>>> Target: iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
>>> Portal: 10.53.1.201:3260,1
>>> Iface Name: default
>>> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
>>> Portal: 10.53.0.10:3260,1
>>> Iface Name: default
>>> Target: iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
>>> Portal: 10.53.0.201:3260,1
>>> Iface Name: default
>>> FIRMWARE:
>>> No targets found.
>>>
>>> === On iscsi target: ===
>>> [root@fuvs-sn1 ~]# cat /proc/mdstat
>>> Personalities : [raid1] [raid6] [raid5] [raid4]
>>> md125 : active raid6 sdl1[11] sdk1[10] sdj1[9] sdi1[8] sdh1[7] sdg1[6]
>>> sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
>>>   9766302720 blocks super 1.2 level 6, 512k chunk, algorithm 2
>>> [12/12] [UUUUUUUUUUUU]
>>>   bitmap: 0/8 pages [0KB], 65536KB chunk
>>> ...etc...
>>>
>>>
>>> [root@fuvs-sn1 ~]# cat /etc/tgt/targets.conf
>>> default-driver iscsi
>>>
>>> <target iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T>
>>>     # provided device as an iSCSI target
>>>     backing-store /dev/md125
>>>     # iSCSI initiator IP addresses allowed to connect
>>>     #initiator-address 10.53.0.0/23
>>> </target>
>>>
>>> --
>>> Lukas Kaplan
>>>
>>> 2017-03-25 12:36 GMT+01:00 Lukas

Re: [ovirt-users] iSCSI Discovery cannot detect LUN

2017-03-29 Thread Lukáš Kaplan
Hello all,

I did all the steps I described in my previous email, but no change. I can't
see any LUN after discovery and login on the new iSCSI storage.
(That storage is OK; if I connect it to another, older oVirt
domain, it works...)

I tried it on 3 new iSCSI targets already; all have the same problem...

Can somebody help me, please?

--
Lukas Kaplan


2017-03-27 16:22 GMT+02:00 Lukáš Kaplan <lkap...@dragon.cz>:

> I did the following steps:
>
>  - delete the target on all initiators (oVirt nodes):
>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
> 10.53.1.201:3260 -u
>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
> 10.53.1.201:3260 -o delete
>
>  - stop tgtd on the target
>  - fill the storage with zeroes (dd if=/dev/zero of=/dev/md125 bs=4096
> status=progress)
>  - start tgtd
>  - tried to connect it to oVirt (discovery=OK, login=OK, but cannot see any
> LUN).
>
> === After that I ran these commands on one node: ===
>
> [root@fudi-cn1 ~]# iscsiadm -m session -o show
> tcp: [1] 10.53.0.10:3260,1 iqn.2017-03.cz.dragon.ovirt:ovirtengine
> (non-flash)
> tcp: [11] 10.53.0.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
> (non-flash)
> tcp: [12] 10.53.1.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
> (non-flash)
>
> [root@fudi-cn1 ~]# iscsiadm -m discoverydb -P1
> SENDTARGETS:
> DiscoveryAddress: 10.53.0.201,3260
> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
> Portal: 10.53.0.201:3260,1
> Iface Name: default
> iSNS:
> No targets found.
> STATIC:
> Target: iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
> Portal: 10.53.1.201:3260,1
> Iface Name: default
> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
> Portal: 10.53.0.10:3260,1
> Iface Name: default
> Target: iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
> Portal: 10.53.0.201:3260,1
> Iface Name: default
> FIRMWARE:
> No targets found.
>
> === On iscsi target: ===
> [root@fuvs-sn1 ~]# cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md125 : active raid6 sdl1[11] sdk1[10] sdj1[9] sdi1[8] sdh1[7] sdg1[6]
> sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
>   9766302720 blocks super 1.2 level 6, 512k chunk, algorithm 2 [12/12]
> [UUUUUUUUUUUU]
>   bitmap: 0/8 pages [0KB], 65536KB chunk
> ...etc...
>
>
> [root@fuvs-sn1 ~]# cat /etc/tgt/targets.conf
> default-driver iscsi
>
> <target iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T>
>     # provided device as an iSCSI target
>     backing-store /dev/md125
>     # iSCSI initiator IP addresses allowed to connect
>     #initiator-address 10.53.0.0/23
> </target>
>
> --
> Lukas Kaplan
>
> 2017-03-25 12:36 GMT+01:00 Lukas Kaplan <lkap...@dragon.cz>:
>
>> What could he mean by the mapping?
>>
>> Otherwise I can try overwriting the whole storage with zeros using dd.
>>
>> What do you think?
>>
>> Sent from my iPhone
>>
>> Begin forwarded message:
>>
>> *From:* Yaniv Kaul <yk...@redhat.com>
>> *Date:* 24 March 2017 23:25:21 CET
>> *To:* Lukáš Kaplan <lkap...@dragon.cz>
>> *Cc:* users <users@ovirt.org>
>> *Subject:* *Re: [ovirt-users] iSCSI Discovery cannot detect LUN*
>>
>>
>>
>> On Fri, Mar 24, 2017 at 1:34 PM, Lukáš Kaplan <lkap...@dragon.cz> wrote:
>>
>>> Hello all,
>>>
>>> please, do you have any experience troubleshooting adding an iSCSI
>>> domain to oVirt 4.1.1?
>>>
>>> I am struggling with this issue now:
>>>
>>> 1) I have successfully installed an oVirt 4.1.1 environment with a
>>> self-hosted engine, 3 nodes and 3 storage domains (an iSCSI master domain,
>>> iSCSI for the hosted engine and an NFS ISO domain). Everything is working
>>> now.
>>>
>>> 2) But when I want to add a new iSCSI domain, I can discover it and I can
>>> log in, but I can't see any LUN on that storage. (I had the same problem in
>>> oVirt 4.1.0, so I upgraded to 4.1.1.)
>>>
>>
>> Are you sure mappings are correct?
>> Can you ensure the LUN is empty?
>> Y.
>>
>>
>>>
>>> 3) Then I tried to add this storage to another oVirt environment (oVirt
>>> 3.6) and there is no problem: I can see the LUN on that storage and I can
>>> connect it to oVirt.
>>>
>>> I tried to examine vdsm.log, but it is very detailed and unreadable for
>>> me :-/
>>>
>>> Thank you in advance; have a nice day,
>>> --
>>> Lukas Kaplan
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Discovery cannot detect LUN

2017-03-27 Thread Lukáš Kaplan
I did the following steps:

 - delete the target on all initiators (oVirt nodes):
 iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
10.53.1.201:3260 -u
 iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
10.53.1.201:3260 -o delete

 - stop tgtd on the target
 - fill the storage with zeroes (dd if=/dev/zero of=/dev/md125 bs=4096
status=progress)
 - start tgtd
 - tried to connect it to oVirt (discovery=OK, login=OK, but cannot see any
LUN).
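Yaniv's earlier question ("Can you ensure the LUN is empty?") can be answered mechanically after the dd pass by comparing the start of the device against /dev/zero. A sketch of that check, using a scratch file in place of /dev/md125 (the real check would point `dev` at the block device):

```shell
# Stand-in for the block device; on the storage server this would be
# dev=/dev/md125 (read-only access is enough for the check).
dev=$(mktemp)

# The same zero-fill as in the steps above, just 16 KiB of it.
dd if=/dev/zero of="$dev" bs=4096 count=4 status=none

# cmp -s exits 0 only if the first 16384 bytes are identical to /dev/zero.
if cmp -s -n 16384 "$dev" /dev/zero; then
    state=zeroed
else
    state=dirty
fi
echo "device start is $state"
rm -f "$dev"
```

If the check reports "dirty", the device still carries old metadata (for example a stale storage-domain signature) that can keep oVirt from offering the LUN.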

=== After that I ran these commands on one node: ===

[root@fudi-cn1 ~]# iscsiadm -m session -o show
tcp: [1] 10.53.0.10:3260,1 iqn.2017-03.cz.dragon.ovirt:ovirtengine
(non-flash)
tcp: [11] 10.53.0.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
(non-flash)
tcp: [12] 10.53.1.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
(non-flash)

[root@fudi-cn1 ~]# iscsiadm -m discoverydb -P1
SENDTARGETS:
DiscoveryAddress: 10.53.0.201,3260
Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
Portal: 10.53.0.201:3260,1
Iface Name: default
iSNS:
No targets found.
STATIC:
Target: iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
Portal: 10.53.1.201:3260,1
Iface Name: default
Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
Portal: 10.53.0.10:3260,1
Iface Name: default
Target: iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
Portal: 10.53.0.201:3260,1
Iface Name: default
FIRMWARE:
No targets found.
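When the GUI shows nothing, it helps to reduce output like the above to bare target/portal pairs and diff them between a working and a failing node. A small awk pass does it; the excerpt inlined below is a hypothetical capture of `iscsiadm -m discoverydb -P1`, trimmed to the STATIC section:

```shell
# Hypothetical, trimmed capture of `iscsiadm -m discoverydb -P1`.
out='STATIC:
Target: iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
Portal: 10.53.1.201:3260,1
Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
Portal: 10.53.0.10:3260,1'

# Remember the last Target: line and print it with each Portal: line.
pairs=$(printf '%s\n' "$out" | awk '/^Target:/ {t=$2} /^Portal:/ {print t, $2}')
printf '%s\n' "$pairs"
```

On a live node you would pipe `iscsiadm -m discoverydb -P1` straight into the same awk expression instead of the inlined sample.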

=== On iscsi target: ===
[root@fuvs-sn1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md125 : active raid6 sdl1[11] sdk1[10] sdj1[9] sdi1[8] sdh1[7] sdg1[6]
sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
  9766302720 blocks super 1.2 level 6, 512k chunk, algorithm 2 [12/12]
[UUUUUUUUUUUU]
  bitmap: 0/8 pages [0KB], 65536KB chunk
...etc...


[root@fuvs-sn1 ~]# cat /etc/tgt/targets.conf
default-driver iscsi


<target iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T>
    # provided device as an iSCSI target
    backing-store /dev/md125
    # iSCSI initiator IP addresses allowed to connect
    #initiator-address 10.53.0.0/23
</target>
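To see what tgtd is actually exporting (and with which SCSI IDs), `tgtadm --lld iscsi --mode target --op show` on the target host lists every target and its LUNs. The snippet below parses a hypothetical excerpt of that output; the exact field layout can vary between tgt versions, so treat it as a sketch:

```shell
# Hypothetical excerpt of `tgtadm --lld iscsi --mode target --op show`.
show='Target 1: iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
    LUN: 0
        SCSI ID: IET 00010000
    LUN: 1
        SCSI ID: IET 00020001'

# Pull out just the per-LUN SCSI IDs, to compare across storage servers.
ids=$(printf '%s\n' "$show" | awk '/SCSI ID:/ {print $NF}')
printf '%s\n' "$ids"
```

Running the same extraction on both storage servers and comparing the lists is a quick way to spot the colliding IDs described in this thread.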


--
Lukas Kaplan

2017-03-25 12:36 GMT+01:00 Lukas Kaplan <lkap...@dragon.cz>:

> What could he mean by the mapping?
>
> Otherwise I can try overwriting the whole storage with zeros using dd.
>
> What do you think?
>
> Sent from my iPhone
>
> Begin forwarded message:
>
> *From:* Yaniv Kaul <yk...@redhat.com>
> *Date:* 24 March 2017 23:25:21 CET
> *To:* Lukáš Kaplan <lkap...@dragon.cz>
> *Cc:* users <users@ovirt.org>
> *Subject:* *Re: [ovirt-users] iSCSI Discovery cannot detect LUN*
>
>
>
> On Fri, Mar 24, 2017 at 1:34 PM, Lukáš Kaplan <lkap...@dragon.cz> wrote:
>
>> Hello all,
>>
>> please, do you have any experience troubleshooting adding an iSCSI
>> domain to oVirt 4.1.1?
>>
>> I am struggling with this issue now:
>>
>> 1) I have successfully installed an oVirt 4.1.1 environment with a
>> self-hosted engine, 3 nodes and 3 storage domains (an iSCSI master domain,
>> iSCSI for the hosted engine and an NFS ISO domain). Everything is working
>> now.
>>
>> 2) But when I want to add a new iSCSI domain, I can discover it and I can
>> log in, but I can't see any LUN on that storage. (I had the same problem in
>> oVirt 4.1.0, so I upgraded to 4.1.1.)
>>
>
> Are you sure mappings are correct?
> Can you ensure the LUN is empty?
> Y.
>
>
>>
>> 3) Then I tried to add this storage to another oVirt environment (oVirt
>> 3.6) and there is no problem: I can see the LUN on that storage and I can
>> connect it to oVirt.
>>
>> I tried to examine vdsm.log, but it is very detailed and unreadable for me
>> :-/
>>
>> Thank you in advance; have a nice day,
>> --
>> Lukas Kaplan
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-03-24 Thread Lukáš Kaplan
Hello Nelson,

I did the same thing today too and it was successful, but I used different
steps. Check these:
https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
http://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/

Hope it helps you.

Have a nice day,

--
Lukas Kaplan


2017-03-24 15:11 GMT+01:00 Nelson Lameiras :

> Hello,
>
> When upgrading my test setup from 4.0 to 4.1, my engine VM lost its
> console (from SPICE to None in the GUI)
>
> My test setup :
> 2 manually built hosts using centos 7.3, ovirt 4.1
> 1 manually built hosted engine centos 7.3, oVirt 4.1.0.4-el7, accessible
> with SPICE console via GUI
>
> I updated ovirt-engine from 4.1.0 to 4.1.1 by doing on engine :
> - yum update
> - engine-setup
> - reboot engine
>
> When accessing 4.1.1 GUI, Graphics is set to "None" on "Virtual Machines"
> page, with "console button" greyed out (all other VMs have the same
> Graphics set to the same value as before)
> I tried to edit the engine VM settings, and the console options are the same
> as before (SPICE, QXL).
>
> I'm hoping this is not a new feature, since if we lose network on the
> engine, the console is the only way to debug...
>
> Is this a bug?
>
> ps. I was able to reproduce this bug 2 times
>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] iSCSI Discovery cannot detect LUN

2017-03-24 Thread Lukáš Kaplan
Hello all,

please, do you have any experience troubleshooting adding an iSCSI
domain to oVirt 4.1.1?

I am struggling with this issue now:

1) I have successfully installed an oVirt 4.1.1 environment with a self-hosted
engine, 3 nodes and 3 storage domains (an iSCSI master domain, iSCSI for the
hosted engine and an NFS ISO domain). Everything is working now.

2) But when I want to add a new iSCSI domain, I can discover it and I can
log in, but I can't see any LUN on that storage. (I had the same problem in
oVirt 4.1.0, so I upgraded to 4.1.1.)

3) Then I tried to add this storage to another oVirt environment (oVirt
3.6) and there is no problem: I can see the LUN on that storage and I can
connect it to oVirt.

I tried to examine vdsm.log, but it is very detailed and unreadable for me
:-/

Thank you in advance; have a nice day,
--
Lukas Kaplan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-03-24 Thread Lukáš Kaplan
Hello Nelson,

I have the hosted-engine graphics console set to VNC. I am using ssh directly
to the engine. I can't say whether VNC is working or not, because I have no
client for it... (Tested on Debian Jessie with virt-viewer 1.0 and 5.0 - does
not work.)

--
Lukas Kaplan



2017-03-24 15:32 GMT+01:00 Nelson Lameiras <nelson.lamei...@lyra-network.com
>:

> Hello Lukas,
>
> Thanks for you feedback.
> I did something very similar to procedures you sent me.
>
> Can you confirm that your HostedEngine VM still has its SPICE console
> available and working?
> (otherwise my update went ok)
>
> cordialement, regards,
>
> <https://www.lyra-network.com/>
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu <https://payzen.eu>
> <https://www.youtube.com/channel/UCrVl1CO_Jlu3KbiRH-tQ_vA>
> <https://www.linkedin.com/company/lyra-network_2>
> <https://twitter.com/LyraNetwork>
> <https://payzen.eu>
> ------
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> --
> *From: *"Lukáš Kaplan" <lkap...@dragon.cz>
> *To: *"Nelson Lameiras" <nelson.lamei...@lyra-network.com>
> *Cc: *"users" <users@ovirt.org>
> *Sent: *Friday, March 24, 2017 3:22:44 PM
> *Subject: *Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more
> engine console available
>
> Hello Nelson,
> I did the same thing today too and it was successful, but I used different
> steps. Check these:
> https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
> http://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/
>
> Hope it helps you.
>
> Have a nice day,
>
> --
> Lukas Kaplan
>
>
> 2017-03-24 15:11 GMT+01:00 Nelson Lameiras <nelson.lameiras@lyra-network.com>:
>
>> Hello,
>>
>> When upgrading my test setup from 4.0 to 4.1, my engine VM lost its
>> console (from SPICE to None in the GUI)
>>
>> My test setup :
>> 2 manually built hosts using centos 7.3, ovirt 4.1
>> 1 manually built hosted engine centos 7.3, oVirt 4.1.0.4-el7, accessible
>> with SPICE console via GUI
>>
>> I updated ovirt-engine from 4.1.0 to 4.1.1 by doing on engine :
>> - yum update
>> - engine-setup
>> - reboot engine
>>
>> When accessing 4.1.1 GUI, Graphics is set to "None" on "Virtual Machines"
>> page, with "console button" greyed out (all other VMs have the same
>> Graphics set to the same value as before)
>> I tried to edit the engine VM settings, and the console options are the
>> same as before (SPICE, QXL).
>>
>> I'm hoping this is not a new feature, since if we lose network on the
>> engine, the console is the only way to debug...
>>
>> Is this a bug?
>>
>> ps. I was able to reproduce this bug 2 times
>>
>> cordialement, regards,
>>
>> <https://www.lyra-network.com/>
>> Nelson LAMEIRAS
>> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
>> Tel: +33 5 32 09 09 70
>> nelson.lamei...@lyra-network.com
>> www.lyra-network.com | www.payzen.eu <https://payzen.eu>
>> <https://www.youtube.com/channel/UCrVl1CO_Jlu3KbiRH-tQ_vA>
>> <https://www.linkedin.com/company/lyra-network_2>
>> <https://twitter.com/LyraNetwork>
>> <https://payzen.eu>
>> --
>> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users