Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
On Tue, Jul 25, 2017 at 6:25 PM, Vinícius Ferrão  wrote:
> Bug opened here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1474904


Thanks! Let's continue the discussion in the bug.

>
> Thanks,
> V.
>
> On 25 Jul 2017, at 12:08, Vinícius Ferrão  wrote:
>
> Hello Maor,
>
> Thanks for answering and looking deeper into this case. You’re welcome to
> connect to my machine since it’s reachable over the internet. I’ll be
> opening a ticket shortly. Just to post an update here:
>
> I’ve done what you asked, but since I’m running Self Hosted Engine, I lost
> the connection to HE, here’s the CLI:
>
>
>
> Last login: Thu Jul 20 02:43:50 2017 from 172.31.2.3
>
>  node status: OK
>  See `nodectl check` for more information
>
> Admin Console: https://192.168.11.3:9090/ or https://192.168.12.3:9090/ or
> https://146.164.37.103:9090/
>
> [root@ovirt3 ~]# iscsiadm -m session -u
> Logging out of session [sid: 1, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, portal: 192.168.12.14,3260]
> Logging out of session [sid: 4, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.12.14,3260]
> Logging out of session [sid: 7, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.12.14,3260]
> Logging out of session [sid: 5, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.11.14,3260]
> Logging out of session [sid: 6, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.11.14,3260]
> Logout of [sid: 1, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he,
> portal: 192.168.12.14,3260] successful.
> Logout of [sid: 4, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.12.14,3260] successful.
> Logout of [sid: 7, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.12.14,3260] successful.
> Logout of [sid: 5, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.11.14,3260] successful.
> Logout of [sid: 6, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.11.14,3260] successful.
> [root@ovirt3 ~]# service iscsid stop
> Redirecting to /bin/systemctl stop  iscsid.service
> Warning: Stopping iscsid.service, but it can still be activated by:
>  iscsid.socket
>
> [root@ovirt3 ~]# mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>
> [root@ovirt3 ~]# service iscsid start
> Redirecting to /bin/systemctl start  iscsid.service
>
> And finally:
>
> [root@ovirt3 ~]# hosted-engine --vm-status
> .
> .
> .
>
> It just hangs.
>
> Thanks,
> V.
>
> On 25 Jul 2017, at 05:54, Maor Lipchuk  wrote:
>
> Hi Vinícius,
>
> I was trying to reproduce your scenario and also encountered this issue,
> so please disregard my last comment. Can you please open a bug for this so
> we can investigate it properly?
>
> Thanks,
> Maor
>
>
> On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk  wrote:
>
> Hi Vinícius,
>
> For some reason it looks like both of your networks are connected to the
> same IPs.
>
> Based on the VDSM logs:
> u'connectionParams': [
>    {
>       u'netIfaceName': u'eno3.11',
>       u'connection': u'192.168.11.14',
>    },
>    {
>       u'netIfaceName': u'eno3.11',
>       u'connection': u'192.168.12.14',
>    },
>    {
>       u'netIfaceName': u'eno4.12',
>       u'connection': u'192.168.11.14',
>    },
>    {
>       u'netIfaceName': u'eno4.12',
>       u'connection': u'192.168.12.14',
>    }
> ],
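
For comparison, with fully segregated storage networks each interface would be
paired only with the portal on its own subnet. An illustrative layout (this
assumes VLAN 11 / eno3.11 carries the 192.168.11.0/24 network and VLAN 12 /
eno4.12 carries 192.168.12.0/24, which the logs above do not confirm):

   u'connectionParams': [
      {
         u'netIfaceName': u'eno3.11',
         u'connection': u'192.168.11.14',
      },
      {
         u'netIfaceName': u'eno4.12',
         u'connection': u'192.168.12.14',
      }
   ],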
>
> Can you try to reconnect to the iSCSI storage domain after re-initializing
> iscsiadm on your host? The steps are below; a consolidated sketch of them
> follows the numbered list.
>
> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it
>
> 2. On your VDSM host, log out from the open iSCSI sessions related to this
> storage domain; if that is your only iSCSI storage domain, log out from all
> sessions:
>  "iscsiadm -m session -u"
>
> 3. Stop the iscsid service:
>  "service iscsid stop"
>
> 4. Move your network interfaces configured in the iscsiadm to a
> temporary folder:
>   mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>
> 5. Start the iscsid service
>  "service iscsid start"
>
> Regards,
> Maor and Benny
>
> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz  wrote:
>
> Hi,
>
>
> On 19.07.2017 at 04:52, Vinícius Ferrão wrote:
>
> I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
> trying to enable the feature without success too.
>
> Here’s what I’ve done, step-by-step.
>
> 1. Installed oVirt Node 4.1.3 with the following network settings:
>
> eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
> eno3 with 9216 MTU.
> eno4 with 9216 MTU.
> vlan11 on eno3 with 9216 MTU and fixed IP addresses.
> vlan12 on eno4 with 9216 MTU and fixed IP addresses.
>
> eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on
> different switches.
>
>
>
> This is the point: the oVirt implementation of iSCSI-Bonding assumes that
> all network interfaces in the bond can connect/reach all targets, including
> those in the other net(s). The fact that you use separate, isolated networks
> means that this is not the case in your setup (and not in mine).

Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Vinícius Ferrão
Bug opened here:
https://bugzilla.redhat.com/show_bug.cgi?id=1474904

Thanks,
V.

On 25 Jul 2017, at 12:08, Vinícius Ferrão wrote:

Hello Maor,

Thanks for answering and looking deeper into this case. You’re welcome to connect
to my machine since it’s reachable over the internet. I’ll be opening a ticket
shortly. Just to post an update here:

I’ve done what you asked, but since I’m running Self Hosted Engine, I lost the 
connection to HE, here’s the CLI:



Last login: Thu Jul 20 02:43:50 2017 from 172.31.2.3

 node status: OK
 See `nodectl check` for more information

Admin Console: https://192.168.11.3:9090/ or https://192.168.12.3:9090/ or 
https://146.164.37.103:9090/

[root@ovirt3 ~]# iscsiadm -m session -u
Logging out of session [sid: 1, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, portal: 192.168.12.14,3260]
Logging out of session [sid: 4, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260]
Logging out of session [sid: 7, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260]
Logging out of session [sid: 5, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260]
Logging out of session [sid: 6, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260]
Logout of [sid: 1, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 4, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 7, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 5, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.11.14,3260] successful.
Logout of [sid: 6, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.11.14,3260] successful.
[root@ovirt3 ~]# service iscsid stop
Redirecting to /bin/systemctl stop  iscsid.service
Warning: Stopping iscsid.service, but it can still be activated by:
 iscsid.socket

[root@ovirt3 ~]# mv /var/lib/iscsi/ifaces/* /tmp/ifaces

[root@ovirt3 ~]# service iscsid start
Redirecting to /bin/systemctl start  iscsid.service

And finally:

[root@ovirt3 ~]# hosted-engine --vm-status
.
.
.

It just hangs.

Thanks,
V.

On 25 Jul 2017, at 05:54, Maor Lipchuk wrote:

Hi Vinícius,

I was trying to reproduce your scenario and also encountered this issue, so
please disregard my last comment. Can you please open a bug for this so we
can investigate it properly?

Thanks,
Maor


On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk wrote:
Hi Vinícius,

For some reason it looks like both of your networks are connected to the same IPs.

Based on the VDSM logs:
u'connectionParams': [
   {
      u'netIfaceName': u'eno3.11',
      u'connection': u'192.168.11.14',
   },
   {
      u'netIfaceName': u'eno3.11',
      u'connection': u'192.168.12.14',
   },
   {
      u'netIfaceName': u'eno4.12',
      u'connection': u'192.168.11.14',
   },
   {
      u'netIfaceName': u'eno4.12',
      u'connection': u'192.168.12.14',
   }
],

Can you try to reconnect to the iSCSI storage domain after re-initializing
iscsiadm on your host?

1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it

2. On your VDSM host, log out from the open iSCSI sessions related to this
storage domain; if that is your only iSCSI storage domain, log out from all
sessions:
 "iscsiadm -m session -u"

3. Stop the iscsid service:
 "service iscsid stop"

4. Move your network interfaces configured in the iscsiadm to a
temporary folder:
  mv /var/lib/iscsi/ifaces/* /tmp/ifaces

5. Start the iscsid service
 "service iscsid start"

Regards,
Maor and Benny

On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz wrote:
Hi,


On 19.07.2017 at 04:52, Vinícius Ferrão wrote:

I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
trying to enable the feature without success too.

Here’s what I’ve done, step-by-step.

1. Installed oVirt Node 4.1.3 with the following network settings:

eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
eno3 with 9216 MTU.
eno4 with 9216 MTU.
vlan11 on eno3 with 9216 MTU and fixed IP addresses.
vlan12 on eno4 with 9216 MTU and fixed IP addresses.

eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on
different switches (a quick reachability/MTU check is sketched just below).
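
A simple way to check that each path is reachable only through its own
interface and that jumbo frames survive end to end. This sketch assumes that
vlan11 (eno3.11) carries the 192.168.11.0/24 storage network, that vlan12
(eno4.12) carries 192.168.12.0/24, and that the storage side uses the same
9216 MTU; adjust if the mapping differs:

   # Each interface should reach only the portal on its own subnet
   ping -I eno3.11 -c 3 192.168.11.14
   ping -I eno4.12 -c 3 192.168.12.14

   # Jumbo-frame check: 9216 MTU minus 28 bytes of IP/ICMP headers,
   # with fragmentation disallowed
   ping -I eno3.11 -M do -s 9188 -c 3 192.168.11.14
   ping -I eno4.12 -M do -s 9188 -c 3 192.168.12.14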


This is the point: the oVirt implementation of iSCSI-Bonding assumes that
all network interfaces in the bond can connect/reach all targets, including
those in the other net(s). The fact that you use separate, isolated networks
means that this is not the case in your setup (and not in mine).

I am not sure if this is a bug, a design flaw or a feature, but as a result
of this, oVirt's iSCSI-Bonding does not work for us.

Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Vinícius Ferrão
Hello Maor,

Thanks for answering and looking deeper into this case. You’re welcome to connect
to my machine since it’s reachable over the internet. I’ll be opening a ticket
shortly. Just to post an update here:

I’ve done what you asked, but since I’m running Self Hosted Engine, I lost the 
connection to HE, here’s the CLI:



Last login: Thu Jul 20 02:43:50 2017 from 172.31.2.3

  node status: OK
  See `nodectl check` for more information

Admin Console: https://192.168.11.3:9090/ or https://192.168.12.3:9090/ or 
https://146.164.37.103:9090/

[root@ovirt3 ~]# iscsiadm -m session -u
Logging out of session [sid: 1, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, portal: 192.168.12.14,3260]
Logging out of session [sid: 4, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260]
Logging out of session [sid: 7, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260]
Logging out of session [sid: 5, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260]
Logging out of session [sid: 6, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260]
Logout of [sid: 1, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 4, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 7, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 5, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.11.14,3260] successful.
Logout of [sid: 6, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.11.14,3260] successful.
[root@ovirt3 ~]# service iscsid stop
Redirecting to /bin/systemctl stop  iscsid.service
Warning: Stopping iscsid.service, but it can still be activated by:
  iscsid.socket

[root@ovirt3 ~]# mv /var/lib/iscsi/ifaces/* /tmp/ifaces

[root@ovirt3 ~]# service iscsid start
Redirecting to /bin/systemctl start  iscsid.service

And finally:

[root@ovirt3 ~]# hosted-engine --vm-status
.
.
.

It just hangs.

Thanks,
V.
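
A likely reason the status command hangs: the blanket "iscsiadm -m session -u"
above also logged the host out of the hosted-engine target
(iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he), so the engine lost its
storage. A recovery sketch, assuming the target and portal shown in the
transcript and a standard hosted-engine HA setup; this is a sketch only, not
a verified procedure:

   # Log back in to the hosted-engine target on its portal
   iscsiadm -m node -T iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he \
       -p 192.168.12.14:3260 --login

   # Let the HA services re-detect the hosted-engine storage
   systemctl restart ovirt-ha-broker ovirt-ha-agent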

> On 25 Jul 2017, at 05:54, Maor Lipchuk  wrote:
> 
> Hi Vinícius,
> 
> I was trying to reproduce your scenario and also encountered this issue,
> so please disregard my last comment. Can you please open a bug for this so
> we can investigate it properly?
> 
> Thanks,
> Maor
> 
> 
> On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk  wrote:
>> Hi Vinícius,
>> 
>> For some reason it looks like both of your networks are connected to the
>> same IPs.
>>
>> Based on the VDSM logs:
>>  u'connectionParams': [
>>     {
>>        u'netIfaceName': u'eno3.11',
>>        u'connection': u'192.168.11.14',
>>     },
>>     {
>>        u'netIfaceName': u'eno3.11',
>>        u'connection': u'192.168.12.14',
>>     },
>>     {
>>        u'netIfaceName': u'eno4.12',
>>        u'connection': u'192.168.11.14',
>>     },
>>     {
>>        u'netIfaceName': u'eno4.12',
>>        u'connection': u'192.168.12.14',
>>     }
>>  ],
>> 
>> Can you try to reconnect to the iSCSI storage domain after re-initializing
>> iscsiadm on your host?
>> 
>> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it
>> 
>> 2. On your VDSM host, log out from the open iSCSI sessions related to this
>> storage domain; if that is your only iSCSI storage domain, log out from
>> all sessions:
>>   "iscsiadm -m session -u"
>> 
>> 3. Stop the iscsid service:
>>   "service iscsid stop"
>> 
>> 4. Move your network interfaces configured in the iscsiadm to a
>> temporary folder:
>>mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>> 
>> 5. Start the iscsid service
>>   "service iscsid start"
>> 
>> Regards,
>> Maor and Benny
>> 
>> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz  wrote:
>>> Hi,
>>> 
>>> 
>>> On 19.07.2017 at 04:52, Vinícius Ferrão wrote:
>>> 
 I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
 trying to enable the feature without success too.
 
 Here’s what I’ve done, step-by-step.
 
 1. Installed oVirt Node 4.1.3 with the following network settings:
 
 eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
 eno3 with 9216 MTU.
 eno4 with 9216 MTU.
 vlan11 on eno3 with 9216 MTU and fixed IP addresses.
 vlan12 on eno4 with 9216 MTU and fixed IP addresses.
 
 eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on
 different switches.
>>> 
>>> 
>>> This is the point: the oVirt implementation of iSCSI-Bonding assumes that
>>> all network interfaces in the bond can connect/reach all targets, including
>>> those in the other net(s). The fact that you use separate, isolated networks
>>> means that this is not the case in your setup (and not in mine).
>>> 
>>> I am not sure if this is a bug, a design flaw or a feature, but as a result
>>> of this, oVirt's iSCSI-Bonding does not work for us.

Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
Can you please try to connect to your iSCSI server using iscsiadm from
your VDSM host, for example like so:
   iscsiadm -m node -T iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio \
       -I eno3.11 -p 192.168.12.14:3260 --login
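
If the login succeeds, the session-to-interface binding and the resulting
multipath layout can be verified with the standard open-iscsi and
device-mapper-multipath tools (a suggested follow-up check, not part of the
original mail):

   # Show each session together with the iface it is bound to
   iscsiadm -m session -P 1

   # The LUN should appear with one path per portal/interface pair
   multipath -ll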

On Tue, Jul 25, 2017 at 11:54 AM, Maor Lipchuk  wrote:
> Hi Vinícius,
>
> I was trying to reproduce your scenario and also encountered this issue,
> so please disregard my last comment. Can you please open a bug for this so
> we can investigate it properly?
>
> Thanks,
> Maor
>
>
> On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk  wrote:
>> Hi Vinícius,
>>
>> For some reason it looks like both of your networks are connected to the
>> same IPs.
>>
>> Based on the VDSM logs:
>>   u'connectionParams': [
>>      {
>>         u'netIfaceName': u'eno3.11',
>>         u'connection': u'192.168.11.14',
>>      },
>>      {
>>         u'netIfaceName': u'eno3.11',
>>         u'connection': u'192.168.12.14',
>>      },
>>      {
>>         u'netIfaceName': u'eno4.12',
>>         u'connection': u'192.168.11.14',
>>      },
>>      {
>>         u'netIfaceName': u'eno4.12',
>>         u'connection': u'192.168.12.14',
>>      }
>>   ],
>>
>> Can you try to reconnect to the iSCSI storage domain after re-initializing
>> iscsiadm on your host?
>>
>> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it
>>
>> 2. On your VDSM host, log out from the open iSCSI sessions related to this
>> storage domain; if that is your only iSCSI storage domain, log out from
>> all sessions:
>>"iscsiadm -m session -u"
>>
>> 3. Stop the iscsid service:
>>"service iscsid stop"
>>
>> 4. Move your network interfaces configured in the iscsiadm to a
>> temporary folder:
>> mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>>
>> 5. Start the iscsid service
>>"service iscsid start"
>>
>> Regards,
>> Maor and Benny
>>
>> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz  wrote:
>>> Hi,
>>>
>>>
>>> On 19.07.2017 at 04:52, Vinícius Ferrão wrote:
>>>
 I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
 trying to enable the feature without success too.

 Here’s what I’ve done, step-by-step.

 1. Installed oVirt Node 4.1.3 with the following network settings:

 eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
 eno3 with 9216 MTU.
 eno4 with 9216 MTU.
 vlan11 on eno3 with 9216 MTU and fixed IP addresses.
 vlan12 on eno4 with 9216 MTU and fixed IP addresses.

 eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on
 different switches.
>>>
>>>
>>> This is the point: the oVirt implementation of iSCSI-Bonding assumes that
>>> all network interfaces in the bond can connect/reach all targets, including
>>> those in the other net(s). The fact that you use separate, isolated networks
>>> means that this is not the case in your setup (and not in mine).
>>>
>>> I am not sure if this is a bug, a design flaw or a feature, but as a result
>>> of this, oVirt's iSCSI-Bonding does not work for us.
>>>
>>> Please see my mail from yesterday for a workaround.
>>>
>>> cu,
>>> Uwe


Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
Hi Vinícius,

I was trying to reproduce your scenario and also encountered this issue, so
please disregard my last comment. Can you please open a bug for this so we
can investigate it properly?

Thanks,
Maor


On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk  wrote:
> Hi Vinícius,
>
> For some reason it looks like both of your networks are connected to the
> same IPs.
>
> Based on the VDSM logs:
>   u'connectionParams': [
>      {
>         u'netIfaceName': u'eno3.11',
>         u'connection': u'192.168.11.14',
>      },
>      {
>         u'netIfaceName': u'eno3.11',
>         u'connection': u'192.168.12.14',
>      },
>      {
>         u'netIfaceName': u'eno4.12',
>         u'connection': u'192.168.11.14',
>      },
>      {
>         u'netIfaceName': u'eno4.12',
>         u'connection': u'192.168.12.14',
>      }
>   ],
>
> Can you try to reconnect to the iSCSI storage domain after re-initializing
> iscsiadm on your host?
>
> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it
>
> 2. On your VDSM host, log out from the open iSCSI sessions related to this
> storage domain; if that is your only iSCSI storage domain, log out from
> all sessions:
>"iscsiadm -m session -u"
>
> 3. Stop the iscsid service:
>"service iscsid stop"
>
> 4. Move your network interfaces configured in the iscsiadm to a
> temporary folder:
> mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>
> 5. Start the iscsid service
>"service iscsid start"
>
> Regards,
> Maor and Benny
>
> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz  wrote:
>> Hi,
>>
>>
>> On 19.07.2017 at 04:52, Vinícius Ferrão wrote:
>>
>>> I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
>>> trying to enable the feature without success too.
>>>
>>> Here’s what I’ve done, step-by-step.
>>>
>>> 1. Installed oVirt Node 4.1.3 with the following network settings:
>>>
>>> eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
>>> eno3 with 9216 MTU.
>>> eno4 with 9216 MTU.
>>> vlan11 on eno3 with 9216 MTU and fixed IP addresses.
>>> vlan12 on eno4 with 9216 MTU and fixed IP addresses.
>>>
>>> eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on
>>> different switches.
>>
>>
>> This is the point: the oVirt implementation of iSCSI-Bonding assumes that
>> all network interfaces in the bond can connect/reach all targets, including
>> those in the other net(s). The fact that you use separate, isolated networks
>> means that this is not the case in your setup (and not in mine).
>>
>> I am not sure if this is a bug, a design flaw or a feature, but as a result
>> of this, oVirt's iSCSI-Bonding does not work for us.
>>
>> Please see my mail from yesterday for a workaround.
>>
>> cu,
>> Uwe


Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Nicolas Ecarnot

On 25/07/2017 at 10:26, Maor Lipchuk wrote:

Hi Vinícius,

For some reason it looks like both of your networks are connected to the same IPs.


Hi,

Sorry to jump in this thread, but I'm concerned with this issue.

Correct me if I'm wrong, but in this thread, many people are using
EqualLogic SANs, which provide only one virtual IP to connect to.


--
Nicolas ECARNOT


Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
Hi Vinícius,

For some reason it looks like both of your networks are connected to the same IPs.

Based on the VDSM logs:
  u'connectionParams': [
     {
        u'netIfaceName': u'eno3.11',
        u'connection': u'192.168.11.14',
     },
     {
        u'netIfaceName': u'eno3.11',
        u'connection': u'192.168.12.14',
     },
     {
        u'netIfaceName': u'eno4.12',
        u'connection': u'192.168.11.14',
     },
     {
        u'netIfaceName': u'eno4.12',
        u'connection': u'192.168.12.14',
     }
  ],

Can you try to reconnect to the iSCSI storage domain after re-initializing
iscsiadm on your host?

1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it

2. On your VDSM host, log out from the open iSCSI sessions related to this
storage domain; if that is your only iSCSI storage domain, log out from all
sessions:
   "iscsiadm -m session -u"

3. Stop the iscsid service:
   "service iscsid stop"

4. Move your network interfaces configured in the iscsiadm to a
temporary folder:
mv /var/lib/iscsi/ifaces/* /tmp/ifaces

5. Start the iscsid service
   "service iscsid start"

Regards,
Maor and Benny

On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz  wrote:
> Hi,
>
>
> On 19.07.2017 at 04:52, Vinícius Ferrão wrote:
>
>> I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
>> trying to enable the feature without success too.
>>
>> Here’s what I’ve done, step-by-step.
>>
>> 1. Installed oVirt Node 4.1.3 with the following network settings:
>>
>> eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
>> eno3 with 9216 MTU.
>> eno4 with 9216 MTU.
>> vlan11 on eno3 with 9216 MTU and fixed IP addresses.
>> vlan12 on eno4 with 9216 MTU and fixed IP addresses.
>>
>> eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on
>> different switches.
>
>
> This is the point: the oVirt implementation of iSCSI-Bonding assumes that
> all network interfaces in the bond can connect/reach all targets, including
> those in the other net(s). The fact that you use separate, isolated networks
> means that this is not the case in your setup (and not in mine).
>
> I am not sure if this is a bug, a design flaw or a feature, but as a result
> of this, oVirt's iSCSI-Bonding does not work for us.
>
> Please see my mail from yesterday for a workaround.
>
> cu,
> Uwe


Re: [ovirt-users] iSCSI Multipath issues

2017-07-19 Thread Uwe Laverenz

Hi,


On 19.07.2017 at 04:52, Vinícius Ferrão wrote:

I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m 
trying to enable the feature without success too.


Here’s what I’ve done, step-by-step.

1. Installed oVirt Node 4.1.3 with the following network settings:

eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
eno3 with 9216 MTU.
eno4 with 9216 MTU.
vlan11 on eno3 with 9216 MTU and fixed IP addresses.
vlan12 on eno4 with 9216 MTU and fixed IP addresses.

eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on
different switches.


This is the point: the oVirt implementation of iSCSI-Bonding assumes
that all network interfaces in the bond can connect/reach all targets,
including those in the other net(s). The fact that you use separate,
isolated networks means that this is not the case in your setup (and not
in mine).


I am not sure if this is a bug, a design flaw or a feature, but as a
result of this, oVirt's iSCSI-Bonding does not work for us.


Please see my mail from yesterday for a workaround.

cu,
Uwe
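
Uwe's workaround mail is not included in this digest. A minimal sketch of the
approach usually meant in this situation is shown below: bind a dedicated
open-iscsi iface record to each storage NIC so that discovery and login only
happen on matching networks. The iface names and the VLAN-to-subnet mapping
here are assumptions, not taken from Uwe's mail, and oVirt/VDSM normally
manages iSCSI logins itself, so this only illustrates the idea:

   # One iface record per storage VLAN, bound to its network device
   iscsiadm -m iface -I vlan11 --op new
   iscsiadm -m iface -I vlan11 --op update -n iface.net_ifacename -v eno3.11
   iscsiadm -m iface -I vlan12 --op new
   iscsiadm -m iface -I vlan12 --op update -n iface.net_ifacename -v eno4.12

   # Discover and log in on each network only through its own iface
   iscsiadm -m discovery -t sendtargets -p 192.168.11.14:3260 -I vlan11
   iscsiadm -m discovery -t sendtargets -p 192.168.12.14:3260 -I vlan12
   iscsiadm -m node -T iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio \
       -p 192.168.11.14:3260 -I vlan11 --login
   iscsiadm -m node -T iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio \
       -p 192.168.12.14:3260 -I vlan12 --login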