Can you please try to connect to your iSCSI server using iscsiadm from
your VDSM host, for example like so:
   iscsiadm -m node -T iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio \
       -I eno3.11 -p 192.168.12.14:3260 --login
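
If the login succeeds, you can check which interface each session is
bound to, e.g.:
   iscsiadm -m session -P 1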

On Tue, Jul 25, 2017 at 11:54 AM, Maor Lipchuk <mlipc...@redhat.com> wrote:
> Hi Vinícius,
>
> I was trying to reproduce your scenario and also encountered this
> issue, so please disregard my last comment. Can you please open a bug
> on that so we can investigate it properly?
>
> Thanks,
> Maor
>
>
> On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk <mlipc...@redhat.com> wrote:
>> Hi Vinícius,
>>
>> For some reason it looks like both of your network interfaces are
>> connected to the same IPs.
>>
>> based on the VDSM logs:
>>       u'connectionParams':[
>>          {
>>             u'netIfaceName':u'eno3.11',
>>             u'connection':u'192.168.11.14',
>>          },
>>          {
>>             u'netIfaceName':u'eno3.11',
>>             u'connection':u'192.168.12.14',
>>          },
>>          {
>>             u'netIfaceName':u'eno4.12',
>>             u'connection':u'192.168.11.14',
>>          },
>>          {
>>             u'netIfaceName':u'eno4.12',
>>             u'connection':u'192.168.12.14',
>>          }
>>       ],
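>>
>> Assuming eno3.11 is only supposed to reach the 192.168.11.x portal and
>> eno4.12 only the 192.168.12.x portal, I would have expected each
>> interface to be paired with a single portal, sketched roughly like:
>>       u'connectionParams':[
>>          {
>>             u'netIfaceName':u'eno3.11',
>>             u'connection':u'192.168.11.14',
>>          },
>>          {
>>             u'netIfaceName':u'eno4.12',
>>             u'connection':u'192.168.12.14',
>>          }
>>       ],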
>>
>> Can you try to reconnect to the iSCSI storage domain after
>> re-initializing iscsiadm on your host?
>>
>> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it
>>
>> 2. On your VDSM host, log out from the open iSCSI sessions which are
>> related to this storage domain.
>> If that is your only iSCSI storage domain, log out from all the sessions:
>>    "iscsiadm -m session -u"
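>> If you have other iSCSI storage domains, log out only from the sessions
>> of this domain's target instead, something like:
>>    "iscsiadm -m node -T <target-iqn> -u"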
>>
>> 3. Stop the iscsid service:
>>    "service iscsid stop"
>>
>> 4. Move the network interfaces configured in iscsiadm to a
>> temporary folder:
>>     mkdir -p /tmp/ifaces
>>     mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>>
>> 5. Start the iscsid service
>>    "service iscsid start"
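>>
>> After that you can confirm the stale interface bindings are gone with:
>>    "iscsiadm -m iface"
>> It should only list the built-in defaults (default, iser) until the
>> storage domain is activated again.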
>>
>> Regards,
>> Maor and Benny
>>
>> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz <u...@laverenz.de> wrote:
>>> Hi,
>>>
>>>
>>> Am 19.07.2017 um 04:52 schrieb Vinícius Ferrão:
>>>
>>>> I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
>>>> trying to enable the feature without success too.
>>>>
>>>> Here’s what I’ve done, step-by-step.
>>>>
>>>> 1. Installed oVirt Node 4.1.3 with the following network settings:
>>>>
>>>> eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
>>>> eno3 with 9216 MTU.
>>>> eno4 with 9216 MTU.
>>>> vlan11 on eno3 with 9216 MTU and fixed IP addresses.
>>>> vlan12 on eno4 with 9216 MTU and fixed IP addresses.
>>>>
>>>> eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on
>>>> different switches.
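>>>>
>>>> For reference, the vlan interfaces are plain ifcfg files, roughly like
>>>> this (the host address below is made up, just to illustrate the layout):
>>>>
>>>> # /etc/sysconfig/network-scripts/ifcfg-eno3.11
>>>> DEVICE=eno3.11
>>>> VLAN=yes
>>>> ONBOOT=yes
>>>> BOOTPROTO=none
>>>> MTU=9216
>>>> # hypothetical host address on the vlan11 net:
>>>> IPADDR=192.168.11.1
>>>> PREFIX=24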
>>>
>>>
>>> This is the point: the oVirt implementation of iSCSI-Bonding assumes that
>>> all network interfaces in the bond can connect to/reach all targets,
>>> including those in the other net(s). The fact that you use separate,
>>> isolated networks means that this is not the case in your setup (and not
>>> in mine).
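>>>
>>> You can verify this quickly by pinging each target portal from each
>>> interface; in an isolated setup like yours the cross-network
>>> combinations will fail (portal addresses taken from the logs above):
>>>
>>>    ping -c1 -I eno3.11 192.168.11.14   # same net, works
>>>    ping -c1 -I eno3.11 192.168.12.14   # other, isolated net: fails
>>>    ping -c1 -I eno4.12 192.168.11.14   # other, isolated net: fails
>>>    ping -c1 -I eno4.12 192.168.12.14   # same net, works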
>>>
>>> I am not sure if this is a bug, a design flaw or a feature, but as a
>>> result oVirt's iSCSI-Bonding does not work for us.
>>>
>>> Please see my mail from yesterday for a workaround.
>>>
>>> cu,
>>> Uwe
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
