Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2018-03-05 Thread Sergei Hanus
Hi, Nicolas.
As long as you are able to set up two separate iSCSI sessions, each bound to a
separate path, the multipath driver will handle the rest.
As I understand it, Yaniv is talking about iSCSI bonding, which is a somewhat
broader form of multipathing (per the description at
https://www.ovirt.org/documentation/admin-guide/chap-Storage/) - it creates
all possible paths between all initiators and targets within the bond.
Personally, I don't think it's necessary - it's always better to control
the connections the way you described: two VLANs, each containing one
server NIC and one storage NIC, and that's it.
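
For reference, this is roughly what that per-path binding looks like at the
open-iscsi level (outside of anything oVirt/VDSM does for you). The interface
names, portal IPs and target IQN below are placeholders, not values from this
thread:

  # one iSCSI iface per dedicated NIC, bound to the physical interface
  iscsiadm -m iface -I iscsi-a -o new
  iscsiadm -m iface -I iscsi-a -o update -n iface.net_ifacename -v ens1f0
  iscsiadm -m iface -I iscsi-b -o new
  iscsiadm -m iface -I iscsi-b -o update -n iface.net_ifacename -v ens1f1

  # discover and log in through each iface, against the portal in its own VLAN
  iscsiadm -m discovery -t sendtargets -p 10.0.10.10:3260 -I iscsi-a
  iscsiadm -m discovery -t sendtargets -p 10.0.20.10:3260 -I iscsi-b
  iscsiadm -m node -T iqn.2000-01.com.example:target0 -p 10.0.10.10:3260 -I iscsi-a -l
  iscsiadm -m node -T iqn.2000-01.com.example:target0 -p 10.0.20.10:3260 -I iscsi-b -l

  # the multipath driver then sees one path per session
  multipath -ll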

 Sergei.


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2018-03-05 Thread Nicolas Ecarnot

Hello,

[Unusual setup]
Last week, I eventually managed to make a 4.2.1.7 oVirt work with 
iSCSI multipathing on both hosts and guests, connected to a Dell 
EqualLogic SAN which provides one single virtual IP - my hosts have 
two dedicated NICs for iSCSI, but on the same VLAN. Torture tests showed 
good resilience.


[Classical setup]
But this year we plan to create at least two additional DCs, this time 
connecting their hosts to a "classical" SAN, i.e. one which provides TWO IPs 
on segregated VLANs (not routed), and we'd like to use the same 
iSCSI multipathing feature.


The discussion below could lead one to think that oVirt needs the two iSCSI 
VLANs to be routed, allowing the hosts in one VLAN to access 
resources in the other.

As Vinicius explained, this is not a best practice to say the least.

Searching through the mailing list archive, I found no answer to 
Vinicius' question.


May a Red Hat storage and/or network expert enlighten us on these points?

Regards,

--
Nicolas Ecarnot

On 21/07/2017 at 20:56, Vinícius Ferrão wrote:


On 21 Jul 2017, at 15:12, Yaniv Kaul wrote:




On Wed, Jul 19, 2017 at 9:13 PM, Vinícius Ferrão wrote:


Hello,

I skipped this message entirely yesterday. So this is by
design? Because iSCSI MPIO best practices, as far as I
know, recommend two completely separate paths. If this can't be
achieved with oVirt, what's the point of running MPIO?


With regular storage it is quite easy to achieve using 'iSCSI bonding'.
I think the Dell storage is a bit different and requires some more 
investigation - or experience with it.

 Y.


Yaniv, thank you for answering this. I'm really hoping that a solution 
can be found.


Actually I'm not running anything from Dell. My storage system is 
FreeNAS, which is pretty standard, and, as far as I know, iSCSI best 
practice dictates segregated networks for proper operation.


All other major virtualization products support iSCSI this way: 
vSphere, XenServer and Hyper-V. So I was really surprised that oVirt 
(and even RHV; I requested a trial yesterday) does not implement iSCSI 
following the well-known best practices.


There's a picture of the architecture that I took from a Google search 
for "MPIO best practices": 
https://image.slidesharecdn.com/2010-12-06-midwest-reg-vmug-101206110506-phpapp01/95/nextgeneration-best-practices-for-vmware-and-storage-15-728.jpg?cb=1296301640


And as you can see, it's segregated networks on a machine reaching the 
same target.


In my case, my datacenter has five hypervisor machines, each with two NICs 
dedicated to iSCSI. The NICs connect to different converged Ethernet 
switches, and the storage is connected the same way.


So it really does not make sense that the first NIC can reach the 
second NIC's target. In the case of a switch failure the cluster will go 
down anyway, so what's the point of running MPIO? Right?


Thanks once again,
V.


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-21 Thread Yaniv Kaul
On Wed, Jul 19, 2017 at 9:13 PM, Vinícius Ferrão  wrote:

> Hello,
>
> I skipped this message entirely yesterday. So this is by design?
> Because iSCSI MPIO best practices, as far as I know, recommend two
> completely separate paths. If this can't be achieved with oVirt, what's the
> point of running MPIO?
>

With regular storage it is quite easy to achieve using 'iSCSI bonding'.
I think the Dell storage is a bit different and requires some more
investigation - or experience with it.
 Y.


> May we ask for a bug fix or a feature redesign on this?
>
> MPIO is part of my datacenter, which was originally built for running
> XenServer, but I'm considering the move to oVirt. MPIO isn't working right,
> and this could be a big no-go for me...
>
> I’m willing to wait and hold my DC project if this can be fixed.
>
> Any answer from the Red Hat folks?
>
> Thanks,
> V.
>
> > On 18 Jul 2017, at 11:09, Uwe Laverenz  wrote:
> >
> > Hi,
> >
> >
> > On 17.07.2017 at 14:11, Devin Acosta wrote:
> >
> >> I am still troubleshooting the issue; I haven't found any resolution at
> >> this point yet. I need to figure it out by this Friday, otherwise I need
> >> to look at Xen or another solution. iSCSI and oVirt seem problematic.
> >
> > The configuration of iSCSI-Multipathing via OVirt didn't work for me
> > either. IIRC the underlying problem in my case was that I use totally
> > isolated networks for each path.
> >
> > Workaround: to make round robin work you have to enable it by editing
> > "/etc/multipath.conf". Just add the 3 lines for the round robin setting
> > (see comment in the file) and additionally add the "# VDSM PRIVATE" comment
> > to keep vdsmd from overwriting your settings.
> >
> > My multipath.conf:
> >
> >
> >> # VDSM REVISION 1.3
> >> # VDSM PRIVATE
> >> defaults {
> >>     polling_interval    5
> >>     no_path_retry       fail
> >>     user_friendly_names no
> >>     flush_on_last_del   yes
> >>     fast_io_fail_tmo    5
> >>     dev_loss_tmo        30
> >>     max_fds             4096
> >>     # 3 lines added manually for multipathing:
> >>     path_selector       "round-robin 0"
> >>     path_grouping_policy    multibus
> >>     failback            immediate
> >> }
> >> # Remove devices entries when overrides section is available.
> >> devices {
> >>     device {
> >>         # These settings overrides built-in devices settings. It does not apply
> >>         # to devices without built-in settings (these use the settings in the
> >>         # "defaults" section), or to devices defined in the "devices" section.
> >>         # Note: This is not available yet on Fedora 21. For more info see
> >>         # https://bugzilla.redhat.com/1253799
> >>         all_devs        yes
> >>         no_path_retry   fail
> >>     }
> >> }
> >
> >
> >
> > To enable the settings:
> >
> >  systemctl restart multipathd
> >
> > See if it works:
> >
> >  multipath -ll
> >
> >
> > HTH,
> > Uwe
>
>


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-19 Thread Vinícius Ferrão
Hello,

I skipped this message entirely yesterday. So this is by design? Because 
iSCSI MPIO best practices, as far as I know, recommend two completely 
separate paths. If this can't be achieved with oVirt, what's the point of 
running MPIO?

May we ask for a bug fix or a feature redesign on this?

MPIO is part of my datacenter, which was originally built for running 
XenServer, but I'm considering the move to oVirt. MPIO isn't working right, 
and this could be a big no-go for me...

I’m willing to wait and hold my DC project if this can be fixed.

Any answer from the Red Hat folks?

Thanks,
V.

> On 18 Jul 2017, at 11:09, Uwe Laverenz  wrote:
> 
> Hi,
> 
> 
> On 17.07.2017 at 14:11, Devin Acosta wrote:
> 
>> I am still troubleshooting the issue; I haven't found any resolution at this 
>> point yet. I need to figure it out by this Friday, otherwise I need to look 
>> at Xen or another solution. iSCSI and oVirt seem problematic.
> 
> The configuration of iSCSI-Multipathing via OVirt didn't work for me either. 
> IIRC the underlying problem in my case was that I use totally isolated 
> networks for each path.
> 
> Workaround: to make round robin work you have to enable it by editing 
> "/etc/multipath.conf". Just add the 3 lines for the round robin setting (see 
> comment in the file) and additionally add the "# VDSM PRIVATE" comment to 
> keep vdsmd from overwriting your settings.
> 
> My multipath.conf:
> 
> 
>> # VDSM REVISION 1.3
>> # VDSM PRIVATE
>> defaults {
>>     polling_interval    5
>>     no_path_retry       fail
>>     user_friendly_names no
>>     flush_on_last_del   yes
>>     fast_io_fail_tmo    5
>>     dev_loss_tmo        30
>>     max_fds             4096
>>     # 3 lines added manually for multipathing:
>>     path_selector       "round-robin 0"
>>     path_grouping_policy    multibus
>>     failback            immediate
>> }
>> # Remove devices entries when overrides section is available.
>> devices {
>>     device {
>>         # These settings overrides built-in devices settings. It does not apply
>>         # to devices without built-in settings (these use the settings in the
>>         # "defaults" section), or to devices defined in the "devices" section.
>>         # Note: This is not available yet on Fedora 21. For more info see
>>         # https://bugzilla.redhat.com/1253799
>>         all_devs        yes
>>         no_path_retry   fail
>>     }
>> }
> 
> 
> 
> To enable the settings:
> 
>  systemctl restart multipathd
> 
> See if it works:
> 
>  multipath -ll
> 
> 
> HTH,
> Uwe



Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-18 Thread Uwe Laverenz

Hi,

just to avoid misunderstandings: the workaround I suggested means that I 
don't use OVirt's iSCSI bonding at all (because it lets my environment 
misbehave in the same way you described).


cu,
Uwe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-18 Thread Uwe Laverenz

Hi,


On 17.07.2017 at 14:11, Devin Acosta wrote:

I am still troubleshooting the issue; I haven't found any resolution at 
this point yet. I need to figure it out by this Friday, otherwise I need 
to look at Xen or another solution. iSCSI and oVirt seem problematic.


The configuration of iSCSI-Multipathing via OVirt didn't work for me 
either. IIRC the underlying problem in my case was that I use totally 
isolated networks for each path.


Workaround: to make round robin work you have to enable it by editing 
"/etc/multipath.conf". Just add the 3 lines for the round robin setting 
(see comment in the file) and additionally add the "# VDSM PRIVATE" 
comment to keep vdsmd from overwriting your settings.


My multipath.conf:



# VDSM REVISION 1.3
# VDSM PRIVATE

defaults {
    polling_interval    5
    no_path_retry       fail
    user_friendly_names no
    flush_on_last_del   yes
    fast_io_fail_tmo    5
    dev_loss_tmo        30
    max_fds             4096
    # 3 lines added manually for multipathing:
    path_selector       "round-robin 0"
    path_grouping_policy    multibus
    failback            immediate
}

# Remove devices entries when overrides section is available.
devices {
    device {
        # These settings overrides built-in devices settings. It does not apply
        # to devices without built-in settings (these use the settings in the
        # "defaults" section), or to devices defined in the "devices" section.
        # Note: This is not available yet on Fedora 21. For more info see
        # https://bugzilla.redhat.com/1253799
        all_devs        yes
        no_path_retry   fail
    }
}




To enable the settings:

  systemctl restart multipathd

See if it works:

  multipath -ll
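
To double-check that vdsmd left the file alone and that multipathd actually
picked up the manual settings, something along these lines should work (just a
quick sketch, nothing oVirt-specific):

  # the "# VDSM PRIVATE" marker should still be near the top of the file,
  # otherwise vdsmd may regenerate the file on its next run
  head -n 2 /etc/multipath.conf

  # confirm the running daemon uses the manually added settings
  multipathd show config | grep -E 'path_selector|path_grouping_policy|failback'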


HTH,
Uwe


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-18 Thread Elad Ben Aharon
Hi,

Please make sure that the hosts can reach the iSCSI targets on your Dell
storage using the NICs that are used by the two networks dedicated to
iSCSI.
You can check this with 'ping -I <iSCSI NIC> <target portal IP>'.
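
For example, with the two portal addresses from Devin's original post (the NIC
names below are just placeholders for your own interfaces):

  # one check per dedicated iSCSI NIC, each against the portal in its own VLAN
  ping -I ens1f0 -c 3 10.4.77.100
  ping -I ens1f1 -c 3 10.4.78.100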



Thanks,

Elad Ben Aharon
Senior Quality Engineer
Red Hat Israel Ltd.

34 Jerusalem Road, Building A, 1st floor

Ra'anana, Israel 4350109

ebena...@redhat.com | T: +972-9-7692007/8272007


On Mon, Jul 17, 2017 at 3:11 PM, Devin Acosta wrote:

> V.,
>
> I am still troubleshooting the issue; I haven't found any resolution at this
> point yet. I need to figure it out by this Friday, otherwise I need to look
> at Xen or another solution. iSCSI and oVirt seem problematic.
>
>
> --
>
> Devin Acosta
> Red Hat Certified Architect, LinuxStack
>
> On July 16, 2017 at 11:53:59 PM, Vinícius Ferrão (fer...@if.ufrj.br) wrote:
>
> Have you found any solution for this problem?
>
> I'm using a FreeNAS machine to serve iSCSI but I have exactly the same
> problem. I've reinstalled oVirt at least 3 times during the weekend trying
> to solve the issue.
>
> At this moment my iSCSI Multipath tab is just inconsistent. I can't see
> both VLANs under “Logical networks”, and only one target shows up under
> Storage Targets.
>
> When I was able to find two targets, everything went down and I needed to
> reboot the host and the Hosted Engine to recover oVirt.
>
> V.
>
> On 11 Jul 2017, at 19:29, Devin Acosta  wrote:
>
>
> I am using the latest release of oVirt, 4.1.3, and I am connecting a Dell
> Compellent SAN that has 2 fault domains, each on a separate VLAN, that I have
> attached to oVirt. From what I understand I am supposed to go into the “iSCSI
> Multipathing” option and add a BOND of the iSCSI interfaces. I have done
> this, selecting the 2 logical networks together for iSCSI. I notice that
> there is an option below to select Storage Targets, but if I select the
> storage targets together with the logical networks the cluster goes crazy
> and appears to be mad. Storage, nodes, and everything go offline even
> though I have NFS also attached to the cluster.
>
> How should this best be configured? What we notice is that when
> the server reboots it seems to log into the SAN correctly, but according to
> the Dell SAN it is only logged into one controller, so it only pulls both
> fault domains from a single controller.
>
> Please Advise.
>
> Devin
>
>
>
>
>
>


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-17 Thread Devin Acosta
V.,

I am still troubleshooting the issue; I haven't found any resolution at this
point yet. I need to figure it out by this Friday, otherwise I need to look at
Xen or another solution. iSCSI and oVirt seem problematic.


--

Devin Acosta
Red Hat Certified Architect, LinuxStack

On July 16, 2017 at 11:53:59 PM, Vinícius Ferrão (fer...@if.ufrj.br) wrote:

Have you found any solution for this problem?

I'm using a FreeNAS machine to serve iSCSI but I have exactly the same
problem. I've reinstalled oVirt at least 3 times during the weekend trying
to solve the issue.

At this moment my iSCSI Multipath tab is just inconsistent. I can't see both
VLANs under “Logical networks”, and only one target shows up under Storage
Targets.

When I was able to find two targets, everything went down and I needed to
reboot the host and the Hosted Engine to recover oVirt.

V.

On 11 Jul 2017, at 19:29, Devin Acosta  wrote:


I am using the latest release of oVirt, 4.1.3, and I am connecting a Dell
Compellent SAN that has 2 fault domains, each on a separate VLAN, that I have
attached to oVirt. From what I understand I am supposed to go into the “iSCSI
Multipathing” option and add a BOND of the iSCSI interfaces. I have done
this, selecting the 2 logical networks together for iSCSI. I notice that
there is an option below to select Storage Targets, but if I select the
storage targets together with the logical networks the cluster goes crazy
and appears to be mad. Storage, nodes, and everything go offline, even
though I have NFS also attached to the cluster.

How should this best be configured? What we notice is that when the
server reboots it seems to log into the SAN correctly, but according to the
Dell SAN it is only logged into one controller, so it only pulls both fault
domains from a single controller.

Please Advise.

Devin



Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-17 Thread Vinícius Ferrão
Have you found any solution for this problem?

I'm using a FreeNAS machine to serve iSCSI but I have exactly the same problem. 
I've reinstalled oVirt at least 3 times during the weekend trying to solve the 
issue.

At this moment my iSCSI Multipath tab is just inconsistent. I can't see both 
VLANs under “Logical networks”, and only one target shows up under Storage Targets.

When I was able to find two targets, everything went down and I needed to 
reboot the host and the Hosted Engine to recover oVirt.

V.

On 11 Jul 2017, at 19:29, Devin Acosta wrote:


I am using the latest release of oVirt, 4.1.3, and I am connecting a Dell 
Compellent SAN that has 2 fault domains, each on a separate VLAN, that I have 
attached to oVirt. From what I understand I am supposed to go into the “iSCSI 
Multipathing” option and add a BOND of the iSCSI interfaces. I have done this, 
selecting the 2 logical networks together for iSCSI. I notice that there is an 
option below to select Storage Targets, but if I select the storage targets 
together with the logical networks the cluster goes crazy and appears to be 
mad. Storage, nodes, and everything go offline, even though I have NFS also 
attached to the cluster.

How should this best be configured? What we notice is that when the 
server reboots it seems to log into the SAN correctly, but according to the 
Dell SAN it is only logged into one controller, so it only pulls both fault 
domains from a single controller.

Please Advise.

Devin




[ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-12 Thread Devin Acosta
I am using the latest release of oVirt, 4.1.3, and I am connecting a Dell
Compellent SAN that has 2 fault domains, each on a separate VLAN, that I have
attached to oVirt. From what I understand I am supposed to go into the “iSCSI
Multipathing” option and add a BOND of the iSCSI interfaces. I have done
this, selecting the 2 logical networks together for iSCSI. I notice that
there is an option below to select Storage Targets, but if I select the
storage targets together with the logical networks the cluster goes crazy
and appears to be mad. Storage, nodes, and everything go offline, even
though I have NFS also attached to the cluster.

How should this best be configured? What we notice is that when the
server reboots it seems to log into the SAN correctly, but according to the
Dell SAN it is only logged into one controller, so it only pulls both fault
domains from a single controller.

Please Advise.

Devin


[ovirt-users] oVIRT 4.1 / iSCSI Multipathing / Dell Compellent

2017-07-11 Thread Devin Acosta
I just installed a brand new Dell Compellent SAN for use with our fresh oVirt
4.1.3 installation. I presented a 30TB LUN to the cluster over 10G iSCSI. I
went into the Storage domain, added a new storage mount called “dell-storage”
and logged into each of the ports for the target. It detects the targets just
right and the Dell SAN is happy, until a host is rebooted, at which point
iSCSI seems to log into only one of the controllers and not all the paths
that it originally logged into. At this point the Dell SAN shows only half
the paths connected, hence this e-mail.

When I looked at the original iscsiadm session information after initially
joining the domain, it showed connections to the correct ports (1f, 21, 1e, 20):



tcp: [11] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)
tcp: [12] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe21
(non-flash)
tcp: [13] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)
tcp: [14] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe20
(non-flash)
tcp: [15] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)
tcp: [16] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)

After I reboot the hypervisor and it reconnects to the cluster, it shows:

tcp: [1] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)
tcp: [2] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)
tcp: [3] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)
tcp: [4] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)

What is bizarre is that it shows multiple connections to the same IPs: 2
connections to 1e and 2 connections to 1f. It seems to have selected
only the top controller on each fault domain and not the bottom controller
as well.

I did configure a “bond” inside iSCSI Multipathing, selecting only
the 2 VLANs together for iSCSI. I didn't select a target with it
because I wasn't sure of the proper configuration for this. If I select both
the logical networks and the target ports, the cluster goes down hard.
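
In the meantime, as a manual sanity check, the missing sessions can be brought
back with plain iscsiadm until the root cause is understood (the IQNs and
portals below are the ones from the first listing above; whether these
sessions survive the next reboot is of course the open question):

  # what is currently logged in
  iscsiadm -m session

  # rediscover all portals in both fault domains
  iscsiadm -m discovery -t sendtargets -p 10.4.77.100:3260
  iscsiadm -m discovery -t sendtargets -p 10.4.78.100:3260

  # log back into the controller ports that were dropped after the reboot
  iscsiadm -m node -T iqn.2002-03.com.compellent:5000d310013dfe21 -p 10.4.77.100:3260 -l
  iscsiadm -m node -T iqn.2002-03.com.compellent:5000d310013dfe20 -p 10.4.78.100:3260 -l

  # check that multipath sees all paths again
  multipath -ll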

Any ideas?

Devin Acosta