Re: [ovirt-users] Multipath handling in oVirt

2017-02-01 Thread Nicolas Ecarnot

On 01/02/2017 at 15:31, Yura Poltoratskiy wrote:

Here you are:

iSCSI multipathing (screenshot link)
network setup of a host (screenshot link)


On 01.02.2017 at 15:31, Nicolas Ecarnot wrote:

Hello,

Before replying further, may I ask you, Yura, to post a screenshot of
your iSCSI multipathing setup in the web GUI?

And the same for the network setup of a host?

Thank you.





Thank you Yura.

To Yaniv and Pavel: yes, this indeed points to the oVirt iSCSI
multipathing feature.


I would be curious to see (on Yura's hosts, for instance) how the oVirt
iSCSI multipathing setup translates into CLI terms (multipath -ll,
iscsiadm -m session -P3, dmsetup table, ...).
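
(For anyone who wants to gather the same information on their own hosts,
those inspection commands would be run as root on a hypervisor, roughly:

  # multipath -ll
  # iscsiadm -m session -P3
  # dmsetup table

The first shows the multipath maps and their paths, the second the iSCSI
sessions with NIC/portal details, the third the raw device-mapper tables.)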


Yura's setup seems to be a perfect fit for oVirt (2 NICs, 2 VLANs, 2
targets in different VLANs, iSCSI multipathing), but I'm trying to see
how I could make this work with our Equallogic, which presents one and
only one virtual IP (thus one target VLAN)...
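
(Not something covered in this thread, but for that single-subnet case the
generic Linux approach, outside of whatever oVirt manages, is usually to keep
the two iSCSI NICs unbonded on the one subnet and relax the ARP/rp_filter
behaviour so each NIC only answers for its own IP; the interface names below
are placeholders:

  # sysctl -w net.ipv4.conf.eth2.arp_ignore=1
  # sysctl -w net.ipv4.conf.eth2.arp_announce=2
  # sysctl -w net.ipv4.conf.eth2.rp_filter=2
  # sysctl -w net.ipv4.conf.eth3.arp_ignore=1
  # sysctl -w net.ipv4.conf.eth3.arp_announce=2
  # sysctl -w net.ipv4.conf.eth3.rp_filter=2

Each NIC is then bound to its own open-iscsi iface so that two sessions, and
therefore two paths, are opened to the single group IP.)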


--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multipath handling in oVirt

2017-02-01 Thread Yura Poltoratskiy

Here you are:

iSCSI multipathing (screenshot link)
network setup of a host (screenshot link)


On 01.02.2017 at 15:31, Nicolas Ecarnot wrote:

Hello,

Before replying further, may I ask you, Yura, to post a screenshot of 
your iSCSI multipathing setup in the web GUI?


And the same for the network setup of a host?

Thank you.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multipath handling in oVirt

2017-02-01 Thread Nicolas Ecarnot

Hello,

Before replying further, may I ask you, Yura, to post a screenshot of 
your iSCSI multipathing setup in the web GUI?


And the same for the network setup of a host?

Thank you.

--
Nicolas ECARNOT

On 01/02/2017 at 13:14, Yura Poltoratskiy wrote:

Hi,

Personally, I have the following config: compute nodes with 4x1G NICs,
storage nodes with 2x1G NICs, and 2 switches (not stackable). All
servers run CentOS 7.x (7.3 at this moment).

On the compute nodes, nic1 and nic2 (attached to different switches)
are bonded for the mgmt and VM networks, while the other two NICs,
nic3 and nic4, are left unbonded (and also attached to different
switches). On the storage nodes there is no bonding; nic1 and nic2 are
connected to different switches.

I have two networks for iSCSI: 10.0.2.0/24 and 10.0.3.0/24. nic1 of the
storage nodes and nic3 of the compute nodes are connected to one
network; nic2 of the storage nodes and nic4 of the compute nodes to the
other.

In the web UI I've created the networks iSCSI1 and iSCSI2 for nic3 and
nic4, and also created the iSCSI multipathing entry. To get
active/active links with double the bandwidth, I've added
'path_grouping_policy "multibus"' to the defaults section of
/etc/multipath.conf.

After all of that, I get 200+ MB/sec throughput to the storage (like
RAID0 with 2 SATA HDDs) and I can lose one NIC/link/switch without
stopping VMs.

[root@compute02 ~]# multipath -ll
360014052f28c9a60 dm-6 LIO-ORG ,ClusterLunHDD
size=902G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 6:0:0:0 sdc 8:32  active ready running
  `- 8:0:0:0 sdf 8:80  active ready running
36001405551a9610d09b4ff9aa836b906 dm-40 LIO-ORG ,SSD_DOMAIN
size=915G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:0 sde 8:64  active ready running
  `- 9:0:0:0 sdh 8:112 active ready running
360014055eb8d30a91044649bda9ee620 dm-5 LIO-ORG ,ClusterLunSSD
size=135G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 6:0:0:1 sdd 8:48  active ready running
  `- 8:0:0:1 sdg 8:96  active ready running

[root@compute02 ~]# iscsiadm -m session
tcp: [1] 10.0.3.200:3260,1 iqn.2015-09.lab.lnx-san:storage (non-flash)
tcp: [2] 10.0.3.203:3260,1 iqn.2016-10.local.ntu:storage3 (non-flash)
tcp: [3] 10.0.3.200:3260,1 iqn.2015-09.lab.lnx-san:storage (non-flash)
tcp: [4] 10.0.3.203:3260,1 iqn.2016-10.local.ntu:storage3 (non-flash)

[root@compute02 ~]# ip route show | head -4
default via 10.0.1.1 dev ovirtmgmt
10.0.1.0/24 dev ovirtmgmt  proto kernel  scope link  src 10.0.1.102
10.0.2.0/24 dev enp5s0.2  proto kernel  scope link  src 10.0.2.102
10.0.3.0/24 dev enp2s0.3  proto kernel  scope link  src 10.0.3.102

[root@compute02 ~]# brctl show ovirtmgmt
bridge name bridge id   STP enabled interfaces
ovirtmgmt   8000.000475b4f262   no bond0.1001

[root@compute02 ~]# cat /proc/net/bonding/bond0 | grep "Bonding\|Slave Interface"
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Slave Interface: enp4s6
Slave Interface: enp6s0


On 01.02.2017 at 12:50, Nicolas Ecarnot wrote:

Hello,

I'm starting this subject over because I want to clarify the oVirt way
to manage multipathing.

(Here I will talk only about the data/iSCSI/SAN/LUN/you-name-it
networks.)
According to what I see in the host network setup, one can assign *ONE*
data network to an interface or to a group of interfaces.

That implies that if my host has two data-dedicated interfaces, I can:
- either group them using bonding (and oVirt is handy for that in the
host network setup), then assign the data virtual network to this bond;
- or give each NIC a different IP in a different VLAN, then create two
different data networks and assign one to each NIC. I have never played
this game and don't know where it leads.

First, could the oVirt storage experts comment on the above and confirm
it is correct?

Then, like many users here, our hardware is this:
- Hosts: Dell PowerEdge, mostly blades (M610, M620, M630), or rack servers
- SANs: Equallogic PS4xxx and PS6xxx

Equallogic's recommendation is that bonding is evil for iSCSI access;
to them, multipath is the only true way. After reading tons of docs and
talking to Dell support, everything tells me to use at least two
different NICs with different IPs, not bonded (using the same network
is discouraged but acceptable).

How can oVirt handle that?






--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multipath handling in oVirt

2017-02-01 Thread Yura Poltoratskiy

Hi,

Personally, I have the following config: compute nodes with 4x1G NICs,
storage nodes with 2x1G NICs, and 2 switches (not stackable). All
servers run CentOS 7.x (7.3 at this moment).

On the compute nodes, nic1 and nic2 (attached to different switches)
are bonded for the mgmt and VM networks, while the other two NICs,
nic3 and nic4, are left unbonded (and also attached to different
switches). On the storage nodes there is no bonding; nic1 and nic2 are
connected to different switches.

I have two networks for iSCSI: 10.0.2.0/24 and 10.0.3.0/24. nic1 of the
storage nodes and nic3 of the compute nodes are connected to one
network; nic2 of the storage nodes and nic4 of the compute nodes to the
other.

In the web UI I've created the networks iSCSI1 and iSCSI2 for nic3 and
nic4, and also created the iSCSI multipathing entry. To get
active/active links with double the bandwidth, I've added
'path_grouping_policy "multibus"' to the defaults section of
/etc/multipath.conf.
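
For reference, a minimal sketch of what that change looks like; the defaults
section that oVirt/VDSM already manages is not reproduced here, and local
edits to /etc/multipath.conf may need to be protected so VDSM does not
overwrite them:

  defaults {
      # ... existing defaults stay as they are; the only addition is:
      path_grouping_policy "multibus"
  }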


After all of that, I get 200+ MB/sec throughput to the storage (like
RAID0 with 2 SATA HDDs) and I can lose one NIC/link/switch without
stopping VMs.


[root@compute02 ~]# multipath -ll
360014052f28c9a60 dm-6 LIO-ORG ,ClusterLunHDD
size=902G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 6:0:0:0 sdc 8:32  active ready running
  `- 8:0:0:0 sdf 8:80  active ready running
36001405551a9610d09b4ff9aa836b906 dm-40 LIO-ORG ,SSD_DOMAIN
size=915G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:0 sde 8:64  active ready running
  `- 9:0:0:0 sdh 8:112 active ready running
360014055eb8d30a91044649bda9ee620 dm-5 LIO-ORG ,ClusterLunSSD
size=135G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 6:0:0:1 sdd 8:48  active ready running
  `- 8:0:0:1 sdg 8:96  active ready running

[root@compute02 ~]# iscsiadm -m session
tcp: [1] 10.0.3.200:3260,1 iqn.2015-09.lab.lnx-san:storage (non-flash)
tcp: [2] 10.0.3.203:3260,1 iqn.2016-10.local.ntu:storage3 (non-flash)
tcp: [3] 10.0.3.200:3260,1 iqn.2015-09.lab.lnx-san:storage (non-flash)
tcp: [4] 10.0.3.203:3260,1 iqn.2016-10.local.ntu:storage3 (non-flash)

[root@compute02 ~]# ip route show | head -4
default via 10.0.1.1 dev ovirtmgmt
10.0.1.0/24 dev ovirtmgmt  proto kernel  scope link  src 10.0.1.102
10.0.2.0/24 dev enp5s0.2  proto kernel  scope link  src 10.0.2.102
10.0.3.0/24 dev enp2s0.3  proto kernel  scope link  src 10.0.3.102

[root@compute02 ~]# brctl show ovirtmgmt
bridge name bridge id   STP enabled interfaces
ovirtmgmt   8000.000475b4f262   no bond0.1001

[root@compute02 ~]# cat /proc/net/bonding/bond0 | grep "Bonding\|Slave Interface"

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Slave Interface: enp4s6
Slave Interface: enp6s0


On 01.02.2017 at 12:50, Nicolas Ecarnot wrote:

Hello,

I'm starting this subject over because I want to clarify the oVirt way
to manage multipathing.

(Here I will talk only about the data/iSCSI/SAN/LUN/you-name-it
networks.)
According to what I see in the host network setup, one can assign *ONE*
data network to an interface or to a group of interfaces.

That implies that if my host has two data-dedicated interfaces, I can:
- either group them using bonding (and oVirt is handy for that in the
host network setup), then assign the data virtual network to this bond;
- or give each NIC a different IP in a different VLAN, then create two
different data networks and assign one to each NIC. I have never played
this game and don't know where it leads.

First, could the oVirt storage experts comment on the above and confirm
it is correct?

Then, like many users here, our hardware is this:
- Hosts: Dell PowerEdge, mostly blades (M610, M620, M630), or rack servers
- SANs: Equallogic PS4xxx and PS6xxx

Equallogic's recommendation is that bonding is evil for iSCSI access;
to them, multipath is the only true way. After reading tons of docs and
talking to Dell support, everything tells me to use at least two
different NICs with different IPs, not bonded (using the same network
is discouraged but acceptable).

How can oVirt handle that?



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multipath handling in oVirt

2017-02-01 Thread Pavel Gashev
Nicolas,

Take a look at
http://www.ovirt.org/documentation/admin-guide/chap-Storage/#configuring-iscsi-multipathing

The recommended way is to use different VLANs. The Equallogic has to be
connected to both VLANs as well.


On Wed, 2017-02-01 at 11:50 +0100, Nicolas Ecarnot wrote:

Hello,

I'm starting this subject over because I want to clarify the oVirt way
to manage multipathing.

(Here I will talk only about the data/iSCSI/SAN/LUN/you-name-it
networks.)
According to what I see in the host network setup, one can assign *ONE*
data network to an interface or to a group of interfaces.

That implies that if my host has two data-dedicated interfaces, I can:
- either group them using bonding (and oVirt is handy for that in the
host network setup), then assign the data virtual network to this bond;
- or give each NIC a different IP in a different VLAN, then create two
different data networks and assign one to each NIC. I have never played
this game and don't know where it leads.

First, could the oVirt storage experts comment on the above and confirm
it is correct?

Then, like many users here, our hardware is this:
- Hosts: Dell PowerEdge, mostly blades (M610, M620, M630), or rack servers
- SANs: Equallogic PS4xxx and PS6xxx

Equallogic's recommendation is that bonding is evil for iSCSI access;
to them, multipath is the only true way. After reading tons of docs and
talking to Dell support, everything tells me to use at least two
different NICs with different IPs, not bonded (using the same network
is discouraged but acceptable).

How can oVirt handle that?


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Multipath handling in oVirt

2017-02-01 Thread Nicolas Ecarnot

Hello,

I'm starting this subject over because I want to clarify the oVirt way
to manage multipathing.

(Here I will talk only about the data/iSCSI/SAN/LUN/you-name-it
networks.)
According to what I see in the host network setup, one can assign *ONE*
data network to an interface or to a group of interfaces.

That implies that if my host has two data-dedicated interfaces, I can:
- either group them using bonding (and oVirt is handy for that in the
host network setup), then assign the data virtual network to this bond;
- or give each NIC a different IP in a different VLAN, then create two
different data networks and assign one to each NIC. I have never played
this game and don't know where it leads.

First, could the oVirt storage experts comment on the above and confirm
it is correct?

Then, like many users here, our hardware is this:
- Hosts: Dell PowerEdge, mostly blades (M610, M620, M630), or rack servers
- SANs: Equallogic PS4xxx and PS6xxx

Equallogic's recommendation is that bonding is evil for iSCSI access;
to them, multipath is the only true way. After reading tons of docs and
talking to Dell support, everything tells me to use at least two
different NICs with different IPs, not bonded (using the same network
is discouraged but acceptable).
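
(Outside of the oVirt web UI, the plain open-iscsi way of expressing "two
NICs, two IPs, no bond" is iface binding; a rough sketch, where the iface
names, NIC names and portal address are only placeholders:

  # iscsiadm -m iface -I iscsi0 --op=new
  # iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v em3
  # iscsiadm -m iface -I iscsi1 --op=new
  # iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v em4
  # iscsiadm -m discovery -t sendtargets -p 10.0.2.10:3260 -I iscsi0 -I iscsi1
  # iscsiadm -m node -L all

Logging in through both ifaces gives two sessions to the target, which
multipathd then assembles into one multipath device.)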


How can oVirt handle that?

--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users