[ovirt-users] Re: need network design advice for iSCSI

2019-02-07 Thread Vinícius Ferrão
Hello, another guy with what appears to be the same problem...

https://bugzilla.redhat.com/show_bug.cgi?id=1588741

PS: Uwe, I’m CCing you.

Sent from my iPhone

> On 29 Jan 2019, at 13:57, John Florian  wrote:
> 
> Okay, both the BZ and ML posts are interesting and helpful.  I'm kind of 
> surprised there seems to be so much trouble and confusion for what I would 
> have thought to be a very common setup.  Are most people using something else?
> 
> I think this gives me what I need for my next stab at doing this but I'm 
> still puzzled on how to tear down what I have in oVirt so that I can redo it. 
>  Specifically, I didn't see how to delete the existing iSCSI connections.  
> I've read that this can only be done through the REST API.  I have managed to 
> redo the interfaces on my Hosts so that everything is now on just 2 NICs 
> each, leaving 2 NICs free for a foothold on a new setup.  From all of my 
> experimentation, it would appear that my only option is to create a new 
> storage domain and export/import each disk volume one by one.  Maybe there's 
> a migration option I have yet to see, but I don't see any way around creating 
> a new storage domain here.
> 
>> On 1/21/19 7:12 AM, Vinícius Ferrão wrote:
>> Hello people, in the past Maor Lipchuk (from RH) tried very hard to help me 
>> and Uwe, but we were unable to converge on a solution.
>> 
>> This was discussed a year ago and, to my understanding, it is still an oVirt 
>> bug. As of today, if you simply “DuckDuckGo” for “ovirt iscsi multipath not 
>> working”, the third link points to this bugzilla: 
>> https://bugzilla.redhat.com/show_bug.cgi?id=1474904
>> 
>> Which is the one I’ve mentioned, and it’s extremely similar to John Florian’s 
>> case, which was my case too.
>> 
>> @John, take a look at the bugzilla link and see if the desired topology 
>> matches your case.
>> 
>> Regards,
>> 
>> 
>>> On 21 Jan 2019, at 05:21, Eitan Raviv  wrote:
>>> 
>>> Shani,
>>> Can you help here with  iSCSI bonding?
>>> Thanks
>>> 
 On Mon, Jan 21, 2019 at 7:51 AM Uwe Laverenz  wrote:
 
 Hi John,
 
 Am 20.01.19 um 18:32 schrieb John Florian:
 
> As for how to get there, whatever exactly that might look like, I'm also
> having troubles figuring that out.  I figured I would transform the
> setup described below into one where each host has:
> 
>  * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
>  * 1 NIC for my 1st storage net
>  * 1 NIC for my 2nd storage net
 
 This is exactly the setup I use. I have run this successfully with
 CentOS/LIO and FreeNAS iSCSI targets with good performance.
 
 In short:
 
 - 2 separate, isolated networks for iSCSI with dedicated adapters
   on hosts and iSCSI target
 - jumbo frames enabled
 - no VLANs config needed on hosts, untagged VLANs on switch
 - do _not_ use LACP, let multipathd handle failovers
 
 Same experience as Vinicius: what did _not_ work for me is the
 iSCSI-Bonding in OVirt. It seems to require that all storage IPs are
 reachable from all other IPs, which is not the case in every setup.
 
 To get multipathing to work I use multipath directly:
 
> https://www.mail-archive.com/users@ovirt.org/msg42735.html
 
 I will post a bonnie++ result later. If you need more details please let
 me know.
 
 cu,
 Uwe
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BMKNLMONCF44ADXTWE3JM4P6XQBWZHNC/


[ovirt-users] Re: need network design advice for iSCSI

2019-01-29 Thread John Florian
Okay, both the BZ and ML posts are interesting and helpful.  I'm kind of 
surprised there seems to be so much trouble and confusion for what I 
would have thought to be a very common setup.  Are most people using 
something else?


I think this gives me what I need for my next stab at doing this but I'm 
still puzzled on how to tear down what I have in oVirt so that I can 
redo it.  Specifically, I didn't see how to delete the existing iSCSI 
connections.  I've read that this can only be done through the REST 
API.  I have managed to redo the interfaces on my Hosts so that 
everything is now on just 2 NICs each, leaving 2 NICs free for a 
foothold on a new setup.  From all of my experimentation, it would 
appear that my only option is to create a new storage domain and 
export/import each disk volume one by one.  Maybe there's a migration 
option I have yet to see, but I don't see any way around creating a new 
storage domain here.
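
From what I've read, the REST API exposes these under
/ovirt-engine/api/storageconnections, so presumably something along these
lines would list and then remove the stale ones (the engine FQDN and
credentials below are made up, and I gather the engine refuses the DELETE
while a storage domain still references the connection):

  # list all storage connections known to the engine
  curl -s -k -u admin@internal:PASSWORD -H 'Accept: application/xml' \
      https://engine.example.com/ovirt-engine/api/storageconnections

  # delete one stale iSCSI connection by its id from the listing above
  curl -s -k -u admin@internal:PASSWORD -X DELETE \
      https://engine.example.com/ovirt-engine/api/storageconnections/<connection-id>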


On 1/21/19 7:12 AM, Vinícius Ferrão wrote:
Hello people, in the past Maor Lipchuk (from RH) tried very hard to 
help me and Uwe, but we were unable to converge on a solution.


This was discussed a year ago and, to my understanding, it is still an 
oVirt bug. As of today, if you simply “DuckDuckGo” for “ovirt iscsi 
multipath not working”, the third link points to this bugzilla: 
https://bugzilla.redhat.com/show_bug.cgi?id=1474904


Which is the one I’ve mentioned, and it’s extremely similar to John 
Florian’s case, which was my case too.


@John, take a look at the bugzilla link and see if the desired 
topology matches your case.


Regards,


On 21 Jan 2019, at 05:21, Eitan Raviv wrote:


Shani,
Can you help here with  iSCSI bonding?
Thanks

On Mon, Jan 21, 2019 at 7:51 AM Uwe Laverenz wrote:


Hi John,

Am 20.01.19 um 18:32 schrieb John Florian:

As for how to get there, whatever exactly that might look like, I'm 
also

having troubles figuring that out.  I figured I would transform the
setup described below into one where each host has:

 * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
 * 1 NIC for my 1st storage net
 * 1 NIC for my 2nd storage net


This is exactly the setup I use. I have run this successfully with
CentOS/LIO and FreeNAS iSCSI targets with good performance.

In short:

- 2 separate, isolated networks for iSCSI with dedicated adapters
  on hosts and iSCSI target
- jumbo frames enabled
- no VLANs config needed on hosts, untagged VLANs on switch
- do _not_ use LACP, let multipathd handle failovers

Same experience as Vinicius: what did _not_ work for me is the
iSCSI-Bonding in OVirt. It seems to require that all storage IPs are
reachable from all other IPs, which is not the case in every setup.

To get multipathing to work I use multipath directly:


https://www.mail-archive.com/users@ovirt.org/msg42735.html


I will post a bonnie++ result later. If you need more details please let
me know.

cu,
Uwe
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6GMNF3RU5IDEDM4OWG4RDXAFY5BHDSV/


[ovirt-users] Re: need network design advice for iSCSI

2019-01-21 Thread Uwe Laverenz
Hi,

Am Montag, den 21.01.2019, 06:43 +0100 schrieb Uwe Laverenz:

> I will post a bonnie++ result later. If you need more details please 

Attached are the results of the smallest setup (my home lab): storage
server is a HP N40L with 16GB RAM, 4x2TB WD RE as RAID10, CentOS 7 with
LIO as iSCSI target with 2 Gigabit networks (jumbo frames: mtu 9000).

cu,
Uwe

Version  1.97   --Sequential Output-- --Sequential Input- --Random-
Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ovirt-vm  7568M   739  97 123014  10 72061   8  1395  99 228302  11 405.9  10
Latency 12475us   13397us 874ms   15675us 247ms   91975us
Version  1.97   --Sequential Create-- Random Create
ovirt-vm        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
     16 12828  58 + +++ 14219  36 13435  62 + +++ 12789  35
Latency 29490us 142us 413ms  1160us  36us   23231us
1.97,1.97,ovirt-vm,1,1548073693,7568M,,739,97,123014,10,72061,8,1395,99,228302,11,405.9,10,16,12828,58,+,+++,14219,36,13435,62,+,+++,12789,35,12475us,13397us,874ms,15675us,247ms,91975us,29490us,142us,413ms,1160us,36us,23231us
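
For reference, an invocation of roughly this form produces a report like the
one above (the mount point is just an example):

  # -d test directory, -s file size for the throughput phase,
  # -n number of small files (in units of 1024) for the create/delete phase,
  # -m machine label shown in the report, -u user to run as when started as root
  bonnie++ -d /mnt/test -s 7568M -n 16 -m ovirt-vm -u root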

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UODLGEASXUUP54PHXSWRELY55W3BFRMB/


[ovirt-users] Re: need network design advice for iSCSI

2019-01-21 Thread Vinícius Ferrão
Hello people, in the past Maor Lipchuk (from RH) tried very hard to help me and 
Uwe, but we were unable to converge on a solution.

This was discussed a year ago and, to my understanding, it is still an oVirt bug. 
As of today, if you simply “DuckDuckGo” for “ovirt iscsi multipath not working”, 
the third link points to this bugzilla: 
https://bugzilla.redhat.com/show_bug.cgi?id=1474904 


Which is the one I’ve mentioned, and it’s extremely similar to John Florian’s 
case, which was my case too.

@John, take a look at the bugzilla link and see if the desired topology matches 
your case.

Regards,


> On 21 Jan 2019, at 05:21, Eitan Raviv  wrote:
> 
> Shani,
> Can you help here with  iSCSI bonding?
> Thanks
> 
> On Mon, Jan 21, 2019 at 7:51 AM Uwe Laverenz  wrote:
>> 
>> Hi John,
>> 
>> Am 20.01.19 um 18:32 schrieb John Florian:
>> 
>>> As for how to get there, whatever exactly that might look like, I'm also
>>> having troubles figuring that out.  I figured I would transform the
>>> setup described below into one where each host has:
>>> 
>>>  * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
>>>  * 1 NIC for my 1st storage net
>>>  * 1 NIC for my 2nd storage net
>> 
>> This is exactly the setup I use. I have run this successfully with
>> CentOS/LIO and FreeNAS iSCSI targets with good performance.
>> 
>> In short:
>> 
>> - 2 separate, isolated networks for iSCSI with dedicated adapters
>>   on hosts and iSCSI target
>> - jumbo frames enabled
>> - no VLANs config needed on hosts, untagged VLANs on switch
>> - do _not_ use LACP, let multipathd handle failovers
>> 
>> Same experience as Vinicius: what did _not_ work for me is the
>> iSCSI-Bonding in OVirt. It seems to require that all storage IPs are
>> reachable from all other IPs, which is not the case in every setup.
>> 
>> To get multipathing to work I use multipath directly:
>> 
>>> https://www.mail-archive.com/users@ovirt.org/msg42735.html
>> 
>> I will post a bonnie++ result later. If you need more details please let
>> me know.
>> 
>> cu,
>> Uwe

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NSE5BCLJSIFDX2VDZRBRLODEH3ZCPYWN/


[ovirt-users] Re: need network design advice for iSCSI

2019-01-21 Thread Shani Leviim
Hi,
I'm not familiar with network construction, so I guess I'm not the right
address for that :)

Regards,
Shani Leviim

On Mon, Jan 21, 2019, 09:22 Eitan Raviv wrote:
> Shani,
> Can you help here with  iSCSI bonding?
> Thanks
>
> On Mon, Jan 21, 2019 at 7:51 AM Uwe Laverenz  wrote:
> >
> > Hi John,
> >
> > Am 20.01.19 um 18:32 schrieb John Florian:
> >
> > > As for how to get there, whatever exactly that might look like, I'm
> also
> > > having troubles figuring that out.  I figured I would transform the
> > > setup described below into one where each host has:
> > >
> > >   * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
> > >   * 1 NIC for my 1st storage net
> > >   * 1 NIC for my 2nd storage net
> >
> > This is exactly the setup I use. I have run this successfully with
> > CentOS/LIO and FreeNAS iSCSI targets with good performance.
> >
> > In short:
> >
> > - 2 separate, isolated networks for iSCSI with dedicated adapters
> >on hosts and iSCSI target
> > - jumbo frames enabled
> > - no VLANs config needed on hosts, untagged VLANs on switch
> > - do _not_ use LACP, let multipathd handle failovers
> >
> > Same experience as Vinicius: what did _not_ work for me is the
> > iSCSI-Bonding in OVirt. It seems to require that all storage IPs are
> > reachable from all other IPs, which is not the case in every setup.
> >
> > To get multipathing to work I use multipath directly:
> >
> > > https://www.mail-archive.com/users@ovirt.org/msg42735.html
> >
> > I will post a bonnie++ result later. If you need more details please let
> > me know.
> >
> > cu,
> > Uwe
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NYLPFH7HC32VRLNGVJI2SMGYF56IHT5S/


[ovirt-users] Re: need network design advice for iSCSI

2019-01-20 Thread Eitan Raviv
Shani,
Can you help here with  iSCSI bonding?
Thanks

On Mon, Jan 21, 2019 at 7:51 AM Uwe Laverenz  wrote:
>
> Hi John,
>
> Am 20.01.19 um 18:32 schrieb John Florian:
>
> > As for how to get there, whatever exactly that might look like, I'm also
> > having troubles figuring that out.  I figured I would transform the
> > setup described below into one where each host has:
> >
> >   * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
> >   * 1 NIC for my 1st storage net
> >   * 1 NIC for my 2nd storage net
>
> This is exactly the setup I use. I have run this successfully with
> CentOS/LIO and FreeNAS iSCSI targets with good performance.
>
> In short:
>
> - 2 separate, isolated networks for iSCSI with dedicated adapters
>on hosts and iSCSI target
> - jumbo frames enabled
> - no VLANs config needed on hosts, untagged VLANs on switch
> - do _not_ use LACP, let multipathd handle failovers
>
> Same experience as Vinicius: what did _not_ work for me is the
> iSCSI-Bonding in OVirt. It seems to require that all storage IPs are
> reachable from all other IPs, which is not the case in every setup.
>
> To get multipathing to work I use multipath directly:
>
> > https://www.mail-archive.com/users@ovirt.org/msg42735.html
>
> I will post a bonnie++ result later. If you need more details please let
> me know.
>
> cu,
> Uwe
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q6QGINTOXIWTYXGHRIBEQY7JUA6TIGTJ/


[ovirt-users] Re: need network design advice for iSCSI

2019-01-20 Thread Uwe Laverenz

Hi John,

Am 20.01.19 um 18:32 schrieb John Florian:

As for how to get there, whatever exactly that might look like, I'm also 
having troubles figuring that out.  I figured I would transform the 
setup described below into one where each host has:


  * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
  * 1 NIC for my 1st storage net
  * 1 NIC for my 2nd storage net


This is exactly the setup I use. I have run this successfully with 
CentOS/LIO and FreeNAS iSCSI targets with good performance.


In short:

- 2 separate, isolated networks for iSCSI with dedicated adapters
  on hosts and iSCSI target
- jumbo frames enabled
- no VLANs config needed on hosts, untagged VLANs on switch
- do _not_ use LACP, let multipathd handle failovers

Same experience as Vinicius: what did _not_ work for me is the 
iSCSI-Bonding in OVirt. It seems to require that all storage IPs are 
reachable from all other IPs, which is not the case in every setup.


To get multipathing to work I use multipath directly:


https://www.mail-archive.com/users@ovirt.org/msg42735.html
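
A rough sketch of that kind of setup (interface names, IP addresses and the
IQN below are only placeholders): bind one iSCSI interface to each dedicated
storage NIC, log in once per path, and let multipathd do the rest.

  # one iscsi iface per dedicated storage NIC
  iscsiadm -m iface -I storage1 --op new
  iscsiadm -m iface -I storage1 --op update -n iface.net_ifacename -v ens1f0
  iscsiadm -m iface -I storage2 --op new
  iscsiadm -m iface -I storage2 --op update -n iface.net_ifacename -v ens1f1

  # discover and log in to the target once per path
  iscsiadm -m discovery -t sendtargets -p 192.168.101.101 -I storage1
  iscsiadm -m discovery -t sendtargets -p 192.168.102.102 -I storage2
  iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.example:target1 \
      -p 192.168.101.101 -I storage1 --login
  iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.example:target1 \
      -p 192.168.102.102 -I storage2 --login

  # the LUN should now show up once, with two active paths
  multipath -ll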


I will post a bonnie++ result later. If you need more details please let 
me know.


cu,
Uwe
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E2QKV7CZR27NT6MRSNL352KLOQ5OAGDR/


[ovirt-users] Re: need network design advice for iSCSI

2019-01-20 Thread John Florian
So just to make sure I follow:

  * I will want a distinct VLAN and IP address for each NIC acting as an
iSCSI initiator.
  * In the middle the switch would be configured as basic access ports
without any LACP.
  * Do I want the same for the target?  The QNAP docs say that for MPIO
I would want to use their port trunking feature and a single IP for
both NICs on that end, which confuses me as it seems to contradict
the idea of two (or more) completely independent channels. 

As for how to get there, whatever exactly that might look like, I'm also
having troubles figuring that out.  I figured I would transform the
setup described below into one where each host has:

  * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
  * 1 NIC for my 1st storage net
  * 1 NIC for my 2nd storage net

To get there though,  I need to remove the 4 existing logical storage
nets from my hosts, pull 2 NICs out of the existing bond and so on.  But
when I've attempted that, I get things into a funky state where the
hosts become non-operational because the old storage nets are
"required".  I unchecked that setting thinking that to be the right
path.  But I could never get much further towards the new setup because
the existing storage domain has all the old connections and I see no way
to "forget" them, at least through the engine -- I didn't try to fight
it behind its back with iscsiadm to do session logouts.  Somewhere in
all this mess I got into a Catch-22 where I couldn't do anything with
the old SD because no host was suitable and no host could be made
suitable because the SD couldn't be connected.  I tried all sorts of
things of varying levels of scariness but wound up putting things back
to present for now since I clearly need some further advice.
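
If I were to fight it behind its back, I gather the iscsiadm side would look
roughly like this on a host in maintenance (the IQN and portal are made up):

  # show what sessions the host currently has open
  iscsiadm -m session

  # log out of a stale portal and drop its node record so it isn't reused
  iscsiadm -m node -T iqn.2004-04.com.qnap:ts-569:iscsi.target \
      -p 192.168.103.103:3260 --logout
  iscsiadm -m node -T iqn.2004-04.com.qnap:ts-569:iscsi.target \
      -p 192.168.103.103:3260 --op delete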

One option that struck me as a possibility, but exceeded my risk
aversion threshold was to remove the storage domain entirely and create
a new one pointing to the same LUNs.  Is that what I need to do to
forget the old connections?  Is that safe to all my existing logical
disks, etc.?  Does the engine just see a group of LUNs with oVirt
"things" and magically reconstruct it all from what's there?  I'm
guessing that's the case because I have recreated an engine before and
know that all the critical bits live in the SD, but I just want to be
sure I don't commit to something really boneheaded.
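
As far as I can tell, the oVirt "things" on a block storage domain are an LVM
volume group named after the storage domain UUID, with the domain metadata
kept in special LVs on the LUNs themselves, which is why importing the same
LUNs can reconstruct the domain.  A quick way to peek from a host (the VG
name is whatever your SD UUID is):

  # the SD is a VG whose name is the SD UUID; each virtual disk is an LV in it
  vgs --noheadings -o vg_name,pv_name
  # expect special LVs such as metadata, ids, leases, inbox, outbox,
  # plus one LV per disk image
  lvs --noheadings -o lv_name,lv_size <sd-uuid>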

On 1/17/19 7:43 PM, Vinícius Ferrão wrote:
> MPIO by concept is when you have two dedicated paths for iSCSI.
>
> So you don’t put iSCSI inside LACP, because it won’t do the MPIO
> magic, since it’s the same path with a single IP.
>
> The right approach is two subnets, completely segregated without
> routing. You can use the same switch; it will not be redundant on the
> switch side, but it will be on the connections, and you have two paths
> with load balancing between them.
>
> But to be honest I never understood how oVirt handles MPIO. The iSCSI
> Multipath button in the interface requires that all endpoints, on
> different paths, be reachable, which doesn’t make sense to me. In the
> past I opened a ticket about this but simply gave up, and ended up
> using XenServer instead for this specific case, which I was trying to
> avoid.
>
> Sent from my iPhone
>
> On 17 Jan 2019, at 22:14, John Florian  wrote:
>
>> I have an existing 4.2 setup with 2 hosts, both with a quad-gbit NIC
>> and a QNAP TS-569 Pro NAS with twin gbit NIC and five 7k2 drives.  At
>> present, I have 5 VLANs, each with its own subnet:
>>
>>  1. my "main" net (VLAN 1, 172.16.7.0/24)
>>  2. ovirtmgmt (VLAN 100, 192.168.100.0/24)
>>  3. four storage nets (VLANs 101-104, 192.168.101.0/24 -
>> 192.168.104.0/24)
>>
>> On the NAS, I enslaved both NICs into a 802.3ad LAG and then bound an
>> IP address for each of the four storage nets giving me:
>>
>>   * bond0.101@bond0: 192.168.101.101
>>   * bond0.102@bond0: 192.168.102.102
>>   * bond0.103@bond0: 192.168.103.103
>>   * bond0.104@bond0: 192.168.104.104
>>
>> The hosts are similar, but with all four NICs enslaved into a 802.3ad
>> LAG:
>>
>> Host 1:
>>
>>   * bond0.101@bond0: 192.168.101.203
>>   * bond0.102@bond0: 192.168.102.203
>>   * bond0.103@bond0: 192.168.103.203
>>   * bond0.104@bond0: 192.168.104.203
>>
>> Host 2:
>>
>>   * bond0.101@bond0: 192.168.101.204
>>   * bond0.102@bond0: 192.168.102.204
>>   * bond0.103@bond0: 192.168.103.204
>>   * bond0.104@bond0: 192.168.104.204
>>
>> I believe my performance could be better though.  While running
>> bonnie++ on a VM, the NAS reports top disk throughput around 70MB/s
>> and the network (both NICs) topping out around 90MB/s.  I suspect I'm
>> being hurt by the load balancing across the NICs.  I've played with
>> various load balancing options for the LAGs (src-dst-ip and
>> src-dst-mac) but with little difference in effect.  Watching the
>> resource monitor on the NAS, I can see that one NIC almost exclusively
>> does transmits while the other almost exclusively receives.

[ovirt-users] Re: need network design advice for iSCSI

2019-01-17 Thread Vinícius Ferrão
MPIO by concept is when you have two dedicated paths for iSCSI.

So you don’t put iSCSI inside LACP, because it won’t do the MPIO magic, since 
it’s the same path with a single IP.

The right approach is two subnets, completely segregated without routing. You 
can use the same switch; it will not be redundant on the switch side, but it 
will be on the connections, and you have two paths with load balancing 
between them.
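
To illustrate the "load balancing between them" part on the multipath side: a
drop-in of roughly this shape keeps both paths active (the vendor/product
strings and values are only an example for a LIO target; oVirt's VDSM owns
/etc/multipath.conf itself, so local settings normally go under
/etc/multipath/conf.d/):

  # /etc/multipath/conf.d/iscsi-example.conf
  devices {
      device {
          vendor                "LIO-ORG"
          product               ".*"
          path_grouping_policy  "multibus"      # both paths in one active group
          path_selector         "round-robin 0" # spread I/O across the paths
          no_path_retry         16
      }
  }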

But to be honest I never understood how oVirt handles MPIO. The iSCSI Multipath 
button in the interface requires that all endpoints, on different paths, be 
reachable, which doesn’t make sense to me. In the past I opened a ticket about 
this but simply gave up, and ended up using XenServer instead for this specific 
case, which I was trying to avoid.

Sent from my iPhone

> On 17 Jan 2019, at 22:14, John Florian  wrote:
> 
> I have an existing 4.2 setup with 2 hosts, both with a quad-gbit NIC and a 
> QNAP TS-569 Pro NAS with twin gbit NIC and five 7k2 drives.  At present, I 
> have 5 VLANs, each with its own subnet:
> 
> my "main" net (VLAN 1, 172.16.7.0/24)
> ovirtmgmt (VLAN 100, 192.168.100.0/24)
> four storage nets (VLANs 101-104, 192.168.101.0/24 - 192.168.104.0/24)
> On the NAS, I enslaved both NICs into a 802.3ad LAG and then bound an IP 
> address for each of the four storage nets giving me:
> 
> bond0.101@bond0: 192.168.101.101
> bond0.102@bond0: 192.168.102.102
> bond0.103@bond0: 192.168.103.103
> bond0.104@bond0: 192.168.104.104
> The hosts are similar, but with all four NICs enslaved into a 802.3ad LAG:
> 
> Host 1:
> 
> bond0.101@bond0: 192.168.101.203
> bond0.102@bond0: 192.168.102.203
> bond0.103@bond0: 192.168.103.203
> bond0.104@bond0: 192.168.104.203
> Host 2:
> 
> bond0.101@bond0: 192.168.101.204
> bond0.102@bond0: 192.168.102.204
> bond0.103@bond0: 192.168.103.204
> bond0.104@bond0: 192.168.104.204
> I believe my performance could be better though.  While running bonnie++ on a 
> VM, the NAS reports top disk throughput around 70MB/s and the network (both 
> NICs) topping out around 90MB/s.  I suspect I'm being hurt by the load 
> balancing across the NICs.  I've played with various load balancing options 
> for the LAGs (src-dst-ip and src-dst-mac) but with little difference in 
> effect.  Watching the resource monitor on the NAS, I can see that one NIC 
> almost exclusively does transmits while the other almost exclusively 
> receives.  Here's the bonnie report (my apologies to those reading plain-text 
> here):
> 
> Version  1.97   --Sequential Output-- --Sequential Input- --Random-
>                 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> unamed       4G   267  97 75284  21 22775   8   718  97 43559   7 189.5   8
> Latency 69048us   754ms   898ms   61246us 311ms   1126ms
> Version  1.97   --Sequential Create-- Random Create
> unamed          -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>   files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>      16  6789  60 + +++ 24948  75 14792  86 + +++ 18163  51
> Latency 33937us 1132us  1299us  528us   22us    458us
> 
> 
> I keep seeing MPIO mentioned for iSCSI deployments and now I'm trying to get 
> my head around how to best set that up or to even know if it would be 
> helpful.  I only have one switch (a Catalyst 3750g) in this small setup so 
> fault tolerance at that level isn't a goal.
> 
> So... what would the recommendation be?  I've never done MPIO before but know 
> where it's at in the web UI at least.
> 
> -- 
> John Florian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RAHGWFI7W55LYNFWV6N5WFIXTCIGMSWO/