Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-19 Thread Yaniv Kaul
On Wed, Apr 19, 2017 at 5:07 PM, Bryan Sockel <bryan.soc...@altn.com> wrote:

> Thank you for the information. I did check my servers this morning; in
> total I have 4 servers configured as part of my oVirt deployment: two
> virtualization servers and 2 gluster servers, with one of the
> virtualization servers acting as the arbiter for my gluster replicated storage.
>
> From what I can see on my 2 dedicated gluster boxes, I see traffic going
> out over multiple links.  On both of my virtualization hosts I am seeing
> all traffic go out via em1, and no traffic going out over the other
> interfaces.  All four interfaces are configured in a single bond as 802.3ad
> on both hosts, with my logical networks attached to the bond.
>

The balancing is based on a hash of either the L2+L3 or the L3+L4 headers. It
may well be that both connections end up with the same hash and therefore go
through the same link.
Y.
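To make the hashing concrete, here is a simplified sketch of how a layer3+4
transmit policy picks a slave link. This is not the kernel bonding driver's
exact function, and the addresses and ports are made up; it only illustrates
why two distinct flows can still collide onto the same physical link:

```python
# Simplified sketch of an 802.3ad layer3+4 transmit hash.
# The real bonding driver uses a more involved function; this only
# illustrates why distinct flows can map to the same slave link.
import ipaddress


def xmit_hash_l34(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                  n_slaves: int) -> int:
    """Pick a slave index from the IP addresses and TCP/UDP ports."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    h = (s ^ d) ^ (src_port ^ dst_port)
    # Fold the hash down and pick one of the bonded links.
    h ^= h >> 16
    h ^= h >> 8
    return h % n_slaves


# Two flows between the same pair of hosts: with layer2 hashing both
# would always share one link; with layer3+4 the ports can separate
# them, but nothing guarantees it -- collisions are still possible.
flow_a = xmit_hash_l34("192.0.2.10", "192.0.2.20", 40001, 3260, 4)
flow_b = xmit_hash_l34("192.0.2.10", "192.0.2.20", 40002, 3260, 4)
print(flow_a, flow_b)
```

A single flow always produces the same hash, which is why one NFS connection
stays pinned to one link no matter how many slaves are in the bond.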


>
>
>
> -----Original Message-----
> From: Yaniv Kaul <yk...@redhat.com>
> To: Bryan Sockel <bryan.soc...@altn.com>
> Cc: users <users@ovirt.org>
> Date: Wed, 19 Apr 2017 10:41:40 +0300
> Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
>
>
>
> On Tue, Apr 18, 2017 at 9:57 PM, Bryan Sockel <bryan.soc...@altn.com>
> wrote:
>>
>> Was reading over this post to the group about storage options.  I am more
>> of a Windows guy as opposed to a Linux guy, but am learning quickly and had
>> a question.  You said that LACP will not provide extra bandwidth
>> (especially with NFS).  Does the same hold true with GlusterFS?  We are
>> currently using GlusterFS for the file replication piece.  Does GlusterFS
>> take advantage of any multipathing?
>>
>> Thanks
>>
>>
>
> I'd expect Gluster to take advantage of LACP, as it has replication to
> multiple peers (as opposed to NFS). See [1].
> Y.
>
> [1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/
>
>>
>>
>> -----Original Message-----
>> From: Yaniv Kaul <yk...@redhat.com>
>> To: Charles Tassell <ctass...@gmail.com>
>> Cc: users <users@ovirt.org>
>> Date: Sun, 26 Mar 2017 10:40:00 +0300
>> Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
>>
>>
>>
>> On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell <ctass...@gmail.com>
>> wrote:
>>>
>>> Hi Everyone,
>>>
>>>   I'm about to set up an oVirt cluster with two hosts hitting a Linux
>>> storage server.  Since the Linux box can provide the storage in pretty much
>>> any form, I'm wondering which option is "best." Our primary focus is on
>>> reliability, with performance being a close second.  Since we will only be
>>> using a single storage server I was thinking NFS would probably beat out
>>> GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had
>>> assumed that iSCSI would be better performance-wise, but from what I'm
>>> seeing online that might not be the case.
>>
>>
>> NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD
>> support, which is nice.
>> Gluster probably requires 3 servers.
>> In most cases, I don't think people see the difference in performance
>> between NFS and iSCSI. The theory is that block storage is faster, but in
>> practice, most workloads never reach the limits where it really matters.
>>
>>
>>>
>>>   Our servers will be using a 1G network backbone for regular traffic
>>> and a dedicated 10G backbone with LACP for redundancy and extra bandwidth
>>> for storage traffic if that makes a difference.
>>
>>
>> LACP often (especially with NFS) does not provide extra bandwidth, as
>> the (single) NFS connection tends to be sticky to a single physical link.
>> It's one of the reasons I personally prefer iSCSI with multipathing.
>>
>>
>>>
>>>   I'll probably try to do some performance benchmarks with 2-3 options,
>>> but the reliability issue is a little harder to test for.  Has anyone had
>>> any particularly bad experiences with a particular storage option?  We have
>>> been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues
>>> with the multipath setup, but that won't be a problem with the new SAN
>>> since it's only got a single controller interface.
>>
>>
>> A single controller is not very reliable. If reliability is your primary
>> concern, I suggest ensuring there is no single point of failure - or at
>> least you are aware of all of them (does the storage server have redundant
>> power supplies, connected to two power sources? Of course, in some scenarios
>> it's overkill and perhaps not practical, but you should be aware of your weak
>> spots).
>>
>> I'd stick with what you are most comfortable managing - creating, backing
>> up, extending, verifying health, etc.
>> Y.
>>
>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>


Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-19 Thread Bryan Sockel
Thank you for the information. I did check my servers this morning; in total
I have 4 servers configured as part of my oVirt deployment: two
virtualization servers and 2 gluster servers, with one of the virtualization
servers acting as the arbiter for my gluster replicated storage.

From what I can see on my 2 dedicated gluster boxes, I see traffic going out
over multiple links.  On both of my virtualization hosts I am seeing all
traffic go out via em1, and no traffic going out over the other interfaces.
All four interfaces are configured in a single bond as 802.3ad on both hosts,
with my logical networks attached to the bond.
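For anyone checking the same thing, these are the usual places to look on a
Linux host. The bond and interface names (bond0, em1) are examples from this
thread; adjust them to your own setup:

```
# Inspect the bond state and per-slave traffic:
#
#   cat /proc/net/bonding/bond0     # shows the mode (802.3ad) and the
#                                   # "Transmit Hash Policy", e.g. layer2
#                                   # (the default) vs layer3+4
#   ip -s link show em1             # per-interface byte/packet counters
#
# On recent kernels the hash policy can be changed via sysfs (it may
# require the bond to be reconfigured or brought down first):
#
#   echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
```

With the default layer2 policy, all traffic between two hosts shares one MAC
pair and therefore one link, which matches the em1-only behavior described
above.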


-----Original Message-----
From: Yaniv Kaul <yk...@redhat.com>
To: Bryan Sockel <bryan.soc...@altn.com>
Cc: users <users@ovirt.org>
Date: Wed, 19 Apr 2017 10:41:40 +0300
Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?



On Tue, Apr 18, 2017 at 9:57 PM, Bryan Sockel <bryan.soc...@altn.com> wrote:
Was reading over this post to the group about storage options.  I am more of
a Windows guy as opposed to a Linux guy, but am learning quickly and had a
question.  You said that LACP will not provide extra bandwidth (especially
with NFS).  Does the same hold true with GlusterFS?  We are currently using
GlusterFS for the file replication piece.  Does GlusterFS take advantage of
any multipathing?

Thanks


I'd expect Gluster to take advantage of LACP, as it has replication to 
multiple peers (as opposed to NFS). See [1].
Y.

[1] 
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/
 


-----Original Message-----
From: Yaniv Kaul <yk...@redhat.com>
To: Charles Tassell <ctass...@gmail.com>
Cc: users <users@ovirt.org>
Date: Sun, 26 Mar 2017 10:40:00 +0300
Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?



On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell <ctass...@gmail.com> wrote:
Hi Everyone,

  I'm about to set up an oVirt cluster with two hosts hitting a Linux storage
server.  Since the Linux box can provide the storage in pretty much any 
form, I'm wondering which option is "best." Our primary focus is on 
reliability, with performance being a close second.  Since we will only be 
using a single storage server I was thinking NFS would probably beat out 
GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had 
assumed that iSCSI would be better performance-wise, but from what I'm
seeing online that might not be the case.

NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD support, 
which is nice.
Gluster probably requires 3 servers.
In most cases, I don't think people see the difference in performance
between NFS and iSCSI. The theory is that block storage is faster, but in
practice, most workloads never reach the limits where it really matters.
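As a concrete (hedged) illustration of the DISCARD point: the server name,
export, and mount point below are placeholders, and options beyond vers=4.2
are a matter of taste:

```
# /etc/fstab -- force NFS 4.2 so sparse disk images on the storage
# domain support server-side DEALLOCATE (what shows up as DISCARD):
#
#   storage.example.com:/export/vmstore  /mnt/vmstore  nfs  vers=4.2  0  0
#
# After mounting, confirm the negotiated version:
#
#   mount | grep vmstore    # the options should include vers=4.2
```

With NFS 3 the same mount works, but blocks freed inside a guest cannot be
returned to the server, so thin-provisioned images only ever grow.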


  Our servers will be using a 1G network backbone for regular traffic and a 
dedicated 10G backbone with LACP for redundancy and extra bandwidth for 
storage traffic if that makes a difference.

LACP often (especially with NFS) does not provide extra bandwidth, as the
(single) NFS connection tends to be sticky to a single physical link.
It's one of the reasons I personally prefer iSCSI with multipathing.


  I'll probably try to do some performance benchmarks with 2-3 options, but 
the reliability issue is a little harder to test for.  Has anyone had any 
particularly bad experiences with a particular storage option?  We have been 
using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues with 
the multipath setup, but that won't be a problem with the new SAN since it's 
only got a single controller interface.

A single controller is not very reliable. If reliability is your primary 
concern, I suggest ensuring there is no single point of failure - or at 
> least you are aware of all of them (does the storage server have redundant
> power supplies, connected to two power sources? Of course, in some scenarios
> it's overkill and perhaps not practical, but you should be aware of your weak
spots).

I'd stick with what you are most comfortable managing - creating, backing 
up, extending, verifying health, etc.
Y.





Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-19 Thread Yaniv Kaul
On Tue, Apr 18, 2017 at 9:57 PM, Bryan Sockel <bryan.soc...@altn.com> wrote:

> Was reading over this post to the group about storage options.  I am more
> of a Windows guy as opposed to a Linux guy, but am learning quickly and had
> a question.  You said that LACP will not provide extra bandwidth
> (especially with NFS).  Does the same hold true with GlusterFS?  We are
> currently using GlusterFS for the file replication piece.  Does GlusterFS
> take advantage of any multipathing?
>
> Thanks
>
>

I'd expect Gluster to take advantage of LACP, as it has replication to
multiple peers (as opposed to NFS). See [1].
Y.

[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/


>
>
> -----Original Message-----
> From: Yaniv Kaul <yk...@redhat.com>
> To: Charles Tassell <ctass...@gmail.com>
> Cc: users <users@ovirt.org>
> Date: Sun, 26 Mar 2017 10:40:00 +0300
> Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
>
>
>
> On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell <ctass...@gmail.com>
> wrote:
>>
>> Hi Everyone,
>>
>>   I'm about to set up an oVirt cluster with two hosts hitting a Linux
>> storage server.  Since the Linux box can provide the storage in pretty much
>> any form, I'm wondering which option is "best." Our primary focus is on
>> reliability, with performance being a close second.  Since we will only be
>> using a single storage server I was thinking NFS would probably beat out
>> GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had
>> assumed that iSCSI would be better performance-wise, but from what I'm
>> seeing online that might not be the case.
>
>
> NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD support,
> which is nice.
> Gluster probably requires 3 servers.
> In most cases, I don't think people see the difference in performance
> between NFS and iSCSI. The theory is that block storage is faster, but in
> practice, most workloads never reach the limits where it really matters.
>
>
>>
>>   Our servers will be using a 1G network backbone for regular traffic and
>> a dedicated 10G backbone with LACP for redundancy and extra bandwidth for
>> storage traffic if that makes a difference.
>
>
> LACP often (especially with NFS) does not provide extra bandwidth, as
> the (single) NFS connection tends to be sticky to a single physical link.
> It's one of the reasons I personally prefer iSCSI with multipathing.
>
>
>>
>>   I'll probably try to do some performance benchmarks with 2-3 options,
>> but the reliability issue is a little harder to test for.  Has anyone had
>> any particularly bad experiences with a particular storage option?  We have
>> been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues
>> with the multipath setup, but that won't be a problem with the new SAN
>> since it's only got a single controller interface.
>
>
> A single controller is not very reliable. If reliability is your primary
> concern, I suggest ensuring there is no single point of failure - or at
> least you are aware of all of them (does the storage server have redundant
> power supplies, connected to two power sources? Of course, in some scenarios
> it's overkill and perhaps not practical, but you should be aware of your weak
> spots).
>
> I'd stick with what you are most comfortable managing - creating, backing
> up, extending, verifying health, etc.
> Y.
>
>
>>
>>


Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-18 Thread Bryan Sockel
Was reading over this post to the group about storage options.  I am more of
a Windows guy as opposed to a Linux guy, but am learning quickly and had a
question.  You said that LACP will not provide extra bandwidth (especially
with NFS).  Does the same hold true with GlusterFS?  We are currently using
GlusterFS for the file replication piece.  Does GlusterFS take advantage of
any multipathing?

Thanks


-----Original Message-----
From: Yaniv Kaul <yk...@redhat.com>
To: Charles Tassell <ctass...@gmail.com>
Cc: users <users@ovirt.org>
Date: Sun, 26 Mar 2017 10:40:00 +0300
Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?



On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell <ctass...@gmail.com> wrote:
Hi Everyone,

  I'm about to set up an oVirt cluster with two hosts hitting a Linux storage
server.  Since the Linux box can provide the storage in pretty much any 
form, I'm wondering which option is "best." Our primary focus is on 
reliability, with performance being a close second.  Since we will only be 
using a single storage server I was thinking NFS would probably beat out 
GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had 
assumed that iSCSI would be better performance-wise, but from what I'm
seeing online that might not be the case.

NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD support, 
which is nice.
Gluster probably requires 3 servers.
In most cases, I don't think people see the difference in performance
between NFS and iSCSI. The theory is that block storage is faster, but in
practice, most workloads never reach the limits where it really matters.


  Our servers will be using a 1G network backbone for regular traffic and a 
dedicated 10G backbone with LACP for redundancy and extra bandwidth for 
storage traffic if that makes a difference.

LACP often (especially with NFS) does not provide extra bandwidth, as the
(single) NFS connection tends to be sticky to a single physical link.
It's one of the reasons I personally prefer iSCSI with multipathing.


  I'll probably try to do some performance benchmarks with 2-3 options, but 
the reliability issue is a little harder to test for.  Has anyone had any 
particularly bad experiences with a particular storage option?  We have been 
using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues with 
the multipath setup, but that won't be a problem with the new SAN since it's 
only got a single controller interface.

A single controller is not very reliable. If reliability is your primary 
concern, I suggest ensuring there is no single point of failure - or at 
least you are aware of all of them (does the storage server have redundant
power supplies, connected to two power sources? Of course, in some scenarios
it's overkill and perhaps not practical, but you should be aware of your weak
spots).

I'd stick with what you are most comfortable managing - creating, backing 
up, extending, verifying health, etc.
Y.





Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-02 Thread Marcin Kruk
No. You have to edit vdsm.conf when:
1) the link is broken and it points to the iSCSI target IP, and
2) you want to reboot your host or restart VDSM.
I don't know why, but VDSM tries to connect to the target IP during startup;
in my opinion it should use the /var/lib/iscsi configuration which was set
previously.

I also had the "Device is not on preferred path" problem, but I edited the
multipath.conf file and set the round-robin algorithm, because
multipath.conf was changed during installation.

If you want to get the right configuration for your array, execute:
1) multipathd -k   # enter the interactive multipathd console
2) show config     # find the proper configuration for your array
3) modify multipath.conf and put in the above configuration.
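For illustration only, a stanza of the kind step 3 produces might look like
the following. The vendor/product strings and option values here are
placeholders; they must come from what `show config` reports for the actual
array:

```
# Hypothetical /etc/multipath.conf fragment (values must match what
# `show config` shows for your specific array -- do not copy blindly):
devices {
    device {
        vendor                "DELL"
        product               "MD3xxx"
        path_grouping_policy  group_by_prio
        path_selector         "round-robin 0"   # spread I/O across paths
        failback              immediate
        no_path_retry         30
    }
}
```

After editing, reload with `systemctl reload multipathd` (or the equivalent
on your distribution) and check the result with `multipath -ll`.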


Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-03-28 Thread Charles Tassell

Hi Marcin,

  Hmm, so if you are using multipath with VDSM you have to manually
edit the vdsm.conf file to put in the right IP every time the active
controller switches?  That sort of defeats the purpose of multipath.
That was the issue we were having: we'd spin up another host, it would
connect to the SAN, which would then rebalance the disks among
controllers, and all our other hosts would lose their connection to the
active controller and pause all of their VMs.  It's the "Device is not on
preferred path" issue that is common on the MD3x00 line.  We had the
same errors with VMware, but VMware was able to automatically switch to
the active path.


On 2017-03-26 05:42 PM, Marcin Kruk wrote:
But on the Dell MD32x00 you have got two controllers. The trick is
that you have to sustain links to both controllers, so the best option
is to use multipath, as Yaniv said. Otherwise you get error
notifications from the array.

The problem is with the iSCSI target.
After a server reboot, VDSM tries to connect to the target which was
previously set, but it could be inactive.
So in that case you have to remember to edit the configuration in
vdsm.conf, because vdsm.conf does not accept a target with multiple IP addresses.


2017-03-26 9:40 GMT+02:00 Yaniv Kaul wrote:




On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell wrote:

Hi Everyone,

  I'm about to set up an oVirt cluster with two hosts hitting a
Linux storage server.  Since the Linux box can provide the
storage in pretty much any form, I'm wondering which option is
"best." Our primary focus is on reliability, with performance
being a close second.  Since we will only be using a single
storage server I was thinking NFS would probably beat out
GlusterFS, and that NFSv4 would be a better choice than
NFSv3.  I had assumed that iSCSI would be better
performance-wise, but from what I'm seeing online that might
not be the case.


NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD
support, which is nice.
Gluster probably requires 3 servers.
In most cases, I don't think people see the difference in
performance between NFS and iSCSI. The theory is that block
storage is faster, but in practice, most workloads never reach
the limits where it really matters.


  Our servers will be using a 1G network backbone for regular
traffic and a dedicated 10G backbone with LACP for redundancy
and extra bandwidth for storage traffic if that makes a
difference.


LACP often (especially with NFS) does not provide extra
bandwidth, as the (single) NFS connection tends to be sticky to a
single physical link.
It's one of the reasons I personally prefer iSCSI with multipathing.


  I'll probably try to do some performance benchmarks with 2-3
options, but the reliability issue is a little harder to test
for.  Has anyone had any particularly bad experiences with a
particular storage option?  We have been using iSCSI with a
Dell MD3x00 SAN and have run into a bunch of issues with the
multipath setup, but that won't be a problem with the new SAN
since it's only got a single controller interface.


A single controller is not very reliable. If reliability is your
primary concern, I suggest ensuring there is no single point of
failure - or at least you are aware of all of them (does the
storage server have redundant power supplies, connected to two
power sources? Of course, in some scenarios it's overkill and perhaps not
practical, but you should be aware of your weak spots).

I'd stick with what you are most comfortable managing - creating,
backing up, extending, verifying health, etc.
Y.





Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-03-26 Thread Marcin Kruk
But on the Dell MD32x00 you have got two controllers. The trick is that you
have to sustain links to both controllers, so the best option is to use
multipath, as Yaniv said. Otherwise you get error notifications from the
array.
The problem is with the iSCSI target.
After a server reboot, VDSM tries to connect to the target which was
previously set, but it could be inactive.
So in that case you have to remember to edit the configuration in vdsm.conf,
because vdsm.conf does not accept a target with multiple IP addresses.

2017-03-26 9:40 GMT+02:00 Yaniv Kaul :

>
>
> On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell 
> wrote:
>
>> Hi Everyone,
>>
>>   I'm about to set up an oVirt cluster with two hosts hitting a Linux
>> storage server.  Since the Linux box can provide the storage in pretty much
>> any form, I'm wondering which option is "best." Our primary focus is on
>> reliability, with performance being a close second.  Since we will only be
>> using a single storage server I was thinking NFS would probably beat out
>> GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had
>> assumed that iSCSI would be better performance-wise, but from what I'm
>> seeing online that might not be the case.
>>
>
> NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD support,
> which is nice.
> Gluster probably requires 3 servers.
> In most cases, I don't think people see the difference in performance
> between NFS and iSCSI. The theory is that block storage is faster, but in
> practice, most workloads never reach the limits where it really matters.
>
>
>>
>>   Our servers will be using a 1G network backbone for regular traffic and
>> a dedicated 10G backbone with LACP for redundancy and extra bandwidth for
>> storage traffic if that makes a difference.
>>
>
> LACP often (especially with NFS) does not provide extra bandwidth, as
> the (single) NFS connection tends to be sticky to a single physical link.
> It's one of the reasons I personally prefer iSCSI with multipathing.
>
>
>>
>>   I'll probably try to do some performance benchmarks with 2-3 options,
>> but the reliability issue is a little harder to test for.  Has anyone had
>> any particularly bad experiences with a particular storage option?  We have
>> been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues
>> with the multipath setup, but that won't be a problem with the new SAN
>> since it's only got a single controller interface.
>>
>
> A single controller is not very reliable. If reliability is your primary
> concern, I suggest ensuring there is no single point of failure - or at
> least you are aware of all of them (does the storage server have redundant
> power supplies, connected to two power sources? Of course, in some scenarios
> it's overkill and perhaps not practical, but you should be aware of your weak
> spots).
>
> I'd stick with what you are most comfortable managing - creating, backing
> up, extending, verifying health, etc.
> Y.
>
>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-03-25 Thread Charles Tassell

Hi Everyone,

  I'm about to set up an oVirt cluster with two hosts hitting a Linux
storage server.  Since the Linux box can provide the storage in pretty 
much any form, I'm wondering which option is "best." Our primary focus 
is on reliability, with performance being a close second.  Since we will 
only be using a single storage server I was thinking NFS would probably 
beat out GlusterFS, and that NFSv4 would be a better choice than NFSv3.  
I had assumed that iSCSI would be better performance-wise, but from
what I'm seeing online that might not be the case.


  Our servers will be using a 1G network backbone for regular traffic 
and a dedicated 10G backbone with LACP for redundancy and extra 
bandwidth for storage traffic if that makes a difference.


  I'll probably try to do some performance benchmarks with 2-3 options, 
but the reliability issue is a little harder to test for.  Has anyone 
had any particularly bad experiences with a particular storage option?  
We have been using iSCSI with a Dell MD3x00 SAN and have run into a 
bunch of issues with the multipath setup, but that won't be a problem 
with the new SAN since it's only got a single controller interface.


