On Tue, Apr 18, 2017 at 9:57 PM, Bryan Sockel <[email protected]> wrote:
> Was reading over this post to the group about storage options. I am more
> of a Windows guy as opposed to a Linux guy, but am learning quickly and
> had a question. You said that LACP will not provide extra bandwidth
> (especially with NFS). Does the same hold true with GlusterFS? We are
> currently using GlusterFS for the file replication piece. Does GlusterFS
> take advantage of any multipathing?
>
> Thanks

I'd expect Gluster to take advantage of LACP, as it has replication to
multiple peers (as opposed to NFS). See [1].
Y.

[1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/

> -----Original Message-----
> From: Yaniv Kaul <[email protected]>
> To: Charles Tassell <[email protected]>
> Cc: users <[email protected]>
> Date: Sun, 26 Mar 2017 10:40:00 +0300
> Subject: Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?
>
> On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell <[email protected]>
> wrote:
>>
>> Hi Everyone,
>>
>> I'm about to set up an oVirt cluster with two hosts hitting a Linux
>> storage server. Since the Linux box can provide the storage in pretty
>> much any form, I'm wondering which option is "best." Our primary focus
>> is on reliability, with performance being a close second. Since we will
>> only be using a single storage server, I was thinking NFS would probably
>> beat out GlusterFS, and that NFSv4 would be a better choice than NFSv3.
>> I had assumed that iSCSI would be better performance-wise, but from what
>> I'm seeing online that might not be the case.
>
> NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD
> support, which is nice.
> Gluster probably requires 3 servers.
> In most cases, I don't think people see a difference in performance
> between NFS and iSCSI. The theory is that block storage is faster, but
> in practice most don't reach the limits where it really matters.
>
>> Our servers will be using a 1G network backbone for regular traffic and
>> a dedicated 10G backbone with LACP for redundancy and extra bandwidth
>> for storage traffic, if that makes a difference.
>
> LACP often (especially with NFS) does not provide extra bandwidth, as
> the (single) NFS connection tends to be sticky to a single physical
> link. It's one of the reasons I personally prefer iSCSI with
> multipathing.
>
>> I'll probably try to do some performance benchmarks with 2-3 options,
>> but the reliability issue is a little harder to test for. Has anyone had
>> any particularly bad experiences with a particular storage option? We
>> have been using iSCSI with a Dell MD3x00 SAN and have run into a bunch
>> of issues with the multipath setup, but that won't be a problem with the
>> new SAN since it's only got a single controller interface.
>
> A single controller is not very reliable. If reliability is your primary
> concern, I suggest ensuring there is no single point of failure - or at
> least being aware of all of them (does the storage server have a
> redundant power supply? connected to two power sources? Of course in
> some scenarios that's overkill and perhaps not practical, but you should
> be aware of your weak spots).
>
> I'd stick with what you are most comfortable managing - creating,
> backing up, extending, verifying health, etc.
> Y.
>
>> _______________________________________________
>> Users mailing list
>> [email protected]
>> http://lists.ovirt.org/mailman/listinfo/users
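For anyone setting this up, here is a rough sketch of an 802.3ad (LACP)
bond for the Gluster storage NICs, assuming a RHEL/CentOS host managed by
NetworkManager with two 10G interfaces named ens1f0/ens1f1 (interface
names and addresses are placeholders, and the switch ports must be
configured for LACP as well):

    # Create the bond; layer3+4 hashing lets different TCP connections
    # (e.g. Gluster replication to several peers) use different links.
    nmcli con add type bond ifname bond0 con-name bond0 \
        bond.options "mode=802.3ad,miimon=100,xmit_hash_policy=layer3+4"
    # Enslave the two (assumed) 10G interfaces.
    nmcli con add type ethernet ifname ens1f0 master bond0
    nmcli con add type ethernet ifname ens1f1 master bond0
    # Static address on the storage network (placeholder).
    nmcli con mod bond0 ipv4.method manual ipv4.addresses 10.0.0.11/24
    nmcli con up bond0

As noted above, a single NFS or iSCSI session will still ride one
physical link; the hashing only helps when there are multiple
connections, which is the case with Gluster talking to several peers.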
_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users

