Re: [Gluster-users] JBOD / ZFS / Flash backed

2018-04-09 Thread Vincent Royer
> Is a flash-backed RAID required for JBOD, and should it be 1GB, 2GB, or 4GB flash? Is anyone able to clarify this requirement for me?

Re: [Gluster-users] JBOD / ZFS / Flash backed

2018-04-09 Thread Alex Chekholko
Your question is difficult to parse. Typically RAID and JBOD are mutually exclusive. By "flash-backed", do you mean a battery backup unit (BBU) on your RAID controller? On Mon, Apr 9, 2018 at 8:49 AM, Vincent Royer wrote: > >> Is a flash-backed Raid required for JBOD,

Re: [Gluster-users] JBOD / ZFS / Flash backed

2018-04-09 Thread Alex Crow
On 09/04/18 16:49, Vincent Royer wrote: Is a flash-backed Raid required for JBOD, and should it be 1gb, 2, or 4gb flash? RAID and JBOD are completely different things. JBODs are just that, bunches of disks, and they don't have any cache above them in hardware. If you're going to

Re: [Gluster-users] JBOD / ZFS / Flash backed

2018-04-09 Thread Vincent Royer
Thanks, I suppose what I'm trying to gain is some clarity on what choice is best for a given application. How do I know if it's better for me to use a RAID card or not, to include flash cache on it or not, to use ZFS or not, when combined with a small number of SSDs in Replica 3? On Mon, Apr

Re: [Gluster-users] ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")

2018-04-09 Thread Shyam Ranganathan
On 04/09/2018 04:48 AM, Marco Lorenzo Crociani wrote: On 06/04/2018 19:33, Shyam Ranganathan wrote: Hi, We postponed this and I did not announce this to the lists. The number of bugs fixed against 3.10.12 is low, and I decided to move this to the 30th of Apr instead. Is

Re: [Gluster-users] JBOD / ZFS / Flash backed

2018-04-09 Thread Alex Crow
On 09/04/18 19:02, Vincent Royer wrote: Thanks, I suppose what I'm trying to gain is some clarity on what choice is best for a given application.  How do I know if it's better for me to use a raid card or not, to include flash-cache on it or not, to use ZFS or not, when combined with a small

Re: [Gluster-users] JBOD / ZFS / Flash backed

2018-04-09 Thread Vincent Royer
Thanks, The 3 servers are new Lenovo units with redundant power supplies backed by two huge UPS units (one for each bank of power supplies). I think the chances of losing two nodes are incredibly slim, and in that case a disaster recovery from offsite backups would be reasonable. My requirements are about

Re: [Gluster-users] JBOD / ZFS / Flash backed

2018-04-09 Thread Alex Crow
On 09/04/18 19:00, Vincent Royer wrote: Yes, the flash-backed RAID cards use a super-capacitor to back up the flash cache. You have a choice of flash module sizes to include on the card. The card supports RAID modes as well as JBOD. I do not know if Gluster can make use of battery-backed

Re: [Gluster-users] JBOD / ZFS / Flash backed

2018-04-09 Thread Vincent Royer
Yes, the flash-backed RAID cards use a super-capacitor to back up the flash cache. You have a choice of flash module sizes to include on the card. The card supports RAID modes as well as JBOD. I do not know if Gluster can make use of battery-backed, flash-based cache when the disks are presented

[Gluster-users] Gluster cluster on two networks

2018-04-09 Thread Marcus Pedersén
Hi all! I have set up a replicated/distributed gluster cluster 2 x (2 + 1), CentOS 7 and gluster version 3.12.6 on the servers. All machines have two network interfaces and are connected to two different networks: 10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6) and 192.168.67.0/24
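[Editor's note: pinning Gluster to one of two networks is usually done by probing peers with names that resolve to the storage-network interfaces. A minimal sketch, assuming hypothetical hostnames and addresses not taken from the thread:

  # /etc/hosts on every node: map storage names to the 10.10 interfaces
  10.10.1.1   gluster1-storage
  10.10.1.2   gluster2-storage
  10.10.1.3   gluster3-storage
  # probe peers by those names so bricks are reachable on that network
  gluster peer probe gluster2-storage
  gluster peer probe gluster3-storage]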

Re: [Gluster-users] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs

2018-04-09 Thread Vlad Kopylov
You definitely need mount options in /etc/fstab; use the ones from here: http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html I went with using local mounts to achieve performance as well. Also, the 3.12 or 3.10 branches would be preferable for production. On Fri, Apr 6, 2018 at 4:12
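[Editor's note: a glusterfs entry in /etc/fstab carrying client-side mount options looks roughly like the line below; the volume name and mount point are hypothetical, and the option values are just the ones debated later in this thread:

  # /etc/fstab: FUSE-mount the volume at boot, after networking is up
  server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,negative-timeout=10,attribute-timeout=30  0 0]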

Re: [Gluster-users] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs

2018-04-09 Thread Artem Russakovskii
Hi Vlad, I'm using only localhost: mounts. Can you please explain what effect each option has on the performance issues shown in my posts? "negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5" From what I remember, direct-io-mode=enable didn't make a
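[Editor's note: those options map onto a manual FUSE mount along these lines; the localhost: mount matches what is described above, while the volume name and mount point are hypothetical:

  # mount the volume from the local server over FUSE with the options under discussion
  mount -t glusterfs -o negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5 localhost:/myvol /mnt/myvol]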

[Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
Hello, Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7, and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files
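[Editor's note: the check mentioned here is the standard heal-info query, run from any node; the volume name is hypothetical:

  # list entries pending heal, per brick
  gluster volume heal myvol info]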

Re: [Gluster-users] volume start: gv01: failed: Quorum not met. Volume operation not allowed.

2018-04-09 Thread Alex K
Hi, You need at least 3 nodes to have quorum enabled. In a 2-node setup you need to disable quorum so as to be able to still use the volume when one of the nodes goes down. On Mon, Apr 9, 2018, 09:02 TomK wrote: Hey All, In a two node glusterfs setup, with one node down,
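[Editor's note: disabling quorum for a two-node setup as suggested comes down to volume options like the following, shown here against the gv01 volume named in the thread subject; note this trades split-brain protection for availability:

  # allow the volume to stay up and writable with only one node present
  gluster volume set gv01 cluster.server-quorum-type none
  gluster volume set gv01 cluster.quorum-type none]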

Re: [Gluster-users] ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")

2018-04-09 Thread Marco Lorenzo Crociani
On 06/04/2018 19:33, Shyam Ranganathan wrote: Hi, We postponed this and I did not announce this to the lists. The number of bugs fixed against 3.10.12 is low, and I decided to move this to the 30th of Apr instead. Is there a specific fix that you are looking for in the release? Hi, yes,

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
Here would be also the corresponding log entries on a gluster node brick log file: [2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
As suggested in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the FUSE mount directly. The output is below: NODE1: STAT: File:
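[Editor's note: the commands in question are run against the file's path on each brick, then against the client mount; all paths below are hypothetical:

  # on each node, against the brick copy of the file
  stat /data/brick/myvol/path/to/file
  getfattr -d -m . -e hex /data/brick/myvol/path/to/file
  # then against the FUSE mount
  stat /mnt/myvol/path/to/file]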

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread Ravishankar N
On 04/09/2018 04:36 PM, mabi wrote: As suggested in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the FUSE mount directly. The output is below: NODE1: STAT: File:

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
Thanks again, that worked and I now have no more unsynced files. You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release, and as such I would not like to have to upgrade to 3.13. ‐‐‐ Original

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread Ravishankar N
On 04/09/2018 05:40 PM, mabi wrote: Thanks again, that worked and I now have no more unsynced files. You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release, and as such I would not like to have to

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread Ravishankar N
On 04/09/2018 05:09 PM, mabi wrote: Thanks Ravi for your answer. Stupid question, but how do I delete the trusted.afr xattrs on this brick? And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? Sorry, I should have been clearer. Yes, the brick on the
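[Editor's note: removing stale trusted.afr xattrs on the arbiter brick is typically done with setfattr against the brick path. The exact xattr names come from the earlier getfattr output, so the names and paths below are a hypothetical pattern, not values from the thread:

  # on node 3 (the arbiter), remove the afr xattrs flagged by getfattr
  setfattr -x trusted.afr.myvol-client-0 /data/brick/myvol/path/to/file
  setfattr -x trusted.afr.myvol-client-1 /data/brick/myvol/path/to/file]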

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
Thanks Ravi for your answer. Stupid question, but how do I delete the trusted.afr xattrs on this brick? And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? ‐‐‐ Original Message ‐‐‐ On April 9, 2018 1:24 PM, Ravishankar N

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread mabi
Thanks for clarifying that point. Does this mean that the fix for this bug will get backported to the next 3.12 release? ‐‐‐ Original Message ‐‐‐ On April 9, 2018 2:31 PM, Ravishankar N wrote: On 04/09/2018 05:54 PM, Dmitry Melekhov wrote:

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread Dmitry Melekhov
On 09.04.2018 16:18, Ravishankar N wrote: On 04/09/2018 05:40 PM, mabi wrote: Thanks again, that worked and I now have no more unsynced files. You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-04-09 Thread Ravishankar N
On 04/09/2018 05:54 PM, Dmitry Melekhov wrote: On 09.04.2018 16:18, Ravishankar N wrote: On 04/09/2018 05:40 PM, mabi wrote: Thanks again, that worked and I now have no more unsynced files. You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am

[Gluster-users] volume start: gv01: failed: Quorum not met. Volume operation not allowed.

2018-04-09 Thread TomK
Hey All, In a two-node glusterfs setup with one node down, I can't use the second node to mount the volume. I understand this is expected behaviour? Any way to allow the secondary node to function, then replicate what changed to the first (primary) node when it's back online? Or should I just go