>
>
> Is a flash-backed Raid required for JBOD, and should it be 1gb, 2, or 4gb
> flash?
>
Is anyone able to clarify this requirement for me?
Your question is difficult to parse. Typically RAID and JBOD are mutually
exclusive. By "flash-backed", do you mean a battery backup unit (BBU) on
your RAID controller?
On Mon, Apr 9, 2018 at 8:49 AM, Vincent Royer wrote:
>
>> Is a flash-backed Raid required for JBOD,
On 09/04/18 16:49, Vincent Royer wrote:
Is a flash-backed Raid required for JBOD, and should it be 1gb, 2,
or 4gb flash?
RAID and JBOD are completely different things. JBODs are just that,
bunches of disks, and they don't have any cache above them in hardware.
If you're going to
Thanks,
I suppose what I'm trying to gain is some clarity on which choice is best
for a given application. How do I know whether it's better for me to use a RAID
card or not, to include flash cache on it or not, or to use ZFS or not, when
combined with a small number of SSDs in Replica 3?
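(For what it's worth, a minimal sketch of what a plain Replica 3 volume on
non-RAID SSD bricks might look like; the hostnames, volume name and brick paths
below are placeholders, not taken from this thread:)

# assumption: one SSD per node, formatted as XFS and mounted at /data/brick1
gluster peer probe server2
gluster peer probe server3
gluster volume create myvol replica 3 \
  server1:/data/brick1/myvol \
  server2:/data/brick1/myvol \
  server3:/data/brick1/myvol
gluster volume start myvol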
On Mon, Apr
On 04/09/2018 04:48 AM, Marco Lorenzo Crociani wrote:
> On 06/04/2018 19:33, Shyam Ranganathan wrote:
>> Hi,
>>
>> We postponed this and I did not announce this to the lists. The number
>> of bugs fixed against 3.10.12 is low, and I decided to move this to the
>> 30th of Apr instead.
>>
>> Is
On 09/04/18 19:02, Vincent Royer wrote:
Thanks,
I suppose what I'm trying to gain is some clarity on which choice is
best for a given application. How do I know whether it's better for me to
use a RAID card or not, to include flash cache on it or not, or to use
ZFS or not, when combined with a small
Thanks,
The 3 servers are new Lenovo units with redundant PSUs backed by two huge UPS
units (one for each bank of power supplies). I think the chances of losing
two nodes are incredibly slim, and in that case a Disaster Recovery from
offsite backups would be reasonable.
My requirements are about
On 09/04/18 19:00, Vincent Royer wrote:
Yes the flash-backed RAID cards use a super-capacitor to backup the
flash cache. You have a choice of flash module sizes to include on
the card. The card supports RAID modes as well as JBOD.
I do not know if Gluster can make use of battery-backed
Yes the flash-backed RAID cards use a super-capacitor to backup the flash
cache. You have a choice of flash module sizes to include on the card.
The card supports RAID modes as well as JBOD.
I do not know if Gluster can make use of a battery-backed, flash-based cache
when the disks are presented
Hi all!
I have set up a replicated/distributed Gluster cluster, 2 x (2 + 1).
CentOS 7 and Gluster version 3.12.6 on the servers.
All machines have two network interfaces and are connected to two different
networks:
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24
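(Assuming the goal is to keep Gluster traffic on the 10.10.0.0/16 network, a
rough sketch of the /etc/hosts entries and peer probes involved; the hostnames
and addresses are placeholders and only three of the nodes are shown:)

# /etc/hosts on every node - names resolve to the storage network
10.10.0.1   gluster1-storage
10.10.0.2   gluster2-storage
10.10.0.3   gluster3-storage

# probe peers by the storage-network names so brick traffic stays on 10.10.0.0/16
gluster peer probe gluster2-storage
gluster peer probe gluster3-storage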
You definitely need mount options in /etc/fstab;
use the ones from here:
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
I went with using local mounts to achieve performance as well.
Also, 3.12 or 3.10 branches would be preferable for production
On Fri, Apr 6, 2018 at 4:12
Hi Vlad,
I'm using only localhost: mounts.
Can you please explain what effect each option has on the performance issues
shown in my posts?
"negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5"
From what I remember, direct-io-mode=enable didn't make a
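(For reference, those options would sit in an /etc/fstab entry roughly like
the line below; the volume name and mount point are placeholders:)

# fuse mount of a local gluster volume with the options quoted above
localhost:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5  0 0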
Hello,
Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter)
cluster to 3.12.7 and this morning I got a warning that 9 files on one of my
volumes are not synced. Indeed, checking that volume with a "volume heal info"
shows that the third node (the arbiter node) has 9 files
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need
to disable quorum so as to still be able to use the volume when one of the
nodes goes down.
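(A sketch of the volume options typically involved, with the volume name as a
placeholder; the usual caveat is that disabling quorum on a 2-node replica
opens the door to split-brain:)

# allow the volume to stay writable with only one node up
gluster volume set myvol cluster.server-quorum-type none
gluster volume set myvol cluster.quorum-type none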
On Mon, Apr 9, 2018, 09:02 TomK wrote:
> Hey All,
>
> In a two node glusterfs setup, with one node down,
On 06/04/2018 19:33, Shyam Ranganathan wrote:
Hi,
We postponed this and I did not announce this to the lists. The number
of bugs fixed against 3.10.12 is low, and I decided to move this to the
30th of Apr instead.
Is there a specific fix that you are looking for in the release?
Hi,
yes,
Here are also the corresponding log entries from a gluster node's brick log
file:
[2018-04-09 06:58:47.363536] W [MSGID: 113093]
[posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix:
removing gfid2path xattr failed on
As was suggested to me in the past on this mailing list, I now ran a stat and
getfattr on one of the problematic files on all nodes, and at the end a stat on
the fuse mount directly. The output is below:
NODE1:
STAT:
File:
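(For reference, a sketch of the commands used to gather the state above on
each node; the brick path and file name are placeholders:)

# run on each brick, then once more against the fuse mount
stat /data/brick1/myvol/path/to/problem-file
getfattr -d -m . -e hex /data/brick1/myvol/path/to/problem-file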
On 04/09/2018 04:36 PM, mabi wrote:
As was suggested to me in the past on this mailing list, I now ran a stat and
getfattr on one of the problematic files on all nodes, and at the end a stat on
the fuse mount directly. The output is below:
NODE1:
STAT:
File:
Again, thanks, that worked and I now have no more unsynced files.
You mentioned that this bug has been fixed in 3.13; would it be possible to
backport it to 3.12? I am asking because 3.13 is not a long-term release and as
such I would not like to have to upgrade to 3.13.
‐‐‐ Original
On 04/09/2018 05:40 PM, mabi wrote:
Again, thanks, that worked and I now have no more unsynced files.
You mentioned that this bug has been fixed in 3.13; would it be possible to
backport it to 3.12? I am asking because 3.13 is not a long-term release and as
such I would not like to have to
On 04/09/2018 05:09 PM, mabi wrote:
Thanks Ravi for your answer.
Stupid question but how do I delete the trusted.afr xattrs on this brick?
And when you say "this brick", do you mean the brick on the arbiter node (node
3 in my case)?
Sorry I should have been clearer. Yes the brick on the
Thanks Ravi for your answer.
Stupid question but how do I delete the trusted.afr xattrs on this brick?
And when you say "this brick", do you mean the brick on the arbiter node (node
3 in my case)?
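(For what it's worth, a sketch of how such an xattr is usually removed with
setfattr directly on the brick; the xattr name, volume name and path below are
placeholders, so check the real names with getfattr first:)

# list the afr xattrs on the file as seen on the arbiter brick
getfattr -d -m trusted.afr -e hex /data/brick1/myvol/path/to/file
# remove the stale trusted.afr xattr (name taken from the output above)
setfattr -x trusted.afr.myvol-client-0 /data/brick1/myvol/path/to/file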
‐‐‐ Original Message ‐‐‐
On April 9, 2018 1:24 PM, Ravishankar N
Thanks for clarifying that point. Does this mean that the fix for this bug will
get backported to the next 3.12 release?
‐‐‐ Original Message ‐‐‐
On April 9, 2018 2:31 PM, Ravishankar N wrote:
>
>
> On 04/09/2018 05:54 PM, Dmitry Melekhov wrote:
>
> >
09.04.2018 16:18, Ravishankar N wrote:
On 04/09/2018 05:40 PM, mabi wrote:
Again, thanks, that worked and I now have no more unsynced files.
You mentioned that this bug has been fixed in 3.13; would it be
possible to backport it to 3.12? I am asking because 3.13 is not a
long-term release
On 04/09/2018 05:54 PM, Dmitry Melekhov wrote:
09.04.2018 16:18, Ravishankar N wrote:
On 04/09/2018 05:40 PM, mabi wrote:
Again, thanks, that worked and I now have no more unsynced files.
You mentioned that this bug has been fixed in 3.13; would it be
possible to backport it to 3.12? I am
Hey All,
In a two-node glusterfs setup, with one node down, I can't use the second
node to mount the volume. I understand this is expected behaviour?
Any way to allow the secondary node to function and then replicate what
changed to the first (primary) when it's back online? Or should I just
go
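(As an aside on the mount side, the client can be pointed at a backup volfile
server so it can still fetch the volume file when one node is down; a sketch
with placeholder names, noting that quorum settings still decide whether the
volume stays writable:)

# /etc/fstab entry that falls back to node2 for the volfile if node1 is down
node1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=node2  0 0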