Hey,
Can you give us the volume info output for this volume?
Why are you not able to get the xattrs from the arbiter brick? It is done
the same way as on the data bricks.
The changelog xattrs are named trusted.afr.virt_images-client-{1,2,3} in
the getxattr outputs you have provided.
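For reference, a typical way to dump those xattrs directly on a brick
(arbiter included) is something like the following; the brick and file
paths here are only placeholders:
# getfattr -d -m . -e hex /data/virt_images/brick/path/to/file.img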
Did you do a
Here is the process for resolving split brain on replica 2:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
It should be pretty much the same for replica 3; you change the xattrs with
something like:
# setfattr
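As a rough sketch (the volume name, client index, and file path below are
only examples; take the actual ones from your own getfattr output),
resetting the changelog xattr on the brick holding the bad copy would look
like:
# setfattr -n trusted.afr.virt_images-client-1 \
    -v 0x000000000000000000000000 /data/virt_images/brick/path/to/file.img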
You can try kicking off a client side heal by running:
ls -laR /your-gluster-mount/*
Sometimes, when I see just the GFID instead of the file name, I have found
that if I stat the file, the name shows up in heal info.
Before running that make sure that you don't have any split brain files:
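Assuming the volume name used elsewhere in this thread, that check would be
something like:
# gluster volume heal virt_images info split-brain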
Hi,
I have the following volume:
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: virt3:/data/virt_images/brick
Brick2: virt2:/data/virt_images/brick
I definitely prefer replica 2 arbiter 1.
It makes more sense and is more accurate, since that scenario has only
two copies of the actual data.
-wk
On 12/20/2017 2:14 AM, Ravishankar N wrote:
Hi,
The existing syntax in the gluster CLI for creating arbiter volumes is
`gluster volume
I think the replica 2 arbiter 1 form is clearer about the intent of the
configuration.
I would also support:
replica n <brick1>,<brick2>,...,<brickN> arbiter m <arb1>,<arb2>,...,<arbM>
as that makes it very clear what brick(s) should be the arbiter(s).
On Wed, 2017-12-20 at 15:44 +0530, Ravishankar N wrote:
> Hi,
>
> The existing syntax
On 20/12/2017 16:51, Craig Lesle wrote:
> With the release of 3.12 LTM and now 3.13 STM, when 4.0 is
> released, 3.10 is shown to be at EOL;
>
> Version Status Release_Date EOL_Version EOL_Date
> 3.10 LTM 2017-02-27 4.0
> 3.11 EOL 2017-05-30 3.12
Dear Users,
I'm experiencing a random problem (a "file changed as we read it" error)
during tar file creation on a distributed dispersed Gluster file system.
The tar files seem to be created correctly, but I can see a lot of message
similar to the following ones:
tar:
On 12/20/2017 05:14 AM, Ravishankar N wrote:
> Hi,
>
> The existing syntax in the gluster CLI for creating arbiter volumes is
> `gluster volume create <volname> replica 3 arbiter 1 <brick1> <brick2> <brick3>`.
> It means (or at least intended to mean) that out of the 3 bricks, 1
> brick is the arbiter.
> There has been some
On 12/20/2017 01:01 AM, Hari Gowtham wrote:
> Yes Atin. I'll take a look.
Once we have a root cause and a workaround, please document this in the
upgrade procedure in our docs as well. That way future problems have a
documented solution (outside of the lists as well).
Thanks!
>
> On Wed, Dec
Hi,
> with the option "cluster.min-free-disk" set, glusterfs avoids placing files on
> bricks that are "too full".
> I'd like to understand when the free space on the bricks is calculated. It
> seems to me that this does not happen for every write call (naturally) but at
> some interval or that
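For context, the option itself is set per volume, e.g. (the volume name is
only a placeholder):
# gluster volume set <volname> cluster.min-free-disk 10%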
Hi,
The existing syntax in the gluster CLI for creating arbiter volumes is
`gluster volume create <volname> replica 3 arbiter 1 <brick1> <brick2> <brick3>`.
It means (or at least intended to mean) that out of the 3 bricks, 1
brick is the arbiter.
There has been some feedback while implementing arbiter support in
glusterd2
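To make the existing form concrete, a full create command currently looks
roughly like this (host names and brick paths are made up for illustration;
the last brick becomes the arbiter):
# gluster volume create testvol replica 3 arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1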
Could you share the glusterd and the respective brick log files, along with
the output of gluster volume info, gluster volume status, and 'ps aux | grep
glusterfsd'.
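If it helps gather those, the commands and default log locations would be
along these lines (paths assume a standard install):
# gluster volume info
# gluster volume status
# ps aux | grep glusterfsd
glusterd log: /var/log/glusterfs/glusterd.log
brick logs:   /var/log/glusterfs/bricks/*.log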
On Wed, Dec 20, 2017 at 3:08 PM, David Spisla
wrote:
> Hello Gluster Community,
>
> I am using
Hello Gluster Community,
I am using Gluster 3.10 in a two-node scenario (with CentOS 7 machines).
After restarting one of the nodes,
sometimes one of these two phenomena occurs:
1. The brick process of a volume on the newly started node is no
longer running
2. The brick process is running
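A quick way to confirm whether the brick process came back after the
restart (the volume name is only a placeholder) would be:
# gluster volume status <volname>
# ps aux | grep glusterfsd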
Hi,
I've just recreated the Gluster setup with NFS-Ganesha. GlusterFS version 3.8.
When I run the command gluster nfs-ganesha enable, it returns success.
However, looking at the pcs status, I see this:
[root@tlxdmz-nfs1 ~]# pcs status
Cluster name: ganesha-nfs
Stack: corosync
Current DC: