Re: [Gluster-users] deletion of files in gluster directories

2018-07-04 Thread Vlad Kopylov
If you delete those from the bricks it will start healing them - restoring from other bricks. I have a similar issue with email storage, which uses the maildir format with millions of small files; doing the delete on the server takes days. Sometimes it is worth recreating the volumes, wiping .glusterfs on the bricks,
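The reason a brick-side delete gets "healed back" is that every file on a brick has a hardlink under the brick's .glusterfs directory, keyed by its GFID. A purely illustrative bash sketch of where that hardlink lives (the GFID below is made up; on a live system deletions should normally go through a client mount instead):

```shell
# Illustrative only: compute where a file's gfid hardlink lives inside
# a brick. The GFID here is made up; on a real brick you would read it
# with:  getfattr -n trusted.gfid -e hex --only-values <file>
GFID=1a2b3c4d5e6f708192a3b4c5d6e7f809
G1=${GFID:0:2}; G2=${GFID:2:2}
UUID="${GFID:0:8}-${GFID:8:4}-${GFID:12:4}-${GFID:16:4}-${GFID:20:12}"
echo ".glusterfs/$G1/$G2/$UUID"
# To delete a file brick-side without self-heal restoring it, you would
# have to remove both the file and this hardlink on every replica brick.
```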

Re: [Gluster-users] Files not healing & missing their extended attributes - Help!

2018-07-04 Thread Vlad Kopylov
you'll need to query the attrs of those files for them to be updated in .glusterfs. Regarding wiping .glusterfs - I've done it half a dozen times on live data: it is a simple drill which fixes almost everything. Often you don't have time to ask around etc.; you just need it working ASAP, so you delete
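The "drill" being described could look roughly like the sketch below. The volume name and paths are assumptions, and this is a last-resort recovery step on replicated volumes, not a routine procedure; stopping the volume first avoids racing the brick processes:

```shell
# Assumed names: volume gv0, brick at /data/brick1/gv0 on each server,
# client FUSE mount at /mnt/gv0.

# 1. Stop the volume before touching brick internals.
gluster volume stop gv0

# 2. On each brick server, remove the .glusterfs metadata directory.
rm -rf /data/brick1/gv0/.glusterfs

# 3. Restart the volume, then stat every file through a client mount;
#    the lookups recreate the gfid links under .glusterfs on the bricks.
gluster volume start gv0
find /mnt/gv0 -exec stat {} \; > /dev/null
```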

Re: [Gluster-users] Files not healing & missing their extended attributes - Help!

2018-07-04 Thread Gambit15
Hi Karthik, Many thanks for the response! On 4 July 2018 at 05:26, Karthik Subrahmanya wrote: > Hi, > > From the logs you have pasted it looks like those files are in GFID > split-brain. > They should have the GFIDs assigned on both the data bricks but they will > be different. > > Can you

Re: [Gluster-users] Files not healing & missing their extended attributes - Help!

2018-07-04 Thread Gambit15
On 3 July 2018 at 23:37, Vlad Kopylov wrote: > might be too late but sort of simple always working solution for such > cases is rebuilding .glusterfs > > kill it and query attr for all files again, it will recreate .glusterfs on > all bricks > > something like mentioned here >

Re: [Gluster-users] Failed to mount nfs due to split-brain and Input/Output Error

2018-07-04 Thread Anh Vo
Output of glfsheal-gv0.log: [2018-07-04 16:11:05.435680] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-gv0-client-1: Server lk version = 1 [2018-07-04 16:11:05.436847] I [rpc-clnt.c:1986:rpc_clnt_reconfig] 0-gv0-client-2: changing port to 49153 (from 0) [2018-07-04

Re: [Gluster-users] Failed to mount nfs due to split-brain and Input/Output Error

2018-07-04 Thread Ravishankar N
On 07/04/2018 09:20 PM, Anh Vo wrote: I forgot to mention we're using 3.12.10 On Wed, Jul 4, 2018 at 8:45 AM, Anh Vo wrote: If I run "sudo gluster volume heal gv0 split-brain latest-mtime /" I get the following: Lookup failed on /:Invalid argument.

Re: [Gluster-users] Failed to mount nfs due to split-brain and Input/Output Error

2018-07-04 Thread Anh Vo
I forgot to mention we're using 3.12.10 On Wed, Jul 4, 2018 at 8:45 AM, Anh Vo wrote: > If I run "sudo gluster volume heal gv0 split-brain latest-mtime /" I get > the following: > > Lookup failed on /:Invalid argument. > Volume heal failed. > > node2 was not connected at that time, because if

Re: [Gluster-users] Failed to mount nfs due to split-brain and Input/Output Error

2018-07-04 Thread Anh Vo
If I run "sudo gluster volume heal gv0 split-brain latest-mtime /" I get the following: Lookup failed on /:Invalid argument. Volume heal failed. node2 was not connected at that time, because if we connect it to the system after a few minutes gluster will become almost unusable and we have many
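For regular files (as opposed to the volume root, where the lookup itself is failing in this thread), the AFR split-brain CLI offers a few resolution policies; the volume name, brick, and file path below are placeholders:

```shell
# List entries the self-heal daemon considers split-brained.
gluster volume heal gv0 info split-brain

# Resolve one file by keeping the copy with the latest mtime...
gluster volume heal gv0 split-brain latest-mtime /some/file

# ...or by explicitly declaring one brick's copy the winner.
gluster volume heal gv0 split-brain source-brick node1:/data/brick1/gv0 /some/file
```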

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-07-04 Thread Ravishankar N
Hi mabi, there are a couple of AFR patches from master that I'm currently backporting to the 3.12 branch: afr: heal gfids when file is not present on all bricks; afr: don't update readables if inode refresh failed on all children; afr: fix bug-1363721.t failure; afr: add quorum checks in pre-op

Re: [Gluster-users] New 3.12.7 possible split-brain on replica 3

2018-07-04 Thread mabi
Hello, I just wanted to let you know that last week I upgraded my two replica nodes from Debian 8 to Debian 9, so now all my 3 nodes (including the arbiter) are running Debian 9 with a Linux 4 kernel. Unfortunately I still have the exact same issue. Another detail I might not have mentioned

[Gluster-users] Gluster 4.1 servers - KVM Centos 7.5 clients - replica 2 - continuous healing taking place.

2018-07-04 Thread Claus Jeppesen
We have a replica 2 setup using glusterfs-4.1.1-1 which we use to store VM files. The Gluster clients are CentOS 7.5 using the gluster "url" setup for the VM disks - e.g. like: ... We have the problem that gluster on the 2 brick servers keeps on running healing
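The "url" style of access presumably refers to QEMU's libgfapi gluster:// block driver, which opens the image directly over the network instead of through a FUSE mount. A hypothetical example (host, volume, and image names are made up):

```shell
# Inspect a VM image stored on a Gluster volume via libgfapi.
# Format: gluster://<server>[:port]/<volume>/<path-to-image>
qemu-img info gluster://gluster1.example.com/gv0/vm01.qcow2

# The libvirt equivalent is a <disk type='network'> element with
# protocol='gluster' and name='gv0/vm01.qcow2' in the domain XML.
```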

[Gluster-users] Gluster Outreachy

2018-07-04 Thread Bhumika Goyal
Hi all, GNOME has been working on an initiative known as Outreachy[1] since 2010. Outreachy is a three-month remote internship program. It aims to increase the participation of women and members of under-represented groups in open source. The program is held twice a year. During the

[Gluster-users] deletion of files in gluster directories

2018-07-04 Thread hsafe
Hi all, I have a rather simplistic question. There are dirs that contain a lot of small files in a 2x replica set, accessed natively on the clients. Due to the number of files in the directory, it fails to show the dir contents from clients. In the case of a move or deletion of the dirs natively and from the

Re: [Gluster-users] Files not healing & missing their extended attributes - Help!

2018-07-04 Thread Karthik Subrahmanya
Hi, From the logs you have pasted it looks like those files are in GFID split-brain. They should have the GFIDs assigned on both the data bricks but they will be different. Can you please paste the getfattr output of those files and their parent from all the bricks again? Which version of
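The getfattr output usually requested in these threads dumps all trusted.* xattrs in hex. It has to be run on each brick server against the brick-side path (paths below are placeholders); for a file in GFID split-brain, trusted.gfid will differ between the bricks:

```shell
# Dump all extended attributes of the file, in hex, on each brick.
# trusted.gfid identifies the file; trusted.afr.* show pending heals.
getfattr -d -m . -e hex /data/brick1/gv0/path/to/file

# The parent directory's xattrs are needed too for entry split-brain.
getfattr -d -m . -e hex /data/brick1/gv0/path/to
```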

Re: [Gluster-users] Announcing Glusterfs release 3.12.10 (Long Term Maintenance)

2018-07-04 Thread Niels de Vos
On Tue, Jul 03, 2018 at 05:20:44PM -0500, Darrell Budic wrote: > I’ve now tested 3.12.11 on my CentOS 7.5 oVirt dev cluster, and all > appears good. Should be safe to move from -test to release for > centos-gluster312 Thanks for the report! Someone else already informed me earlier as well. The