Re: [Gluster-users] chances of split brain with a distributed replicated volume where replica is 3

2017-02-09 Thread Ravishankar N
On 02/09/2017 06:39 PM, Joseph Lorenzini wrote: All: I read this in the gluster docs. Note I am not using arbiter -- I am setting up volumes with full 3 replicas. In this case, is this split brain scenario theoretical or has this actually occurred? If so, what are the chances that this could

Re: [Gluster-users] Question about heterogeneous bricks

2017-02-21 Thread Ravishankar N
On 02/21/2017 05:17 PM, Nithya Balachandran wrote: Hi, Ideally, both bricks in a replica set should be of the same size. Ravi, can you confirm? Yes, correct. -Ravi Regards, Nithya On 21 February 2017 at 16:05, Daniele Antolini > wrote: Hi Serkan, thank

Re: [Gluster-users] Announcing release 3.11 : Scope, schedule and feature tracking

2017-03-03 Thread Ravishankar N
On 02/28/2017 08:47 PM, Shyam wrote: We should be transitioning to using github for feature reporting and tracking, more fully from this release. So once again, if there exists any confusion on that front, reach out to the lists for clarification. I see that there was a discussion on this on t

Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-03-03 Thread Ravishankar N
On 03/03/2017 07:23 PM, Shyam wrote: On 03/03/2017 06:44 AM, Prashanth Pai wrote: On 02/28/2017 08:47 PM, Shyam wrote: We should be transitioning to using github for feature reporting and tracking, more fully from this release. So once again, if there exists any confusion on that front, reach

Re: [Gluster-users] Issue with duplicated files in gluster 3.10

2017-03-10 Thread Ravishankar N
On 03/10/2017 10:32 PM, Luca Gervasi wrote: Hi, I'm Andrea's colleague. I'd like to add that we have no trusted.afr xattr on the root folder Just to confirm, this would be 'includes2013' right? where those files are located and every file seems to be clean on each brick. You can find another ex

Re: [Gluster-users] Issue with duplicated files in gluster 3.10

2017-03-13 Thread Ravishankar N
ndrea On Sat, 11 Mar 2017 at 02:01 Ravishankar N <mailto:ravishan...@redhat.com>> wrote: On 03/10/2017 10:32 PM, Luca Gervasi wrote: Hi, I'm Andrea's colleague. I'd like to add that we have no trusted.afr xattr on the root folder Just to confi

Re: [Gluster-users] Issue with duplicated files in gluster 3.10

2017-03-13 Thread Ravishankar N
On 03/13/2017 06:41 PM, Luca Gervasi wrote: Is this setting meant to avoid the creation of dupes or to fix the situation live? The linkto files are not duplicates. They're created when you rename a file and the file hashes to a different subvolume of dht. When you issue readdir, dht normally

Re: [Gluster-users] split-brain

2017-03-20 Thread Ravishankar N
and brick logs attached. Also if you do have some kind of reproducer, that would help a lot. -Ravi Best Regards Bernhard 2017-03-20 12:57 GMT+01:00 Ravishankar N <mailto:ravishan...@redhat.com>>: SFILE_CONTAINER_080 is the one which seems to be in split-brain. SFILE_CONTAINER

Re: [Gluster-users] "Fake" distributed-replicated volume

2017-04-07 Thread Ravishankar N
On 04/07/2017 11:34 PM, Jamie Lawrence wrote: Greetings, Glusterites - I have a suboptimal situation, and am wondering if there is any way to create a replica-3 distributed/replicated volume with three machines. I saw in the docs that the create command will fail with multiple bricks on the sa

Re: [Gluster-users] Slow write times to gluster disk

2017-04-07 Thread Ravishankar N
Hi Pat, I'm assuming you are using gluster native (fuse mount). If it helps, you could try mounting it via gluster NFS (gnfs) and then see if there is an improvement in speed. Fuse mounts are slower than gnfs mounts but you get the benefit of avoiding a single point of failure. Unlike fuse mo

Re: [Gluster-users] Slow write times to gluster disk

2017-04-10 Thread Ravishankar N
know if this is needed/ helpful in your current setup where everything (bricks and clients) seem to be on just one server. -Ravi Thanks Pat On 04/08/2017 12:58 AM, Ravishankar N wrote: Hi Pat, I'm assuming you are using gluster native (fuse mount). If it helps, you could try mou

Re: [Gluster-users] BTRFS as a GlusterFS storage back-end, and what I've learned from using it as such.

2017-04-11 Thread Ravishankar N
Adding gluster-users list. I think there are a few users out there running gluster on top of btrfs, so this might benefit a broader audience. On 04/11/2017 09:10 PM, Austin S. Hemmelgarn wrote: About a year ago now, I decided to set up a small storage cluster to store backups (and partially rep

Re: [Gluster-users] Slow write times to gluster disk

2017-04-13 Thread Ravishankar N
to mount 2 xfs volumes as a single gluster file system volume? If not, what would be a better path? Pat On 04/11/2017 12:21 AM, Ravishankar N wrote: On 04/11/2017 12:42 AM, Pat Haley wrote: Hi Ravi, Thanks for the reply. And yes, we are using the gluster native (fuse) mount. Since thi

Re: [Gluster-users] Slow write times to gluster disk

2017-04-14 Thread Ravishankar N
On 04/14/2017 12:20 PM, Pranith Kumar Karampuri wrote: On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N <mailto:ravishan...@redhat.com>> wrote: Hi Pat, I'm assuming you are using gluster native (fuse mount). If it helps, you could try mounting it via gluster NFS (

Re: [Gluster-users] Quorum replica 2 and arbiter

2017-04-22 Thread Ravishankar N
On 04/22/2017 12:54 PM, Pranith Kumar Karampuri wrote: Also, can we add an arbiter node to the current replica 2 volume without losing data ? if yes, does the re-balance bug "Bug 1440635" affect this process ? I remember we had one user in Redhat who wanted to do the sa

Re: [Gluster-users] glustershd: unable to get index-dir on myvolume-client-0

2017-05-02 Thread Ravishankar N
On 05/02/2017 01:08 AM, mabi wrote: Hi, I have a two nodes GlusterFS 3.8.11 replicated volume and just noticed today in the glustershd.log log file a lot of the following warning messages: [2017-05-01 18:42:18.004747] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep] 0-myvolume-re

Re: [Gluster-users] glustershd: unable to get index-dir on myvolume-client-0

2017-05-02 Thread Ravishankar N
On 05/02/2017 11:48 PM, mabi wrote: Hi Ravi, Thanks for the pointer, you are totally right the "dirty" directory is missing on my node1. Here is the output of a "ls -la" of both nodes: node1: drw--- 2 root root 2 Apr 28 22:15 entry-changes drw--- 2 root root 2 Mar 6 2016 xat

Re: [Gluster-users] Arbiter and Hot Tier

2017-05-05 Thread Ravishankar N
Hi Walter, Yes, arbiter volumes are currently not supported with tiering. -Ravi On 05/05/2017 08:54 PM, Walter Deignan wrote: I've been googling this to no avail so apologies if this is explained somewhere I missed. Is there a known incompatibility between using arbiters and hot tiering? Expe

Re: [Gluster-users] Arbiter and Hot Tier

2017-05-05 Thread Ravishankar N
Architect From: Ravishankar N To: Walter Deignan , gluster-users@gluster.org Date: 05/05/2017 11:08 AM Subject: Re: [Gluster-users] Arbiter and Hot Tier Hi Walter, Yes, arbiter volumes are currently not supported with tier

Re: [Gluster-users] Slow write times to gluster disk

2017-05-05 Thread Ravishankar N
debugging this? Thanks Pat On 04/14/2017 03:01 AM, Ravishankar N wrote: On 04/14/2017 12:20 PM, Pranith Kumar Karampuri wrote: On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N mailto:ravishan...@redhat.com>> wrote: Hi Pat, I'm assuming you

Re: [Gluster-users] VM going down

2017-05-09 Thread Ravishankar N
On 05/09/2017 12:59 PM, Alessandro Briosi wrote: On 08/05/2017 15:49, Alessandro Briosi wrote: On 08/05/2017 12:57, Jesper Led Lauridsen TS Infra server wrote: I don't know if this has any relation to your issue. But I have seen several times during gluster healing that my VMs fail o

Re: [Gluster-users] VM going down

2017-05-11 Thread Ravishankar N
On 05/11/2017 05:49 PM, Niels de Vos wrote: On Wed, May 10, 2017 at 09:08:03PM +0530, Pranith Kumar Karampuri wrote: On Wed, May 10, 2017 at 7:11 PM, Niels de Vos wrote: On Wed, May 10, 2017 at 04:08:22PM +0530, Pranith Kumar Karampuri wrote: On Tue, May 9, 2017 at 7:40 PM, Niels de Vos wro

Re: [Gluster-users] Gluster arbiter with tier

2017-05-15 Thread Ravishankar N
On 05/15/2017 11:03 AM, Benjamin Kingston wrote: Are there any plans to enable tiering with arbiter enabled? There was a discussion on brick ordering in tiered volumes affecting arbiter brick placement [1] but nothing concrete turned out. I don't think this is being actively looked into at t

Re: [Gluster-users] 120k context switches on GlusterFS nodes

2017-05-16 Thread Ravishankar N
On 05/16/2017 11:13 PM, mabi wrote: Today I even saw up to 400k context switches for around 30 minutes on my two nodes replica... Does anyone else have so high context switches on their GlusterFS nodes? I am wondering what is "normal" and if I should be worried... Original Messag

Re: [Gluster-users] 120k context switches on GlusterFS nodes

2017-05-17 Thread Ravishankar N
On 05/17/2017 11:07 PM, Pranith Kumar Karampuri wrote: + gluster-devel On Wed, May 17, 2017 at 10:50 PM, mabi > wrote: I don't know exactly what kind of context-switches it was but what I know is that it is the "cs" number under "system" when you run vmst

Re: [Gluster-users] URGENT - Cheat on quorum

2017-05-18 Thread Ravishankar N
On 05/18/2017 07:18 PM, lemonni...@ulrar.net wrote: Hi, We are having huge hardware issues (oh joy ..) with RAID cards. On a replica 3 volume, we have 2 nodes down. Can we somehow tell gluster that it's quorum is 1, to get some amount of service back while we try to fix the other nodes or insta

Re: [Gluster-users] URGENT - Cheat on quorum

2017-05-21 Thread Ravishankar N
r get written on-disk on the arbiter brick. -Ravi I tried stopping the volume and even rebooting node1 and still get the error (And of course the volume wont start for the same reason) -WK On 5/18/2017 7:41 AM, Ravishankar N wrote: On 05/18/2017 07:18 PM, lemonni...@ulrar.net wrote:

Re: [Gluster-users] URGENT - Cheat on quorum

2017-05-22 Thread Ravishankar N
On 05/22/2017 11:02 AM, WK wrote: On 5/21/2017 7:00 PM, Ravishankar N wrote: On 05/22/2017 03:11 AM, W Kern wrote: gluster volume set VOL cluster.quorum-type none from the remaining 'working' node1 and it simply responds with "volume set: failed: Quorum not met. Volum

Re: [Gluster-users] Recovering from Arb/Quorum Write Locks

2017-05-28 Thread Ravishankar N
On 05/29/2017 03:36 AM, W Kern wrote: So I have a testbed composed of a simple 2+1 (replica 3 with arbiter) setup. gluster1, gluster2 and gluster-arb (with shards) My testing involves some libvirt VMs running continuous write fops on a localhost fuse mount on gluster1 Works great when all the pi

Re: [Gluster-users] gluster heal entry reappears

2017-05-28 Thread Ravishankar N
On 05/28/2017 10:31 PM, Markus Stockhausen wrote: Hi, I'm fairly new to gluster and quite happy with it. We are using it in an OVirt environment that stores its VM images in the gluster. Setup is as follows and Clients mount the volume with gluster native fuse protocol. 3 storage nodes: Cent

Re: [Gluster-users] Recovering from Arb/Quorum Write Locks

2017-05-28 Thread Ravishankar N
On 05/29/2017 10:45 AM, wk wrote: OK, can I assume SOME pause is expected when Gluster first sees gluster2 go down which would unpause after a timeout period. I have seen that behaviour as well. Yes, when you power off/shutdown/reboot a node, the mount hangs for a bit due to not receiving th

Re: [Gluster-users] About the maintenance time

2017-06-15 Thread Ravishankar N
On 06/15/2017 12:26 PM, te-yamau...@usen.co.jp wrote: What is the current stable version of glusterfs? 3.8.x and 3.10.x are long term stable releases. Newer features are in 3.11.x, which is a short term stable release branch. Also, the current latest versions are 3.8.9, 3.10.3, and 3.11.0, res

Re: [Gluster-users] About the maintenance time

2017-06-15 Thread Ravishankar N
On 06/16/2017 09:07 AM, te-yamau...@usen.co.jp wrote: I currently use it in the replica configuration of 3.10.2. The brick process may not start when restarting the storage server. Also, when using gnfs, the I/O may hang up and become unusable. After checking the release notes of 3.11.0, the f

Re: [Gluster-users] afr-self-heald.c:479:afr_shd_index_sweep

2017-06-28 Thread Ravishankar N
On 06/28/2017 06:52 PM, Paolo Margara wrote: Hi list, yesterday I noted the following lines into the glustershd.log log file: [2017-06-28 11:53:05.000890] W [MSGID: 108034] [afr-self-heald.c:479:afr_shd_index_sweep] 0-iso-images-repo-replicate-0: unable to get index-dir on iso-images-repo-clien

Re: [Gluster-users] afr-self-heald.c:479:afr_shd_index_sweep

2017-06-29 Thread Ravishankar N
r.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.7/ lists the steps for upgrade to 3.7 but the steps mentioned there are similar for any rolling upgrade. -Ravi Greetings, Paolo Margara On 28/06/2017 18:41, Pranith Kumar Karampuri wrote: On Wed, Jun 28, 2017 at

Re: [Gluster-users] How to shutdown a node properly ?

2017-06-29 Thread Ravishankar N
On 06/29/2017 08:31 PM, Renaud Fortier wrote: Hi, Every time I shutdown a node, I lose access (from clients) to the volumes for 42 seconds (network.ping-timeout). Is there a special way to shutdown a node to keep the access to the volumes without interruption ? Currently, I use the ‘shutdown’
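The fix suggested in this thread is to stop the gluster processes on the node before powering it off, so clients receive a connection reset instead of waiting out the 42-second ping-timeout. A rough sketch, assuming a systemd-based install and a placeholder volume name (the helper script mentioned in the replies additionally checks for pending heals):

```shell
# Hypothetical pre-shutdown sequence; adapt unit names and volume name
# to your own installation.

# 1. Verify no heals are pending before taking the node down
gluster volume heal <volname> info

# 2. Stop the management daemon, then brick and auxiliary processes,
#    so clients get an immediate TCP reset rather than a timeout.
systemctl stop glusterd
pkill glusterfsd    # brick processes
pkill glusterfs     # self-heal daemon, gnfs, etc.
```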

Re: [Gluster-users] How to shutdown a node properly ?

2017-06-29 Thread Ravishankar N
ll-gluster-processes.sh which automatically checks for pending heals etc. before killing the gluster processes. -Ravi *From:* Gandalf Corvotempesta [mailto:gandalf.corvotempe...@gmail.com] *Sent:* 29 June 2017 13:41 *To:* Ravishankar N *Cc:* gluster-users@gluster.org; Renaud Fortier *Subject:* Re

Re: [Gluster-users] issue with trash feature and arbiter volumes

2017-06-29 Thread Ravishankar N
On 06/30/2017 02:03 AM, Alastair Neil wrote: Gluster 3.10.2 I have a replica 3 (2+1) volume and I have just seen both data bricks go down (arbiter stayed up). I had to disable trash feature to get the bricks to start. I had a quick look on bugzilla but did not see anything that looked simil

Re: [Gluster-users] How to shutdown a node properly ?

2017-06-30 Thread Ravishankar N
r a known-issue ;-) ) alright but I do not know at what layer (kernel/tcp-ip or user-space/gluster) the fix needs to be done. Maybe someone who is familiar with the tcp layer and connection timeouts can pitch in. -Ravi On 30 Jun 2017 3:24 AM, "Ravishankar N" <mailto:ravishan...@redhat.com&

Re: [Gluster-users] I/O error for one folder within the mountpoint

2017-07-07 Thread Ravishankar N
On 07/07/2017 01:23 PM, Florian Leleu wrote: Hello everyone, first time on the ML so excuse me if I'm not following well the rules, I'll improve if I get comments. We got one volume "applicatif" on three nodes (2 and 1 arbiter), each following command was made on node ipvr8.xxx: # gluster

Re: [Gluster-users] I/O error for one folder within the mountpoint

2017-07-07 Thread Ravishankar N
tput ? On 07/07/2017 at 11:31, Ravishankar N wrote: On 07/07/2017 01:23 PM, Florian Leleu wrote: Hello everyone, first time on the ML so excuse me if I'm not following well the rules, I'll improve if I get comments. We got one volume "applicatif" on three nodes (2 a

Re: [Gluster-users] I/O error for one folder within the mountpoint

2017-07-07 Thread Ravishankar N
cks and immediately doing an `ls snooper` from the mount to trigger heals to recreate the entries. Hope this helps Ravi Thanks. On 07/07/2017 at 11:54, Ravishankar N wrote: What does the mount log say when you get the EIO error on snooper? Check if there is a gfid mismatch on snooper di

Re: [Gluster-users] Problem manipulating files when a replica is down

2017-07-17 Thread Ravishankar N
On 07/17/2017 02:40 PM, Gary Lloyd wrote: We currently have a small pilot gluster setup as Replica 2 + 1 (arbiter) As a DR test we decided to see what would happen when taking one of the replicas offline. For the first couple of hours everything seemed fine and then one of my col

Re: [Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements

2017-07-19 Thread Ravishankar N
On 07/19/2017 08:02 PM, Sahina Bose wrote: [Adding gluster-users] On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) > wrote: Hi all, We have an ovirt cluster hyperconverged with hosted engine on 3 full replicated node . This cluster have 2 gluster volume: -

Re: [Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements

2017-07-20 Thread Ravishankar N
On 07/20/2017 02:20 PM, yayo (j) wrote: Hi, Thank you for the answer and sorry for delay: 2017-07-19 16:55 GMT+02:00 Ravishankar N <mailto:ravishan...@redhat.com>>: 1. What does the glustershd.log say on all 3 nodes when you run the command? Does it complain anything ab

Re: [Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements

2017-07-20 Thread Ravishankar N
On 07/20/2017 03:42 PM, yayo (j) wrote: 2017-07-20 11:34 GMT+02:00 Ravishankar N <mailto:ravishan...@redhat.com>>: Could you check if the self-heal daemon on all nodes is connected to the 3 bricks? You will need to check the glustershd.log for that. If it is not conne

Re: [Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements

2017-07-21 Thread Ravishankar N
On 07/21/2017 02:55 PM, yayo (j) wrote: 2017-07-20 14:48 GMT+02:00 Ravishankar N <mailto:ravishan...@redhat.com>>: But it does say something. All these gfids of completed heals in the log below are the for the ones that you have given the getfattr output of. So what

Re: [Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements

2017-07-21 Thread Ravishankar N
ster.granular-entry-heal: on/ /auth.allow: */ server.allow-insecure: on 2017-07-21 19:13 GMT+02:00 yayo (j) <mailto:jag...@gmail.com>>: 2017-07-20 14:48 GMT+02:00 Ravishankar N mailto:ravishan...@redhat.com>>: But it does say something. All these gfids of completed heals

Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?

2017-07-29 Thread Ravishankar N
On 07/29/2017 04:36 PM, mabi wrote: Hi, Sorry for mailing again but as mentioned in my previous mail, I have added an arbiter node to my replica 2 volume and it seem to have gone fine except for the fact that there is one single file which needs healing and does not get healed as you can se

Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?

2017-07-30 Thread Ravishankar N
On 07/30/2017 02:24 PM, mabi wrote: Hi Ravi, Thanks for your hints. Below you will find the answer to your questions. First I tried to start the healing process by running: gluster volume heal myvolume and then as you suggested watch the output of the glustershd.log file but nothing appear

Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?

2017-07-31 Thread Ravishankar N
On 07/31/2017 12:20 PM, mabi wrote: I did a find on this inode number and I could find the file but only on node1 (nothing on node2 and the new arbiternode). Here is an ls -lai of the file itself on node1: Sorry I don't understand, isn't that (XFS) inode number specific to node2's brick? If you

Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?

2017-07-31 Thread Ravishankar N
On 07/31/2017 02:00 PM, mabi wrote: To quickly resume my current situation: on node2 I have found the following file xattrop/indices file which matches the GFID of the "heal info" command (below is there output of "ls -lai": 2798404 -- 2 root root 0 Apr 28 22:51 /data/myvolume/bri

Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?

2017-07-31 Thread Ravishankar N
On 07/31/2017 02:33 PM, mabi wrote: Now I understand what you mean with the "-samefile" parameter of "find". As requested I have now run the following command on all 3 nodes with the output of all 3 nodes below: sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29

Re: [Gluster-users] Gluster operations speed limit

2017-08-01 Thread Ravishankar N
Adding Mohit, who has been experimenting with cgroups and has found a way to restrict glustershd's CPU usage using them. Mohit, maybe you want to share the steps we need to follow to apply cgroups only to glustershd. Thanks. Ravi On 08/01/2017 03:46 PM, Alexey Zakurin wrote: Hi, community I hav

Re: [Gluster-users] How are bricks healed in Debian Jessie 3.11

2017-08-08 Thread Ravishankar N
On 08/08/2017 04:51 PM, Gerry O'Brien wrote: Hi, How are bricks healed in Debian Jessie 3.11? Is it at the file or block level? The scenario we have in mind is a 2 brick replica volume for storing VM file systems in a self-service IaaS, e.g. OpenNebula. If one of the bricks is off-line fo

Re: [Gluster-users] [Gluster-devel] How commonly applications make use of fadvise?

2017-08-11 Thread Ravishankar N
On 08/11/2017 04:51 PM, Niels de Vos wrote: On Fri, Aug 11, 2017 at 12:47:47AM -0400, Raghavendra Gowdappa wrote: Hi all, In a conversation between me, Milind and Csaba, Milind pointed out fadvise(2) [1] and its potential benefits to Glusterfs' caching translators like read-ahead etc. After d

Re: [Gluster-users] self-heal not working

2017-08-21 Thread Ravishankar N
Explore the following: - Launch index heal and look at the glustershd logs of all bricks for possible errors - See if the glustershd in each node is connected to all bricks. - If not try to restart shd by `volume start force` - Launch index heal again and try. - Try debugging the shd log by
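The debugging checklist above maps to roughly the following commands; a sketch only, with `myvolume` as a placeholder volume name and log paths assuming a default install:

```shell
# 1. Launch an index heal
gluster volume heal myvolume

# 2. Check whether each node's self-heal daemon is connected to all bricks
grep -i "connect" /var/log/glusterfs/glustershd.log | tail

# 3. If it is not, restart the self-heal daemon with a forced volume start
gluster volume start myvolume force

# 4. Launch the index heal again and check what remains
gluster volume heal myvolume
gluster volume heal myvolume info

# 5. Raise log verbosity while debugging the shd log
gluster volume set myvolume diagnostics.client-log-level DEBUG
```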

Re: [Gluster-users] self-heal not working

2017-08-22 Thread Ravishankar N
On 08/22/2017 02:30 PM, mabi wrote: Thanks for the additional hints, I have the following 2 questions first: - In order to launch the index heal is the following command correct: gluster volume heal myvolume Yes - If I run a "volume start force" will it have any short disruptions on my clie

Re: [Gluster-users] self-heal not working

2017-08-23 Thread Ravishankar N
Unlikely. In your case only the afr.dirty is set, not the afr.volname-client-xx xattr. `gluster volume set myvolume diagnostics.client-log-level DEBUG` is right. On 08/23/2017 10:31 PM, mabi wrote: I just saw the following bug which was fixed in 3.8.15: https://bugzilla.redhat.com/show_bug.c

Re: [Gluster-users] self-heal not working

2017-08-27 Thread Ravishankar N
last mail? Best, Mabi Original Message Subject: Re: [Gluster-users] self-heal not working Local Time: August 24, 2017 12:08 PM UTC Time: August 24, 2017 10:08 AM From: m...@protonmail.ch To: Ravishankar N Ben Turner , Gluster Users Thanks for confirming the command.

Re: [Gluster-users] self-heal not working

2017-08-27 Thread Ravishankar N
On 08/28/2017 01:57 AM, Ben Turner wrote: - Original Message - From: "mabi" To: "Ravishankar N" Cc: "Ben Turner" , "Gluster Users" Sent: Sunday, August 27, 2017 3:15:33 PM Subject: Re: [Gluster-users] self-heal not working Thanks Ravi for

Re: [Gluster-users] self-heal not working

2017-08-28 Thread Ravishankar N
Turner , mabi Gluster Users On 08/28/2017 01:57 AM, Ben Turner wrote: > - Original Message - >> From: "mabi" >> To: "Ravishankar N" >> Cc: "Ben Turner" , "Gluster Users" >> Sent: Sunday, August 27, 2017 3:15:33 PM >

Re: [Gluster-users] self-heal not working

2017-08-28 Thread Ravishankar N
ishan...@redhat.com To: Ben Turner , mabi Gluster Users On 08/28/2017 01:57 AM, Ben Turner wrote: > - Original Message - >> From: "mabi" >> To: "Ravishankar N" >> Cc: "Ben Turner" , "Gluster Users" >> Sent: Sunday,

Re: [Gluster-users] heal info OK but statistics not working

2017-09-05 Thread Ravishankar N
On 09/04/2017 07:35 PM, Atin Mukherjee wrote: Ravi/Karthick, If one of the self-heal processes is down, will the statistics heal-count command work? No it doesn't seem to: glusterd stage-op phase fails because shd was down on that node and we error out. FWIW, the error message "Gathering crawl

Re: [Gluster-users] Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]

2017-09-22 Thread Ravishankar N
On 09/20/2017 01:45 PM, Martin Toth wrote: *My Questions* - is heal and rebalance necessary in order to upgrade replica 2 to replica 3 ? No, `gluster volume add-brick volname replica 3 node3.san:/tank/gluster/brick1` should automatically trigger healing in gluster 3.12 (actually earlier than
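The replica 2 to replica 3 conversion described above can be sketched as follows; the brick path is taken from the thread and `volname` is a placeholder:

```shell
# Convert an existing replica 2 volume to replica 3 by adding one brick;
# on gluster >= 3.12 this should trigger healing to the new brick automatically.
gluster volume add-brick volname replica 3 node3.san:/tank/gluster/brick1

# Watch the new brick being populated
gluster volume heal volname info
```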

Re: [Gluster-users] Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]

2017-09-22 Thread Ravishankar N
I had replied (reply-to-all) to this email y'day but I don't see it on the list. Anyway pasting it again: On 09/21/2017 10:03 AM, Ravishankar N wrote: On 09/20/2017 01:45 PM, Martin Toth wrote: *My Questions* - is heal and rebalance necessary in order to upgrade replica 2 to

Re: [Gluster-users] Arbiter and geo-replication

2017-09-22 Thread Ravishankar N
On 09/22/2017 02:25 AM, Kotresh Hiremath Ravishankar wrote: The volume layout of geo-replication slave volume could be different from master volume. It's not mandatory that if the master volume is arbiter type, the slave also needs to be arbiter. But if it's decided to use the arbiter both at

Re: [Gluster-users] [Gluster-infra] lists.gluster.org issues this weekend

2017-09-22 Thread Ravishankar N
Hello, Are our servers still facing the overload issue? My replies to gluster-users ML are not getting delivered to the list. Regards, Ravi On 09/19/2017 10:03 PM, Michael Scherer wrote: Le samedi 16 septembre 2017 à 20:48 +0530, Nigel Babu a écrit : Hello folks, We have discovered that for

[Gluster-users] AFR: Fail lookups when quorum not met

2017-09-22 Thread Ravishankar N
Hello, In AFR we currently allow look-ups to pass through without taking into account whether the lookup is served from the good or bad brick. We always serve from the good brick whenever possible, but if there is none, we just serve the lookup from one of the bricks that we got a positive re

Re: [Gluster-users] [Gluster-devel] AFR: Fail lookups when quorum not met

2017-10-09 Thread Ravishankar N
On 09/22/2017 07:27 PM, Niels de Vos wrote: On Fri, Sep 22, 2017 at 12:27:46PM +0530, Ravishankar N wrote: Hello, In AFR we currently allow look-ups to pass through without taking into account whether the lookup is served from the good or bad brick. We always serve from the good brick

Re: [Gluster-users] file shred

2017-11-09 Thread Ravishankar N
On 11/08/2017 11:36 PM, Kingsley Tart wrote: Hi, if we were to use shred to delete a file on a gluster volume, will the correct blocks be overwritten on the bricks? (still using Gluster 3.6.3 as we have been too cautious to upgrade a mission-critical live system). When I strace `shred filename`,

Re: [Gluster-users] Adding a slack for communication?

2017-11-09 Thread Ravishankar N
On 11/09/2017 09:05 AM, Sam McLeod wrote: On Wed, Nov 8, 2017 at 4:22 PM, Amye Scavarda > wrote: From today's community meeting, we had an item from the issue queue: https://github.com/gluster/community/issues/13

Re: [Gluster-users] Help with reconnecting a faulty brick

2017-11-15 Thread Ravishankar N
On 11/15/2017 12:54 PM, Daniel Berteaud wrote: On 13/11/2017 at 21:07, Daniel Berteaud wrote: On 13/11/2017 at 10:04, Daniel Berteaud wrote: Could I just remove the content of the brick (including the .glusterfs directory) and reconnect ? If it is only the brick that is faulty o

Re: [Gluster-users] Help with reconnecting a faulty brick

2017-11-16 Thread Ravishankar N
On 11/16/2017 12:54 PM, Daniel Berteaud wrote: On 15/11/2017 at 09:45, Ravishankar N wrote: If it is only the brick that is faulty on the bad node, but everything else is fine, like glusterd running, the node being a part of the trusted storage pool etc., you could just kill the brick

Re: [Gluster-users] Missing files on one of the bricks

2017-11-16 Thread Ravishankar N
On 11/16/2017 04:12 PM, Nithya Balachandran wrote: On 15 November 2017 at 19:57, Frederic Harmignies > wrote: Hello, we have 2x files that are missing from one of the bricks. No idea how to fix this. Details: # gluster volume info

Re: [Gluster-users] Help with reconnecting a faulty brick

2017-11-17 Thread Ravishankar N
On 11/17/2017 03:41 PM, Daniel Berteaud wrote: On Thursday, November 16, 2017 13:07 CET, Ravishankar N wrote: On 11/16/2017 12:54 PM, Daniel Berteaud wrote: Any way in this situation to check which file will be healed from which brick before reconnecting ? Using some getfattr tricks

[Gluster-users] Syntax for creating arbiter volumes in gluster 4.0

2017-12-20 Thread Ravishankar N
Hi, The existing syntax in the gluster CLI for creating arbiter volumes is `gluster volume create replica 3 arbiter 1 ` . It means (or at least intended to mean) that out of the 3 bricks, 1 brick is the arbiter. There has been some feedback while implementing arbiter support in glusterd2 for
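For reference, the existing syntax under discussion looks like the following; the volume name and brick paths are illustrative only:

```shell
# replica 3 arbiter 1: three bricks per replica set, with the last
# listed brick of each set acting as the (metadata-only) arbiter.
gluster volume create testvol replica 3 arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/arb1
```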

Re: [Gluster-users] Syntax for creating arbiter volumes in gluster 4.0

2017-12-21 Thread Ravishankar N
3 brick paths, it can be confusing, so sticking to `replica 2 arbiter 1` for now. Regards, Ravi On 12/20/2017 03:44 PM, Ravishankar N wrote: Hi, The existing syntax in the gluster CLI for creating arbiter volumes is `gluster volume create replica 3 arbiter 1 ` . It means (or at least

Re: [Gluster-users] "file changed as we read it" message during tar file creation on GlusterFS

2018-01-02 Thread Ravishankar N
I think it is safe to ignore it. The problem exists due to the minor difference in file time stamps in the backend bricks of the same sub volume (for a given file) and during the course of tar, the timestamp can be served from different bricks, causing it to complain. The ctime xlator[1] featu

Re: [Gluster-users] "file changed as we read it" message during tar file creation on GlusterFS

2018-01-02 Thread Ravishankar N
, Mauro On 02 Jan 2018, at 12:53, Ravishankar N mailto:ravishan...@redhat.com>> wrote: I think it is safe to ignore it. The problem exists due to the minor difference in file time stamps in the backend bricks of the same sub volume (for a given file) and during the course

Re: [Gluster-users] Possible memory leak via wordpress wordfence plugin behavior in 4.1.16

2019-03-10 Thread Ravishankar N
On 09/03/19 7:15 AM, Brian Litzinger wrote: I have 4 machines running glusterfs and wordpress with the wordfence plugin. The wordfence plugin in all 4 instance pounds away writing and re-writing the file: /mnt/glusterfs/www/openvpn.net/wp-content/wflogs/config-synced.php This is leading to s

Re: [Gluster-users] Docu - how to debug issues

2019-03-19 Thread Ravishankar N
On 20/03/19 10:29 AM, Strahil wrote: Hello Community, Is there a docu page clarifying what information needs to be gathered in advance in order to help the devs resolve issues? So far I couldn't find one - but I may have missed it. volume info, gluster version of the clients/servers and al

Re: [Gluster-users] Cross-compiling GlusterFS

2019-04-02 Thread Ravishankar N
On 01/04/19 1:20 PM, François Duport wrote: Hi, I am trying to cross-compile GlusterFS because I don't want my embedded client to do it each time I reset my client. I want the compiled application to be in my ROM image. That said, it appears I succeeded in my cross-compilation, but whe

Re: [Gluster-users] Is "replica 4 arbiter 1" allowed to tweak client-quorum?

2019-04-03 Thread Ravishankar N
On 03/04/19 12:18 PM, Ingo Fischer wrote: Hi All, I had a replica 2 cluster to host my VM images from my Proxmox cluster. I got a bit around split brain scenarios by using "nufa" to make sure the files are located on the host where the machine also runs normally. So in fact one replica could f

Re: [Gluster-users] Replica 3: Client access via FUSE failed if two bricks are down

2019-04-12 Thread Ravishankar N
On 12/04/19 8:34 PM, Felix Kölzow wrote: Dear Gluster-Community, I created a test-environment to test a gluster volume with replica 3. Afterwards, I am able to manually mount the gluster volume using FUSE. mount command: mount -t glusterfs  -o backup-volfile-servers=gluster01:gluster02 g
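A sketch of the setup under test (hostnames from the thread, volume name assumed): with replica 3 and the default client-quorum of `auto`, losing two of three bricks is expected to make the mount unavailable or read-only — that is quorum enforcement protecting against split-brain, not a FUSE defect:

```shell
# Mount with fallback volfile servers so the client survives the loss
# of the server it originally fetched the volfile from ("myvol" is an
# assumed volume name).
mount -t glusterfs -o backup-volfile-servers=gluster02:gluster03 \
    gluster01:/myvol /mnt/glusterfs

# Inspect the quorum options that govern behaviour when bricks drop.
gluster volume get myvol cluster.quorum-type
gluster volume get myvol cluster.quorum-count
```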

Re: [Gluster-users] heal: Not able to fetch volfile from glusterd

2019-05-06 Thread Ravishankar N
On 06/05/19 6:43 PM, Łukasz Michalski wrote: Hi, I have problem resolving split-brain in one of my installations. CenOS 7, glusterfs 3.10.12, replica on two nodes: [root@ixmed1 iscsi]# gluster volume status cluster Status of volume: cluster Gluster process TCP Port

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-17 Thread Ravishankar N
On 17/05/19 5:59 AM, David Cunningham wrote: Hello, We're adding an arbiter node to an existing volume and having an issue. Can anyone help? The root cause error appears to be "----0001: failed to resolve (Transport endpoint is not connected)", as below. Was you

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-21 Thread Ravishankar N
Hi David, Could you provide the `getfattr -d -m. -e hex /nodirectwritedata/gluster/gvol0` output of all bricks and the output of `gluster volume info`? Thanks, Ravi On 22/05/19 4:57 AM, David Cunningham wrote: Hi Sanju, Here's what glusterd.log says on the new arbiter server when trying to
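A small helper sketching how to collect the requested xattrs on each server (the brick path is the one from this thread; `getfattr` comes from the "attr" package and may be absent, so the helper degrades gracefully):

```shell
# Print the extended attributes of each brick root passed as argument,
# in the same form Ravi asks for (-d dump, -m . all names, -e hex).
collect_brick_xattrs() {
    for b in "$@"; do
        echo "== $b =="
        if command -v getfattr >/dev/null 2>&1; then
            getfattr -d -m . -e hex "$b" 2>&1 || true
        else
            echo "(getfattr not installed)"
        fi
    done
}

collect_brick_xattrs /nodirectwritedata/gluster/gvol0
```

Running this on every node and pasting the output alongside `gluster volume info` gives the replica metadata (trusted.afr.*, trusted.gfid, volume-id) needed to diagnose the failed commit.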

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-21 Thread Ravishankar N
data/gluster/gvol0 Brick2: gfs2:/nodirectwritedata/gluster/gvol0 Brick3: gfs3:/nodirectwritedata/gluster/gvol0 (arbiter) Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet On Wed, 22 May 2019 at 12:43, Ravishankar N <mailto:ravishan...@redh

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-21 Thread Ravishankar N
es look okay to me David. Basically, '/nodirectwritedata/gluster/gvol0' must be empty and must not have any extended attributes set on it. Why fuse_first_lookup() is failing is a bit of a mystery to me at this point. :-( Regards, Ravi Thank you. On Wed, 22 May 2019 at 13:56, Ravis

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-21 Thread Ravishankar N
If you are trying this again, please run `gluster volume set $volname client-log-level DEBUG` before attempting the add-brick and attach the gvol0-add-brick-mount.log here. After that, you can change the client-log-level back to INFO. -Ravi On 22/05/19 11:32 AM, Ravishankar N wrote: On
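The debug cycle described can be sketched as follows (volume and brick names are from this thread; the add-brick arguments are assumed, and `diagnostics.client-log-level` is the full option name behind the `client-log-level` shorthand):

```shell
VOLNAME=gvol0

# Raise the client log verbosity, retry the failing operation, capture
# the mount log, then restore the default level.
gluster volume set "$VOLNAME" diagnostics.client-log-level DEBUG
gluster volume add-brick "$VOLNAME" replica 3 arbiter 1 \
    gfs3:/nodirectwritedata/gluster/gvol0
cp /var/log/glusterfs/gvol0-add-brick-mount.log /tmp/
gluster volume set "$VOLNAME" diagnostics.client-log-level INFO
```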

Re: [Gluster-users] gluster 5.6: Gfid mismatch detected

2019-05-22 Thread Ravishankar N
On 22/05/19 12:39 PM, Hu Bert wrote: Hi @ll, today i updated and rebooted the 3 servers of my replicate 3 setup; after the 3rd one came up again i noticed this error: [2019-05-22 06:41:26.781165] E [MSGID: 108008] [afr-self-heal-common.c:392:afr_gfid_split_brain_source] 0-workdata-replicate-0
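The gfid mismatch logged above is an entry split-brain; a hedged sketch (volume name from the thread, file path an illustrative placeholder) of inspecting and resolving it with the heal CLI:

```shell
# List entries currently in split-brain on the volume.
gluster volume heal workdata info split-brain

# Resolve one file, choosing the copy with the newest mtime as source
# (the path below is a placeholder, not from the thread).
gluster volume heal workdata split-brain latest-mtime /path/to/file

# Optionally let gluster resolve future split-brains automatically.
gluster volume set workdata cluster.favorite-child-policy mtime
```

Setting `favorite-child-policy` trades manual inspection for automatic resolution; leave it unset if you prefer to pick the source copy by hand.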

Re: [Gluster-users] gluster 5.6: Gfid mismatch detected

2019-05-22 Thread Ravishankar N
data statistics heal-count" there are 0 entries left. Files/directories are there. Happened the first time with this setup, but everything ok now. Thx for your fast help :-) Hubert Am Mi., 22. Mai 2019 um 09:32 Uhr schrieb Ravishankar N : On 22/05/19 12:39 PM, Hu Bert wrote: Hi @ll, today

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-24 Thread Ravishankar N
elf-heal Daemon on gfs1    N/A N/A    Y   28600 Self-heal Daemon on gfs2    N/A N/A    Y   17614 Task Status of Volume gvol0 -- There are no active volume tasks On Wed, 22

Re: [Gluster-users] Does replace-brick migrate data?

2019-05-24 Thread Ravishankar N
On 23/05/19 2:40 AM, Alan Orth wrote: Dear list, I seem to have gotten into a tricky situation. Today I brought up a shiny new server with new disk arrays and attempted to replace one brick of a replica 2 distribute/replicate volume on an older server using the `replace-brick` command: # g

Re: [Gluster-users] remove-brick failure on distributed with 5.6

2019-05-24 Thread Ravishankar N
Adding a few DHT folks for some possible suggestions. -Ravi On 23/05/19 11:15 PM, bran...@thinkhuge.net wrote: Does anyone know what should be done on a glusterfs v5.6 "gluster volume remove-brick" operation that fails?  I'm trying to remove 1 of 8 distributed smaller nodes for replacement w

Re: [Gluster-users] Does replace-brick migrate data?

2019-05-28 Thread Ravishankar N
on the new brick (example hardlink for files and symlinks for directories are present etc.) . Regards, Ravi Thanks, ¹ https://lists.gluster.org/pipermail/gluster-users/2018-February/033584.html On Fri, May 24, 2019 at 4:59 PM Ravishankar N <mailto:ravishan...@redhat.com>> w

Re: [Gluster-users] Does replace-brick migrate data?

2019-05-28 Thread Ravishankar N
On 29/05/19 9:50 AM, Ravishankar N wrote: On 29/05/19 3:59 AM, Alan Orth wrote: Dear Ravishankar, I'm not sure if Brick4 had pending AFRs because I don't know what that means and it's been a few days so I am not sure I would be able to find that information. When you find

Re: [Gluster-users] Transport endpoint is not connected

2019-05-28 Thread Ravishankar N
On 29/05/19 6:21 AM, David Cunningham wrote: Hello all, We are seeing a strange issue where a new node gfs3 shows another node gfs2 as not connected on the "gluster volume heal" info: [root@gfs3 bricks]# gluster volume heal gvol0 info Brick gfs1:/nodirectwritedata/gluster/gvol0 Status: Conne

Re: [Gluster-users] Transport endpoint is not connected

2019-05-29 Thread Ravishankar N
   N/A N/A    Y   7634 Task Status of Volume gvol0 -- There are no active volume tasks On Wed, 29 May 2019 at 16:26, Ravishankar N <mailto:ravishan...@redhat.com>> wrote:
