Re: [Gluster-users] Gluster install using Ganesha for NFS

2017-07-07 Thread Mahdi Adnan
Hi, Why the change to storhaug? And what's going to happen to the current setup if I want to update Gluster to 3.11 or beyond? -- Respectfully Mahdi A. Mahdi From: gluster-users-boun...@gluster.org on behalf of Kaleb S.

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
Ram, As per the code, self-heal was the only candidate which *can* do it. Could you check the logs of the self-heal daemon and the mount to see if there are any metadata heals on root? +Sanoj Sanoj, is there any systemtap script we can use to detect which process is removing these
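
A SystemTap one-liner along these lines could answer that question; this is a minimal sketch (not a script from the thread), assuming the systemtap package and matching kernel debug symbols are installed on the brick node:

    # Trace every *removexattr syscall system-wide and print who made it
    stap -e 'probe syscall.removexattr, syscall.fremovexattr, syscall.lremovexattr {
        printf("%s (pid %d): %s\n", execname(), pid(), argstr)
    }'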

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Vijay Bellur
Do you observe any event pattern (self-healing / disk failures / reboots etc.) after which the extended attributes are missing? Regards, Vijay On Fri, Jul 7, 2017 at 5:28 PM, Ankireddypalle Reddy wrote: > We lost the attributes on all the bricks on servers glusterfs2 and

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Ankireddypalle Reddy
We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again. [root@glusterfs2 Log_Files]# gluster volume info Volume Name: StoragePool Type: Distributed-Disperse Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f Status: Started Number of Bricks: 20 x (2 + 1) = 60

Re: [Gluster-users] Gluster failure due to "0-management: Lock not released for "

2017-07-07 Thread Victor Nomura
It’s working again! After countless hours trying to get it fixed, I just redid everything and tested to see what caused Gluster to fail. The problem went away and there were no more locks after I disabled jumbo frames and changed the MTU back to 1500. With the MTU set to 9000, Gluster was dead.
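
For anyone hitting the same symptom, a quick way to verify whether jumbo frames actually survive the path end to end (interface and peer names are examples, not from the thread):

    # Check the current MTU on the storage interface
    ip link show eth0
    # 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids
    # fragmentation, so the ping only succeeds if every hop passes 9000 bytes
    ping -M do -s 8972 -c 3 gluster-peer1
    # Fall back to the standard MTU if the path cannot carry jumbo frames
    ip link set dev eth0 mtu 1500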

Re: [Gluster-users] Ganesha "Failed to create client in recovery dir" in logs

2017-07-07 Thread Soumya Koduri
On 07/07/2017 11:36 PM, Renaud Fortier wrote: Hi all, I have this entry in the ganesha.log file on the server when mounting the volume on the client: « GLUSTER-NODE3 : ganesha.nfsd-54084[work-27] nfs4_add_clid :CLIENT ID :EVENT :Failed to create client in recovery dir

[Gluster-users] Ganesha "Failed to create client in recovery dir" in logs

2017-07-07 Thread Renaud Fortier
Hi all, I have this entry in the ganesha.log file on the server when mounting the volume on the client: « GLUSTER-NODE3 : ganesha.nfsd-54084[work-27] nfs4_add_clid :CLIENT ID :EVENT :Failed to create client in recovery dir (/var/lib/nfs/ganesha/v4recov/node0/:::192.168.2.152-(24:Linux NFSv4.2
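
A plausible first check, based only on the path in the log message (a sketch, not a confirmed fix):

    # Verify the NFSv4 recovery directory from the log exists and is writable
    # by the user ganesha.nfsd runs as (typically root)
    ls -ld /var/lib/nfs/ganesha/v4recov /var/lib/nfs/ganesha/v4recov/node0
    mkdir -p /var/lib/nfs/ganesha/v4recov/node0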

Re: [Gluster-users] Slow write times to gluster disk

2017-07-07 Thread Soumya Koduri
Hi, On 07/07/2017 06:16 AM, Pat Haley wrote: Hi All, A follow-up question. I've been looking at various pages on nfs-ganesha & gluster. Is there a version of nfs-ganesha that is recommended for use with glusterfs 3.7.11 (built on Apr 27 2016 14:09:22) on CentOS release 6.8 (Final)? For

[Gluster-users] Gluster 3.11 on ubuntu 16.04 not working

2017-07-07 Thread Christiane Baier
Hi there, we have a problem with a fresh installation of Gluster 3.11 on an Ubuntu 16.04 server. We did the installation straightforwardly, as described on the gluster.org website. In fstab there is: /dev/sdb1
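
The fstab line is cut off above; for reference, a typical brick mount plus the community packages for Ubuntu 16.04 look roughly like this (device, mount point, and PPA are illustrative assumptions, not taken from the report):

    # Hypothetical /etc/fstab entry for an XFS-formatted brick on /dev/sdb1
    /dev/sdb1  /gluster/brick1  xfs  defaults  0  2

    # Gluster 3.11 community packages for Ubuntu, per the gluster.org docs
    sudo add-apt-repository ppa:gluster/glusterfs-3.11
    sudo apt-get update && sudo apt-get install glusterfs-server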

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
On Fri, Jul 7, 2017 at 9:25 PM, Ankireddypalle Reddy wrote: > 3.7.19 > These are the only callers for removexattr, and only _posix_remove_xattr has the potential to do removexattr, as posix_removexattr already makes sure that it is not gfid/volume-id. And surprise surprise

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Ankireddypalle Reddy
3.7.19 Thanks and Regards, Ram From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com] Sent: Friday, July 07, 2017 11:54 AM To: Ankireddypalle Reddy Cc: Gluster Devel (gluster-de...@gluster.org); gluster-users@gluster.org Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy wrote: > Pranith, > > Thanks for looking into the issue. The bricks were > mounted after the reboot. One more thing that I noticed was that when the > attributes were manually set while glusterd was up, then on

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Ankireddypalle Reddy
Pranith, Thanks for looking into the issue. The bricks were mounted after the reboot. One more thing that I noticed was that when the attributes were manually set while glusterd was up, then on starting the volume the attributes were again lost. Had to stop glusterd, set attributes
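
Since attributes set while glusterd was running were lost again, a sketch of the manual restore with the daemon stopped (the brick path is hypothetical; the hex value is the Volume ID from the volume info earlier in the thread, with the dashes removed):

    # Stop glusterd first, then restore both xattrs on each affected brick root
    service glusterd stop
    setfattr -n trusted.glusterfs.volume-id \
      -v 0x149e976f4e21451cbf0ff5691208531f /path/to/brick
    # The root of a brick always carries the fixed all-zeros-plus-one gfid
    setfattr -n trusted.gfid -v 0x00000000000000000000000000000001 /path/to/brick
    service glusterd start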

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
On Fri, Jul 7, 2017 at 9:15 PM, Pranith Kumar Karampuri wrote: > Did anything special happen on these two bricks? It can't happen in the > I/O path: > posix_removexattr() has: > if (!strcmp (GFID_XATTR_KEY, name)) { > gf_msg

Re: [Gluster-users] [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-07 Thread Pranith Kumar Karampuri
Did anything special happen on these two bricks? It can't happen in the I/O path: posix_removexattr() has: if (!strcmp (GFID_XATTR_KEY, name)) { gf_msg (this->name, GF_LOG_WARNING, 0, P_MSG_XATTR_NOT_REMOVED, "Remove xattr called on gfid

[Gluster-users] gfid and volume-id extended attributes lost

2017-07-07 Thread Ankireddypalle Reddy
Hi, We faced an issue in production today. We had to stop the volume and reboot all the servers in the cluster. Once the servers rebooted, starting the volume failed because the following extended attributes were not present on all the bricks on 2 servers. 1) trusted.gfid
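
To see which attributes survive on each brick, a quick check (brick path is an example):

    # Dump all xattrs on the brick root, hex-encoded; a healthy brick shows
    # trusted.gfid and trusted.glusterfs.volume-id among them
    getfattr -d -m . -e hex /path/to/brick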

Re: [Gluster-users] I/O error for one folder within the mountpoint

2017-07-07 Thread Florian Leleu
Thank you Ravi, after checking the gfid within the brick I think someone made modifications inside the brick and not inside the mountpoint ... Well, I'll try to fix it; it's all within my hands. Thanks again, have a nice day. On 07/07/2017 at 12:28, Ravishankar N wrote: > On 07/07/2017 03:39

Re: [Gluster-users] I/O error for one folder within the mountpoint

2017-07-07 Thread Ravishankar N
On 07/07/2017 03:39 PM, Florian Leleu wrote: I guess you're right about the gfid, I got that: [2017-07-07 07:35:15.197003] W [MSGID: 108008] [afr-self-heal-name.c:354:afr_selfheal_name_gfid_mismatch_check] 0-applicatif-replicate-0: GFID mismatch for /snooper

Re: [Gluster-users] I/O error for one folder within the mountpoint

2017-07-07 Thread Florian Leleu
I guess you're right about the gfid, I got that: [2017-07-07 07:35:15.197003] W [MSGID: 108008] [afr-self-heal-name.c:354:afr_selfheal_name_gfid_mismatch_check] 0-applicatif-replicate-0: GFID mismatch for /snooper b9222041-72dd-43a3-b0ab-4169dbd9a87f on applicatif-client-1 and
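
One way to confirm the mismatch the log points at, using the brick path that appears later in the thread (run on each of the three nodes):

    # Compare the directory's gfid across the bricks; a mismatch shows up as
    # different hex values on different nodes
    getfattr -n trusted.gfid -e hex /mnt/gluster-applicatif/brick/snooper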

[Gluster-users] Rebalance task fails

2017-07-07 Thread Szymon Miotk
Hello everyone, I have a problem rebalancing a Gluster volume. The Gluster version is 3.7.3. My 1x3 replicated volume became full, so I've added three more bricks to make it 2x3 and wanted to rebalance. But every time I start rebalancing, it fails immediately. Rebooting the Gluster nodes doesn't help. #
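
The rebalance log usually records why the operation aborted; a first look could be (volume name is a placeholder):

    # Overall status per node
    gluster volume rebalance <VOLNAME> status
    # Per-node details are in the rebalance log at the default location
    less /var/log/glusterfs/<VOLNAME>-rebalance.log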

Re: [Gluster-users] op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)

2017-07-07 Thread Atin Mukherjee
You'd need to allow me some more time to dig into the logs. I'll try to get back on this by Monday. On Fri, Jul 7, 2017 at 2:23 PM, Gianluca Cecchi wrote: > On Thu, Jul 6, 2017 at 3:22 PM, Gianluca Cecchi > wrote: > >> On Thu, Jul 6, 2017 at

Re: [Gluster-users] I/O error for one folder within the mountpoint

2017-07-07 Thread Ravishankar N
What does the mount log say when you get the EIO error on snooper? Check if there is a gfid mismatch on the snooper directory or the files under it on all 3 bricks. In any case, the mount log or the glustershd.log of the 3 nodes for the gfids you listed below should give you some idea of why the
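
Following that suggestion, a quick scan of the self-heal daemon log on each node (log path is the default install location):

    # Look for gfid-mismatch or heal messages touching the snooper directory
    grep -iE 'gfid mismatch|snooper' /var/log/glusterfs/glustershd.log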

Re: [Gluster-users] I/O error for one folder within the mountpoint

2017-07-07 Thread Florian Leleu
Hi Ravi, thanks for your answer; sure, there you go: # gluster volume heal applicatif info Brick ipvr7.xxx:/mnt/gluster-applicatif/brick Status: Connected Number of entries: 6 Brick ipvr8.xxx:/mnt/gluster-applicatif/brick Status: Connected Number of entries: 29

Re: [Gluster-users] GluserFS WORM hardlink

2017-07-07 Thread Karthik Subrahmanya
Hi, If I have not misunderstood, you are saying that WORM is not allowing hard links to be created for the files. I am answering based on that assumption. If the volume-level or file-level WORM feature is enabled and the file is in the WORM/WORM-Retained state, then those files should be immutable and hence
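
A minimal sketch of the behaviour described, with made-up volume and file names:

    # Enable volume-level WORM; files become immutable once written
    gluster volume set myvol features.worm on
    # On a FUSE mount, creating a hard link to a WORM-protected file is then
    # expected to fail with a permission/read-only error
    ln /mnt/myvol/report.txt /mnt/myvol/report.link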

Re: [Gluster-users] I/O error for one folder within the mountpoint

2017-07-07 Thread Ravishankar N
On 07/07/2017 01:23 PM, Florian Leleu wrote: Hello everyone, first time on the ML, so excuse me if I'm not following the rules well; I'll improve if I get comments. We have one volume "applicatif" on three nodes (2 and 1 arbiter); each of the following commands was run on node ipvr8.xxx: #

[Gluster-users] I/O error for one folder within the mountpoint

2017-07-07 Thread Florian Leleu
Hello everyone, first time on the ML, so excuse me if I'm not following the rules well; I'll improve if I get comments. We have one volume "applicatif" on three nodes (2 and 1 arbiter); each of the following commands was run on node ipvr8.xxx: # gluster volume info applicatif Volume Name: applicatif

[Gluster-users] GluserFS WORM hardlink

2017-07-07 Thread 최두일
GlusterFS WORM: hard links will not be created. The OS is CentOS 7.

[Gluster-users] Community Meeting 2017-07-05 Minutes

2017-07-07 Thread Kaushal M
Hi all, The meeting minutes and logs for the community meeting held on Wednesday are available at the links below. [1][2][3][4] We had a good showing at this meeting. Thank you to everyone who attended. Our next meeting will be on 19th July. Everyone is welcome to attend. The meeting