Re: [Gluster-users] [Gluster-devel] Input/Output Error on Gluster NFS

2015-02-04 Thread Soumya Koduri
Hi Peter, Have you disabled Gluster-NFS ACLs? Please check the option value - # gluster v info | grep nfs.acl nfs.acl: ON Also please provide the details of the nfs-client you are using. Typically, nfs-clients seem to issue getxattr before doing setxattr/removexattr operations and return
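A minimal sketch of the check and fix being suggested above (VOLNAME is a placeholder for the actual volume name):

    # Check whether ACL support is enabled on the Gluster-NFS server
    gluster volume info VOLNAME | grep nfs.acl
    # Re-enable it if it was turned off
    gluster volume set VOLNAME nfs.acl on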

Re: [Gluster-users] [Gluster-devel] Input/Output Error on Gluster NFS

2015-02-05 Thread Soumya Koduri
volume info' only if we explicitly modify its value to ON/OFF. Can you please verify if the filesystem where your Gluster bricks have been created has been mounted with ACLs enabled? Thanks, Soumya Thanks -Peter From: Soumya Koduri [skod...@redhat.com
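One way to perform that verification, sketched with placeholder brick paths; note that XFS enables POSIX ACLs by default, whereas ext3/ext4 bricks may need the 'acl' mount option:

    # Show the mount options in effect for the brick filesystem
    mount | grep bricks
    # For ext3/ext4 bricks, ACL support can be added without unmounting
    mount -o remount,acl /bricks/brick1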

Re: [Gluster-users] [Gluster-devel] [Gluster-user] Sybase backup server failed to write to Gluster NFS

2015-01-22 Thread Soumya Koduri
Hi Peter, Can you please try manually mounting those volumes using any/other nfs client and check if you are able to perform write operations. Also please collect the gluster nfs log while doing so. Thanks, Soumya On 01/22/2015 08:18 AM, Peter Auyeung wrote: Hi, We have been having 5
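A rough illustration of the requested test (server name, volume name and mount point are placeholders):

    # Mount the volume over NFSv3 from a plain Linux client
    mount -t nfs -o vers=3 gluster-server:/VOLNAME /mnt/test
    # Attempt a small write
    dd if=/dev/zero of=/mnt/test/writetest bs=1M count=10
    # Watch the Gluster-NFS log on the server while the write runs
    tail -f /var/log/glusterfs/nfs.log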

Re: [Gluster-users] [Gluster-devel] [Gluster-user] Sybase backup server failed to write to Gluster NFS

2015-01-23 Thread Soumya Koduri
:afr_log_self_heal_completion_status] 0-sas02-replicate-2: metadata self heal is successfully completed, metadata self heal from source sas02-client-4 to sas02-client-5, metadata - Pending matrix: [ [ 0 0 ] [ 0 0 ] ], on /RepDBSata02 Thanks Peter From: Soumya Koduri

Re: [Gluster-users] [Gluster-devel] REMINDER: NFS-Ganesha features demo on Google Hangout in ~1 hour from now

2015-04-30 Thread Soumya Koduri
We are sorry for the inconvenience caused during the hangout session. There is a network outage at our place. We shall do the recording again and share the link sometime next week. Thanks, Soumya On 04/30/2015 06:08 PM, Niels de Vos wrote: On Wed, Apr 29, 2015 at 11:20:20AM -0400, Meghana

Re: [Gluster-users] Questions on ganesha HA and shared storage size

2015-06-08 Thread Soumya Koduri
! Cheers, Alessandro On 08 Jun 2015, at 13:53, Soumya Koduri skod...@redhat.com wrote: Hi, Please find the slides of the demo video at [1] We recommend having a distributed replica volume as the shared volume for better data availability. The size of the volume depends

Re: [Gluster-users] Questions on ganesha HA and shared storage size

2015-06-09 Thread Soumya Koduri
09 Jun 2015, at 08:06, Soumya Koduri skod...@redhat.com mailto:skod...@redhat.com wrote: On 06/09/2015 01:31 AM, Alessandro De Salvo wrote: OK, I found at least one of the bugs. The /usr/libexec/ganesha/ganesha.sh script has the following lines: if [ -e /etc/os-release

Re: [Gluster-users] Questions on ganesha HA and shared storage size

2015-06-09 Thread Soumya Koduri
On 06/09/2015 02:06 PM, Alessandro De Salvo wrote: Hi Soumya, On 09 Jun 2015, at 08:06, Soumya Koduri skod...@redhat.com wrote: On 06/09/2015 01:31 AM, Alessandro De Salvo wrote: OK, I found at least one of the bugs. The /usr/libexec/ganesha/ganesha.sh script has

Re: [Gluster-users] Questions on ganesha HA and shared storage size

2015-06-09 Thread Soumya Koduri
, Alessandro On 08 Jun 2015, at 19:30, Soumya Koduri skod...@redhat.com wrote: On 06/08/2015 08:20 PM, Alessandro De Salvo wrote: Sorry, just another question: - in my installation of gluster 3.7.1 the command gluster features.ganesha enable does not work

Re: [Gluster-users] Questions on ganesha HA and shared storage size

2015-06-08 Thread Soumya Koduri
Hi, Please find the slides of the demo video at [1]. We recommend having a distributed replica volume as the shared volume for better data availability. The size of the volume depends on the workload you may have. Since it is used to maintain states of NLM/NFSv4 clients, you may calculate the

Re: [Gluster-users] gluster-3.7 cannot start volume ganesha feature cannot turn on problem

2015-06-03 Thread Soumya Koduri
[mailto:achir...@redhat.com] Sent: Tuesday, June 02, 2015 11:18 PM To: benchu...@iii.org.tw; gluster-users@gluster.org Cc: Meghana Madhusudhan; Soumya Koduri Subject: Re: [Gluster-users] gluster-3.7 cannot start volume ganesha feature cannot turn on problem Can you please attach the glusterd logs here? You

Re: [Gluster-users] gluster-3.7 cannot start volume ganesha feature cannot turn on problem

2015-06-03 Thread Soumya Koduri
'Volume (null)' failed on localhost By the way, does glusterfs HA for ganesha support pnfs? Many thanks, Ben -Original Message- From: Soumya Koduri [mailto:skod...@redhat.com] Sent: Wednesday, June 03, 2015 3:35 PM To: 莊尚豪; 'Anoop C S'; gluster-users@gluster.org Cc: 'Meghana Madhusudhan

Re: [Gluster-users] Questions on ganesha HA and shared storage size

2015-06-09 Thread Soumya Koduri
with nfs-ganesha 2.1.0, so it must be something introduced with 2.2.0. Cheers, Alessandro On Tue, 2015-06-09 at 15:16 +0530, Soumya Koduri wrote: On 06/09/2015 02:48 PM, Alessandro De Salvo wrote: Hi, OK, the problem with the VIPs not starting is due to the ganesha_mon heartbeat script

Re: [Gluster-users] Questions on ganesha HA and shared storage size

2015-06-10 Thread Soumya Koduri
Thanks, Alessandro On 09 Jun 2015, at 18:37, Soumya Koduri skod...@redhat.com wrote: On 06/09/2015 09:47 PM, Alessandro De Salvo wrote: Another update: the fact that I was unable to use vol set ganesha.enable was due to another bug in the ganesha scripts. In short

Re: [Gluster-users] [Nfs-ganesha-devel] Problems in /usr/libexec/ganesha/dbus-send.sh and ganesha dbus interface when disabling exports from gluster

2015-06-18 Thread Soumya Koduri
...@lists.sourceforge.net, Soumya Koduri skod...@redhat.com Sent: Thursday, June 18, 2015 7:24:55 PM Subject: Re: [Nfs-ganesha-devel] Problems in /usr/libexec/ganesha/dbus-send.sh and ganesha dbus interface when disabling exports from gluster Hi Meghana, On 18 Jun 2015, at 07:04, Meghana

Re: [Gluster-users] [Nfs-ganesha-devel] Problems in /usr/libexec/ganesha/dbus-send.sh and ganesha dbus interface when disabling exports from gluster

2015-06-18 Thread Soumya Koduri
On 06/18/2015 07:39 PM, Malahal Naineni wrote: I still have not looked at the log messages, but I see the dbus thread waiting for the upcall thread to complete when an export is removed. Is there is a time limit on how the upcall thread gets blocked? A variable called 'destroy_mode' is used

Re: [Gluster-users] [Nfs-ganesha-devel] Problems in /usr/libexec/ganesha/dbus-send.sh and ganesha dbus interface when disabling exports from gluster

2015-06-17 Thread Soumya Koduri
On 06/17/2015 10:57 PM, Alessandro De Salvo wrote: Hi, when disabling exports from gluster 3.7.1, by using gluster vol set volume ganesha.enable off, I always get the following error: Error: Dynamic export addition/deletion failed. Please see log file for details This message is produced by
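For reference, the dynamic unexport performed by the helper script boils down to a D-Bus call of roughly this form; the export ID shown is an assumption and in practice comes from the export file generated for the volume:

    dbus-send --system --print-reply --dest=org.ganesha.nfsd \
      /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport \
      uint16:77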

Re: [Gluster-users] Ganesha exports access restrictions and Posix ACLs support

2015-06-17 Thread Soumya Koduri
, Soumya Koduri skod...@redhat.com wrote: Hi Alessandro, Response inline. On 06/17/2015 08:07 PM, Alessandro De Salvo wrote: Hi, I do not seem to be able to find an option to automatically add the gluster volume restrictions I can set with auth.allow to a ganesha export. As far as I have

Re: [Gluster-users] gluster-3.7 cannot start volume ganesha feature cannot turn on problem

2015-06-02 Thread Soumya Koduri
On 06/02/2015 04:38 PM, Anoop C S wrote: On 06/02/2015 01:42 PM, 莊尚豪 wrote: Hi all, I have two questions about glusterfs-3.7 on fedora-22. I used to have a glusterfs cluster on version 3.6.2. The following configuration works in version 3.6.2, but not in version 3.7. There are 2 nodes for

Re: [Gluster-users] Questions on ganesha HA and shared storage size

2015-06-11 Thread Soumya Koduri
? That would be wrong. Thanks, Alessandro On Wed, 2015-06-10 at 15:28 +0530, Soumya Koduri wrote: On 06/10/2015 05:49 AM, Alessandro De Salvo wrote: Hi, I have enabled the full debug already, but I see nothing special. Before exporting any volume the log shows no error, even when I do

Re: [Gluster-users] [Gluster-devel] glusterfs+nfs-ganesha+windows2008 issues

2015-05-26 Thread Soumya Koduri
Hi Louis, AFAIK, we never tested nfs-ganesha+glusterfs using a Windows client. Maybe it would be good if you could collect and provide packet traces/cores or logs (with nfs-ganesha at least at NIV_DEBUG level) on the server side while you run these tests, to debug further. Thanks, Soumya On

Re: [Gluster-users] ganesha BFS

2015-08-13 Thread Soumya Koduri
It depends on the workload. As with native NFS, even with NFS-Ganesha data is routed through the server it is mounted from. In addition, the NFSv4.x protocol adds more complexity and cannot be directly compared with NFSv3 traffic. However, with pNFS, I/O is routed directly to the data servers by the
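As a small illustration, pNFS requires an NFSv4.1 mount; the server and paths below are placeholders:

    mount -t nfs -o vers=4.1 mds-server:/VOLNAME /mnt/pnfs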

Re: [Gluster-users] NFS mount

2015-08-19 Thread Soumya Koduri
On 08/19/2015 09:53 PM, shacky wrote: Hi Atin, thank you very much for your answer. It seems like your NFS kernel module is not disabled. Please try disabling it and re-mounting. I tried, but it did not solve my problem: # service nfs-common stop Stopping NFS common utilities: idmapd statd.
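A sketch of fully disabling the kernel NFS server before retrying the Gluster-NFS mount (Debian-style service names, matching the output above; adjust for your distribution):

    service nfs-kernel-server stop
    service nfs-common stop
    # Confirm what is still registered for the nfs program
    rpcinfo -p | grep -w nfs
    # Then retry the mount against the Gluster-NFS server
    mount -t nfs -o vers=3 gluster-server:/VOLNAME /mnt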

Re: [Gluster-users] GlusterFS 3.7.2 and ACL

2015-07-31 Thread Soumya Koduri
absolute path names # file: mnt/dir5 # owner: root # group: root user::rwx group::r-x other::r-x # Though these ACLs are displayed when done getfacl using brick-path directly. Thanks, Soumya On 31 Jul 2015, at 09:35, Soumya Koduri skod...@redhat.com wrote: I have tested it using the gluster-NFS

Re: [Gluster-users] GlusterFS 3.7.2 and ACL

2015-07-31 Thread Soumya Koduri
this, along with ACCESS ACL, please set an inherit/default ACL on that directory and check getfacl output. Thanks, Soumya On 07/31/2015 05:51 PM, Soumya Koduri wrote: On 07/31/2015 05:33 PM, Jüri Palis wrote: Playing around with my GlusterFS test setup I discovered following anomaly On volume
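A minimal example of what is being asked for, with a hypothetical group name:

    # Set a default (inherit) ACL in addition to the access ACL
    setfacl -m d:g:testgroup:rwx /mnt/dir5
    # Verify that both access and default entries are returned
    getfacl /mnt/dir5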

Re: [Gluster-users] GlusterFS 3.7.2 and ACL

2015-07-31 Thread Soumya Koduri
I have tested it using the gluster-NFS server with GlusterFS version 3.7.* running on a RHEL7 machine and RHEL 6.7 as NFS client. ACLs with named groups got properly set on the directory. Could you please provide us the packet trace (better taken on the server side so that we can check

Re: [Gluster-users] Bareos backup from Gluster mount

2015-07-30 Thread Soumya Koduri
On 07/30/2015 03:36 PM, Pranith Kumar Karampuri wrote: On 07/30/2015 03:34 PM, Soumya Koduri wrote: On 07/30/2015 01:07 PM, Pranith Kumar Karampuri wrote: Added folks who work on nfs on gluster. Let's see. Pranith On 07/30/2015 09:25 AM, Ryan Clough wrote: Okay, I think this has

Re: [Gluster-users] Change transport-type on volume from tcp to rdma, tcp

2015-07-21 Thread Soumya Koduri
From the following errors, [2015-07-21 14:36:30.495321] I [MSGID: 114020] [client.c:2118:notify] 0-vol_shared-client-0: parent translators are ready, attempting connect on transport [2015-07-21 14:36:30.498989] W [socket.c:923:__socket_keepalive] 0-socket: failed to set TCP_USER_TIMEOUT 0 on

Re: [Gluster-users] Change transport-type on volume from tcp to rdma, tcp

2015-07-21 Thread Soumya Koduri
On 07/21/2015 02:40 PM, Geoffrey Letessier wrote: Dear all, Is there a way to modify a GlusterFS volume's transport-type setting? I previously set the transport-type parameter to tcp for my main volume and I would like to change it from tcp to rdma,tcp. [1] captures most of the
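The commonly documented way to switch transports uses the config.transport option with the volume stopped; this is only a sketch under that assumption (VOLNAME is a placeholder):

    gluster volume stop VOLNAME
    gluster volume set VOLNAME config.transport tcp,rdma
    gluster volume start VOLNAME
    # Clients can then mount over RDMA
    mount -t glusterfs -o transport=rdma server:/VOLNAME /mnt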

Re: [Gluster-users] Question on HA Active-Active Ganesha setup

2015-11-06 Thread Soumya Koduri
On 11/05/2015 08:43 PM, Surya K Ghatty wrote: All... I need your help! I am trying to setup Highly available Active-Active Ganesha configuration on two glusterfs nodes based on instructions here: https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/

Re: [Gluster-users] write(filename, ...) implementation

2015-08-25 Thread Soumya Koduri
are sufficient - 'glfs_h_lookupat' - to get 'glfs_object' given file path. 'glfs_h_anonymous_write' - takes above handle as one of the inputs. Thanks, Soumya Ivica On 24 Aug 2015, at 20:02, Soumya Koduri skod...@redhat.com wrote: On 08/24/2015 11:24 PM, Ivica Siladic wrote: Hi, I'm doing a lot

[Gluster-users] Contributing to GlusterFS: Fixing Coverity defects

2015-09-08 Thread Soumya Koduri
Hi, If you would like to contribute to GlusterFS, one of the easiest ways to get familiar with the code is to fix defects reported by the Coverity Scan tool. The detailed process is described in [1]. To summarize, * Sign up as a member of https://scan.coverity.com/projects/987 *

Re: [Gluster-users] ganesha and glusterfs 3.7, what provides nfs-ganesha-gluster?

2015-09-03 Thread Soumya Koduri
you can download the same using the repos listed here http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/2.2.0/ Thanks, Soumya On 09/04/2015 05:22 AM, Patrick Riehecky wrote: I'm fairly close to a working ganesha/glusterfs setup, but I can't seem to install glusterfs-ganesha. Any

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-13 Thread Soumya Koduri
On 09/13/2015 09:38 AM, Yaroslav Molochko wrote: I wish this could be that simple: root@PSC01SERV008:/var/lib# netstat -nap | grep 38465 root@PSC01SERV008:/var/lib# ss -n | grep 38465 root@PSC01SERV008:/var/lib# 2015-09-13 1:34 GMT+08:00 Atin Mukherjee

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-15 Thread Soumya Koduri
ng I could find in Google and it doesn't work. Please, let's move on to something more sophisticated than restarting glusterfs... I would not contact you if I had not tried restarting it a dozen times. Do you have any debugging to see what is really happening? 2015-09-15 1:55 GMT+08:00 Soumya Kodur

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-15 Thread Soumya Koduri
Small correction in the file I provided earlier. pmap_set returns 0 in case of failure. On 09/16/2015 12:08 AM, Soumya Koduri wrote: /* pmap_set() returns 0 for FAIL and 1 for SUCCESS */ if (!(pmap_set (newprog->prognum, newprog->progver, IPPRO

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-14 Thread Soumya Koduri
Could you try * disabling iptables (& firewalld if enabled) * restarting the rpcbind service * restarting glusterd If this doesn't work, add the below line (mentioned in one of the forums) to the '/etc/hosts.allow' file: ALL: 127.0.0.1 : ALLOW Then restart the rpcbind and glusterd services. Thanks, Soumya On
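The same steps expressed as a sketch for a systemd-based distribution (an assumption; adapt the service names to your system):

    systemctl stop firewalld
    echo "ALL: 127.0.0.1 : ALLOW" >> /etc/hosts.allow
    systemctl restart rpcbind
    systemctl restart glusterd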

Re: [Gluster-users] volume répliqué.

2015-09-28 Thread Soumya Koduri
On 09/25/2015 04:04 PM, Pierre Léonard wrote: Hi all, I have 14 nodes with a replica 2 volume. I want to remove the replication. Is that possible without losing data, or do I need to make a backup of the data first? The only way I can imagine to avoid replication in case of

Re: [Gluster-users] Enquiry about Gluster

2015-09-27 Thread Soumya Koduri
On 09/23/2015 07:34 AM, Premkumar Mani wrote: Team, We would like to implement gluster in my environment. could you please help me to get more information about this product ? May be good to start with the below links - http://www.gluster.org/ http://gluster.readthedocs.org/en/latest/

Re: [Gluster-users] Fwd: nfs-ganesha HA with arbiter volume

2015-09-22 Thread Soumya Koduri
[2015-09-21 08:18:51.504694] E [MSGID: 106062] [glusterd-op-sm.c:3698:glusterd_op_ac_unlock] 0-management: Unable to acquire volname I have put the hostnames of all servers in my /etc/hosts file, including the arbiter node. On 18 September 2015 at 16:52, Soumya Koduri <skod...@redh

Re: [Gluster-users] Gluster 3.7 and nfs ganesha HA howto

2015-09-22 Thread Soumya Koduri
Hi, This doc may come in handy for you to configure HA NFS - https://github.com/soumyakoduri/glusterdocs/blob/ha_guide /Administrator%20Guide/Configuring%20HA%20NFS%20Server.md Thanks, Soumya On 09/21/2015 11:24 PM, Gluster Admin wrote: Hi, Can someone point me to the howto/docs on setting

Re: [Gluster-users] Fwd: nfs-ganesha HA with arbiter volume

2015-09-22 Thread Soumya Koduri
-- With Regards, Jiffin [2015-09-21 07:59:48.653912] E [MSGID: 106123] [glusterd-syncop.c:1404:gd_commit_op_phase] 0-management: Commit of operation 'Volu

Re: [Gluster-users] Fwd: nfs-ganesha HA with arbiter volume

2015-09-22 Thread Soumya Koduri
safe, after disabling nfs-ganesha, run the below script command: # /usr/libexec/ganesha/ganesha-ha.sh --cleanup /etc/ganesha Thanks, Soumya On 22 September 2015 at 09:04, Soumya Koduri <skod...@redhat.com <mailto:skod...@redhat.com>> wrote: Hi Tiemen, Have added the step

Re: [Gluster-users] nfs-ganesha HA with arbiter volume

2015-09-18 Thread Soumya Koduri
Hi Tiemen, One of the pre-requisites before setting up nfs-ganesha HA is to create and mount the shared_storage volume. Use the below CLI for that: "gluster volume set all cluster.enable-shared-storage enable" It shall create the volume and mount it on all the nodes (including the arbiter node). Note
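After running that CLI, the shared volume should appear mounted on every node; a quick check (the mount path shown is the usual default and may differ between versions):

    gluster volume info gluster_shared_storage
    mount | grep gluster_shared_storage   # typically under /var/run/gluster/shared_storage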

Re: [Gluster-users] gluster nfs-ganesha enable fails and is driving me crazy

2015-12-09 Thread Soumya Koduri
On 12/10/2015 02:51 AM, Marco Antonio Carcano wrote: Hi Kaleb, thank you very much for the quick reply I tried what you suggested, but I got the same error I tried both HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local" VIP_glstr01.carcano.local="192.168.65.250"

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
re ~1.8M files on this test volume. On Friday, 25 December 2015 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be a GlusterFS API library memory leak, because NFS-Ganesha also consumes a huge amount of memory while doing an ordinary "

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2015-12-31 Thread Soumya Koduri
On 12/28/2015 02:32 PM, Soumya Koduri wrote: - Original Message - From: "Pranith Kumar Karampuri" <pkara...@redhat.com> To: "Oleksandr Natalenko" <oleksa...@natalenko.name>, "Soumya Koduri" <skod...@redhat.com> Cc: gluster-users@glu

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
ave pasted below apply to gfapi/nfs-ganesha applications. Also, to resolve the nfs-ganesha issue which I had mentioned below (in case if Entries_HWMARK option gets changed), I have posted below fix - https://review.gerrithub.io/#/c/258687 Thanks, Soumya Ideas? 05.01.2016 12:31, Sou

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
2016 р. 22:52:25 EET Soumya Koduri wrote: On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote: Unfortunately, both patches didn't make any difference for me. I've patched 3.7.6 with both patches, recompiled and installed patched GlusterFS package on client side and mounted volume with ~2M of files

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/25/2015 08:56 PM, Oleksandr Natalenko wrote: What units is Cache_Size measured in? Bytes? It's actually (Cache_Size * sizeof_ptr) bytes. If possible, could you please run the ganesha process under valgrind? It will help in detecting leaks. Thanks, Soumya 25.12.2015 16:58, Soumya Koduri
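One possible way to run ganesha under valgrind for leak detection; the flags are the standard foreground/config-file options, and the paths are assumptions:

    systemctl stop nfs-ganesha
    valgrind --leak-check=full --show-leak-kinds=all \
      --log-file=/tmp/ganesha-valgrind.log \
      /usr/bin/ganesha.nfsd -F -f /etc/ganesha/ganesha.conf -L /tmp/ganesha.log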

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be a GlusterFS API library memory leak, because NFS-Ganesha also consumes a huge amount of memory while doing an ordinary "find . -type f" via NFSv4.2 on a remote client. Here is the memory usage: === root 5416 34.2 78.5

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-26 Thread Soumya Koduri
. https://gist.github.com/e4602a50d3c98f7a2766 One may see GlusterFS-related leaks here as well. On Friday, 25 December 2015 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be a GlusterFS API library memory leak because NFS

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2015-12-28 Thread Soumya Koduri
- Original Message - > From: "Pranith Kumar Karampuri" <pkara...@redhat.com> > To: "Oleksandr Natalenko" <oleksa...@natalenko.name>, "Soumya Koduri" > <skod...@redhat.com> > Cc: gluster-users@gluster.org, gluster-de...@gluste

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 90 minutes)

2015-12-22 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone that is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

Re: [Gluster-users] [Gluster-devel] REMINDER: Gluster Bug Triage timing-poll

2015-12-22 Thread Soumya Koduri
+gluster-users On 12/22/2015 06:03 PM, Hari Gowtham wrote: Hi all, There was a poll conducted to find the timing that suits best for the people who want to participate in the weekly Gluster bug triage meeting. The result for the poll is yet to be announced but we would like to get more

[Gluster-users] Minutes of today's Gluster Community Bug Triage meeting (22nd Dec 2015)

2015-12-22 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who have attend the meeting. Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2015-12-22/gluster_bug_triage.2015-12-22-12.00.html Minutes (text):

Re: [Gluster-users] 3 node NFS-Ganesha Cluster

2015-11-30 Thread Soumya Koduri
Hi, But are you telling me that in a 3-node cluster, quorum is lost when one of the nodes' IPs is down? Yes. It's a limitation of Pacemaker/Corosync. If the nodes participating in the cluster cannot communicate with the majority of them (quorum is lost), then the cluster is shut down. However i

Re: [Gluster-users] 3 node NFS-Ganesha Cluster

2015-11-30 Thread Soumya Koduri
On 11/30/2015 03:26 PM, Soumya Koduri wrote: Hi, But are you telling me that in a 3-node cluster, quorum is lost when one of the nodes' IPs is down? Yes. It's a limitation of Pacemaker/Corosync. If the nodes participating in the cluster cannot communicate with the majority of them (quorum is lost

Re: [Gluster-users] 3 node NFS-Ganesha Cluster

2015-11-27 Thread Soumya Koduri
Hi, On 11/27/2015 01:58 PM, ml wrote: Dear All, I am trying to get a nfs-ganesha ha cluster running, with 3, CentOS Linux release 7.1.1503 nodes. I use the package glusterfs-ganesha-3.7.6 -1.el7.x86_64 to get the HA scripts. So far it works fine when i stop the nfs-ganesha service on one of

Re: [Gluster-users] nfs mounting and copying files

2015-11-18 Thread Soumya Koduri
On 11/17/2015 08:50 PM, Pierre Léonard wrote: Hi all, I have a cluster with 14 nodes. I have built a stripe 7 volume across the 14 nodes. The underlying filesystem is XFS. Locally I mount the global volume with NFS: mount -t nfs 127.0.0.1:gvExport /glusterfs/gvExport -o _netdev,nosuid,bg,exec then I

Re: [Gluster-users] Configuring Ganesha and gluster on separate nodes?

2015-11-18 Thread Soumya Koduri
On 11/17/2015 10:21 PM, Surya K Ghatty wrote: Hi: I am trying to understand if it is technically feasible to have gluster nodes on one machine, and export a volume from one of these nodes using a nfs-ganesha server installed on a totally different machine? I tried the below and showmount -e

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
On 01/08/2016 05:04 PM, Soumya Koduri wrote: I could reproduce it while testing deep directories within the mount point. I root-caused the issue and had a discussion with Pranith to understand the purpose and the recommended way of taking nlookup on inodes. I shall make changes to my existing fix and

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
as lost. But most of the inodes should have been purged when we drop the vfs cache. Did you drop the vfs cache before exiting the process? I shall add some log statements and check that part. Thanks, Soumya On 12.01.2016 08:24, Soumya Koduri wrote: For the fuse client, I tried vfs drop_caches
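For reference, dropping the VFS caches on the client is done with (as root):

    sync
    echo 3 > /proc/sys/vm/drop_caches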

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
On 01/13/2016 04:08 PM, Soumya Koduri wrote: On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote: Just in case, here is Valgrind output on FUSE client with 3.7.6 + API-related patches we discussed before: https://gist.github.com/cd6605ca19734c1496a4 Thanks for sharing the results. I made

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
Regards, Mathieu CHATEAU http://www.lotp.fr 2016-01-12 7:24 GMT+01:00 Soumya Koduri <skod...@redhat.com <mailto:skod...@redhat.com>>: On 01/11/2016 05:11 PM, Oleksandr Natalenko wrote: A brief test shows that Ganesha stopped leaking a

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
On 11.01.2016 12:26, Soumya Koduri wrote: I have made changes to fix the lookup leak in a different way (as discussed with Pranith) and uploaded them in the latest patch set #4 - http://review.gluster.org/#/c/13096/ Please check if it resolves the mem leak and hopefully doesn't result in any

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-11 Thread Soumya Koduri
b2902bba1 [10] https://gist.github.com/385bbb95ca910ec9766f [11] https://gist.github.com/685c4d3e13d31f597722 On 10.02.2016 15:37, Oleksandr Natalenko wrote: Hi, folks. Here are new test results regarding the client memory leak. I use v3.7.8 with the following patches: === Soumya Koduri (2):

Re: [Gluster-users] How to maintain HA using NFS clients if the NFS daemon process gets killed on a gluster node?

2016-01-27 Thread Soumya Koduri
USE client may be a good option for us as well, but I can't seem to get speeds higher than 30 MB/s using the Gluster FUSE client (I posted more details on that earlier today to this group as well, looking for advice there). -Kris ____ From: Soumya Ko

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-01-31 Thread Soumya Koduri
ix in fuse-bridge, revisited Pranith Kumar K (1): mount/fuse: Fix use-after-free crash Soumya Koduri (3): gfapi: Fix inode nlookup counts inode: Retire the inodes from the lru list in inode_table_destroy upcall: free the xdr* allocations === With those patches we got API leaks fix

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
-level, client/server/both. Thanks, Soumya On 01.02.2016 09:54, Soumya Koduri wrote: On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote: Unfortunately, this patch doesn't help. RAM usage when "find" finishes is ~9G. Here is the statedump before drop_caches: https://gist.github.com/ fc1647de09

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
On 02/01/2016 02:48 PM, Xavier Hernandez wrote: Hi, On 01/02/16 09:54, Soumya Koduri wrote: On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote: Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and glusterfs process CPU usage goes to 100% for a while. I haven't waited

Re: [Gluster-users] How to maintain HA using NFS clients if the NFS daemon process gets killed on a gluster node?

2016-01-27 Thread Soumya Koduri
On 01/27/2016 09:39 PM, Kris Laib wrote: Hi all, We're getting ready to roll out Gluster using standard NFS from the clients, and CTDB and RRDNS to help facilitate HA. I thought we were good to go, but recently had an issue where there wasn't enough memory on one of the gluster nodes in a

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/12/2016 11:27 AM, Soumya Koduri wrote: On 02/11/2016 08:33 PM, Oleksandr Natalenko wrote: And "API" test. I used custom API app [1] and did brief file manipulations through it (create/remove/stat). Then I performed drop_caches, finished API [2] and got the following Valgr

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/16/2016 08:06 PM, Oleksandr Natalenko wrote: Hmm, OK. I've rechecked 3.7.8 with the following patches (latest revisions): === Soumya Koduri (3): gfapi: Use inode_forget in case of handle objects inode: Retire the inodes from the lru list in inode_table_destroy rpc

Re: [Gluster-users] Troubleshooting Gluster Client Mount Disconnection

2016-02-19 Thread Soumya Koduri
On 02/20/2016 02:08 AM, Takemura, Won wrote: I am new user / admin of Gluster. I am involved in a support issue where access to a gluster mount fails and the error is displayed when users attempt to access the mount: -bash: cd: /app/: Transport endpoint is not connected This error is

Re: [Gluster-users] nfs-ganesha volume null errors

2016-03-14 Thread Soumya Koduri
Hi, On 03/14/2016 04:06 AM, ML Wong wrote: Running CentOS Linux release 7.2.1511, glusterfs 3.7.8 (glusterfs-server-3.7.8-2.el7.x86_64), nfs-ganesha-gluster-2.3.0-1.el7.x86_64 1) Ensured the connectivity between gluster nodes by using PING 2) Disabled NetworkManager (Loaded: loaded

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 2.5 hours)

2016-03-15 Thread Soumya Koduri
Hi, This meeting is scheduled for anyone that is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC (in

Re: [Gluster-users] nfs-ganesha volume null errors

2016-03-22 Thread Soumya Koduri
console during cluster setup. Please give it a try and let me know if you see any errors. Thanks, Soumya Testing Environment: Running CentOS Linux release 7.2.1511, glusterfs 3.7.8 (glusterfs-server-3.7.8-2.el7.x86_64), nfs-ganesha-gluster-2.3.0-1.el7.x86_64 On Mon, Mar 14, 2016 at 2:05 AM, S

Re: [Gluster-users] NFS Client issues with Gluster Server 3.6.9

2016-03-08 Thread Soumya Koduri
The log file didn't have any errors logged. Please check the NFS client logs in '/var/log/messages' or using dmesg and brick logs as well. Probably strace or packet trace could help too. You could use the below command to capture the pkt trace while running the I/Os on the node where
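The actual capture command is truncated in this snippet; purely as an illustration, a capture of NFS traffic during the I/O might look like the following (interface and ports are assumptions):

    tcpdump -i any -s 0 -w /tmp/nfs-io.pcap port 2049 or port 38465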

[Gluster-users] Minutes of today's Gluster Community Bug Triage meeting (May 17 2016)

2016-05-17 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who have attended the meeting. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-05-17/gluster_bug_triage.2016-05-17-12.01.html Minutes (text):

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-04 Thread Soumya Koduri
Even on my setup, if I change nfs.port, all the other services also start registering on that port. Could you please file a bug for it? That seems like a bug (or is it intentional..Niels?). 153 tcp 2049 mountd 151 tcp 2049 mountd 133 tcp 2049

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-04 Thread Soumya Koduri
e it. Regards, Abhishek On Wed, May 4, 2016 at 11:33 AM, Soumya Koduri <skod...@redhat.com <mailto:skod...@redhat.com>> wrote: Hi Abhishek, Below 'rpcinfo' output doesn't list 'nfsacl' protocol. That must be the reason client is not able set ACLs.

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-04 Thread Soumya Koduri
per 103 udp111 portmapper 102 udp111 portmapper 153 tcp 2049 mountd 151 tcp 2049 mountd 133 tcp 2049 nfs 1002273 tcp 2049 On Wed, May 4, 2016 at 12:09 PM, Soumya Koduri <skod...@

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-04 Thread Soumya Koduri
Hi Abhishek, The 'rpcinfo' output below doesn't list the 'nfsacl' protocol. That must be the reason the client is not able to set ACLs. Could you please check the log file '/var/log/glusterfs/nfs.log' for any errors logged with respect to protocol registration failures. Thanks, Soumya On 05/04/2016
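A quick way to confirm whether the ACL side-protocol is registered, sketched here:

    # nfs_acl (RPC program 100227) should appear alongside nfs and mountd
    rpcinfo -p localhost | egrep 'nfs|mount|acl'
    # Check the Gluster-NFS log for registration failures
    grep -i acl /var/log/glusterfs/nfs.log | tail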

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-07-30 Thread Soumya Koduri
The inode stored in the shard xlator local is NULL. CCing Krutika to comment. Thanks, Soumya (gdb) bt #0 0x7f196acab210 in pthread_spin_lock () from /lib64/libpthread.so.0 #1 0x7f196be7bcd5 in fd_anonymous (inode=0x0) at fd.c:804 #2 0x7f195deb1787 in shard_common_inode_write_do

Re: [Gluster-users] NFS-Ganesha lo traffic

2016-08-09 Thread Soumya Koduri
On 08/09/2016 09:06 PM, Mahdi Adnan wrote: Hi, Thank you for your reply. The traffic is related to GlusterFS; 18:31:20.419056 IP 192.168.208.134.49058 > 192.168.208.134.49153: Flags [.], ack 3876, win 24576, options [nop,nop,TS val 247718812 ecr 247718772], length 0 18:31:20.419080 IP

[Gluster-users] Minutes from today's Gluster Community Bug Triage meeting (July 12 2016)

2016-07-12 Thread Soumya Koduri
Hi, Thanks to everyone who joined the meeting. Please find the minutes of today's Gluster Community Bug Triage meeting at the below links. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-07-12/gluster_bug_triage.2016-07-12-12.00.html Minutes (text):

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 30 minutes)

2016-07-12 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

Re: [Gluster-users] NFS-Ganesha lo traffic

2016-08-09 Thread Soumya Koduri
On 08/09/2016 03:33 PM, Mahdi Adnan wrote: Hi, I'm using NFS-Ganesha to access my volume. It's working fine for now, but I'm seeing lots of traffic on the loopback interface; in fact it's the same amount of traffic as on the bonding interface. Can anyone please explain to me why this is happening?

[Gluster-users] **Reminder** Triaging and Updating Bug status

2016-06-28 Thread Soumya Koduri
Hi, We have noticed that many of the bugs (especially, in the recent past, the ones filed against the 'tests' component) which are being actively worked on do not have either the 'Triaged' keyword set or the bug status(/assignee) updated appropriately. Sometimes even many of the active community members fail

Re: [Gluster-users] Strange - Missing hostname-trigger_ip-1 resources

2017-02-02 Thread Soumya Koduri
Hi, On 02/03/2017 07:52 AM, ML Wong wrote: Hello All, Any pointers will be very-much appreciated. Thanks in advance! Environment: Running centOS 7.2.511 Gluster: 3.7.16, with nfs-ganesha on 2.3.0.1 from centos-gluster37 repo sha1: cab5df4064e3a31d1d92786d91bd41d91517fba8 ganesha-ha.sh we

Re: [Gluster-users] NFS crashing

2017-02-27 Thread Soumya Koduri
On 02/27/2017 07:19 PM, Kevin Lemonnier wrote: Hi, I have a simple glusterFS configured on a VM, with a single volume on a single brick. It's setup that way to replicate the production conditions as close as possible, but with no replica as it's just for dev. Every few hours, the NFS

Re: [Gluster-users] nfs-ganesha logs

2017-03-01 Thread Soumya Koduri
I am not sure if there are any outstanding issues with exposing shard volume via gfapi. CCin Krutika. On 02/28/2017 01:29 PM, Mahdi Adnan wrote: Hi, We have a Gluster volume hosting VMs for ESXi exported via Ganesha. Im getting the following messages in ganesha-gfapi.log and ganesha.log

Re: [Gluster-users] Checklist for ganesha FSAL plugin integration testing for 3.9

2016-09-06 Thread Soumya Koduri
CCing gluster-devel & users ML. Somehow they got missed in my earlier reply. Thanks, Soumya On 09/06/2016 12:19 PM, Soumya Koduri wrote: On 09/03/2016 12:44 AM, Pranith Kumar Karampuri wrote: hi, Did you get a chance to decide on the nfs-ganesha integration tests that need to be

Re: [Gluster-users] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-06 Thread Soumya Koduri
On 10/05/2016 07:32 PM, Pranith Kumar Karampuri wrote: On Wed, Oct 5, 2016 at 2:00 PM, Soumya Koduri <skod...@redhat.com <mailto:skod...@redhat.com>> wrote: Hi, With http://review.gluster.org/#/c/15051/ <http://review.gluster.org/#/c/15051/>, performance

[Gluster-users] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-05 Thread Soumya Koduri
Hi, With http://review.gluster.org/#/c/15051/, performance/client-io-threads is enabled by default. But with that we see a regression for the nfs-ganesha application when trying to un/re-export any glusterfs volume. This shall be the same case for any gfapi application using glfs_fini(). More
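Until the regression is resolved, one hedged workaround for gfapi consumers is simply to turn the option back off per volume (VOLNAME is a placeholder):

    gluster volume set VOLNAME performance.client-io-threads off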

[Gluster-users] Minutes from today's Gluster Community Bug Triage meeting (Oct 4 2016)

2016-10-04 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting at the links posted below. We had very few participants today as many are traveling. Thanks to hgowtham and ankitraj for joining. Minutes:

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 30 minutes)

2016-10-04 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

Re: [Gluster-users] pacemaker VIP routing latency to gluster node.

2016-09-23 Thread Soumya Koduri
On 09/23/2016 02:34 AM, Dung Le wrote: Hello, I have a pretty straightforward configuration as below: 3 storage nodes running version 3.7.11 with replica 3, using native gluster NFS; corosync version 1.4.7 and pacemaker version 1.1.12. I have DNS round-robin on 3 VIPs living on the

Re: [Gluster-users] GlusterFs upstream bugzilla components Fine graining

2016-09-28 Thread Soumya Koduri
Hi, On 09/28/2016 11:24 AM, Muthu Vigneshwaran wrote: > +- Component GlusterFS > | > | > | +Subcomponent nfs Maybe it's time to change it to 'gluster-NFS/native NFS'. Niels/Kaleb? +- Component gdeploy | | | +Subcomponent samba | +Subcomponent hyperconvergence | +Subcomponent RHSC
