Re: [Gluster-users] NFS crashing

2017-02-27 Thread Soumya Koduri
On 02/27/2017 07:19 PM, Kevin Lemonnier wrote: Hi, I have a simple GlusterFS setup configured on a VM, with a single volume on a single brick. It's set up that way to replicate the production conditions as closely as possible, but with no replica as it's just for dev. Every few hours, the NFS server

Re: [Gluster-users] nfs-ganesha logs

2017-03-01 Thread Soumya Koduri
I am not sure if there are any outstanding issues with exposing shard volume via gfapi. CC'ing Krutika. On 02/28/2017 01:29 PM, Mahdi Adnan wrote: Hi, We have a Gluster volume hosting VMs for ESXi exported via Ganesha. I'm getting the following messages in ganesha-gfapi.log and ganesha.log =

Re: [Gluster-users] NFS-Ganesha HA reboot

2017-03-13 Thread Soumya Koduri
Hi Paul, On 03/13/2017 09:15 PM, Paul Cammarata wrote: We have gluster 3.8.9 setup with 2 nodes, and a number of replicated volumes. We are using Ganesha for HA. This weekend we had to reboot both nodes and seem to have run into an issue. We first attempted to bring them up one at a time but whe

Re: [Gluster-users] Working and up to date guide for ganesha ? nfs-ganesha gluster-ganesha

2017-04-03 Thread Soumya Koduri
Hi Travis, On 04/04/2017 03:49 AM, Travis Eddy wrote: Hello, I've tried all the guides I can find. There is a lot of discrepancy in the ganesha.conf file and how it interacts with gluster. None of the examples I found worked; also, none of them have been updated in the last year or so, even Red Hat's

Re: [Gluster-users] Slow write times to gluster disk

2017-04-17 Thread Soumya Koduri
On 04/14/2017 10:27 AM, Ravishankar N wrote: I'm not sure if the version you are running (glusterfs 3.7.11) works with NFS-Ganesha, as the link seems to suggest version >=3.8 as a pre-requisite. Adding Soumya for help. If it is not supported, then you might have to go the plain glusterNFS way.

Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-26 Thread Soumya Koduri
Hi Shyam, On 04/25/2017 07:38 PM, Shyam wrote: On 04/25/2017 07:40 AM, Pranith Kumar Karampuri wrote: On Thu, Apr 13, 2017 at 8:17 PM, Shyam <srang...@redhat.com> wrote: On 02/28/2017 10:17 AM, Shyam wrote: 1) Halo - Initial Cut (@pranith) Sorry for the delay in response. Du

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-02 Thread Soumya Koduri
Hi, On 05/02/2017 01:34 AM, Rudolf wrote: Hi Gluster users, First, I'd like to thank you all for this amazing open-source! Thank you! I'm working on a home project – three servers with Gluster and NFS-Ganesha. My goal is to create an HA NFS share with three copies of each file on each server. My s

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-03 Thread Soumya Koduri
May 2, 2017 at 8:49 AM, Soumya Koduri <skod...@redhat.com> wrote: Hi, On 05/02/2017 01:34 AM, Rudolf wrote: Hi Gluster users, First, I'd like to thank you all for this amazing open-source! Thank you! I'm working on a home project – thre

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-03 Thread Soumya Koduri
May 3, 2017 12:09:58 PM *To:* Soumya Koduri *Cc:* gluster-users@gluster.org *Subject:* Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot Hi Soumya, thank you very much for your reply. I enabled pcsd during setup and after reboot during troubleshooting I manually started it a

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-05 Thread Soumya Koduri
- *From:* gluster-users-boun...@gluster.org on behalf of Adam Ru <ad.ruc...@gmail.com> *Sent:* Wednesday, May 3, 2017 12:09:58 PM *To:* Soumya Ko

Re: [Gluster-users] gdeploy not starting all the daemons for NFS-ganesha :(

2017-05-09 Thread Soumya Koduri
CC'ing Sac, Manisha, Arthy, who could help with troubleshooting. Thanks, Soumya On 05/09/2017 08:31 PM, hvjunk wrote: Hi there, Given the following config file, what am I doing wrong causing the error at the bottom & no mounted /gluster_shared_storage? Hendrik [root@linked-clone-of-centos-li

Re: [Gluster-users] Is there difference when Nfs-Ganesha is unavailable

2017-05-09 Thread Soumya Koduri
On 05/10/2017 04:18 AM, ML Wong wrote: While I'm troubleshooting the failover of Nfs-Ganesha, the failover is always successful when I shut down the Nfs-Ganesha service online while the OS is running. However, it always failed when I did either a shutdown -r or a power-reset. During the failure, the

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-15 Thread Soumya Koduri
-lock.service entered failed state. May 12 12:14:01 mynode1.localdomain systemd[1]: nfs-ganesha-lock.service failed. Can't remember right now. Could you please paste the AVCs you get, and the SELinux package versions. Or preferably please file a bug. We can get the details verified from selinux members. Th

Re: [Gluster-users] Slow write times to gluster disk

2017-05-31 Thread Soumya Koduri
On 05/31/2017 07:24 AM, Pranith Kumar Karampuri wrote: Thanks this is good information. +Soumya Soumya, We are trying to find why kNFS is performing way better than plain distribute glusterfs+fuse. What information do you think will benefit us to compare the operations with kNFS vs glu

Re: [Gluster-users] Floating IPv6 in a cluster (as NFS-Ganesha VIP)

2017-05-31 Thread Soumya Koduri
+Andrew and Ken On 05/29/2017 11:48 PM, Jan wrote: Hi all, I love this project, Gluster and Ganesha are amazing. Thank you for this great work! The only thing that I miss is IPv6 support. I know that there are some challenges and that’s OK. For me it’s not important whether Gluster servers use

Re: [Gluster-users] Slow write times to gluster disk

2017-06-26 Thread Soumya Koduri
On 06/27/2017 10:17 AM, Pranith Kumar Karampuri wrote: The only problem with using gluster mounted via NFS is that it does not respect the group write permissions which we need. We have an exercise coming up in a couple of weeks. It seems to me that in order to improve our write times bef

Re: [Gluster-users] Slow write times to gluster disk

2017-07-03 Thread Soumya Koduri
arise from users belonging to too many groups. We have seen the above problem even with a user belonging to only 4 groups. Let me know what additional information I can provide. Thanks Pat On 06/27/2017 02:45 AM, Soumya Koduri wrote: On 06/27/2017 10:17 AM, Pranith Kumar Karampuri wrote: T

Re: [Gluster-users] Slow write times to gluster disk

2017-07-04 Thread Soumya Koduri
fuse mounted gluster volume. I am having Steve confirm this. I tried to find the fuse-mnt logs but failed. Where should I look for them? Thanks Pat On 07/03/2017 07:58 AM, Soumya Koduri wrote: On 06/30/2017 07:56 PM, Pat Haley wrote: Hi, I was wondering if there were any additional test

Re: [Gluster-users] Slow write times to gluster disk

2017-07-07 Thread Soumya Koduri
-3.7/ Thanks Pat On 07/04/2017 05:01 AM, Soumya Koduri wrote: On 07/03/2017 09:01 PM, Pat Haley wrote: Hi Soumya, When I originally did the tests I ran tcpdump on the client. I have rerun the tests, doing tcpdump on the server tcpdump -i any -nnSs 0 host 172.16.1.121 -w /root/captur

Re: [Gluster-users] Ganesha "Failed to create client in recovery dir" in logs

2017-07-07 Thread Soumya Koduri
On 07/07/2017 11:36 PM, Renaud Fortier wrote: Hi all, I have this entry in ganesha.log file on server when mounting the volume on client : « GLUSTER-NODE3 : ganesha.nfsd-54084[work-27] nfs4_add_clid :CLIENT ID :EVENT :Failed to create client in recovery dir (/var/lib/nfs/ganesha/v4recov/nod

Re: [Gluster-users] Gluster native mount is really slow compared to nfs

2017-07-11 Thread Soumya Koduri
+ Ambarish On 07/11/2017 02:31 PM, Jo Goossens wrote: Hello, We tried tons of settings to get a php app running on a native gluster mount: e.g.: 192.168.140.41:/www /var/www glusterfs defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable 0 0 I tr
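For comparison with the fstab entry quoted in the thread, the same volume can be mounted both ways from the command line. A minimal sketch, assuming the addresses from the thread and that gluster's built-in NFS server (NFSv3 only) is enabled; mount points are illustrative:

```shell
# Mount the volume over gluster NFS (NFSv3) for a performance comparison
mount -t nfs -o vers=3,nolock 192.168.140.41:/www /mnt/www-nfs

# And the native FUSE mount, matching the fstab options quoted in the thread
mount -t glusterfs \
    -o backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable \
    192.168.140.41:/www /mnt/www-fuse
```

Running the same workload against both mount points is the usual way to isolate whether the slowness is in the FUSE path or in the volume itself.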

Re: [Gluster-users] Slow write times to gluster disk

2017-07-13 Thread Soumya Koduri
e other? The current stable/maintained/tested combination is nfs-ganesha 2.4/2.5 + glusterfs-3.8/3.10. However, in case you cannot upgrade, you can still use nfs-ganesha-2.3* with glusterfs-3.8/3.7. Hope it is clear. Thanks, Soumya Pat On 07/07/2017 01:31 PM, Soumya Koduri wrote: Hi, On 07

Re: [Gluster-users] NFS Ganesha

2017-07-27 Thread Soumya Koduri
++Kaleb On 07/06/2017 10:04 PM, Anthony Valentine wrote: Hello! I am attempting to setup a Gluster install using Ganesha for NFS using the guide found here http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/ The Gluster portion is work

Re: [Gluster-users] Slow write times to gluster disk

2017-08-07 Thread Soumya Koduri
- Original Message - > From: "Pat Haley" > To: "Soumya Koduri" , gluster-users@gluster.org, "Pranith > Kumar Karampuri" > Cc: "Ben Turner" , "Ravishankar N" > , "Raghavendra Gowdappa" > , "Niels d

Re: [Gluster-users] Slow write times to gluster disk

2017-08-10 Thread Soumya Koduri
Thanks, Steve ---- *From:* Soumya Koduri *Sent:* Tuesday, August 8, 2017 1:37 AM *To:* Pat Haley; Niels de Vos *Cc:* gluster-users@gluster.org; Pranith Kumar Karampuri; Ben Turner; Ravishankar N; Raghavendra Gowdappa; Niels de Vos; Steve Postma *Subjec

Re: [Gluster-users] ganesha error ?

2017-09-02 Thread Soumya Koduri
On 09/02/2017 02:09 AM, Renaud Fortier wrote: Hi, I got these errors 3 times since I’ve been testing gluster with nfs-ganesha. The clients are PHP apps and when this happens, clients get strange PHP session errors. Below, the first error only happened once, but the other errors happen every time a clients

Re: [Gluster-users] Restrict root clients / experimental patch

2017-09-22 Thread Soumya Koduri
Hi, On 09/21/2017 07:32 PM, Pierre C wrote: Hi All, I would like to use glusterfs in an environment where storage servers are managed by an IT service - myself :) - and several users in the organization can mount the distributed fs. The users are root on their machines. As far as I know abou

Re: [Gluster-users] nfs-ganesha locking problems

2017-10-02 Thread Soumya Koduri
Hi On 09/29/2017 09:09 PM, Bernhard Dübi wrote: Hi, I have a problem with nfs-ganesha serving gluster volumes I can read and write files but then one of the DBAs tried to dump an Oracle DB onto the NFS share and got the following errors: Export: Release 11.2.0.4.0 - Production on Wed Sep 27

Re: [Gluster-users] nfs-ganesha locking problems

2017-10-06 Thread Soumya Koduri
On 10/03/2017 02:15 AM, Bernhard Dübi wrote: Hi Soumya, what I can say so far: it is working on a standalone system but not on the clustered system Hi, Sorry for the delay. Locking seem to have failed due to below nsm_monitor error : 03/10/2017 14:27:38 : epoch 59cbce8c : chvirnfsprd12

Re: [Gluster-users] Is there any performance impact in setting up every gluster client as a NFS server?

2017-11-14 Thread Soumya Koduri
Hi, On 11/14/2017 11:45 PM, Jeevan Patnaik wrote: Hi, We have around 60 hosts and each of them acts as a GlusterFS client as well as a server. To achieve HA, my understanding is that we can use Ganesha NFS alone (and not kernel NFS), and for versions above 3.10, the HA packages are not ready y

Re: [Gluster-users] glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write

2019-02-07 Thread Soumya Koduri
On 2/8/19 3:20 AM, Maurits Lamers wrote: Hi, [2019-02-07 10:11:24.812606] E [MSGID: 104055] [glfs-fops.c:4955:glfs_cbk_upcall_data] 0-gfapi: Synctak for Upcall event_type(1) and gfid(yøêÙ  Mz„–îSL4_@) failed [2019-02-07 10:11:24.819376] E [MSGID: 104055] [glfs-fops.c:4955:glfs_c

Re: [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Soumya Koduri
On 3/27/19 12:55 PM, Xavi Hernandez wrote: Hi Raghavendra, On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa <rgowd...@redhat.com> wrote: All, Glusterfs cleans up POSIX locks held on an fd when the client/mount through which those locks are held disconnects from brick

Re: [Gluster-users] glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write

2019-03-28 Thread Soumya Koduri
On 2/8/19 11:53 AM, Soumya Koduri wrote: On 2/8/19 3:20 AM, Maurits Lamers wrote: Hi, [2019-02-07 10:11:24.812606] E [MSGID: 104055] [glfs-fops.c:4955:glfs_cbk_upcall_data] 0-gfapi: Synctak for Upcall event_type(1) and gfid(yøêÙ  Mz„–îSL4_@) failed [2019-02-07 10:11:24.819376

Re: [Gluster-users] Gluster GEO replication fault after write over nfs-ganesha

2019-03-28 Thread Soumya Koduri
On 3/27/19 7:39 PM, Alexey Talikov wrote: I have two clusters with dispersed volumes (2+1) with GEO replication. It works fine while I use glusterfs-fuse, but as soon as even one file is written over nfs-ganesha, replication goes to Faulty and recovers after I remove this file (sometimes after stop/start)

Re: [Gluster-users] upgrade best practices

2019-03-31 Thread Soumya Koduri
On 3/29/19 10:39 PM, Poornima Gurusiddaiah wrote: On Fri, Mar 29, 2019, 10:03 PM Jim Kinney wrote: Currently running 3.12 on CentOS 7.6. Doing cleanups on split-brain and out of sync, need heal files. We need to migrate the three replica servers

Re: [Gluster-users] upgrade best practices

2019-04-01 Thread Soumya Koduri
Thanks for the details. Response inline - On 4/1/19 9:45 PM, Jim Kinney wrote: On Sun, 2019-03-31 at 23:01 +0530, Soumya Koduri wrote: On 3/29/19 10:39 PM, Poornima Gurusiddaiah wrote: On Fri, Mar 29, 2019, 10:03 PM Jim Kinney <jim.kin...@gmail.com>

Re: [Gluster-users] Release 6.5: Expected tagging on 5th August

2019-08-01 Thread Soumya Koduri
Hi Hari, [1] is a critical patch which addresses issue affecting upcall processing by applications such as NFS-Ganesha. As soon as it gets merged in master, I shall backport it to release-7/6/5 branches. Kindly consider the same. Thanks, Soumya [1] https://review.gluster.org/#/c/glusterfs/+

Re: [Gluster-users] Reg Performance issue in GlusterFS

2019-10-06 Thread Soumya Koduri
Hi Pratik, Offhand I do not see any issue with the configuration. But I think for a VM image store, using gfapi may give better performance compared to fuse. CC'ing Krutika and Gobinda, who have been working on this use-case and may be able to guide you. Thanks, Soumya On 10/5/19 11:25 AM

Re: [Gluster-users] boot auto mount NFS-Ganesha exports failed

2020-03-15 Thread Soumya Koduri
Hi, Since it's working on your test machine, this is most likely an NFS client-side issue. Please check if there are any kernel fixes between those versions which may have caused this. I see a similar issue reported in the below threads [1] [2]. As suggested there, could you try disabling kerberos mo

Re: [Gluster-users] Latest NFS-Ganesha Gluster Integration docs

2020-06-29 Thread Soumya Koduri
Hi, On 6/29/20 7:30 AM, wkm...@bneit.com wrote: For many years, we have maintained a number of standalone, hyperconverged Gluster/Libvirt clusters  Replica 2 + Arbiter using Fuse mount and Sharding. Performance has been mostly acceptable. The clusters have high availability and we have had v

Re: [Gluster-users] glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write

2019-02-07 Thread Soumya Koduri
On 2/7/19 6:01 PM, Maurits Lamers wrote: Hi all, I am trying to find out more about why a nfs mount through nfs-ganesha of a glusterfs volume freezes. Little bit of a background: The system consists of one glusterfs volume across 5 nodes. Every node runs Ubuntu 16.04, gluster 4.1.7 and nfs-

Re: [Gluster-users] ganesha BFS

2015-08-12 Thread Soumya Koduri
It depends on the workload. Like native NFS, even with NFS-Ganesha, data is routed through the server where it's mounted from. In addition, the NFSv4.x protocol adds more complexity and cannot be directly compared with NFSv3 traffic. However with pNFS, I/O is routed to data servers directly by the NFS

Re: [Gluster-users] NFS mount

2015-08-19 Thread Soumya Koduri
On 08/19/2015 09:53 PM, shacky wrote: Hi Atin, thank you very much for your answer. It seems like your NFS kernel module is not disabled. Please try disabling it and re mount. I tried but it did not solve my problem: # service nfs-common stop Stopping NFS common utilities: idmapd statd. #
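Gluster's built-in NFS server cannot register with the portmapper while the kernel NFS server holds the NFS ports. A sketch of fully stopping kernel NFS on a systemd-based host (service names vary by distribution; the poster's Debian box uses `nfs-common`/`nfs-kernel-server` instead of `nfs-server`):

```shell
# Stop and mask the kernel NFS server so it cannot reclaim port 2049
systemctl stop nfs-server
systemctl mask nfs-server

# Debian/Ubuntu equivalents, as in the thread:
# service nfs-kernel-server stop
# service nfs-common stop

# Verify nothing is still registered for NFS with the portmapper
rpcinfo -p | grep nfs || echo "no kernel NFS registered"
```

After this, restarting glusterd should let gluster NFS register and serve mounts.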

Re: [Gluster-users] write(filename, ...) implementation

2015-08-24 Thread Soumya Koduri
On 08/24/2015 11:24 PM, Ivica Siladic wrote: Hi, I'm doing a lot of small writes to a distributed/replicated Gluster volume. The performance I'm getting is not acceptable. Interestingly, I get double the speed if I use libgfapi instead of a kernel volume mount. My guess is that I coul

Re: [Gluster-users] write(filename, ...) implementation

2015-08-25 Thread Soumya Koduri
sufficient - 'glfs_h_lookupat' - to get 'glfs_object' given file path. 'glfs_h_anonymous_write' - takes above handle as one of the inputs. Thanks, Soumya Ivica On 24 Aug 2015, at 20:02, Soumya Koduri wrote: On 08/24/2015 11:24 PM, Ivica Siladic wrote: Hi, I&#x

Re: [Gluster-users] ganesha and glusterfs 3.7, what provides nfs-ganesha-gluster?

2015-09-03 Thread Soumya Koduri
you can download the same using the repos listed here http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/2.2.0/ Thanks, Soumya On 09/04/2015 05:22 AM, Patrick Riehecky wrote: I'm fairly close to a working ganesha/glusterfs setup, but I can't seem to install glusterfs-ganesha. Any s

[Gluster-users] Contributing to GlusterFS: Fixing Coverity defects

2015-09-08 Thread Soumya Koduri
Hi, If you would like to contribute to GlusterFS, one of the easiest ways to get acquainted with the code is by fixing defects reported by the Coverity Scan tool. The detailed process is mentioned in [1]. To summarize, * Sign up as a member of https://scan.coverity.com/projects/987 * Sub

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-13 Thread Soumya Koduri
On 09/13/2015 09:38 AM, Yaroslav Molochko wrote: I wish this could be that simple: root@PSC01SERV008:/var/lib# netstat -nap | grep 38465 root@PSC01SERV008:/var/lib# ss -n | grep 38465 root@PSC01SERV008:/var/lib# 2015-09-13 1:34 GMT+08:00 Atin Mukherjee <atin.mukherje...@gmail.com>:

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-14 Thread Soumya Koduri
Could you try * disabling iptables (& firewalld if enabled) * restarting the rpcbind service * restarting glusterd If this doesn't work, add the below line to the '/etc/hosts.allow' file (mentioned in one of the forums): ALL: 127.0.0.1 : ALLOW Then restart the rpcbind and glusterd services. Thanks, Soumya On 09
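The steps above can be collected into a short sketch (the hosts.allow line is quoted verbatim from the thread; service names assume a systemd EL7-style host, and the firewall is only stopped temporarily for testing):

```shell
# 1. Temporarily disable the firewall while testing
systemctl stop firewalld 2>/dev/null || true
systemctl stop iptables  2>/dev/null || true

# 2. Allow localhost through the TCP wrappers consulted by rpcbind
echo 'ALL: 127.0.0.1 : ALLOW' >> /etc/hosts.allow

# 3. Restart rpcbind and glusterd so gluster NFS can re-register
systemctl restart rpcbind
systemctl restart glusterd

# Gluster's NFS mount service registers on port 38465 (as grep'd for in the thread)
rpcinfo -p | grep 38465 || echo "gluster NFS not yet registered"
```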

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-15 Thread Soumya Koduri
could, I've checked everything I could find in Google and this doesn't work. Please, let's move on with something more sophisticated than restarting glusterfs... I would not contact you if I had not tried to restart it a dozen times. Do you have any debugging to see what is really happening?

Re: [Gluster-users] After upgrade from 3.5 to 3.7 gluster local NFS is not starting on one of the servers

2015-09-15 Thread Soumya Koduri
Small correction in the file I provided earlier. pmap_set returns 0 in case of failure. On 09/16/2015 12:08 AM, Soumya Koduri wrote: /* pmap_set() returns 0 for FAIL and 1 for SUCCESS */ if (!(pmap_set (newprog->prognum, newprog->progver, IPPRO

Re: [Gluster-users] nfs-ganesha HA with arbiter volume

2015-09-18 Thread Soumya Koduri
Hi Tiemen, One of the pre-requisites before setting up nfs-ganesha HA is to create and mount shared_storage volume. Use below CLI for that "gluster volume set all cluster.enable-shared-storage enable" It shall create the volume and mount in all the nodes (including the arbiter node). Note th
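The pre-requisite above, sketched as commands (run on any one node of the trusted pool; gluster then creates and mounts the volume on all nodes automatically — the exact mount point may vary by version):

```shell
# Create and auto-mount the shared_storage volume across all pool nodes
gluster volume set all cluster.enable-shared-storage enable

# Verify the volume exists and is mounted on each node
gluster volume info gluster_shared_storage
mount | grep shared_storage
```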

Re: [Gluster-users] Fwd: nfs-ganesha HA with arbiter volume

2015-09-22 Thread Soumya Koduri
info: Name or service not known [2015-09-21 08:18:51.504694] E [MSGID: 106062] [glusterd-op-sm.c:3698:glusterd_op_ac_unlock] 0-management: Unable to acquire volname I have put the hostnames of all servers in my /etc/hosts file, including the arbiter node. On 18 Sept

Re: [Gluster-users] Gluster 3.7 and nfs ganesha HA howto

2015-09-22 Thread Soumya Koduri
Hi, This doc may come in handy for you to configure HA NFS - https://github.com/soumyakoduri/glusterdocs/blob/ha_guide/Administrator%20Guide/Configuring%20HA%20NFS%20Server.md Thanks, Soumya On 09/21/2015 11:24 PM, Gluster Admin wrote: Hi, Can someone point me to the howto/docs on setting u

Re: [Gluster-users] Fwd: nfs-ganesha HA with arbiter volume

2015-09-22 Thread Soumya Koduri
, that should do the cleanup as well: # gluster nfs-ganesha disable. If you are still in doubt and to be safe, after disabling nfs-ganesha, run the below script: # /usr/libexec/ganesha/ganesha-ha.sh --cleanup /etc/ganesha Thanks, Soumya On 22 September 2015 at 09:04, Soumya Koduri mailt
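A sketch of the teardown sequence described above (the `--cleanup` invocation is as given in the thread; the config directory path may differ across versions):

```shell
# Disable nfs-ganesha via the gluster CLI; this should tear down the HA cluster
gluster nfs-ganesha disable

# If anything is left behind, run the HA helper script's cleanup mode
/usr/libexec/ganesha/ganesha-ha.sh --cleanup /etc/ganesha
```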

Re: [Gluster-users] Fwd: nfs-ganesha HA with arbiter volume

2015-09-22 Thread Soumya Koduri
? > > > > -- > > With Regards, > > Jiffin > > > > > >> [2015-09-21 07:59:48.653912] E [MSGID: 106123] > >> [glusterd-syncop.c:1404:gd_commit_op_phase] 0-management: Commit > >>

Re: [Gluster-users] Enquiry about Gluster

2015-09-27 Thread Soumya Koduri
On 09/23/2015 07:34 AM, Premkumar Mani wrote: Team, We would like to implement gluster in my environment. could you please help me to get more information about this product ? May be good to start with the below links - http://www.gluster.org/ http://gluster.readthedocs.org/en/latest/ Than

Re: [Gluster-users] volume répliqué.

2015-09-28 Thread Soumya Koduri
On 09/25/2015 04:04 PM, Pierre Léonard wrote: Hi all, I have 14 nodes with a replica 2 volume. I want to remove the replication function. Is that possible without losing data, or do I need to make a backup of the data first? The only way I can imagine to avoid replication in case of repli

Re: [Gluster-users] Question on HA Active-Active Ganesha setup

2015-11-06 Thread Soumya Koduri
On 11/05/2015 08:43 PM, Surya K Ghatty wrote: All... I need your help! I am trying to setup Highly available Active-Active Ganesha configuration on two glusterfs nodes based on instructions here: https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/ a

Re: [Gluster-users] Configuring Ganesha and gluster on separate nodes?

2015-11-18 Thread Soumya Koduri
On 11/17/2015 10:21 PM, Surya K Ghatty wrote: Hi: I am trying to understand if it is technically feasible to have gluster nodes on one machine, and export a volume from one of these nodes using a nfs-ganesha server installed on a totally different machine? I tried the below and showmount -e do

Re: [Gluster-users] nfs mounting and copying files

2015-11-18 Thread Soumya Koduri
On 11/17/2015 08:50 PM, Pierre Léonard wrote: Hi all, I have a cluster with 14 nodes. I have built a stripe 7 volume with the 14 nodes. Underlying I use XFS. Locally I mount the global volume with nfs: mount -t nfs 127.0.0.1:gvExport /glusterfs/gvExport -o _netdev,nosuid,bg,exec then I unt

Re: [Gluster-users] 3 node NFS-Ganesha Cluster

2015-11-27 Thread Soumya Koduri
Hi, On 11/27/2015 01:58 PM, ml wrote: Dear All, I am trying to get an nfs-ganesha HA cluster running with 3 CentOS Linux release 7.1.1503 nodes. I use the package glusterfs-ganesha-3.7.6-1.el7.x86_64 to get the HA scripts. So far it works fine when I stop the nfs-ganesha service on one of the

Re: [Gluster-users] 3 node NFS-Ganesha Cluster

2015-11-30 Thread Soumya Koduri
Hi, But are you telling me that in a 3-node cluster, quorum is lost when one of the node's IPs is down? Yes. It's the limitation with Pacemaker/Corosync. If the nodes participating in the cluster cannot communicate with the majority of them (quorum is lost), then the cluster is shut down. However i
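A hedged way to observe the quorum state from a node (assuming the Pacemaker/Corosync stack that the ganesha HA scripts configure; command availability depends on the corosync/pcs versions installed):

```shell
# Show quorum state as corosync sees it: expected votes, total votes, quorate flag
corosync-quorumtool -s

# Pacemaker's view of the cluster membership and resource state
pcs status
```

Quorum requires a majority of the configured nodes to remain in communication; once that majority is lost, the cluster shuts down its services on the surviving nodes.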

Re: [Gluster-users] 3 node NFS-Ganesha Cluster

2015-11-30 Thread Soumya Koduri
On 11/30/2015 03:26 PM, Soumya Koduri wrote: Hi, But are you telling me that in a 3-node cluster, quorum is lost when one of the nodes ip is down? yes. Its the limitation with Pacemaker/Corosync. If the nodes participating in cluster cannot communicate with majority of them (quorum is lost

Re: [Gluster-users] gluster nfs-ganesha enable fails and is driving me crazy

2015-12-09 Thread Soumya Koduri
On 12/10/2015 02:51 AM, Marco Antonio Carcano wrote: Hi Kaleb, thank you very much for the quick reply I tried what you suggested, but I got the same error I tried both HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local" VIP_glstr01.carcano.local="192.168.65.250" VIP_glstr02.carc

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 90 minutes)

2015-12-22 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone that is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

Re: [Gluster-users] [Gluster-devel] REMINDER: Gluster Bug Triage timing-poll

2015-12-22 Thread Soumya Koduri
+gluster-users On 12/22/2015 06:03 PM, Hari Gowtham wrote: Hi all, There was a poll conducted to find the timing that suits best for the people who want to participate in the weekly Gluster bug triage meeting. The result for the poll is yet to be announced but we would like to get more polls.

[Gluster-users] Minutes of today's Gluster Community Bug Triage meeting (22nd Dec 2015)

2015-12-22 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended the meeting. Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2015-12-22/gluster_bug_triage.2015-12-22-12.00.html Minutes (text): http://meetbot.fedoraproject.org/gl

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS-Ganesha also consumes huge amount of memory while doing ordinary "find . -type f" via NFSv4.2 on remote client. Here is memory usage: === root 5416 34.2 78.5 2

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/25/2015 08:56 PM, Oleksandr Natalenko wrote: What units is Cache_Size measured in? Bytes? It's actually (Cache_Size * sizeof_ptr) bytes. If possible, could you please run the ganesha process under valgrind? It will help in detecting leaks. Thanks, Soumya 25.12.2015 16:58, Soumya Koduri

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-26 Thread Soumya Koduri
. https://gist.github.com/e4602a50d3c98f7a2766 One may see GlusterFS-related leaks here as well. On Friday, 25 December 2015 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2015-12-28 Thread Soumya Koduri
- Original Message - > From: "Pranith Kumar Karampuri" > To: "Oleksandr Natalenko" , "Soumya Koduri" > > Cc: gluster-users@gluster.org, gluster-de...@gluster.org > Sent: Monday, December 28, 2015 9:32:07 AM > Subject: Re: [Gluster-deve

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2015-12-31 Thread Soumya Koduri
On 12/28/2015 02:32 PM, Soumya Koduri wrote: - Original Message - From: "Pranith Kumar Karampuri" To: "Oleksandr Natalenko" , "Soumya Koduri" Cc: gluster-users@gluster.org, gluster-de...@gluster.org Sent: Monday, December 28, 2015 9:32:07 AM Subject

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
page cache is the cause). There are ~1.8M files on this test volume. On Friday, 25 December 2015 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS-Ganesha also consumes huge amount o

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
which I have pasted below apply to gfapi/nfs-ganesha applications. Also, to resolve the nfs-ganesha issue which I had mentioned below (in case if Entries_HWMARK option gets changed), I have posted below fix - https://review.gerrithub.io/#/c/258687 Thanks, Soumya Ideas? 05.01.2016

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
On Tuesday, 5 January 2016 22:52:25 EET Soumya Koduri wrote: On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote: Unfortunately, both patches didn't make any difference for me. I've patched 3.7.6 with both patches, recompiled and installed the patched GlusterFS package on the client side and mounted volu

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-08 Thread Soumya Koduri
.com/30f0129d16e25d4a5a52 ganesha.conf: https://gist.github.com/9b5e59b8d6d8cb84c85d How I mount NFS share: === mount -t nfs4 127.0.0.1:/mail_boxes /mnt/tmp -o defaults,_netdev,minorversion=2,noac,noacl,lookupcache=none,timeo=100 === On четвер, 7 січня 2016 р. 12:06:42 EET Soumya Koduri wro

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
a On 01/08/2016 05:04 PM, Soumya Koduri wrote: I could reproduce while testing deep directories within the mount point. I root-caused the issue & had a discussion with Pranith to understand the purpose and recommended way of taking nlookup on inodes. I shall make changes to my existing fi

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
Soumya 11.01.2016 12:26, Soumya Koduri wrote: I have made changes to fix the lookup leak in a different way (as discussed with Pranith) and uploaded them in the latest patch set #4 - http://review.gluster.org/#/c/13096/ Please check if it resolves the mem leak and hopefully doesn't res

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
ocations are listed as lost. But most of the inodes should have got purged when we drop the vfs cache. Did you drop the vfs cache before exiting the process? I shall add some log statements and check that part. Thanks, Soumya 12.01.2016 08:24, Soumya Koduri wrote: For fuse client, I trie

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
Mathieu CHATEAU http://www.lotp.fr 2016-01-12 7:24 GMT+01:00 Soumya Koduri <skod...@redhat.com>: On 01/11/2016 05:11 PM, Oleksandr Natalenko wrote: Brief test shows that Ganesha stopped leaking and crashing, so it seems to be good for

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
On 01/13/2016 04:08 PM, Soumya Koduri wrote: On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote: Just in case, here is Valgrind output on FUSE client with 3.7.6 + API-related patches we discussed before: https://gist.github.com/cd6605ca19734c1496a4 Thanks for sharing the results. I made

Re: [Gluster-users] How to maintain HA using NFS clients if the NFS daemon process gets killed on a gluster node?

2016-01-27 Thread Soumya Koduri
On 01/27/2016 09:39 PM, Kris Laib wrote: Hi all, We're getting ready to roll out Gluster using standard NFS from the clients, and CTDB and RRDNS to help facilitate HA. I thought we were good to go, but recently had an issue where there wasn't enough memory on one of the gluster nodes in a

Re: [Gluster-users] How to maintain HA using NFS clients if the NFS daemon process gets killed on a gluster node?

2016-01-27 Thread Soumya Koduri
eadline gets extended. The FUSE client may be a good option for us as well, but I can't seem to get speeds higher than 30 MB/s using the Gluster FUSE client (I posted more details on that earlier today to this group as well, looking for advice there). -Kris

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-01-31 Thread Soumya Koduri
ITHLEY (1): fuse: use-after-free fix in fuse-bridge, revisited Pranith Kumar K (1): mount/fuse: Fix use-after-free crash Soumya Koduri (3): gfapi: Fix inode nlookup counts inode: Retire the inodes from the lru list in inode_table_destroy upcall: free the xdr* allocations === With

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
e global/volume-level, client/server/both. Thanks, Soumya 01.02.2016 09:54, Soumya Koduri wrote: On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote: Unfortunately, this patch doesn't help. RAM usage when "find" finishes is ~9G. Here is the statedump before drop_caches: https://gist.gi
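The statedumps traded back and forth in this thread can be taken from a running FUSE client by sending it SIGUSR1; a sketch (the pgrep pattern is an assumption, and dumps land under /var/run/gluster by default):

```shell
# Find a glusterfs FUSE client process and request a statedump.
pid=$(pgrep -x glusterfs | head -n 1)
if [ -n "$pid" ]; then
    kill -USR1 "$pid"   # writes glusterdump.<pid>.dump.* under /var/run/gluster
    state=requested
else
    state=no-client     # no glusterfs process running on this machine
fi
echo "statedump: $state"
```

Comparing a dump taken before and after the drop_caches step shows whether the inode table is actually being purged.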

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
On 02/01/2016 02:48 PM, Xavier Hernandez wrote: Hi, On 01/02/16 09:54, Soumya Koduri wrote: On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote: Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and glusterfs process CPU usage goes to 100% for a while. I haven't w

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-11 Thread Soumya Koduri
b2902bba1 [10] https://gist.github.com/385bbb95ca910ec9766f [11] https://gist.github.com/685c4d3e13d31f597722 10.02.2016 15:37, Oleksandr Natalenko wrote: Hi, folks. Here are new test results regarding the client memory leak. I use v3.7.8 with the following patches: === Soumya Koduri (2):

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/12/2016 11:27 AM, Soumya Koduri wrote: On 02/11/2016 08:33 PM, Oleksandr Natalenko wrote: And "API" test. I used custom API app [1] and did brief file manipulations through it (create/remove/stat). Then I performed drop_caches, finished API [2] and got the following Valgr

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/16/2016 08:06 PM, Oleksandr Natalenko wrote: Hmm, OK. I've rechecked 3.7.8 with the following patches (latest revisions): === Soumya Koduri (3): gfapi: Use inode_forget in case of handle objects inode: Retire the inodes from the lru list in inode_table_destroy

Re: [Gluster-users] Troubleshooting Gluster Client Mount Disconnection

2016-02-19 Thread Soumya Koduri
On 02/20/2016 02:08 AM, Takemura, Won wrote: I am new user / admin of Gluster. I am involved in a support issue where access to a gluster mount fails and the error is displayed when users attempt to access the mount: -bash: cd: /app/: Transport endpoint is not connected This error is typi

Re: [Gluster-users] NFS Client issues with Gluster Server 3.6.9

2016-03-08 Thread Soumya Koduri
The log file didn't have any errors logged. Please check the NFS client logs in '/var/log/messages' or via dmesg, and the brick logs as well. A strace or a packet trace could probably help too. You could use the below command to capture the pkt trace while running the I/Os on the node where gluster-n
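The capture command itself is cut off in the snippet above; a hedged equivalent (the interface name and output path are assumptions — gluster-NFS traffic involves portmapper on 111, NFS on 2049, and the MOUNT/NLM side services on 38465-38467):

```shell
# Build the capture command; run it as root on the gluster-nfs node
# while reproducing the I/O, then inspect the pcap with wireshark/tshark.
iface=eth0    # assumption: adjust to the server's NIC
cmd="tcpdump -i $iface -s 0 -w /tmp/nfs-gluster.pcap \
     port 111 or port 2049 or portrange 38465-38467"
echo "run as root: $cmd"
```

`-s 0` captures full packets rather than truncated headers, which matters when decoding NFS compound operations.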

Re: [Gluster-users] nfs-ganesha volume null errors

2016-03-14 Thread Soumya Koduri
Hi, On 03/14/2016 04:06 AM, ML Wong wrote: Running CentOS Linux release 7.2.1511, glusterfs 3.7.8 (glusterfs-server-3.7.8-2.el7.x86_64), nfs-ganesha-gluster-2.3.0-1.el7.x86_64 1) Ensured the connectivity between gluster nodes by using PING 2) Disabled NetworkManager (Loaded: loaded (/usr/lib/sy

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 2.5 hours)

2016-03-15 Thread Soumya Koduri
Hi, This meeting is scheduled for anyone who is interested in learning more about, or assisting with, the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting) - date: every Tuesday - time: 12:00 UTC (in your

[Gluster-users] Minutes of today's Gluster Community Bug Triage meeting (Mar 15 2016)

2016-03-15 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who have attended the meeting. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-03-15/gluster_bug_triage.2016-03-15-12.00.html Minutes (text): https://meetbot.fedoraproject.or

Re: [Gluster-users] nfs-ganesha volume null errors

2016-03-22 Thread Soumya Koduri
d throw the errors returned by the script on the console during cluster setup. Please give it a try and let me know if you see any errors. Thanks, Soumya Testing Environment: Running CentOS Linux release 7.2.1511, glusterfs 3.7.8 (glusterfs-server-3.7.8-2.el7.x86_64), nfs-ganesha-gluster-2.3.0-1.el7.x86

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-03 Thread Soumya Koduri
Hi Abhishek, The 'rpcinfo' output below doesn't list the 'nfsacl' protocol. That must be the reason the client is not able to set ACLs. Could you please check the log file '/var/log/glusterfs/nfs.log' for any errors logged with respect to protocol registration failures. Thanks, Soumya On 05/04/2016
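The check being discussed here can be scripted; a sketch (program number 100227 is the standard NFS_ACL RPC program number — probing localhost rather than a remote host is an assumption):

```shell
# Ask the local portmapper which RPC programs are registered and
# look for the NFS ACL side protocol (program 100227).
if command -v rpcinfo >/dev/null 2>&1; then
    if rpcinfo -p localhost 2>/dev/null | grep -q '100227'; then
        acl_state=registered
    else
        acl_state=missing    # matches the symptom in this thread
    fi
else
    acl_state=unknown        # rpcinfo not installed on this box
fi
echo "nfsacl: $acl_state"
```

If 100227 is missing, setfacl/getfacl over NFSv3 will fail even though plain file I/O works.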

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-03 Thread Soumya Koduri
On Wed, May 4, 2016 at 11:33 AM, Soumya Koduri <skod...@redhat.com> wrote: Hi Abhishek, The 'rpcinfo' output below doesn't list the 'nfsacl' protocol. That must be the reason the client is not able to set ACLs. Could you please check the log fil

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-03 Thread Soumya Koduri
per 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100005 3 tcp 2049 mountd 100005 1 tcp 2049 mountd 100003 3 tcp 2049 nfs 100227 3 tcp 2049 On Wed, May 4, 2016 at 12:09 PM, Soumya Koduri <skod..

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-04 Thread Soumya Koduri
Even on my setup, if I change nfs.port, all the other services also start registering on those ports. Can you please file a bug for it? That seems like a bug (or is it intentional, Niels?). 100005 3 tcp 2049 mountd 100005 1 tcp 2049 mountd 100003 3 tcp 2049 n
