Re: [Gluster-users] Latest NFS-Ganesha Gluster Integration docs

2020-06-29 Thread Soumya Koduri
Hi, On 6/29/20 7:30 AM, wkm...@bneit.com wrote: For many years, we have maintained a number of standalone, hyperconverged Gluster/Libvirt clusters (Replica 2 + Arbiter) using Fuse mount and Sharding. Performance has been mostly acceptable. The clusters have high availability and we have had

Re: [Gluster-users] boot auto mount NFS-Ganesha exports failed

2020-03-15 Thread Soumya Koduri
Hi, Since it's working on your test machine, this is most likely an NFS client-side issue. Please check if there are any kernel fixes between those versions which may have caused this. I see a similar issue reported in the threads below [1] [2]. As suggested there, could you try disabling kerberos

Re: [Gluster-users] Reg Performance issue in GlusterFS

2019-10-06 Thread Soumya Koduri
Hi Pratik, Offhand I do not see any issue with the configuration. But I think for a VM image store, using gfapi may give better performance than fuse. CC'in Krutika and Gobinda who have been working on this use-case and may be able to guide you. Thanks, Soumya On 10/5/19 11:25

Re: [Gluster-users] Release 6.5: Expected tagging on 5th August

2019-08-01 Thread Soumya Koduri
Hi Hari, [1] is a critical patch which addresses an issue affecting upcall processing by applications such as NFS-Ganesha. As soon as it gets merged in master, I shall backport it to the release-7/6/5 branches. Kindly consider the same. Thanks, Soumya [1]

Re: [Gluster-users] upgrade best practices

2019-03-31 Thread Soumya Koduri
On 3/29/19 10:39 PM, Poornima Gurusiddaiah wrote: On Fri, Mar 29, 2019, 10:03 PM Jim Kinney wrote: Currently running 3.12 on Centos 7.6. Doing cleanups on split-brain and out of sync, need heal files. We need to migrate the three replica servers

Re: [Gluster-users] Gluster GEO replication fault after write over nfs-ganesha

2019-03-28 Thread Soumya Koduri
On 3/27/19 7:39 PM, Alexey Talikov wrote: I have two clusters with dispersed volumes (2+1) with GEO replication. It works fine while I use glusterfs-fuse, but as soon as even one file is written over nfs-ganesha, replication goes to Faulty and recovers after I remove this file (sometimes after stop/start)

Re: [Gluster-users] glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write

2019-03-28 Thread Soumya Koduri
On 2/8/19 11:53 AM, Soumya Koduri wrote: On 2/8/19 3:20 AM, Maurits Lamers wrote: Hi, [2019-02-07 10:11:24.812606] E [MSGID: 104055] [glfs-fops.c:4955:glfs_cbk_upcall_data] 0-gfapi: Synctask for Upcall event_type(1) and gfid(yøêÙ  Mz„–îSL4_@) failed [2019-02-07 10:11:24.819376

Re: [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Soumya Koduri
On 3/27/19 12:55 PM, Xavi Hernandez wrote: Hi Raghavendra, On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa <rgowd...@redhat.com> wrote: All, Glusterfs cleans up POSIX locks held on an fd when the client/mount through which those locks are held disconnects from

Re: [Gluster-users] glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write

2019-02-07 Thread Soumya Koduri
On 2/8/19 3:20 AM, Maurits Lamers wrote: Hi, [2019-02-07 10:11:24.812606] E [MSGID: 104055] [glfs-fops.c:4955:glfs_cbk_upcall_data] 0-gfapi: Synctask for Upcall event_type(1) and gfid(yøêÙ  Mz„–îSL4_@) failed [2019-02-07 10:11:24.819376] E [MSGID: 104055]

Re: [Gluster-users] glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write

2019-02-07 Thread Soumya Koduri
On 2/7/19 6:01 PM, Maurits Lamers wrote: Hi all, I am trying to find out more about why an nfs mount through nfs-ganesha of a glusterfs volume freezes. A little bit of background: The system consists of one glusterfs volume across 5 nodes. Every node runs Ubuntu 16.04, gluster 4.1.7 and

Re: [Gluster-users] Is there any performance impact in setting up every gluster client as a NFS server?

2017-11-14 Thread Soumya Koduri
Hi, On 11/14/2017 11:45 PM, Jeevan Patnaik wrote: Hi, We have around 60 hosts and each of them acts as a glusterFS client as well as a server. To achieve HA, my understanding is that we can use Ganesha NFS alone (and not Kernel NFS), and for versions above 3.10 the HA packages are not ready

Re: [Gluster-users] nfs-ganesha locking problems

2017-10-06 Thread Soumya Koduri
On 10/03/2017 02:15 AM, Bernhard Dübi wrote: Hi Soumya, what I can say so far: it is working on a standalone system but not on the clustered system Hi, Sorry for the delay. Locking seems to have failed due to the below nsm_monitor error: 03/10/2017 14:27:38 : epoch 59cbce8c :

Re: [Gluster-users] nfs-ganesha locking problems

2017-10-02 Thread Soumya Koduri
Hi On 09/29/2017 09:09 PM, Bernhard Dübi wrote: Hi, I have a problem with nfs-ganesha serving gluster volumes. I can read and write files, but then one of the DBAs tried to dump an Oracle DB onto the NFS share and got the following errors: Export: Release 11.2.0.4.0 - Production on Wed Sep 27

Re: [Gluster-users] Restrict root clients / experimental patch

2017-09-22 Thread Soumya Koduri
Hi, On 09/21/2017 07:32 PM, Pierre C wrote: Hi All, I would like to use glusterfs in an environment where storage servers are managed by an IT service - myself :) - and several users in the organization can mount the distributed fs. The users are root on their machines. As far as I know

Re: [Gluster-users] ganesha error ?

2017-09-02 Thread Soumya Koduri
On 09/02/2017 02:09 AM, Renaud Fortier wrote: Hi, I got these errors 3 times since I'm testing gluster with nfs-ganesha. The clients are php apps and when this happens, clients get a strange php session error. Below, the first error only happened once but the other errors happen every time a clients

Re: [Gluster-users] Slow write times to gluster disk

2017-08-07 Thread Soumya Koduri
- Original Message - > From: "Pat Haley" <pha...@mit.edu> > To: "Soumya Koduri" <skod...@redhat.com>, gluster-users@gluster.org, "Pranith > Kumar Karampuri" <pkara...@redhat.com> > Cc: "Ben Turner" <btur...@redhat

Re: [Gluster-users] NFS Ganesha

2017-07-27 Thread Soumya Koduri
++Kaleb On 07/06/2017 10:04 PM, Anthony Valentine wrote: Hello! I am attempting to setup a Gluster install using Ganesha for NFS using the guide found here http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/ The Gluster portion is

Re: [Gluster-users] Slow write times to gluster disk

2017-07-13 Thread Soumya Koduri
ther? The current stable/maintained/tested combination is nfs-ganesha-2.4/2.5 + glusterfs-3.8/3.10. However, in case you cannot upgrade, you can still use nfs-ganesha-2.3* with glusterfs-3.8/3.7. Hope it is clear. Thanks, Soumya Pat On 07/07/2017 01:31 PM, Soumya Koduri wrote: Hi, On 07/07/2017 0

Re: [Gluster-users] Gluster native mount is really slow compared to nfs

2017-07-11 Thread Soumya Koduri
+ Ambarish On 07/11/2017 02:31 PM, Jo Goossens wrote: Hello, We tried tons of settings to get a php app running on a native gluster mount: e.g.: 192.168.140.41:/www /var/www glusterfs defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable 0 0 I

Re: [Gluster-users] Ganesha "Failed to create client in recovery dir" in logs

2017-07-07 Thread Soumya Koduri
On 07/07/2017 11:36 PM, Renaud Fortier wrote: Hi all, I have this entry in ganesha.log file on server when mounting the volume on client : « GLUSTER-NODE3 : ganesha.nfsd-54084[work-27] nfs4_add_clid :CLIENT ID :EVENT :Failed to create client in recovery dir

Re: [Gluster-users] Slow write times to gluster disk

2017-07-07 Thread Soumya Koduri
Pat On 07/04/2017 05:01 AM, Soumya Koduri wrote: On 07/03/2017 09:01 PM, Pat Haley wrote: Hi Soumya, When I originally did the tests I ran tcpdump on the client. I have rerun the tests, doing tcpdump on the server tcpdump -i any -nnSs 0 host 172.16.1.121 -w /root/capture_nfsfail.

Re: [Gluster-users] Slow write times to gluster disk

2017-07-04 Thread Soumya Koduri
a fuse mounted gluster volume. I am having Steve confirm this. I tried to find the fuse-mnt logs but failed. Where should I look for them? Thanks Pat On 07/03/2017 07:58 AM, Soumya Koduri wrote: On 06/30/2017 07:56 PM, Pat Haley wrote: Hi, I was wondering if there were any additional test

Re: [Gluster-users] Slow write times to gluster disk

2017-07-03 Thread Soumya Koduri
longing to too many group. We have seen the above problem even with a user belonging to only 4 groups. Let me know what additional information I can provide. Thanks Pat On 06/27/2017 02:45 AM, Soumya Koduri wrote: On 06/27/2017 10:17 AM, Pranith Kumar Karampuri wrote: The only problem with

Re: [Gluster-users] Slow write times to gluster disk

2017-06-27 Thread Soumya Koduri
On 06/27/2017 10:17 AM, Pranith Kumar Karampuri wrote: The only problem with using gluster mounted via NFS is that it does not respect the group write permissions which we need. We have an exercise coming up in a couple of weeks. It seems to me that in order to improve our write times

Re: [Gluster-users] Floating IPv6 in a cluster (as NFS-Ganesha VIP)

2017-05-31 Thread Soumya Koduri
+Andrew and Ken On 05/29/2017 11:48 PM, Jan wrote: Hi all, I love this project, Gluster and Ganesha are amazing. Thank you for this great work! The only thing that I miss is IPv6 support. I know that there are some challenges and that’s OK. For me it’s not important whether Gluster servers

Re: [Gluster-users] Slow write times to gluster disk

2017-05-31 Thread Soumya Koduri
On 05/31/2017 07:24 AM, Pranith Kumar Karampuri wrote: Thanks this is good information. +Soumya Soumya, We are trying to find why kNFS is performing way better than plain distribute glusterfs+fuse. What information do you think will benefit us to compare the operations with kNFS vs

Re: [Gluster-users] Is there difference when Nfs-Ganesha is unavailable

2017-05-10 Thread Soumya Koduri
On 05/10/2017 04:18 AM, ML Wong wrote: While I'm troubleshooting the failover of Nfs-Ganesha, the failover is always successful when I shut down the Nfs-Ganesha service online while the OS is running. However, it always failed when I did either a shutdown -r or a power-reset. During the failure, the

Re: [Gluster-users] gdeploy not starting all the daemons for NFS-ganesha :(

2017-05-09 Thread Soumya Koduri
CCin Sac, Manisha, Arthy who could help with troubleshooting. Thanks, Soumya On 05/09/2017 08:31 PM, hvjunk wrote: Hi there, Given the following config file, what am I doing wrong causing the error at the bottom & no mounted /gluster_shared_storage? Hendrik

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-05 Thread Soumya Koduri
<ad.ruc...@gmail.com> *Sent:* Wednesday, May 3, 2017 12:09:58 PM *To:* Soumya Koduri *Cc:* gluster-users@gluster.org *Subject:* Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot Hi Soumya, than

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-03 Thread Soumya Koduri
n behalf of Adam Ru <ad.ruc...@gmail.com> *Sent:* Wednesday, May 3, 2017 12:09:58 PM *To:* Soumya Koduri *Cc:* gluster-users@gluster.org *Subject:* Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot Hi Soumya, thank you very much for your reply. I enabled pcsd during

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-03 Thread Soumya Koduri
at 8:49 AM, Soumya Koduri <skod...@redhat.com> wrote: Hi, On 05/02/2017 01:34 AM, Rudolf wrote: Hi Gluster users, First, I'd like to thank you all for this amazing open-source! Thank you! I'm working on home pro

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-02 Thread Soumya Koduri
Hi, On 05/02/2017 01:34 AM, Rudolf wrote: Hi Gluster users, First, I'd like to thank you all for this amazing open-source! Thank you! I'm working on a home project – three servers with Gluster and NFS-Ganesha. My goal is to create an HA NFS share with three copies of each file (one on each server). My
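
For reference, a setup along the lines described (a replica-3 volume exported through NFS-Ganesha) can be sketched as follows; the volume, host, and brick names are illustrative, and the HA prerequisites (shared storage, ganesha-ha.conf) are assumed to be in place already:

  gluster volume create gv0 replica 3 server1:/bricks/gv0 server2:/bricks/gv0 server3:/bricks/gv0
  gluster volume start gv0
  gluster nfs-ganesha enable            # brings up the pacemaker/corosync-backed ganesha cluster
  gluster volume set gv0 ganesha.enable on   # exports the volume via NFS-Ganesha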

Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-26 Thread Soumya Koduri
Hi Shyam, On 04/25/2017 07:38 PM, Shyam wrote: On 04/25/2017 07:40 AM, Pranith Kumar Karampuri wrote: On Thu, Apr 13, 2017 at 8:17 PM, Shyam > wrote: On 02/28/2017 10:17 AM, Shyam wrote: 1) Halo - Initial Cut (@pranith) Sorry for

Re: [Gluster-users] Slow write times to gluster disk

2017-04-17 Thread Soumya Koduri
On 04/14/2017 10:27 AM, Ravishankar N wrote: I'm not sure if the version you are running (glusterfs 3.7.11) works with NFS-Ganesha, as the link seems to suggest version >=3.8 as a pre-requisite. Adding Soumya for help. If it is not supported, then you might have to go the plain glusterNFS way.

Re: [Gluster-users] Working and up to date guide for ganesha ? nfs-ganesha gluster-ganesha

2017-04-04 Thread Soumya Koduri
Hi Travis, On 04/04/2017 03:49 AM, Travis Eddy wrote: Hello, I've tried all the guides I can find. There is a lot of discrepancy about the ganesha.conf file and how it interacts with gluster. None of the examples I found worked, and none of them have been updated in the last year or so, even

Re: [Gluster-users] nfs-ganesha logs

2017-03-01 Thread Soumya Koduri
I am not sure if there are any outstanding issues with exposing shard volume via gfapi. CCin Krutika. On 02/28/2017 01:29 PM, Mahdi Adnan wrote: Hi, We have a Gluster volume hosting VMs for ESXi exported via Ganesha. I'm getting the following messages in ganesha-gfapi.log and ganesha.log

Re: [Gluster-users] NFS crashing

2017-02-27 Thread Soumya Koduri
On 02/27/2017 07:19 PM, Kevin Lemonnier wrote: Hi, I have a simple glusterFS configured on a VM, with a single volume on a single brick. It's set up that way to replicate the production conditions as closely as possible, but with no replica as it's just for dev. Every few hours, the NFS

Re: [Gluster-users] Strange - Missing hostname-trigger_ip-1 resources

2017-02-02 Thread Soumya Koduri
Hi, On 02/03/2017 07:52 AM, ML Wong wrote: Hello All, Any pointers will be very-much appreciated. Thanks in advance! Environment: Running centOS 7.2.511 Gluster: 3.7.16, with nfs-ganesha on 2.3.0.1 from centos-gluster37 repo sha1: cab5df4064e3a31d1d92786d91bd41d91517fba8 ganesha-ha.sh we

Re: [Gluster-users] question about nfs lookup result

2016-11-16 Thread Soumya Koduri
the reason it was working well till now. FYI - there was an issue in the lookup logic code path which we fixed as part of http://review.gluster.org/14911 . I will not be surprised if there are any more lurking around :) Thanks, Soumya 2016-11-16 21:45 GMT+08:00 Soumya Koduri <skod...@redhat.com

Re: [Gluster-users] question about nfs lookup result

2016-11-16 Thread Soumya Koduri
On 11/16/2016 06:38 PM, Pranith Kumar Karampuri wrote: Added people who know the nfs code. On Wed, Nov 16, 2016 at 6:21 PM, jin deng wrote: Hi all, I'm reading the code of the 3.6.9 version and got a question. And I'm not very

[Gluster-users] Minutes from yesterday's Gluster Community Bug Triage meeting (Nov 16 2016)

2016-11-16 Thread Soumya Koduri
Hi, Sorry for the delay. Please find the minutes of yesterday's Gluster Community Bug Triage meeting below. Meeting summary agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (skoduri, 12:00:57) Roll Call (skoduri, 12:01:03) Next week's meeting host (skoduri,

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 5 minutes)

2016-11-15 Thread Soumya Koduri
Hi all, Apologies for the late notice. This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date:

Re: [Gluster-users] [Nfs-ganesha-devel] Understandings of ganesha-ha.sh

2016-11-07 Thread Soumya Koduri
Hi, On 11/05/2016 12:29 AM, ML Wong wrote: I like to ask for some recommendations here. 1) For /usr/libexec/ganesha/ganesha-ha.sh, as we have been taking advantages of using pacemaker+corosync for some other services, however, we always run into the issue of losing other resources we setup in

Re: [Gluster-users] High cpu usage when using Ganesha with gluster FSAL

2016-11-03 Thread Soumya Koduri
Hi, A similar issue was reported in the nfs-ganesha github [1]. As mentioned in the link, there is an upcall thread (actively polling in a loop) spawned for every export, which might be consuming the CPU. There are a few optimizations needed here - * Make this behavior optional by checking existing

Re: [Gluster-users] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-06 Thread Soumya Koduri
On 10/05/2016 07:32 PM, Pranith Kumar Karampuri wrote: On Wed, Oct 5, 2016 at 2:00 PM, Soumya Koduri <skod...@redhat.com> wrote: Hi, With http://review.gluster.org/#/c/15051/, performance

[Gluster-users] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-05 Thread Soumya Koduri
Hi, With http://review.gluster.org/#/c/15051/, performance/client-io-threads is enabled by default. But with that we see a regression caused to the nfs-ganesha application when trying to un/re-export any glusterfs volume. This shall be the same case with any gfapi application using glfs_fini(). More

[Gluster-users] Minutes from today's Gluster Community Bug Triage meeting (Oct 4 2016)

2016-10-04 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting at the links posted below. We had very few participants today as many are traveling. Thanks to hgowtham and ankitraj for joining. Minutes:

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 30 minutes)

2016-10-04 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

Re: [Gluster-users] GlusterFs upstream bugzilla components Fine graining

2016-09-28 Thread Soumya Koduri
Hi, On 09/28/2016 11:24 AM, Muthu Vigneshwaran wrote:
> +- Component GlusterFS
> |
> |
> | +Subcomponent nfs
Maybe it's time to change it to 'gluster-NFS/native NFS'. Niels/Kaleb?
+- Component gdeploy
|
|
| +Subcomponent samba
| +Subcomponent hyperconvergence
| +Subcomponent RHSC

Re: [Gluster-users] pacemaker VIP routing latency to gluster node.

2016-09-25 Thread Soumya Koduri
lover to SN2 as my configuration, afterward I could mount the gluster volume via VIP’s IP x.x.x.001 on the client 1. Any idea ?? Thanks, ~ Vic Le On Sep 23, 2016, at 1:33 AM, Soumya Koduri <skod...@redhat.com> wrote: On 09/23/2016 02:34 AM, Dung Le wrote:

Re: [Gluster-users] pacemaker VIP routing latency to gluster node.

2016-09-23 Thread Soumya Koduri
On 09/23/2016 02:34 AM, Dung Le wrote: Hello, I have a pretty straightforward configuration as below: 3 storage nodes running version 3.7.11 with replica 3, using native gluster NFS. corosync version 1.4.7 and pacemaker version 1.1.12 I have DNS round-robin on 3 VIPs living on the

Re: [Gluster-users] Checklist for ganesha FSAL plugin integration testing for 3.9

2016-09-06 Thread Soumya Koduri
CCin gluster-devel & users ML. Somehow they got missed in my earlier reply. Thanks, Soumya On 09/06/2016 12:19 PM, Soumya Koduri wrote: On 09/03/2016 12:44 AM, Pranith Kumar Karampuri wrote: hi, Did you get a chance to decide on the nfs-ganesha integrations tests that need to be

Re: [Gluster-users] NFS-Ganesha lo traffic

2016-08-09 Thread Soumya Koduri
On 08/09/2016 09:06 PM, Mahdi Adnan wrote: Hi, Thank you for your reply. The traffic is related to GlusterFS; 18:31:20.419056 IP 192.168.208.134.49058 > 192.168.208.134.49153: Flags [.], ack 3876, win 24576, options [nop,nop,TS val 247718812 ecr 247718772], length 0 18:31:20.419080 IP

Re: [Gluster-users] NFS-Ganesha lo traffic

2016-08-09 Thread Soumya Koduri
On 08/09/2016 03:33 PM, Mahdi Adnan wrote: Hi, I'm using NFS-Ganesha to access my volume. It's working fine for now, but I'm seeing lots of traffic on the loopback interface; in fact it's the same amount of traffic as on the bonding interface. Can anyone please explain to me why this is happening?

Re: [Gluster-users] Gluster 3.7.13 NFS Crash

2016-07-30 Thread Soumya Koduri
Inode stored in the shard xlator local is NULL. CCin Krutika to comment. Thanks, Soumya
(gdb) bt
#0 0x7f196acab210 in pthread_spin_lock () from /lib64/libpthread.so.0
#1 0x7f196be7bcd5 in fd_anonymous (inode=0x0) at fd.c:804
#2 0x7f195deb1787 in shard_common_inode_write_do

[Gluster-users] Minutes from today's Gluster Community Bug Triage meeting (July 12 2016)

2016-07-12 Thread Soumya Koduri
Hi, Thanks to everyone who joined the meeting. Please find the minutes of today's Gluster Community Bug Triage meeting at the below links. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-07-12/gluster_bug_triage.2016-07-12-12.00.html Minutes (text):

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 30 minutes)

2016-07-12 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

[Gluster-users] **Reminder** Triaging and Updating Bug status

2016-06-28 Thread Soumya Koduri
Hi, We have noticed that many of the bugs (esp., in the recent past the ones filed against the 'tests' component) which are being actively worked upon do not have either the 'Triaged' keyword set or the bug status(/assignee) updated appropriately. Sometimes even many of the active community members fail

[Gluster-users] Minutes of today's Gluster Community Bug Triage meeting (May 17 2016)

2016-05-17 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended the meeting. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-05-17/gluster_bug_triage.2016-05-17-12.01.html Minutes (text):

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-04 Thread Soumya Koduri
Even on my setup, if I change nfs.port, all the other services also start registering on those ports. Can you please file a bug for it? That seems like a bug (or is it intentional.. Niels?).
100005 3 tcp 2049 mountd
100005 1 tcp 2049 mountd
100003 3 tcp 2049

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-04 Thread Soumya Koduri
per
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100005 3 tcp 2049 mountd
100005 1 tcp 2049 mountd
100003 3 tcp 2049 nfs
100227 3 tcp 2049
On Wed, May 4, 2016 at 12:09 PM, Soumya Koduri <skod...@

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-04 Thread Soumya Koduri
e it. Regards, Abhishek On Wed, May 4, 2016 at 11:33 AM, Soumya Koduri <skod...@redhat.com> wrote: Hi Abhishek, Below 'rpcinfo' output doesn't list the 'nfsacl' protocol. That must be the reason the client is not able to set ACLs.

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-04 Thread Soumya Koduri
Hi Abhishek, Below 'rpcinfo' output doesn't list the 'nfsacl' protocol. That must be the reason the client is not able to set ACLs. Could you please check the log file '/var/log/glusterfs/nfs.log' for any errors logged with respect to protocol registration failures. Thanks, Soumya On 05/04/2016
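
A quick way to verify the registration being discussed (hostname is a placeholder; program 100227 is the standard ONC RPC number for the NFS ACL sideband protocol):

  rpcinfo -p server1 | egrep 'nfs|mount|acl'
  # a healthy NFSv3 export should list 100003 (nfs), 100005 (mountd) and 100227 (nfsacl)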

Re: [Gluster-users] nfs-ganesha volume null errors

2016-03-22 Thread Soumya Koduri
console during cluster setup. Please give a try and let me know if you see any errors. Thanks, Soumya Testing Environment: Running CentOS Linux release 7.2.1511, glusterfs 3.7.8 (glusterfs-server-3.7.8-2.el7.x86_64), nfs-ganesha-gluster-2.3.0-1.el7.x86_64 On Mon, Mar 14, 2016 at 2:05 AM, S

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 2.5 hours)

2016-03-15 Thread Soumya Koduri
Hi, This meeting is scheduled for anyone that is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC (in

Re: [Gluster-users] nfs-ganesha volume null errors

2016-03-14 Thread Soumya Koduri
Hi, On 03/14/2016 04:06 AM, ML Wong wrote: Running CentOS Linux release 7.2.1511, glusterfs 3.7.8 (glusterfs-server-3.7.8-2.el7.x86_64), nfs-ganesha-gluster-2.3.0-1.el7.x86_64 1) Ensured the connectivity between gluster nodes by using PING 2) Disabled NetworkManager (Loaded: loaded

Re: [Gluster-users] NFS Client issues with Gluster Server 3.6.9

2016-03-08 Thread Soumya Koduri
The log file didn't have any errors logged. Please check the NFS client logs in '/var/log/messages' or using dmesg and brick logs as well. Probably strace or packet trace could help too. You could use the below command to capture the pkt trace while running the I/Os on the node where
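
The command itself is cut off in this snippet; a capture of the kind being suggested typically looks like the following, with the client IP and output path as placeholders rather than values from the original mail:

  tcpdump -i any -nnSs 0 host 172.16.1.121 -w /tmp/nfs_trace.pcap
  # run the I/Os, stop the capture, then inspect the .pcap with wireshark/tshark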

Re: [Gluster-users] Troubleshooting Gluster Client Mount Disconnection

2016-02-19 Thread Soumya Koduri
On 02/20/2016 02:08 AM, Takemura, Won wrote: I am new user / admin of Gluster. I am involved in a support issue where access to a gluster mount fails and the error is displayed when users attempt to access the mount: -bash: cd: /app/: Transport endpoint is not connected This error is

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/16/2016 08:06 PM, Oleksandr Natalenko wrote: Hmm, OK. I've rechecked 3.7.8 with the following patches (latest revisions):
===
Soumya Koduri (3):
  gfapi: Use inode_forget in case of handle objects
  inode: Retire the inodes from the lru list in inode_table_destroy
  rpc

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/12/2016 11:27 AM, Soumya Koduri wrote: On 02/11/2016 08:33 PM, Oleksandr Natalenko wrote: And "API" test. I used custom API app [1] and did brief file manipulations through it (create/remove/stat). Then I performed drop_caches, finished API [2] and got the following Valgr

Re: [Gluster-users] [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

2016-02-11 Thread Soumya Koduri
b2902bba1 [10] https://gist.github.com/385bbb95ca910ec9766f [11] https://gist.github.com/685c4d3e13d31f597722 10.02.2016 15:37, Oleksandr Natalenko wrote: Hi, folks. Here are new test results regarding the client memory leak. I use v3.7.8 with the following patches:
===
Soumya Koduri (2):

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
-level, client/server/both. Thanks, Soumya 01.02.2016 09:54, Soumya Koduri wrote: On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote: Unfortunately, this patch doesn't help. RAM usage when "find" finishes is ~9G. Here is the statedump before drop_caches: https://gist.github.com/fc1647de09

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
On 02/01/2016 02:48 PM, Xavier Hernandez wrote: Hi, On 01/02/16 09:54, Soumya Koduri wrote: On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote: Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and glusterfs process CPU usage goes to 100% for a while. I haven't waited

Re: [Gluster-users] [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-01-31 Thread Soumya Koduri
ix in fuse-bridge, revisited
Pranith Kumar K (1):
  mount/fuse: Fix use-after-free crash
Soumya Koduri (3):
  gfapi: Fix inode nlookup counts
  inode: Retire the inodes from the lru list in inode_table_destroy
  upcall: free the xdr* allocations
===
With those patches we got API leaks fix

Re: [Gluster-users] How to maintain HA using NFS clients if the NFS daemon process gets killed on a gluster node?

2016-01-27 Thread Soumya Koduri
FUSE client may be a good option for us as well, but I can't seem to get speeds higher than 30 MB/s using the Gluster FUSE client (I posted more details on that earlier today to this group as well, looking for advice there). -Kris ____ From: Soumya Ko

Re: [Gluster-users] How to maintain HA using NFS clients if the NFS daemon process gets killed on a gluster node?

2016-01-27 Thread Soumya Koduri
On 01/27/2016 09:39 PM, Kris Laib wrote: Hi all, We're getting ready to roll out Gluster using standard NFS from the clients, with CTDB and RRDNS to help facilitate HA. I thought we were good to go, but recently had an issue where there wasn't enough memory on one of the gluster nodes in a

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
as lost. But most of the inodes should have been purged when we drop the vfs cache. Did you drop the vfs cache before exiting the process? I shall add some log statements and check that part. Thanks, Soumya 12.01.2016 08:24, Soumya Koduri wrote: For the fuse client, I tried vfs drop_caches

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
On 01/13/2016 04:08 PM, Soumya Koduri wrote: On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote: Just in case, here is Valgrind output on FUSE client with 3.7.6 + API-related patches we discussed before: https://gist.github.com/cd6605ca19734c1496a4 Thanks for sharing the results. I made

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
Regards, Mathieu CHATEAU http://www.lotp.fr 2016-01-12 7:24 GMT+01:00 Soumya Koduri <skod...@redhat.com>: On 01/11/2016 05:11 PM, Oleksandr Natalenko wrote: Brief test shows that Ganesha stopped leaking a

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
On 01/08/2016 05:04 PM, Soumya Koduri wrote: I could reproduce while testing deep directories within the mount point. I root-caused the issue & had a discussion with Pranith to understand the purpose and the recommended way of taking nlookup on inodes. I shall make changes to my existing fix and

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
11.01.2016 12:26, Soumya Koduri wrote: I have made changes to fix the lookup leak in a different way (as discussed with Pranith) and uploaded them in the latest patch set #4 - http://review.gluster.org/#/c/13096/ Please check if it resolves the mem leak and hopefully doesn't result in any

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
re ~1.8M files on this test volume. On Friday, 25 December 2015 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be a GlusterFS API library memory leak because NFS-Ganesha also consumes a huge amount of memory while doing an ordinary "

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
ave pasted below apply to gfapi/nfs-ganesha applications. Also, to resolve the nfs-ganesha issue which I had mentioned below (in case the Entries_HWMARK option gets changed), I have posted the below fix - https://review.gerrithub.io/#/c/258687 Thanks, Soumya Ideas? 05.01.2016 12:31, Sou

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
2016, 22:52:25 EET Soumya Koduri wrote: On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote: Unfortunately, both patches didn't make any difference for me. I've patched 3.7.6 with both patches, recompiled and installed the patched GlusterFS package on the client side and mounted a volume with ~2M files

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2015-12-31 Thread Soumya Koduri
On 12/28/2015 02:32 PM, Soumya Koduri wrote: - Original Message - From: "Pranith Kumar Karampuri" <pkara...@redhat.com> To: "Oleksandr Natalenko" <oleksa...@natalenko.name>, "Soumya Koduri" <skod...@redhat.com> Cc: gluster-users@glu

Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2015-12-28 Thread Soumya Koduri
- Original Message - > From: "Pranith Kumar Karampuri" <pkara...@redhat.com> > To: "Oleksandr Natalenko" <oleksa...@natalenko.name>, "Soumya Koduri" > <skod...@redhat.com> > Cc: gluster-users@gluster.org, gluster-de...@gluste

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-26 Thread Soumya Koduri
. https://gist.github.com/e4602a50d3c98f7a2766 One may see GlusterFS-related leaks here as well. On Friday, 25 December 2015 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be a GlusterFS API library memory leak because NFS

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/25/2015 08:56 PM, Oleksandr Natalenko wrote: What units is Cache_Size measured in? Bytes? It's actually (Cache_Size * sizeof_ptr) bytes. If possible, could you please run the ganesha process under valgrind? It will help in detecting leaks. Thanks, Soumya 25.12.2015 16:58, Soumya Koduri
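
A hedged example of the valgrind run being requested (the flags are common leak-check options, not taken from the original mail; paths may differ by distro):

  valgrind --leak-check=full --log-file=/tmp/ganesha-valgrind.log \
      /usr/bin/ganesha.nfsd -F -f /etc/ganesha/ganesha.conf
  # -F keeps ganesha in the foreground so valgrind can track it until exit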

Re: [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be a GlusterFS API library memory leak because NFS-Ganesha also consumes a huge amount of memory while doing an ordinary "find . -type f" via NFSv4.2 on a remote client. Here is the memory usage: === root 5416 34.2 78.5

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 90 minutes)

2015-12-22 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone that is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

Re: [Gluster-users] [Gluster-devel] REMINDER: Gluster Bug Triage timing-poll

2015-12-22 Thread Soumya Koduri
+gluster-users On 12/22/2015 06:03 PM, Hari Gowtham wrote: Hi all, There was a poll conducted to find the timing that suits best for the people who want to participate in the weekly Gluster bug triage meeting. The result for the poll is yet to be announced but we would like to get more

[Gluster-users] Minutes of today's Gluster Community Bug Triage meeting (22nd Dec 2015)

2015-12-22 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended the meeting. Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2015-12-22/gluster_bug_triage.2015-12-22-12.00.html Minutes (text):

Re: [Gluster-users] gluster nfs-ganesha enable fails and is driving me crazy

2015-12-09 Thread Soumya Koduri
On 12/10/2015 02:51 AM, Marco Antonio Carcano wrote: Hi Kaleb, thank you very much for the quick reply. I tried what you suggested, but I got the same error. I tried both HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local" VIP_glstr01.carcano.local="192.168.65.250"
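
For context, a ganesha-ha.conf of the era being discussed is a small shell-style fragment like the one below; the HA_NAME/HA_VOL_SERVER lines and the second VIP are assumptions added for completeness, while the rest mirrors the poster's values:

  HA_NAME="ganesha-ha-cluster"
  HA_VOL_SERVER="glstr01.carcano.local"
  HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local"
  VIP_glstr01.carcano.local="192.168.65.250"
  VIP_glstr02.carcano.local="192.168.65.251"   # hypothetical second VIP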

Re: [Gluster-users] 3 node NFS-Ganesha Cluster

2015-11-30 Thread Soumya Koduri
Hi, But are you telling me that in a 3-node cluster, quorum is lost when one of the nodes' IPs is down? Yes. It's a limitation of Pacemaker/Corosync. If the nodes participating in the cluster cannot communicate with the majority of them (quorum is lost), then the cluster is shut down. However i
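
To see the quorum state described here, the standard corosync/pacemaker tools can be queried directly (a generic check, not quoted from the thread):

  corosync-quorumtool -s   # shows total/expected votes and whether the partition is quorate
  pcs status               # overall cluster view, including quorum and resource state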

Re: [Gluster-users] 3 node NFS-Ganesha Cluster

2015-11-30 Thread Soumya Koduri
On 11/30/2015 03:26 PM, Soumya Koduri wrote: Hi, But are you telling me that in a 3-node cluster, quorum is lost when one of the nodes' IPs is down? Yes. It's a limitation of Pacemaker/Corosync. If the nodes participating in the cluster cannot communicate with the majority of them (quorum is lost

Re: [Gluster-users] 3 node NFS-Ganesha Cluster

2015-11-27 Thread Soumya Koduri
Hi, On 11/27/2015 01:58 PM, ml wrote: Dear All, I am trying to get an nfs-ganesha HA cluster running with 3 CentOS Linux release 7.1.1503 nodes. I use the package glusterfs-ganesha-3.7.6-1.el7.x86_64 to get the HA scripts. So far it works fine when i stop the nfs-ganesha service on one of

Re: [Gluster-users] nfs mounting and copying files

2015-11-18 Thread Soumya Koduri
On 11/17/2015 08:50 PM, Pierre Léonard wrote: Hi all, I have a cluster with 14 nodes. I have built a stripe 7 volume across the 14 nodes. Underlying I use XFS. Locally I mount the global volume with nfs: mount -t nfs 127.0.0.1:gvExport /glusterfs/gvExport -o _netdev,nosuid,bg,exec then I

Re: [Gluster-users] Configuring Ganesha and gluster on separate nodes?

2015-11-18 Thread Soumya Koduri
On 11/17/2015 10:21 PM, Surya K Ghatty wrote: Hi: I am trying to understand if it is technically feasible to have gluster nodes on one machine, and export a volume from one of these nodes using an nfs-ganesha server installed on a totally different machine? I tried the below and showmount -e

Re: [Gluster-users] Question on HA Active-Active Ganesha setup

2015-11-06 Thread Soumya Koduri
On 11/05/2015 08:43 PM, Surya K Ghatty wrote: All... I need your help! I am trying to setup Highly available Active-Active Ganesha configuration on two glusterfs nodes based on instructions here: https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/

Re: [Gluster-users] volume répliqué (replicated volume).

2015-09-28 Thread Soumya Koduri
On 09/25/2015 04:04 PM, Pierre Léonard wrote: Hi all, I have 14 nodes with a replica 2 volume. I want to remove the replication function. Is that possible without losing data, or do I need to make a backup of the data first? The only way I can imagine to avoid replication in case of
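
The reply is cut off here; for reference, the usual mechanism for dropping a replica-2 volume down to plain distribute is remove-brick with a reduced replica count. A hedged sketch with placeholder volume and brick names (one brick from each replica pair would be listed; the surviving bricks keep a full copy of the data):

  gluster volume remove-brick gvol replica 1 node2:/bricks/gvol node4:/bricks/gvol force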
