Re: [Gluster-devel] [Gluster-users] Release 6.5: Expected tagging on 5th August

2019-08-01 Thread Soumya Koduri
Hi Hari, [1] is a critical patch which addresses issue affecting upcall processing by applications such as NFS-Ganesha. As soon as it gets merged in master, I shall backport it to release-7/6/5 branches. Kindly consider the same. Thanks, Soumya [1]

Re: [Gluster-devel] Release 4.1.10: Expected tagging on July 15th

2019-07-16 Thread Soumya Koduri
On 7/11/19 5:34 PM, Pasi Kärkkäinen wrote: On Tue, Jul 09, 2019 at 02:32:46PM +0530, Hari Gowtham wrote: Hi, Expected tagging date for release-4.1.10 is on July, 15th 2019. NOTE: This is the last release for 4 series. Branch 4 will be EOLed after this. So if there are any critical patches

[Gluster-devel] Backlog/Improvements tracking

2019-04-30 Thread Soumya Koduri
Hi, To track any new feature or improvements we are currently using github. I assume those issues refer to the ones which are actively being worked upon. How do we track backlogs which may not get addressed (at least in the near future)? For eg., I am planning to close a couple of RFE BZs

Re: [Gluster-devel] Issue with posix locks

2019-04-01 Thread Soumya Koduri
On 4/1/19 2:23 PM, Xavi Hernandez wrote: On Mon, Apr 1, 2019 at 10:15 AM Soumya Koduri <skod...@redhat.com> wrote: On 4/1/19 10:02 AM, Pranith Kumar Karampuri wrote: > > > On Sun, Mar 31, 2019 at 11:29 PM Soumya Koduri

Re: [Gluster-devel] Issue with posix locks

2019-04-01 Thread Soumya Koduri
On 4/1/19 10:02 AM, Pranith Kumar Karampuri wrote: On Sun, Mar 31, 2019 at 11:29 PM Soumya Koduri <skod...@redhat.com> wrote: On 3/29/19 11:55 PM, Xavi Hernandez wrote: > Hi all, > > there is one potential problem with pos

Re: [Gluster-devel] Issue with posix locks

2019-03-31 Thread Soumya Koduri
On 3/29/19 11:55 PM, Xavi Hernandez wrote: Hi all, there is one potential problem with posix locks when used in a replicated or dispersed volume. Some background: Posix locks allow any process to lock a region of a file multiple times, but a single unlock on a given region will release
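The behaviour described here is plain POSIX semantics and can be reproduced outside Gluster in a few lines of C. A minimal self-contained sketch (file path and byte ranges are arbitrary): the same process locks two overlapping regions, the kernel merges them, and one unlock over the merged span releases everything.

===
/* POSIX record-lock merge demo: two overlapping locks by one process
 * collapse into a single lock; one F_UNLCK releases the whole span. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void setlk(int fd, short type, off_t start, off_t len)
{
    struct flock fl = { .l_type = type, .l_whence = SEEK_SET,
                        .l_start = start, .l_len = len };
    if (fcntl(fd, F_SETLK, &fl) == -1) {
        perror("fcntl");
        exit(1);
    }
}

int main(void)
{
    int fd = open("/tmp/lock-demo", O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }

    setlk(fd, F_WRLCK, 0, 100);   /* lock [0,100)                          */
    setlk(fd, F_WRLCK, 50, 100);  /* overlapping lock; merged by the kernel */
    setlk(fd, F_UNLCK, 0, 150);   /* a single unlock drops the whole region */

    close(fd);
    return 0;
}
===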

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Soumya Koduri
On 3/27/19 12:55 PM, Xavi Hernandez wrote: Hi Raghavendra, On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa <rgowd...@redhat.com> wrote: All, Glusterfs cleans up POSIX locks held on an fd when the client/mount through which those locks are held disconnects from

Re: [Gluster-devel] requesting review available gluster* plugins in sos

2019-03-19 Thread Soumya Koduri
On 3/19/19 9:49 AM, Sankarshan Mukhopadhyay wrote: is (as might just be widely known) an extensible, portable, support data collection tool primarily aimed at Linux distributions and other UNIX-like operating systems. At present there are 2 plugins

Re: [Gluster-devel] Release 6: Kick off!

2019-01-24 Thread Soumya Koduri
Hi Shyam, Sorry for the late response. I just realized that we had two more new APIs glfs_setattr/fsetattr which use 'struct stat' made public [1]. As mentioned in one of the patchset review comments, since the goal is to move to glfs_stat in release-6, do we need to update these APIs as

Re: [Gluster-devel] [NFS-Ganesha-Devel] Re: Problems about cache virtual glusterfs ACLs for ganesha in md-cache

2018-10-12 Thread Soumya Koduri
On 10/12/18 5:55 PM, Kinglong Mee wrote: On 2018/10/12 14:34, Soumya Koduri wrote: > On 10/12/18 7:22 AM, Kinglong Mee wrote: On 2018/10/11 19:09, Soumya Koduri wrote: NFS-Ganesha's md-cache layer already does extensive caching of attributes and ACLs of each file looked upon. Do you see

Re: [Gluster-devel] [NFS-Ganesha-Devel] Re: Problems about cache virtual glusterfs ACLs for ganesha in md-cache

2018-10-12 Thread Soumya Koduri
On 10/12/18 12:04 PM, Soumya Koduri wrote: 1. Cache it separately as posix ACLs (a new option maybe "cache-glusterfs-acl" is added); And make sure _posix_xattr_get_set fills them when lookup requests. I am not sure if posix layer can handle it. Virtual xattrs are

Re: [Gluster-devel] [NFS-Ganesha-Devel] Re: Problems about cache virtual glusterfs ACLs for ganesha in md-cache

2018-10-12 Thread Soumya Koduri
On 10/12/18 7:22 AM, Kinglong Mee wrote: On 2018/10/11 19:09, Soumya Koduri wrote: NFS-Ganesha's md-cache layer already does extensive caching of attributes and ACLs of each file looked upon. Do you see any additional benefit with turning on gluster md-cache as well?  More replies inline

Re: [Gluster-devel] Problems about cache virtual glusterfs ACLs for ganesha in md-cache

2018-10-11 Thread Soumya Koduri
NFS-Ganesha's md-cache layer already does extensive caching of attributes and ACLs of each file looked upon. Do you see any additional benefit with turning on gluster md-cache as well? More replies inline.. On 10/11/18 7:47 AM, Kinglong Mee wrote: Cc nfs-ganesha, Md-cache has option

Re: [Gluster-devel] On making performance.parallel-readdir as a default option

2018-09-24 Thread Soumya Koduri
Please find my comments inline. On 9/22/18 8:56 AM, Raghavendra Gowdappa wrote: On Fri, Sep 21, 2018 at 11:25 PM Raghavendra Gowdappa <rgowd...@redhat.com> wrote: Hi all, We've a feature performance.parallel-readdir [1] that is known to improve performance of readdir

Re: [Gluster-devel] Release 4.0: Schedule and scope clarity (responses needed)

2017-11-21 Thread Soumya Koduri
Hi Shyam, Now, glusterfs/github [1] reads ~50 issues as being targets in 4.0, and among these about 2-4 are marked closed (or done). Ask1: Request each of you to go through the issue list and coordinate with a maintainer, to either mark an issue's milestone correctly (i.e. retain it in 4.0 or

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.13: (STM release) Details

2017-09-27 Thread Soumya Koduri
Hi Shyam, On 09/11/2017 07:51 PM, Shyam Ranganathan wrote: Hi, The next community release of Gluster is 3.13, which is a short term maintenance release slated for release on 30th Nov [1] [2]. Thus giving a 2-month head room to get the 4.0 work done, while maintaining the cadence of

Re: [Gluster-devel] Proposed Protocol changes for 4.0: Need feedback.

2017-09-01 Thread Soumya Koduri
On 08/11/2017 06:04 PM, Amar Tumballi wrote: Hi All, Below are the proposed protocol changes (i.e., XDR changes on the wire) we are thinking of for Gluster 4.0. Poornima and I were discussing if we can include volume uuid as part of the Handshake protocol between protocol/client and

Re: [Gluster-devel] Release 3.12 and 4.0: Thoughts on scope

2017-05-18 Thread Soumya Koduri
On 05/16/2017 02:10 PM, Kaushal M wrote: On 16 May 2017 06:16, "Shyam" wrote: Hi, Let's start a bit early on 3.12 and 4.0 roadmap items, as there have been quite a few discussions around this in various meetups. Here is what

Re: [Gluster-devel] [Gluster-users] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-26 Thread Soumya Koduri
Hi Shyam, On 04/25/2017 07:38 PM, Shyam wrote: On 04/25/2017 07:40 AM, Pranith Kumar Karampuri wrote: On Thu, Apr 13, 2017 at 8:17 PM, Shyam wrote: On 02/28/2017 10:17 AM, Shyam wrote: 1) Halo - Initial Cut (@pranith) Sorry for

[Gluster-devel] Proposal for an extended READDIRPLUS operation via gfAPI

2017-04-21 Thread Soumya Koduri
Hi, We currently have the readdirplus operation to fetch stat for each of the dirents. But that may not be sufficient, and applications may often need extra information; for example, NFS-Ganesha-like applications which operate on handles need to generate handles for each of those dirents
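The idea is easier to picture as an interface sketch. Everything below (names, flags, struct layout) is a hypothetical illustration of the proposal, not the API that eventually landed in gfAPI: one call returns the next dirent plus whichever extras the caller's flag mask requested.

===
/* HYPOTHETICAL sketch of an extended readdirplus interface; all names
 * and types here are illustrative only, not actual gfAPI symbols. */
#include <dirent.h>
#include <stddef.h>
#include <sys/stat.h>

struct glfs_fd;                    /* opaque gfAPI directory handle   */

#define XRDP_WANT_STAT   0x1       /* fill per-entry struct stat      */
#define XRDP_WANT_HANDLE 0x2       /* fill per-entry NFS-style handle */

struct xdirent_extra {
    struct stat   st;              /* valid when XRDP_WANT_STAT set   */
    unsigned char handle[64];      /* valid when XRDP_WANT_HANDLE set */
    size_t        handle_len;
};

/* Return the next entry of 'fd' along with the extras selected in
 * 'flags', so handle-based consumers such as NFS-Ganesha need not
 * issue a separate lookup per dirent. */
int xreaddirplus_example(struct glfs_fd *fd, unsigned int flags,
                         struct dirent *ent, struct xdirent_extra *extra);
===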

Re: [Gluster-devel] What does xdata mean? "gfid-req"?

2017-03-20 Thread Soumya Koduri
On 03/18/2017 06:51 PM, Zhitao Li wrote: Hello, everyone, I am investigating the difference between stat and lookup operations in GlusterFs now. In the translator named "md_cache", stat operation will hit the cache generally, while lookup operation will miss the cache. The reason is that

Re: [Gluster-devel] Consistent time attributes (ctime, atime and mtime) across replica set and distribution set

2017-03-20 Thread Soumya Koduri
On 03/20/2017 08:53 AM, Vijay Bellur wrote: On Sun, Mar 19, 2017 at 10:14 AM, Amar Tumballi <atumb...@redhat.com> wrote: On Thu, Mar 16, 2017 at 6:52 AM, Soumya Koduri <skod...@redhat.com> wrote:

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 5 minutes)

2016-11-15 Thread Soumya Koduri
Hi all, Apologies for the late notice. This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date:

Re: [Gluster-devel] Possible problem introduced by http://review.gluster.org/15573

2016-10-21 Thread Soumya Koduri
On 10/21/2016 02:03 PM, Xavier Hernandez wrote: Hi Niels, On 21/10/16 10:03, Niels de Vos wrote: On Fri, Oct 21, 2016 at 09:03:30AM +0200, Xavier Hernandez wrote: Hi, I've just tried Gluster 3.8.5 with Proxmox using gfapi and I consistently see a crash each time an attempt to connect to

Re: [Gluster-devel] Possible problem introduced by http://review.gluster.org/15573

2016-10-21 Thread Soumya Koduri
Hi Xavi, On 10/21/2016 12:57 PM, Xavier Hernandez wrote: Looking at the code, I think that the added fd_unref() should only be called if the fop preparation fails. Otherwise the callback already unreferences the fd. Code flow: * glfs_fsync_async_common() takes an fd ref and calls STACK_WIND
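The rule under debate is an ownership rule for the fd reference, shown here as a tiny self-contained C model (all names hypothetical; this is not the glfs source): the caller takes a ref before winding the async op; on success the completion callback owns and drops that ref, so the caller may unref only on the failure path. An unconditional extra fd_unref() is exactly what would produce a double unref.

===
/* Toy model of the async-fop reference rule (hypothetical names). */
#include <stdio.h>

typedef struct { int refcount; } fd_t;

static void fd_ref(fd_t *fd)   { fd->refcount++; }
static void fd_unref(fd_t *fd) { fd->refcount--; }

static int prepare_fop(fd_t *fd, int should_fail)
{
    (void)fd;
    return should_fail ? -1 : 0;
}

static void fop_callback(fd_t *fd)
{
    fd_unref(fd);   /* completion path: the callback drops the ref */
}

static int fsync_async_common(fd_t *fd, int should_fail)
{
    fd_ref(fd);                   /* ref taken before the op is wound    */
    if (prepare_fop(fd, should_fail) != 0) {
        fd_unref(fd);             /* failure: the ONLY caller-side unref */
        return -1;
    }
    fop_callback(fd);             /* success: callback releases the ref  */
    return 0;
}

int main(void)
{
    fd_t fd = { .refcount = 1 };
    fsync_async_common(&fd, 0);
    fsync_async_common(&fd, 1);
    printf("refcount = %d\n", fd.refcount);  /* 1: balanced, no leak */
    return 0;
}
===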

Re: [Gluster-devel] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-06 Thread Soumya Koduri
On 10/05/2016 07:32 PM, Pranith Kumar Karampuri wrote: On Wed, Oct 5, 2016 at 2:00 PM, Soumya Koduri <skod...@redhat.com> wrote: Hi, With http://review.gluster.org/#/c/15051/, performance

[Gluster-devel] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-05 Thread Soumya Koduri
Hi, With http://review.gluster.org/#/c/15051/, performance/client-io-threads is enabled by default. But with that we see a regression caused to the nfs-ganesha application trying to un/re-export any glusterfs volume. This shall be the same case with any gfapi application using glfs_fini(). More

[Gluster-devel] Minutes from today's Gluster Community Bug Triage meeting (Oct 4 2016)

2016-10-04 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting at the links posted below. We had very few participants today as many are traveling. Thanks to hgowtham and ankitraj for joining. Minutes:

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 30 minutes)

2016-10-04 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

Re: [Gluster-devel] Dht readdir filtering out names

2016-09-30 Thread Soumya Koduri
On 09/30/2016 10:08 AM, Pranith Kumar Karampuri wrote: Does samba/gfapi/nfs-ganesha have options to disable readdirp? AFAIK, currently there is no option to disable/enable readdirp in gfapi & nfs-ganesha (not sure about samba). But it looks like nfs-ganesha seems to be always using readdir,

Re: [Gluster-devel] Dht readdir filtering out names

2016-09-30 Thread Soumya Koduri
s, Soumya Regards, Poornima From: "Pranith Kumar Karampuri" <pkara...@redhat.com> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>, "Poornima Gurusiddaiah" <pguru...@redhat.com>, "Raghavendra Ta

Re: [Gluster-devel] [Gluster-users] GlusterFs upstream bugzilla components Fine graining

2016-09-28 Thread Soumya Koduri
Hi, On 09/28/2016 11:24 AM, Muthu Vigneshwaran wrote: > +- Component GlusterFS > | > | > | +Subcomponent nfs Maybe it's time to change it to 'gluster-NFS/native NFS'. Niels/Kaleb? +- Component gdeploy | | | +Subcomponent samba | +Subcomponent hyperconvergence | +Subcomponent RHSC

Re: [Gluster-devel] review request - Change the way client uuid is built

2016-09-23 Thread Soumya Koduri
On 09/23/2016 11:48 AM, Poornima Gurusiddaiah wrote: - Original Message - From: "Niels de Vos" To: "Raghavendra Gowdappa" Cc: "Gluster Devel" Sent: Wednesday, September 21, 2016 3:52:39 AM Subject: Re:

Re: [Gluster-devel] Fixing setfsuid/gid problems in posix xlator

2016-09-23 Thread Soumya Koduri
On 09/23/2016 08:28 AM, Pranith Kumar Karampuri wrote: hi, Jiffin found an interesting problem in the posix xlator where we have never been using setfsuid/gid (http://review.gluster.org/#/c/15545/). What I am seeing after this is regressions: if the files are created using a non-root user then
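For context, setfsuid()/setfsgid() switch only the identity used for filesystem permission checks, leaving the process's real and effective IDs untouched, which is what a brick process acting on behalf of a non-root client needs around each file operation. A minimal Linux-only demonstration (uid/gid 1000 and the path are assumptions; it must run as root):

===
/* Demo: create a file owned by an unprivileged user by switching the
 * filesystem identity only. Linux-specific; run as root. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/fsuid.h>
#include <unistd.h>

int main(void)
{
    uid_t uid = 1000;   /* assumed unprivileged user */
    gid_t gid = 1000;

    setfsuid(uid);      /* fs permission checks now run as uid 1000 */
    setfsgid(gid);

    int fd = open("/tmp/fsuid-demo", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }
    close(fd);          /* the file ends up owned by 1000:1000 */

    setfsuid(0);        /* restore root filesystem identity */
    setfsgid(0);
    return 0;
}
===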

Re: [Gluster-devel] Upcall details for NLINK

2016-09-19 Thread Soumya Koduri
On 09/19/2016 10:08 AM, Niels de Vos wrote: Duh, and now with the attachment. I'm going to get some coffee now. On Mon, Sep 19, 2016 at 06:22:58AM +0200, Niels de Vos wrote: Hey Soumya, do we have a description of the different actions that we expect/advise users of upcall to take? I'm

Re: [Gluster-devel] Gluster Developer Summit 2016 Talk Schedule

2016-09-15 Thread Soumya Koduri
On 09/16/2016 03:48 AM, Amye Scavarda wrote: On Thu, Sep 15, 2016 at 8:26 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote: On Thu, Sep 15, 2016 at 2:37 PM, Soumya Koduri <skod...@redhat.com>

Re: [Gluster-devel] Gluster Developer Summit 2016 Talk Schedule

2016-09-15 Thread Soumya Koduri
Hi Amye, Is there any plan to record these talks? Thanks, Soumya On 09/15/2016 03:09 AM, Amye Scavarda wrote: Thanks to all that submitted talks, and thanks to the program committee who helped select this year's content. This will be posted on the main Summit page as well:

Re: [Gluster-devel] Checklist for ganesha FSAL plugin integration testing for 3.9

2016-09-06 Thread Soumya Koduri
CCing gluster-devel & users ML. Somehow they got missed in my earlier reply. Thanks, Soumya On 09/06/2016 12:19 PM, Soumya Koduri wrote: On 09/03/2016 12:44 AM, Pranith Kumar Karampuri wrote: hi, Did you get a chance to decide on the nfs-ganesha integration tests that need to be

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-29 Thread Soumya Koduri
, Csaba 4) SELinux on gluster volumes: Feature owners: Niels, Manikandan 5) Native sub-directory mounts: Feature owners: Kaushal, Pranith 6) RichACL support for GlusterFS: Feature owners: Rajesh Joseph 7) Sharemodes/Share reservations: Feature owners: Raghavendra Talur, Poornima G, Soumya Koduri

Re: [Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-08-10 Thread Soumya Koduri
) on the server-side. I have updated the feature-spec[1] with the details. Comments are welcome. Thanks, Soumya [1] http://review.gluster.org/#/c/15053/3/under_review/reclaim-locks.md On 07/28/2016 07:29 PM, Soumya Koduri wrote: On 07/27/2016 02:38 AM, Vijay Bellur wrote: On 07/26/2016 05:56 AM

Re: [Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-07-28 Thread Soumya Koduri
On 07/27/2016 02:38 AM, Vijay Bellur wrote: On 07/26/2016 05:56 AM, Soumya Koduri wrote: Hi Vijay, On 07/26/2016 12:13 AM, Vijay Bellur wrote: On 07/22/2016 08:44 AM, Soumya Koduri wrote: Hi, In certain scenarios (esp., in highly available environments), the application may have to fail

Re: [Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-07-26 Thread Soumya Koduri
Hi Vijay, On 07/26/2016 12:13 AM, Vijay Bellur wrote: On 07/22/2016 08:44 AM, Soumya Koduri wrote: Hi, In certain scenarios (esp., in highly available environments), the application may have to fail-over/connect to a different glusterFS client while the I/O is happening. In such cases until

[Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-07-22 Thread Soumya Koduri
Hi, In certain scenarios (esp., in highly available environments), the application may have to fail-over/connect to a different glusterFS client while the I/O is happening. In such cases, until there is a ping timer expiry and the glusterFS server cleans up the locks held by the older glusterFS
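The proposed matching rule is compact: a reclaim is honoured only when both the lock-owner and the byte range of the incoming request match a lock the server still holds. A hypothetical sketch of that predicate (types and field names are illustrative, not GlusterFS source):

===
/* Illustrative reclaim predicate: owner and range must both match. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct lk_owner  { uint8_t data[24]; size_t len; };
struct held_lock { struct lk_owner owner; uint64_t start, end; };

static bool can_reclaim(const struct held_lock *held,
                        const struct held_lock *req)
{
    return held->owner.len == req->owner.len &&
           memcmp(held->owner.data, req->owner.data, held->owner.len) == 0 &&
           held->start == req->start &&
           held->end == req->end;
}
===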

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Soumya Koduri
On 07/20/2016 12:41 PM, Soumya Koduri wrote: On 07/20/2016 12:00 PM, Soumya Koduri wrote: On 07/20/2016 11:55 AM, Ravishankar N wrote: On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote: Hi, Here is the patch for br-stub.t failures. http://review.gluster.org/14960 Thanks Soumya

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Soumya Koduri
On 07/20/2016 12:00 PM, Soumya Koduri wrote: On 07/20/2016 11:55 AM, Ravishankar N wrote: On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote: Hi, Here is the patch for br-stub.t failures. http://review.gluster.org/14960 Thanks Soumya for root causing this. Thanks and Regards

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Soumya Koduri
On 07/20/2016 11:55 AM, Ravishankar N wrote: On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote: Hi, Here is the patch for br-stub.t failures. http://review.gluster.org/14960 Thanks Soumya for root causing this. Thanks and Regards, Kotresh H R arbiter-mount.t has failed despite

[Gluster-devel] Minutes from today's Gluster Community Bug Triage meeting (July 12 2016)

2016-07-12 Thread Soumya Koduri
Hi, Thanks to everyone who joined the meeting. Please find the minutes of today's Gluster Community Bug Triage meeting at the below links. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-07-12/gluster_bug_triage.2016-07-12-12.00.html Minutes (text):

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 30 minutes)

2016-07-12 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

Re: [Gluster-devel] [NFS-ganesha] unlink file remains in ./glusterfs/unlinks after delete file

2016-07-01 Thread Soumya Koduri
FYI - "http://review.gluster.org/#/c/14840" contains the fix for the 3.7 branch. Thanks, Soumya On 07/01/2016 11:38 AM, Soumya Koduri wrote: Hi, On 06/30/2016 11:56 AM, 梁正和 wrote: Hi, I'm trying to export gluster-volume by nfs-ganesha. After create --> Some I/O --> delete fil

Re: [Gluster-devel] [NFS-ganesha] unlink file remains in ./glusterfs/unlinks after delete file

2016-07-01 Thread Soumya Koduri
Hi, On 06/30/2016 11:56 AM, 梁正和 wrote: Hi, I'm trying to export gluster-volume by nfs-ganesha. After create --> Some I/O --> delete file from nfs mount point. The file has been moved to ./glusterfs/unlinks. There was an fd leak when a file is created using gfapi handleops (which

[Gluster-devel] **Reminder** Triaging and Updating Bug status

2016-06-28 Thread Soumya Koduri
Hi, We have noticed that many of the bugs (esp., in the recent past the ones filed against 'tests' component) which are being actively worked upon do not have either 'Triaged' keyword set or bug status(/assignee) updated appropriately. Sometimes even many of the active community members fail

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-06-28 Thread Soumya Koduri
ccept it. 3. Kotresh will work on new changes to make sure changelog makes correct use of rpc-clnt. [1] http://review.gluster.org/#/c/13592 [2] http://review.gluster.org/#/c/1359 regards, Raghavendra. Thanks and Regards, Kotresh H R - Original Message - From: "Soumya Koduri" &l

Re: [Gluster-devel] [Gluster-users] Minutes of today's Gluster Community Bug Triage meeting (May 17 2016)

2016-05-17 Thread Soumya Koduri
On 05/17/2016 07:09 PM, M S Vishwanath Bhat wrote: On 17 May 2016 at 18:51, Soumya Koduri <skod...@redhat.com> wrote: Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended

[Gluster-devel] Minutes of today's Gluster Community Bug Triage meeting (May 17 2016)

2016-05-17 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended the meeting. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-05-17/gluster_bug_triage.2016-05-17-12.01.html Minutes (text):

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 2.5 hours)

2016-05-17 Thread Soumya Koduri
Hi, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC (in your

Re: [Gluster-devel] gfapi, readdirplus and forced lookup after inode_link

2016-05-11 Thread Soumya Koduri
On 05/11/2016 10:17 PM, Soumya Koduri wrote: On 05/11/2016 06:12 PM, Raghavendra Gowdappa wrote: - Original Message - From: "Raghavendra Gowdappa" <rgowd...@redhat.com> To: "Soumya Koduri" <skod...@redhat.com> Cc: "Gluster Devel" <gl

Re: [Gluster-devel] gfapi, readdirplus and forced lookup after inode_link

2016-05-11 Thread Soumya Koduri
On 05/11/2016 06:12 PM, Raghavendra Gowdappa wrote: - Original Message - From: "Raghavendra Gowdappa" <rgowd...@redhat.com> To: "Soumya Koduri" <skod...@redhat.com> Cc: "Gluster Devel" <gluster-devel@gluster.org> Sent: Wednesday, May

Re: [Gluster-devel] [Gluster-users] Exporting Gluster Volume

2016-05-04 Thread Soumya Koduri
Hi Abhishek, The 'rpcinfo' output below doesn't list the 'nfsacl' protocol. That must be the reason the client is not able to set ACLs. Could you please check the log file '/var/log/glusterfs/nfs.log' for any errors logged with respect to protocol registration failures. Thanks, Soumya On 05/04/2016

Re: [Gluster-devel] Review request for leases patches

2016-03-08 Thread Soumya Koduri
Hi Poornima, On 03/07/2016 11:24 AM, Poornima Gurusiddaiah wrote: Hi All, Here is the link to feature page: http://review.gluster.org/#/c/11980/ Patches can be found @:

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-03-04 Thread Soumya Koduri
Thanks and Regards, Kotresh H R - Original Message - > From: "Soumya Koduri" <skod...@redhat.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-03-03 Thread Soumya Koduri
ttp://review.gluster.org/#/c/13592/ Thanks and Regards, Kotresh H R - Original Message - From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> To: "Soumya Koduri" <skod...@redhat.com> Cc: "Raghavendra G" <raghaven...@gluster.com>, "Gl

Re: [Gluster-devel] Regarding default_forget/releasedir/release() fops

2016-02-23 Thread Soumya Koduri
On 02/23/2016 05:02 PM, Jeff Darcy wrote: Recently while doing some tests (which involved lots of inode_forget()), I have noticed that my log file got flooded with below messages - [2016-02-22 08:57:44.025565] W [defaults.c:2889:default_forget] (-->

[Gluster-devel] Regarding default_forget/releasedir/release() fops

2016-02-22 Thread Soumya Koduri
Hi Jeff, Recently while doing some tests (which involved lots of inode_forget()), I have noticed that my log file got flooded with below messages - [2016-02-22 08:57:44.025565] W [defaults.c:2889:default_forget] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x231)[0x7fd00f63c15d]

Re: [Gluster-devel] libgfapi 3.7.8 still memory leak

2016-02-17 Thread Soumya Koduri
Hi Piotr, On 02/17/2016 08:20 PM, Piotr Rybicki wrote: Hi all. I'm trying hard to diagnose memory leaks in libgfapi access. gluster 3.7.8 For this purpose, I've created the simplest C code (basically only calling glfs_new() and glfs_fini()): #include int main (int argc, char** argv) {
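The snippet is cut off by the archive. A plausible completion of such a reproducer, assuming (as the text says) it does nothing beyond cycling glfs_new()/glfs_fini() (volume name arbitrary; build with 'cc leak.c -lgfapi' and run under valgrind), would be:

===
/* Minimal gfAPI lifecycle loop: if a single new/fini cycle leaks,
 * valgrind's per-iteration growth makes it obvious. */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(int argc, char **argv)
{
    (void)argc; (void)argv;
    for (int i = 0; i < 100; i++) {
        glfs_t *fs = glfs_new("testvol");   /* arbitrary volume name */
        if (!fs) {
            fprintf(stderr, "glfs_new failed\n");
            return 1;
        }
        glfs_fini(fs);
    }
    return 0;
}
===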

Re: [Gluster-devel] [Gluster-users] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/16/2016 08:06 PM, Oleksandr Natalenko wrote: Hmm, OK. I've rechecked 3.7.8 with the following patches (latest revisions): === Soumya Koduri (3): gfapi: Use inode_forget in case of handle objects inode: Retire the inodes from the lru list in inode_table_destroy rpc

Re: [Gluster-devel] [Gluster-users] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/12/2016 11:27 AM, Soumya Koduri wrote: On 02/11/2016 08:33 PM, Oleksandr Natalenko wrote: And "API" test. I used custom API app [1] and did brief file manipulations through it (create/remove/stat). Then I performed drop_caches, finished API [2] and got the following Valgr

Re: [Gluster-devel] libgfapi libvirt memory leak version 3.7.8

2016-02-11 Thread Soumya Koduri
Hi Piotr, Could you apply below gfAPI patch and check the valgrind output - http://review.gluster.org/13125 Thanks, Soumya On 02/11/2016 09:40 PM, Piotr Rybicki wrote: Hi All I have to report, that there is a mem leak latest version of gluster gluster: 3.7.8 libvirt 1.3.1 mem leak

Re: [Gluster-devel] [Gluster-users] GlusterFS v3.7.8 client leaks summary — part II

2016-02-11 Thread Soumya Koduri
b2902bba1 [10] https://gist.github.com/385bbb95ca910ec9766f [11] https://gist.github.com/685c4d3e13d31f597722 10.02.2016 15:37, Oleksandr Natalenko написав: Hi, folks. Here go new test results regarding client memory leak. I use v3.7.8 with the following patches: === Soumya Koduri (2):

Re: [Gluster-devel] glusterfsd core on NetBSD (https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14139/consoleFull)

2016-02-10 Thread Soumya Koduri
Thanks Manu. Kotresh, Is this issue related to bug 1221629 as well? Thanks, Soumya On 02/10/2016 02:10 PM, Emmanuel Dreyfus wrote: On Wed, Feb 10, 2016 at 12:17:23PM +0530, Soumya Koduri wrote: I see a core generated in this regression run though all the tests seem to have passed. I do

[Gluster-devel] glusterfsd core on NetBSD (https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14139/consoleFull)

2016-02-09 Thread Soumya Koduri
Hi Emmanuel, I see a core generated in this regression run though all the tests seem to have passed. I do not have a netbsd machine to analyze the core. Could you please take a look and let me know what the issue could have been? Thanks, Soumya

Re: [Gluster-devel] Rebalance data migration and corruption

2016-02-09 Thread Soumya Koduri
On 02/09/2016 12:30 PM, Raghavendra G wrote: Right. But if there is simultaneous access to the same file from any other client and the rebalance process, delegations shall not be granted, or shall be revoked if granted, even though they are operating at

Re: [Gluster-devel] Rebalance data migration and corruption

2016-02-08 Thread Soumya Koduri
On 02/08/2016 09:13 AM, Shyam wrote: On 02/06/2016 06:36 PM, Raghavendra Gowdappa wrote: - Original Message - From: "Raghavendra Gowdappa" To: "Sakshi Bansal", "Susant Palai" Cc: "Gluster Devel"

Re: [Gluster-devel] Rebalance data migration and corruption

2016-02-08 Thread Soumya Koduri
On 02/09/2016 10:27 AM, Raghavendra G wrote: On Mon, Feb 8, 2016 at 4:31 PM, Soumya Koduri <skod...@redhat.com> wrote: On 02/08/2016 09:13 AM, Shyam wrote: On 02/06/2016 06:36 PM, Raghavendra Gowdappa wrote: - Orig

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
-level, client/server/both. Thanks, Soumya 01.02.2016 09:54, Soumya Koduri wrote: On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote: Unfortunately, this patch doesn't help. RAM usage on "find" finish is ~9G. Here is statedump before drop_caches: https://gist.github.com/fc1647de09

Re: [Gluster-devel] [Gluster-users] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
On 02/01/2016 02:48 PM, Xavier Hernandez wrote: Hi, On 01/02/16 09:54, Soumya Koduri wrote: On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote: Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and glusterfs process CPU usage goes to 100% for a while. I haven't waited

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-01-31 Thread Soumya Koduri
ix in fuse-bridge, revisited Pranith Kumar K (1): mount/fuse: Fix use-after-free crash Soumya Koduri (3): gfapi: Fix inode nlookup counts inode: Retire the inodes from the lru list in inode_table_destroy upcall: free the xdr* allocations === With those patches we got API leaks fix

Re: [Gluster-devel] Core from gNFS process

2016-01-15 Thread Soumya Koduri
On 01/14/2016 08:41 PM, Vijay Bellur wrote: On 01/14/2016 04:11 AM, Jiffin Tony Thottan wrote: On 14/01/16 14:28, Jiffin Tony Thottan wrote: Hi, The core generated when encryption xlator is enabled [2016-01-14 08:13:15.740835] E [crypt.c:4298:master_set_master_vol_key] 0-test1-crypt:

Re: [Gluster-devel] Core from gNFS process

2016-01-15 Thread Soumya Koduri
On 01/15/2016 06:52 PM, Soumya Koduri wrote: On 01/14/2016 08:41 PM, Vijay Bellur wrote: On 01/14/2016 04:11 AM, Jiffin Tony Thottan wrote: On 14/01/16 14:28, Jiffin Tony Thottan wrote: Hi, The core generated when encryption xlator is enabled [2016-01-14 08:13:15.740835] E [crypt.c

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
On 01/13/2016 04:08 PM, Soumya Koduri wrote: On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote: Just in case, here is Valgrind output on FUSE client with 3.7.6 + API-related patches we discussed before: https://gist.github.com/cd6605ca19734c1496a4 Thanks for sharing the results. I made

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
Regards, Mathieu CHATEAU http://www.lotp.fr 2016-01-12 7:24 GMT+01:00 Soumya Koduri <skod...@redhat.com>: On 01/11/2016 05:11 PM, Oleksandr Natalenko wrote: Brief test shows that Ganesha stopped leaking a

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
On 01/08/2016 05:04 PM, Soumya Koduri wrote: I could reproduce while testing deep directories within the mount point. I root-caused the issue & had a discussion with Pranith to understand the purpose and recommended way of taking nlookup on inodes. I shall make changes to my existing fix and

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
11.01.2016 12:26, Soumya Koduri wrote: I have made changes to fix the lookup leak in a different way (as discussed with Pranith) and uploaded them in the latest patch set #4 - http://review.gluster.org/#/c/13096/ Please check if it resolves the mem leak and hopefully doesn't result in any

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-08 Thread Soumya Koduri
e25d4a5a52 ganesha.conf: https://gist.github.com/9b5e59b8d6d8cb84c85d How I mount NFS share: === mount -t nfs4 127.0.0.1:/mail_boxes /mnt/tmp -o defaults,_netdev,minorversion=2,noac,noacl,lookupcache=none,timeo=100 === On Thursday, 7 January 2016, 12:06:42 EET Soumya Koduri wrote: Entries_HW

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-06 Thread Soumya Koduri
u have taken the latest gluster patch set #3? - http://review.gluster.org/#/c/13096/3 If you are hitting the issue even then, please provide the core if possible. Thanks, Soumya 06.01.2016 08:40, Soumya Koduri wrote: On 01/06/2016 03:53 AM, Oleksandr Natalenko wrote: OK, I've repeated th

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
re ~1.8M files on this test volume. On Friday, 25 December 2015, 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS-Ganesha also consumes huge amount of memory while doing ordinary "

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
ave pasted below apply to gfapi/nfs-ganesha applications. Also, to resolve the nfs-ganesha issue which I had mentioned below (in case if Entries_HWMARK option gets changed), I have posted below fix - https://review.gerrithub.io/#/c/258687 Thanks, Soumya Ideas? 05.01.2016 12:31, Sou

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
2016, 22:52:25 EET Soumya Koduri wrote: On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote: Unfortunately, both patches didn't make any difference for me. I've patched 3.7.6 with both patches, recompiled and installed the patched GlusterFS package on the client side and mounted a volume with ~2M of files

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-28 Thread Soumya Koduri
- Original Message - > From: "Pranith Kumar Karampuri" <pkara...@redhat.com> > To: "Oleksandr Natalenko" <oleksa...@natalenko.name>, "Soumya Koduri" > <skod...@redhat.com> > Cc: gluster-us...@gluster.org, gluster-devel@gluste

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-26 Thread Soumya Koduri
. https://gist.github.com/e4602a50d3c98f7a2766 One may see GlusterFS-related leaks here as well. On Friday, 25 December 2015, 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/25/2015 08:56 PM, Oleksandr Natalenko wrote: What units Cache_Size is measured in? Bytes? It's actually (Cache_Size * sizeof_ptr) bytes. If possible, could you please run ganesha process under valgrind? Will help in detecting leaks. Thanks, Soumya 25.12.2015 16:58, Soumya Koduri
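As a worked example of the formula quoted above: with 8-byte pointers on a 64-bit platform, Cache_Size = 131072 works out to 131072 * 8 bytes = 1 MiB for the pointer table itself, independent of the memory consumed by the entries it points to.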

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS-Ganesha also consumes huge amount of memory while doing ordinary "find . -type f" via NFSv4.2 on remote client. Here is memory usage: === root 5416 34.2 78.5

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 90 minutes)

2015-12-22 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone that is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

Re: [Gluster-devel] REMINDER: Gluster Bug Triage timing-poll

2015-12-22 Thread Soumya Koduri
+gluster-users On 12/22/2015 06:03 PM, Hari Gowtham wrote: Hi all, There was a poll conducted to find the timing that suits best for the people who want to participate in the weekly Gluster bug triage meeting. The result for the poll is yet to be announced but we would like to get more

[Gluster-devel] Minutes of today's Gluster Community Bug Triage meeting (22nd Dec 2015)

2015-12-22 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended the meeting. Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2015-12-22/gluster_bug_triage.2015-12-22-12.00.html Minutes (text):

[Gluster-devel] crash in '_Unwind_Backtrace () from ./lib64/libgcc_s.so.1'

2015-12-22 Thread Soumya Koduri
package on that machine to get full backtrace (as requested in [1]) and update the bug with details. Thanks, Soumya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1293594#c4 On 11/26/2015 03:07 PM, Soumya Koduri wrote: Below are the findings from the core and the logs 1) [2015-11-25 19:06

[Gluster-devel] Storing pNFS related state on GlusterFS

2015-12-09 Thread Soumya Koduri
Hi, pNFS is a feature introduced as part of the NFSv4.1 protocol to allow direct client access to storage devices containing file data (in short, parallel I/O). Clients request the layouts of an entire file or a specific range. On receiving the layout information, they shall directly contact the
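For orientation, the per-layout state an NFSv4.1 metadata server has to track (per RFC 5661) boils down to a handful of fields. The struct below is only an illustrative sketch of what 'pNFS related state' covers, with hypothetical names, not GlusterFS source:

===
/* Illustrative sketch (hypothetical names) of per-layout pNFS state. */
#include <stdint.h>

struct pnfs_layout_state {
    unsigned char stateid[16];   /* layout stateid handed to the client */
    uint64_t      offset;        /* start of the granted byte range     */
    uint64_t      length;        /* length of the granted range         */
    uint32_t      iomode;        /* read-only or read/write layout      */
    uint32_t      layout_type;   /* e.g. files/blocks/objects layout    */
};
===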

Re: [Gluster-devel] compound fop design first cut

2015-12-08 Thread Soumya Koduri
On 12/09/2015 11:44 AM, Pranith Kumar Karampuri wrote: On 12/09/2015 06:37 AM, Vijay Bellur wrote: On 12/08/2015 03:45 PM, Jeff Darcy wrote: On December 8, 2015 at 12:53:04 PM, Ira Cooper (i...@redhat.com) wrote: Raghavendra Gowdappa writes: I propose that we define a "compound op"

Re: [Gluster-devel] Upstream regression crash : https://build.gluster.org/job/rackspace-regression-2GB-triggered/16191/consoleFull

2015-11-26 Thread Soumya Koduri
Below are the findings from the core and the logs 1) [2015-11-25 19:06:41.592905] E [crypt.c:4298:master_set_master_vol_key] 0-patchy-crypt: FATAL: missing master key xlator_init() of crypt xlator fails, which I assume gets loaded when features.encryption is on (which the below mentioned .t

Re: [Gluster-devel] Caching support in glusterfs

2015-11-26 Thread Soumya Koduri
On 11/26/2015 03:35 PM, Avik Sil wrote: On Tuesday 24 November 2015 09:58 PM, Vijay Bellur wrote: - Original Message - From: "Avik Sil" To: gluster-devel@gluster.org Sent: Tuesday, November 24, 2015 6:47:44 AM Subject: [Gluster-devel] Caching support in

Re: [Gluster-devel] Logging improvements needed in nfs-ganesha

2015-11-24 Thread Soumya Koduri
Hi Sac, While we understand the intent of this mail, please note that most of the operations performed by the ganesha-related CLI are executed by the runner threads. AFAIK, apart from the return status, we cannot read any error messages from these threads (request the glusterd team to confirm that).
