Re: [Gluster-devel] Possible problem introduced by http://review.gluster.org/15573

2016-10-21 Thread Soumya Koduri
Hi Xavi, On 10/21/2016 12:57 PM, Xavier Hernandez wrote: Looking at the code, I think that the added fd_unref() should only be called if the fop preparation fails. Otherwise the callback already unreferences the fd. Code flow: * glfs_fsync_async_common() takes an fd ref and calls STACK_WIND pa

Re: [Gluster-devel] Possible problem introduced by http://review.gluster.org/15573

2016-10-21 Thread Soumya Koduri
On 10/21/2016 02:03 PM, Xavier Hernandez wrote: Hi Niels, On 21/10/16 10:03, Niels de Vos wrote: On Fri, Oct 21, 2016 at 09:03:30AM +0200, Xavier Hernandez wrote: Hi, I've just tried Gluster 3.8.5 with Proxmox using gfapi and I consistently see a crash each time an attempt to connect to the

Re: [Gluster-devel] Possible problem introduced by http://review.gluster.org/15573

2016-10-21 Thread Soumya Koduri
On 10/21/2016 06:35 PM, Soumya Koduri wrote: Hi Xavi, On 10/21/2016 12:57 PM, Xavier Hernandez wrote: Looking at the code, I think that the added fd_unref() should only be called if the fop preparation fails. Otherwise the callback already unreferences the fd. Code flow

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 5 minutes)

2016-11-15 Thread Soumya Koduri
Hi all, Apologies for the late notice. This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date:

[Gluster-devel] Minutes from yesterday's Gluster Community Bug Triage meeting (Nov 16 2016)

2016-11-16 Thread Soumya Koduri
Hi, Sorry for the delay. Please find the minutes of yesterday's Gluster Community Bug Triage meeting below. Meeting summary agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (skoduri, 12:00:57) Roll Call (skoduri, 12:01:03) Next week's meeting host (skoduri, 12:03:48)

Re: [Gluster-devel] Consistent time attributes (ctime, atime and mtime) across replica set and distribution set

2017-03-15 Thread Soumya Koduri
Hi Rafi, I haven't thoroughly gone through the design, but have a few comments/queries which I have posted inline for now. On 02/28/2017 01:11 PM, Mohammed Rafi K C wrote: Thanks for the reply, comments are inline. On 02/28/2017 12:50 PM, Niels de Vos wrote: On Tue, Feb 28, 2017 at 11:21:55AM

Re: [Gluster-devel] Consistent time attributes (ctime, atime and mtime) across replica set and distribution set

2017-03-16 Thread Soumya Koduri
On 03/16/2017 02:27 PM, Mohammed Rafi K C wrote: On 03/15/2017 11:31 PM, Soumya Koduri wrote: Hi Rafi, I haven't thoroughly gone through the design, but have a few comments/queries which I have posted inline for now. On 02/28/2017 01:11 PM, Mohammed Rafi K C wrote: Thanks for the

Re: [Gluster-devel] Consistent time attributes (ctime, atime and mtime) across replica set and distribution set

2017-03-20 Thread Soumya Koduri
On 03/20/2017 08:53 AM, Vijay Bellur wrote: On Sun, Mar 19, 2017 at 10:14 AM, Amar Tumballi <atumb...@redhat.com> wrote: On Thu, Mar 16, 2017 at 6:52 AM, Soumya Koduri <skod...@redhat.com> wrote: On 03/16/2017 02:27 PM, Mohammed Ra

Re: [Gluster-devel] What does xdata mean? "gfid-req"?

2017-03-20 Thread Soumya Koduri
On 03/18/2017 06:51 PM, Zhitao Li wrote: Hello, everyone, I am investigating the difference between stat and lookup operations in GlusterFS now. In the translator named "md_cache", the stat operation will generally hit the cache, while the lookup operation will miss it. The reason is that f

[Gluster-devel] Proposal for an extended READDIRPLUS operation via gfAPI

2017-04-21 Thread Soumya Koduri
Hi, We currently have readdirplus operation to fetch stat for each of the dirents. But that may not be sufficient; applications often need extra information. For example, NFS-Ganesha-like applications which operate on handles need to generate a handle for each of the dirents returned

Re: [Gluster-devel] [Gluster-users] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-26 Thread Soumya Koduri
Hi Shyam, On 04/25/2017 07:38 PM, Shyam wrote: On 04/25/2017 07:40 AM, Pranith Kumar Karampuri wrote: On Thu, Apr 13, 2017 at 8:17 PM, Shyam <srang...@redhat.com> wrote: On 02/28/2017 10:17 AM, Shyam wrote: 1) Halo - Initial Cut (@pranith) Sorry for the delay in response. Du

Re: [Gluster-devel] Release 3.12 and 4.0: Thoughts on scope

2017-05-18 Thread Soumya Koduri
On 05/16/2017 02:10 PM, Kaushal M wrote: On 16 May 2017 06:16, "Shyam" <srang...@redhat.com> wrote: Hi, Let's start a bit early on 3.12 and 4.0 roadmap items, as there have been quite a few discussions around this in various meetups. Here is what we are hearing (or hav

Re: [Gluster-devel] Proposed Protocol changes for 4.0: Need feedback.

2017-09-01 Thread Soumya Koduri
On 08/11/2017 06:04 PM, Amar Tumballi wrote: Hi All, Below are the proposed protocol changes (ie, XDR changes on the wire) we are thinking for Gluster 4.0. Poornima and I were discussing if we can include volume uuid as part of Handshake protocol between protocol/client and protocol/server

Re: [Gluster-devel] [Gluster-Maintainers] Release 3.13: (STM release) Details

2017-09-27 Thread Soumya Koduri
Hi Shyam, On 09/11/2017 07:51 PM, Shyam Ranganathan wrote: Hi, The next community release of Gluster is 3.13, a short term maintenance release, slated for release on 30th Nov [1] [2], thus giving a 2-month head room to get the 4.0 work done, while maintaining the cadence of releasi

Re: [Gluster-devel] Release 4.0: Schedule and scope clarity (responses needed)

2017-11-21 Thread Soumya Koduri
Hi Shyam, Now, glusterfs/github [1] reads ~50 issues as being targets in 4.0, and among these about 2-4 are marked closed (or done). Ask 1: Request each of you to go through the issue list and coordinate with a maintainer, to either mark an issue's milestone correctly (i.e. retain it in 4.0 or m

Re: [Gluster-devel] compound fop design first cut

2015-12-08 Thread Soumya Koduri
On 12/09/2015 11:44 AM, Pranith Kumar Karampuri wrote: On 12/09/2015 06:37 AM, Vijay Bellur wrote: On 12/08/2015 03:45 PM, Jeff Darcy wrote: On December 8, 2015 at 12:53:04 PM, Ira Cooper (i...@redhat.com) wrote: Raghavendra Gowdappa writes: I propose that we define a "compound op" that

[Gluster-devel] Storing pNFS related state on GlusterFS

2015-12-09 Thread Soumya Koduri
Hi, pNFS is a feature introduced as part of the NFSv4.1 protocol to allow direct client access to storage devices containing file data (in short, parallel I/O). Clients request the layouts of an entire file or a specific range. On receiving the layout information, they shall directly contact the ser

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 90 minutes)

2015-12-22 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

Re: [Gluster-devel] REMINDER: Gluster Bug Triage timing-poll

2015-12-22 Thread Soumya Koduri
+gluster-users On 12/22/2015 06:03 PM, Hari Gowtham wrote: Hi all, There was a poll conducted to find the timing that suits best for the people who want to participate in the weekly Gluster bug triage meeting. The result of the poll is yet to be announced, but we would like to get more responses.

[Gluster-devel] Minutes of today's Gluster Community Bug Triage meeting (22nd Dec 2015)

2015-12-22 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended the meeting. Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2015-12-22/gluster_bug_triage.2015-12-22-12.00.html Minutes (text): http://meetbot.fedoraproject.org/gl

[Gluster-devel] crash in '_Unwind_Backtrace () from ./lib64/libgcc_s.so.1'

2015-12-22 Thread Soumya Koduri
package on that machine to get full backtrace (as requested in [1]) and update the bug with details. Thanks, Soumya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1293594#c4 On 11/26/2015 03:07 PM, Soumya Koduri wrote: Below are the findings from the core and the logs 1) [2015-11-25 19:06

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS-Ganesha also consumes huge amount of memory while doing ordinary "find . -type f" via NFSv4.2 on remote client. Here is memory usage: === root 5416 34.2 78.5 2

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/25/2015 08:56 PM, Oleksandr Natalenko wrote: What units is Cache_Size measured in? Bytes? It's actually (Cache_Size * sizeof_ptr) bytes. If possible, could you please run the ganesha process under valgrind? It will help in detecting leaks. Thanks, Soumya 25.12.2015 16:58, Soumya Koduri

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-26 Thread Soumya Koduri
. https://gist.github.com/e4602a50d3c98f7a2766 One may see GlusterFS-related leaks here as well. On Friday, 25 December 2015 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-28 Thread Soumya Koduri
- Original Message - > From: "Pranith Kumar Karampuri" > To: "Oleksandr Natalenko" , "Soumya Koduri" > > Cc: gluster-us...@gluster.org, gluster-devel@gluster.org > Sent: Monday, December 28, 2015 9:32:07 AM > Subject: Re: [Gluster-deve

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-31 Thread Soumya Koduri
On 12/28/2015 02:32 PM, Soumya Koduri wrote: - Original Message - From: "Pranith Kumar Karampuri" To: "Oleksandr Natalenko" , "Soumya Koduri" Cc: gluster-us...@gluster.org, gluster-devel@gluster.org Sent: Monday, December 28, 2015 9:32:07 AM Subject

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
page cache is the cause). There are ~1.8M files on this test volume. On Friday, 25 December 2015 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS-Ganesha also consumes huge amount o

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
which I have pasted below apply to gfapi/nfs-ganesha applications. Also, to resolve the nfs-ganesha issue which I had mentioned below (in case the Entries_HWMARK option gets changed), I have posted the fix below - https://review.gerrithub.io/#/c/258687 Thanks, Soumya Ideas? 05.01.2016

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
Tuesday, 5 January 2016 22:52:25 EET Soumya Koduri wrote: On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote: Unfortunately, both patches didn't make any difference for me. I've patched 3.7.6 with both patches, recompiled and installed the patched GlusterFS package on the client side and mounted the volu

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-06 Thread Soumya Koduri
u confirm if you have taken the latest gluster patch set #3? - http://review.gluster.org/#/c/13096/3 If you are hitting the issue even then, please provide the core if possible. Thanks, Soumya 06.01.2016 08:40, Soumya Koduri wrote: On 01/06/2016 03:53 AM, Oleksandr Natalenko wrote: OK, I

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-08 Thread Soumya Koduri
.com/30f0129d16e25d4a5a52 ganesha.conf: https://gist.github.com/9b5e59b8d6d8cb84c85d How I mount NFS share: === mount -t nfs4 127.0.0.1:/mail_boxes /mnt/tmp -o defaults,_netdev,minorversion=2,noac,noacl,lookupcache=none,timeo=100 === On Thursday, 7 January 2016 12:06:42 EET Soumya Koduri wro

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
a On 01/08/2016 05:04 PM, Soumya Koduri wrote: I could reproduce while testing deep directories within the mount point. I root-caused the issue & had a discussion with Pranith to understand the purpose and recommended way of taking nlookup on inodes. I shall make changes to my existing fi

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
oumya 11.01.2016 12:26, Soumya Koduri wrote: I have made changes to fix the lookup leak in a different way (as discussed with Pranith) and uploaded them in the latest patch set #4 - http://review.gluster.org/#/c/13096/ Please check if it resolves the mem leak and hopefully doesn't res

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
ocations are listed as lost. But most of the inodes should have got purged when we drop the vfs cache. Did you drop the vfs cache before exiting the process? I shall add some log statements and check that part. Thanks, Soumya 12.01.2016 08:24, Soumya Koduri wrote: For fuse client, I trie

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
athieu CHATEAU http://www.lotp.fr 2016-01-12 7:24 GMT+01:00 Soumya Koduri <skod...@redhat.com>: On 01/11/2016 05:11 PM, Oleksandr Natalenko wrote: Brief test shows that Ganesha stopped leaking and crashing, so it seems to be good for

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
On 01/13/2016 04:08 PM, Soumya Koduri wrote: On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote: Just in case, here is Valgrind output on FUSE client with 3.7.6 + API-related patches we discussed before: https://gist.github.com/cd6605ca19734c1496a4 Thanks for sharing the results. I made

Re: [Gluster-devel] Core from gNFS process

2016-01-15 Thread Soumya Koduri
On 01/14/2016 08:41 PM, Vijay Bellur wrote: On 01/14/2016 04:11 AM, Jiffin Tony Thottan wrote: On 14/01/16 14:28, Jiffin Tony Thottan wrote: Hi, The core generated when encryption xlator is enabled [2016-01-14 08:13:15.740835] E [crypt.c:4298:master_set_master_vol_key] 0-test1-crypt: FATA

Re: [Gluster-devel] Core from gNFS process

2016-01-15 Thread Soumya Koduri
On 01/15/2016 06:52 PM, Soumya Koduri wrote: On 01/14/2016 08:41 PM, Vijay Bellur wrote: On 01/14/2016 04:11 AM, Jiffin Tony Thottan wrote: On 14/01/16 14:28, Jiffin Tony Thottan wrote: Hi, The core generated when encryption xlator is enabled [2016-01-14 08:13:15.740835] E [crypt.c

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-01-31 Thread Soumya Koduri
ITHLEY (1): fuse: use-after-free fix in fuse-bridge, revisited Pranith Kumar K (1): mount/fuse: Fix use-after-free crash Soumya Koduri (3): gfapi: Fix inode nlookup counts inode: Retire the inodes from the lru list in inode_table_destroy upcall: free the xdr* allocations === With

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
e global/volume-level, client/server/both. Thanks, Soumya 01.02.2016 09:54, Soumya Koduri wrote: On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote: Unfortunately, this patch doesn't help. RAM usage on "find" finish is ~9G. Here is statedump before drop_caches: https://gist.gi

Re: [Gluster-devel] [Gluster-users] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
On 02/01/2016 02:48 PM, Xavier Hernandez wrote: Hi, On 01/02/16 09:54, Soumya Koduri wrote: On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote: Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and glusterfs process CPU usage goes to 100% for a while. I haven't w

Re: [Gluster-devel] Rebalance data migration and corruption

2016-02-08 Thread Soumya Koduri
On 02/08/2016 09:13 AM, Shyam wrote: On 02/06/2016 06:36 PM, Raghavendra Gowdappa wrote: - Original Message - From: "Raghavendra Gowdappa" To: "Sakshi Bansal" , "Susant Palai" Cc: "Gluster Devel" , "Nithya Balachandran" , "Shyamsundar Ranganathan" Sent: Friday, February 5, 2016 4

Re: [Gluster-devel] Rebalance data migration and corruption

2016-02-08 Thread Soumya Koduri
On 02/09/2016 10:27 AM, Raghavendra G wrote: On Mon, Feb 8, 2016 at 4:31 PM, Soumya Koduri <skod...@redhat.com> wrote: On 02/08/2016 09:13 AM, Shyam wrote: On 02/06/2016 06:36 PM, Raghavendra Gowdappa wrote: - Original M

[Gluster-devel] glusterfsd core on NetBSD (https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14139/consoleFull)

2016-02-09 Thread Soumya Koduri
Hi Emmanuel, I see a core generated in this regression run though all the tests seem to have passed. I do not have a netbsd machine to analyze the core. Could you please take a look and let me know what the issue could have been? Thanks, Soumya

Re: [Gluster-devel] Rebalance data migration and corruption

2016-02-09 Thread Soumya Koduri
On 02/09/2016 12:30 PM, Raghavendra G wrote: Right. But if there are simultaneous access to the same file from any other client and rebalance process, delegations shall not be granted or revoked if granted even though they are operating at dif

Re: [Gluster-devel] glusterfsd core on NetBSD (https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14139/consoleFull)

2016-02-10 Thread Soumya Koduri
Thanks Manu. Kotresh, Is this issue related to bug 1221629 as well? Thanks, Soumya On 02/10/2016 02:10 PM, Emmanuel Dreyfus wrote: On Wed, Feb 10, 2016 at 12:17:23PM +0530, Soumya Koduri wrote: I see a core generated in this regression run though all the tests seem to have passed. I do not

Re: [Gluster-devel] [Gluster-users] GlusterFS v3.7.8 client leaks summary — part II

2016-02-11 Thread Soumya Koduri
b2902bba1 [10] https://gist.github.com/385bbb95ca910ec9766f [11] https://gist.github.com/685c4d3e13d31f597722 10.02.2016 15:37, Oleksandr Natalenko wrote: Hi, folks. Here are new test results regarding the client memory leak. I use v3.7.8 with the following patches: === Soumya Koduri (2):

Re: [Gluster-devel] libgfapi libvirt memory leak version 3.7.8

2016-02-11 Thread Soumya Koduri
Hi Piotr, Could you apply the below gfAPI patch and check the valgrind output - http://review.gluster.org/13125 Thanks, Soumya On 02/11/2016 09:40 PM, Piotr Rybicki wrote: Hi All, I have to report that there is a mem leak in the latest version of gluster. gluster: 3.7.8, libvirt: 1.3.1. The mem leak exis

Re: [Gluster-devel] [Gluster-users] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/12/2016 11:27 AM, Soumya Koduri wrote: On 02/11/2016 08:33 PM, Oleksandr Natalenko wrote: And "API" test. I used custom API app [1] and did brief file manipulations through it (create/remove/stat). Then I performed drop_caches, finished API [2] and got the following Valgr

Re: [Gluster-devel] [Gluster-users] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/16/2016 08:06 PM, Oleksandr Natalenko wrote: Hmm, OK. I've rechecked 3.7.8 with the following patches (latest revisions): === Soumya Koduri (3): gfapi: Use inode_forget in case of handle objects inode: Retire the inodes from the lru list in inode_table_destroy

Re: [Gluster-devel] libgfapi 3.7.8 still memory leak

2016-02-17 Thread Soumya Koduri
Hi Piotr, On 02/17/2016 08:20 PM, Piotr Rybicki wrote: Hi all. I'm trying hard to diagnose memory leaks in libgfapi access. gluster 3.7.8 For this purpose, I've created the simplest C code (basically only calling glfs_new() and glfs_fini()): #include int main (int argc, char** argv) {

[Gluster-devel] Regarding default_forget/releasedir/release() fops

2016-02-22 Thread Soumya Koduri
Hi Jeff, Recently while doing some tests (which involved lots of inode_forget()), I have noticed that my log file got flooded with below messages - [2016-02-22 08:57:44.025565] W [defaults.c:2889:default_forget] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x231)[0x7fd00f63c15d] (-

Re: [Gluster-devel] Regarding default_forget/releasedir/release() fops

2016-02-23 Thread Soumya Koduri
On 02/23/2016 05:02 PM, Jeff Darcy wrote: Recently while doing some tests (which involved lots of inode_forget()), I have noticed that my log file got flooded with below messages - [2016-02-22 08:57:44.025565] W [defaults.c:2889:default_forget] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_cal

[Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-02-29 Thread Soumya Koduri
Hi Aravinda/Kotresh, With [1], I consistently see cores generated with the test './tests/geo-rep/georep-basic-dr-tarssh.t' in release-3.7 branch. From the cores, looks like we are trying to dereference a freed changelog_rpc_clnt_t(crpc) object in changelog_rpc_notify(). Strangely this was not

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-03-02 Thread Soumya Koduri
no try for rpc invocation after DISCONNECT. It will be cleaned up otherwise. [1] http://review.gluster.org/#/c/13507/ Thanks and Regards, Kotresh H R - Original Message - > From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-03-03 Thread Soumya Koduri
c_clnt_trigger_destroy'. http://review.gluster.org/#/c/13592/ Thanks and Regards, Kotresh H R - Original Message - From: "Kotresh Hiremath Ravishankar" To: "Soumya Koduri" Cc: "Raghavendra G" , "Gluster Devel" Sent: Thursday, March 3, 2016

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-03-04 Thread Soumya Koduri
Kotresh H R ----- Original Message - > From: "Soumya Koduri" <skod...@redhat.com> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>, "Raghavendra G" <raghaven...@gluster.com> >

Re: [Gluster-devel] Review request for leases patches

2016-03-08 Thread Soumya Koduri
Hi Poornima, On 03/07/2016 11:24 AM, Poornima Gurusiddaiah wrote: Hi All, Here is the link to feature page: http://review.gluster.org/#/c/11980/ Patches can be found @: http://review.gluster.org/#/q/status:

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 2.5 hours)

2016-03-15 Thread Soumya Koduri
Hi, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC (in your

[Gluster-devel] Minutes of today's Gluster Community Bug Triage meeting (Mar 15 2016)

2016-03-15 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended the meeting. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-03-15/gluster_bug_triage.2016-03-15-12.00.html Minutes (text): https://meetbot.fedoraproject.or

Re: [Gluster-devel] Report ESTALE as ENOENT

2016-03-23 Thread Soumya Koduri
shouldn't we fix the server to send ENOENT then? Thanks, Soumya - Original Message - > From: "Raghavendra Gowdappa" > To: "Soumya Koduri" , "Poornima Gurusiddaiah" > , "Raghavendra Talur" > > Cc: "Shyamsundar Ranganathan&qu

Re: [Gluster-devel] Report ESTALE as ENOENT

2016-03-24 Thread Soumya Koduri
<srang...@redhat.com> wrote: On 03/23/2016 12:07 PM, Ravishankar N wrote: On 03/23/2016 09:16 PM, Soumya Koduri wrote: If it occurs only when the file/dir is not actually present at the back-end, shouldn't we fix the server to send ENOENT t

Re: [Gluster-devel] [Gluster-users] Exporting Gluster Volume

2016-05-03 Thread Soumya Koduri
Hi Abhishek, The 'rpcinfo' output below doesn't list the 'nfsacl' protocol. That must be the reason the client is not able to set ACLs. Could you please check the log file '/var/log/glusterfs/nfs.log' for any errors logged with respect to protocol registration failures. Thanks, Soumya On 05/04/2016

Re: [Gluster-devel] [Gluster-users] Exporting Gluster Volume

2016-05-03 Thread Soumya Koduri
n Wed, May 4, 2016 at 11:33 AM, Soumya Koduri <skod...@redhat.com> wrote: Hi Abhishek, Below 'rpcinfo' output doesn't list 'nfsacl' protocol. That must be the reason client is not able set ACLs. Could you please check the log fil

Re: [Gluster-devel] gfapi, readdirplus and forced lookup after inode_link

2016-05-11 Thread Soumya Koduri
On 05/11/2016 06:12 PM, Raghavendra Gowdappa wrote: - Original Message - From: "Raghavendra Gowdappa" To: "Soumya Koduri" Cc: "Gluster Devel" Sent: Wednesday, May 11, 2016 4:28:28 PM Subject: Re: [Gluster-devel] gfapi, readdirplus and f

Re: [Gluster-devel] gfapi, readdirplus and forced lookup after inode_link

2016-05-11 Thread Soumya Koduri
On 05/11/2016 10:17 PM, Soumya Koduri wrote: On 05/11/2016 06:12 PM, Raghavendra Gowdappa wrote: - Original Message - From: "Raghavendra Gowdappa" To: "Soumya Koduri" Cc: "Gluster Devel" Sent: Wednesday, May 11, 2016 4:28:28 PM Subj

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 2.5 hours)

2016-05-17 Thread Soumya Koduri
Hi, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC (in your

[Gluster-devel] Minutes of today's Gluster Community Bug Triage meeting (May 17 2016)

2016-05-17 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended the meeting. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-05-17/gluster_bug_triage.2016-05-17-12.01.html Minutes (text): https://meetbot.fedoraproject.or

Re: [Gluster-devel] [Gluster-users] Minutes of today's Gluster Community Bug Triage meeting (May 17 2016)

2016-05-17 Thread Soumya Koduri
On 05/17/2016 07:09 PM, M S Vishwanath Bhat wrote: On 17 May 2016 at 18:51, Soumya Koduri <skod...@redhat.com> wrote: Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended the meeting.

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-06-28 Thread Soumya Koduri
rpc-clnt. [1] http://review.gluster.org/#/c/13592 [2] http://review.gluster.org/#/c/1359 regards, Raghavendra. Thanks and Regards, Kotresh H R - Original Message - From: "Soumya Koduri" To: "Kotresh Hiremath Ravishankar" , "Raghavendra G" Cc: "Gl

[Gluster-devel] **Reminder** Triaging and Updating Bug status

2016-06-28 Thread Soumya Koduri
Hi, We have noticed that many of the bugs (esp., in the recent past, the ones filed against the 'tests' component) which are being actively worked upon do not have either the 'Triaged' keyword set or the bug status(/assignee) updated appropriately. Sometimes even many of the active community members fail

Re: [Gluster-devel] [NFS-ganesha] unlink file remains in ./glusterfs/unlinks after delete file

2016-06-30 Thread Soumya Koduri
Hi, On 06/30/2016 11:56 AM, 梁正和 wrote: Hi, I'm trying to export a gluster volume via nfs-ganesha. After create --> some I/O --> delete of the file from the nfs mount point, the file has been moved to ./glusterfs/unlinkls. There was an fd leak when a file is created using gfapi handleops (which NFS-Ganesh

Re: [Gluster-devel] [NFS-ganesha] unlink file remains in ./glusterfs/unlinks after delete file

2016-06-30 Thread Soumya Koduri
FYI - "http://review.gluster.org/#/c/14840" contains the fix for the 3.7 branch. Thanks, Soumya On 07/01/2016 11:38 AM, Soumya Koduri wrote: Hi, On 06/30/2016 11:56 AM, 梁正和 wrote: Hi, I'm trying to export gluster-volume by nfs-ganesha. After create --> Some I/O --> delet

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 30 minutes)

2016-07-12 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

[Gluster-devel] Minutes from today's Gluster Community Bug Triage meeting (July 12 2016)

2016-07-12 Thread Soumya Koduri
Hi, Thanks to everyone who joined the meeting. Please find the minutes of today's Gluster Community Bug Triage meeting at the below links. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-07-12/gluster_bug_triage.2016-07-12-12.00.html Minutes (text): https://meetbot.fedoraproj

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-19 Thread Soumya Koduri
On 07/20/2016 11:55 AM, Ravishankar N wrote: On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote: Hi, Here is the patch for br-stub.t failures. http://review.gluster.org/14960 Thanks Soumya for root causing this. Thanks and Regards, Kotresh H R arbiter-mount.t has failed despite ha

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Soumya Koduri
On 07/20/2016 12:00 PM, Soumya Koduri wrote: On 07/20/2016 11:55 AM, Ravishankar N wrote: On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote: Hi, Here is the patch for br-stub.t failures. http://review.gluster.org/14960 Thanks Soumya for root causing this. Thanks and Regards

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Soumya Koduri
On 07/20/2016 12:41 PM, Soumya Koduri wrote: On 07/20/2016 12:00 PM, Soumya Koduri wrote: On 07/20/2016 11:55 AM, Ravishankar N wrote: On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote: Hi, Here is the patch for br-stub.t failures. http://review.gluster.org/14960 Thanks Soumya

[Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-07-22 Thread Soumya Koduri
Hi, In certain scenarios (esp.,in highly available environments), the application may have to fail-over/connect to a different glusterFS client while the I/O is happening. In such cases until there is a ping timer expiry and glusterFS server cleans up the locks held by the older glusterFS cli

Re: [Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-07-26 Thread Soumya Koduri
Hi Vijay, On 07/26/2016 12:13 AM, Vijay Bellur wrote: On 07/22/2016 08:44 AM, Soumya Koduri wrote: Hi, In certain scenarios (esp.,in highly available environments), the application may have to fail-over/connect to a different glusterFS client while the I/O is happening. In such cases until

Re: [Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-07-28 Thread Soumya Koduri
On 07/27/2016 02:38 AM, Vijay Bellur wrote: On 07/26/2016 05:56 AM, Soumya Koduri wrote: Hi Vijay, On 07/26/2016 12:13 AM, Vijay Bellur wrote: On 07/22/2016 08:44 AM, Soumya Koduri wrote: Hi, In certain scenarios (esp.,in highly available environments), the application may have to fail

[Gluster-devel] Possible spurious failure: tests/bugs/replicate/bug-1297695.t on 3.8

2016-07-31 Thread Soumya Koduri
Kindly take a look - https://build.gluster.org/job/rackspace-regression-2GB-triggered/22610/consoleFull Thanks, Soumya

Re: [Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-08-10 Thread Soumya Koduri
r) on the server-side. I have updated the feature-spec[1] with the details. Comments are welcome. Thanks, Soumya [1] http://review.gluster.org/#/c/15053/3/under_review/reclaim-locks.md On 07/28/2016 07:29 PM, Soumya Koduri wrote: On 07/27/2016 02:38 AM, Vijay Bellur wrote: On 07/26/2016 05:

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-28 Thread Soumya Koduri
Niels, Csaba 4) SELinux on gluster volumes: Feature owners: Niels, Manikandan 5) Native sub-directory mounts: Feature owners: Kaushal, Pranith 6) RichACL support for GlusterFS: Feature owners: Rajesh Joseph 7) Sharemodes/Share reservations: Feature owners: Raghavendra Talur, Poornima G, Soumya K

Re: [Gluster-devel] Checklist for ganesha FSAL plugin integration testing for 3.9

2016-09-06 Thread Soumya Koduri
CCing gluster-devel & users ML. Somehow they got missed in my earlier reply. Thanks, Soumya On 09/06/2016 12:19 PM, Soumya Koduri wrote: On 09/03/2016 12:44 AM, Pranith Kumar Karampuri wrote: hi, Did you get a chance to decide on the nfs-ganesha integrations tests that need to be

Re: [Gluster-devel] Gluster Developer Summit 2016 Talk Schedule

2016-09-15 Thread Soumya Koduri
Hi Amye, Is there any plan to record these talks? Thanks, Soumya On 09/15/2016 03:09 AM, Amye Scavarda wrote: Thanks to all that submitted talks, and thanks to the program committee who helped select this year's content. This will be posted on the main Summit page as well: gluster.org/events/

Re: [Gluster-devel] Gluster Developer Summit 2016 Talk Schedule

2016-09-15 Thread Soumya Koduri
On 09/16/2016 03:48 AM, Amye Scavarda wrote: On Thu, Sep 15, 2016 at 8:26 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote: On Thu, Sep 15, 2016 at 2:37 PM, Soumya Koduri <skod...@redhat.com> wrote: Hi Amye, Is there any plan to record

Re: [Gluster-devel] Upcall details for NLINK

2016-09-19 Thread Soumya Koduri
On 09/19/2016 10:08 AM, Niels de Vos wrote: Duh, and now with the attachment. I'm going to get some coffee now. On Mon, Sep 19, 2016 at 06:22:58AM +0200, Niels de Vos wrote: Hey Soumya, do we have a description of the different actions that we expect/advise users of upcall to take? I'm look

Re: [Gluster-devel] Fixing setfsuid/gid problems in posix xlator

2016-09-23 Thread Soumya Koduri
On 09/23/2016 08:28 AM, Pranith Kumar Karampuri wrote: hi, Jiffin found an interesting problem in posix xlator where we have never been using setfsuid/gid (http://review.gluster.org/#/c/15545/). What I am seeing after this change is a regression: if the files are created by a non-root user then the

Re: [Gluster-devel] review request - Change the way client uuid is built

2016-09-23 Thread Soumya Koduri
On 09/23/2016 11:48 AM, Poornima Gurusiddaiah wrote: - Original Message - From: "Niels de Vos" To: "Raghavendra Gowdappa" Cc: "Gluster Devel" Sent: Wednesday, September 21, 2016 3:52:39 AM Subject: Re: [Gluster-devel] review request - Change the way client uuid is built On Wed,

Re: [Gluster-devel] [Gluster-users] GlusterFs upstream bugzilla components Fine graining

2016-09-27 Thread Soumya Koduri
Hi, On 09/28/2016 11:24 AM, Muthu Vigneshwaran wrote: > +- Component GlusterFS > | > | > | +Subcomponent nfs Maybe it's time to change it to 'gluster-NFS/native NFS'. Niels/Kaleb? +- Component gdeploy | | | +Subcomponent samba | +Subcomponent hyperconvergence | +Subcomponent RHSC 2.0

Re: [Gluster-devel] Dht readdir filtering out names

2016-09-30 Thread Soumya Koduri
underlying xlators? Thanks, Soumya Regards, Poornima From: "Pranith Kumar Karampuri" To: "Raghavendra Gowdappa" , "Poornima Gurusiddaiah" , "Raghavendra Talur" , "Soumya Kodur

Re: [Gluster-devel] Dht readdir filtering out names

2016-09-30 Thread Soumya Koduri
On 09/30/2016 10:08 AM, Pranith Kumar Karampuri wrote: Does samba/gfapi/nfs-ganesha have options to disable readdirp? AFAIK, currently there is no option to disable/enable readdirp in gfapi & nfs-ganesha (not sure about samba). But it looks like nfs-ganesha seems to always use readdir, whi

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 30 minutes)

2016-10-04 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

[Gluster-devel] Minutes from today's Gluster Community Bug Triage meeting (Oct 4 2016)

2016-10-04 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting at the links posted below. We had very few participants today as many are traveling. Thanks to hgowtham and ankitraj for joining. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-10-04/gluster_bug_tria

[Gluster-devel] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-05 Thread Soumya Koduri
Hi, With http://review.gluster.org/#/c/15051/, performance/client-io-threads is enabled by default. But with that we see a regression caused to the nfs-ganesha application when trying to un/re-export any glusterfs volume. This shall be the same case with any gfapi application using glfs_fini(). More det
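Until the regression is resolved, affected gfapi deployments can fall back to the previous behaviour by disabling the option per volume. This is a workaround sketch, not the fix discussed in the thread, and the volume name `myvol` is a placeholder:

```shell
# Disable client-side io-threads for one volume (placeholder name "myvol")
gluster volume set myvol performance.client-io-threads off

# Verify the effective value
gluster volume get myvol performance.client-io-threads
```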

Re: [Gluster-devel] Regression caused to gfapi applications with enabling client-io-threads by default

2016-10-05 Thread Soumya Koduri
On 10/05/2016 07:32 PM, Pranith Kumar Karampuri wrote: On Wed, Oct 5, 2016 at 2:00 PM, Soumya Koduri <skod...@redhat.com> wrote: Hi, With http://review.gluster.org/#/c/15051/, performance/client-io-threads is enable

Re: [Gluster-devel] glfs_resolve new file force lookup

2014-11-23 Thread Soumya Koduri
Hi Siva, On 11/22/2014 08:44 AM, Rudra Siva wrote: Thanks for the response. In my case, I am trying to avoid doing the network level lookup - since I use the same resolve only pass a null for the attribute structure - essentially in my case, it is an atomic multiple object read/write so I only w

[Gluster-devel] Upcalls Infrastructure

2014-12-11 Thread Soumya Koduri
Hi, This framework has been designed to maintain state in the glusterfsd process for each of the files being accessed (including info on the clients accessing those files) and to send notifications to the respective glusterfs clients in case of any change in that state. A few of the use-cases (currentl

Re: [Gluster-devel] Upcalls Infrastructure

2014-12-14 Thread Soumya Koduri
et/set for notification requests). Right. I shall add this in the feature page. Thanks for bringing this up. -Soumya Shyam On 12/12/2014 02:17 AM, Soumya Koduri wrote: Hi, This framework has been designed to maintain state in the glusterfsd process, for each the files being accessed (including t
