[Gluster-devel] Change in itransform/deitransform logic - dht/afr

2014-06-30 Thread Soumya Koduri
Hi, With Change-Id:Ieba9a7071829d51860b7c131982f12e0136b9855, dht itransform/deitransform was improved to encode the 64-bit brick offset along with the brick-id in the global d_off. More details regarding this change are at http://review.gluster.org/#/c/4711/ But now with afr using the same

Re: [Gluster-devel] Duplicate entries and other weirdness in a 3*4 volume

2014-07-21 Thread Soumya Koduri
On 07/21/2014 07:33 PM, Anders Blomdell wrote: On 2014-07-21 14:36, Soumya Koduri wrote: On 07/21/2014 05:35 PM, Xavier Hernandez wrote: On Monday 21 July 2014 13:53:19 Anders Blomdell wrote: On 2014-07-21 13:49, Pranith Kumar Karampuri wrote: On 07/21/2014 05:17 PM, Anders Blomdell

Re: [Gluster-devel] Re: glfs_creat this method hang up

2014-08-28 Thread Soumya Koduri
-- *From: * Soumya Koduri;skod...@redhat.com; *Date: * Wed, Aug 27, 2014 07:42 PM *To: * Pranith Kumar Karampuripkara...@redhat.com; ABC-new360532...@qq.com; *Cc: * Gluster Develgluster-devel@gluster.org; *Subject: * Re: [Gluster-devel] glfs_creat this method hang up Could you please share

Re: [Gluster-devel] Re: Re: glfs_creat this method hang up

2014-08-28 Thread Soumya Koduri
*replicate*.node-uuid=c445c335-1d7e-4753-bd13-a83c4877083a root 16907 17242 0 16:31 pts/0 00:00:00 grep glusterfs Thank you, Lixiaopo -- Original Message -- *From:* Soumya Koduri;skod...@redhat.com; *Sent:* Thursday, Aug 28, 2014, 4:11 PM *To:

Re: [Gluster-devel] GlusterFS with NFS-Ganesha Compilation

2014-09-04 Thread Soumya Koduri
for SDFS. If it is fine then how can i start. Thank you Very much for your response. On Wed, Sep 3, 2014 at 12:58 AM, Soumya Koduri skod...@redhat.com mailto:skod...@redhat.com wrote: Hi Samuthira, I have documented on how NFS-Ganesha is integrated with glusterfs in the below blog post

Re: [Gluster-devel] glfs_resolve new file force lookup

2014-11-23 Thread Soumya Koduri
Hi Siva, On 11/22/2014 08:44 AM, Rudra Siva wrote: Thanks for the response. In my case, I am trying to avoid doing the network level lookup - since I use the same resolve only pass a null for the attribute structure - essentially in my case, it is an atomic multiple object read/write so I only

[Gluster-devel] Upcalls Infrastructure

2014-12-11 Thread Soumya Koduri
Hi, This framework has been designed to maintain state in the glusterfsd process for each of the files being accessed (including info on the clients accessing those files) and to send notifications to the respective glusterfs clients in case of any change in that state. A few of the use-cases

Re: [Gluster-devel] Upcalls Infrastructure

2014-12-14 Thread Soumya Koduri
for notification requests). Right. I shall add this in the feature page. Thanks for bringing this up. -Soumya Shyam On 12/12/2014 02:17 AM, Soumya Koduri wrote: Hi, This framework has been designed to maintain state in the glusterfsd process, for each of the files being accessed (including the clients

Re: [Gluster-devel] Portable filesystem ACLs?

2014-12-14 Thread Soumya Koduri
On 12/12/2014 11:06 PM, Niels de Vos wrote: Hi, I started to look into getting some form of support for ACLs in gfapi. After a short discussion with Shyam, some investigation showed that our current implementation of ACLs is not very portable. There definitely seem to be issues with ACLs

Re: [Gluster-devel] Upcalls Infrastructure

2014-12-14 Thread Soumya Koduri
On 12/15/2014 09:01 AM, Krishnan Parthasarathi wrote: Here are a few questions that I had after reading the feature page. - Is there a new connection from glusterfsd (upcall xlator) to a client accessing a file? If so, how does the upcall xlator reuse connections when the same client

Re: [Gluster-devel] [Gluster-users] Input/Output Error on Gluster NFS

2015-02-05 Thread Soumya Koduri
volume info' only if we explicitly modify its value to ON/OFF. Can you please verify if the filesystem where your Gluster bricks have been created has been mounted with ACLs enabled. Thanks, Soumya Thanks -Peter From: Soumya Koduri [skod...@redhat.com

Re: [Gluster-devel] [Gluster-user] Sybase backup server failed to write to Gluster NFS

2015-01-21 Thread Soumya Koduri
Hi Peter, Can you please try manually mounting those volumes using any other NFS client and check if you are able to perform write operations. Also please collect the gluster NFS log while doing so. Thanks, Soumya On 01/22/2015 08:18 AM, Peter Auyeung wrote: Hi, We have been having 5

Re: [Gluster-devel] Upcalls Infrastructure

2015-01-22 Thread Soumya Koduri
Hi, I have updated the feature page with more design details and the dependencies/limitations this support has. http://www.gluster.org/community/documentation/index.php/Features/Upcall-infrastructure#Dependencies Kindly check the same and provide your inputs. Few of them which may be

Re: [Gluster-devel] Upcalls Infrastructure

2015-02-18 Thread Soumya Koduri
. * store Upcall entries in inode/fd_ctxt for faster lookup. Thanks, Soumya On 01/22/2015 02:31 PM, Soumya Koduri wrote: Hi, I have updated the feature page with more design details and the dependencies/limitations this support has. http://www.gluster.org/community/documentation/index.php

Re: [Gluster-devel] Upcalls Infrastructure

2015-02-22 Thread Soumya Koduri
will be submitted in the new patches after addressing the proposed changes discussed in the earlier mail. Thanks, Soumya On 02/19/2015 12:30 PM, Soumya Koduri wrote: Hi, We recently have uncovered few issues with respect to lease_locks support and had discussions around the same. Thanks to everyone

[Gluster-devel] Reg Upcall Cache Invalidation: asynchronous notification + Cleanup of expired states

2015-04-23 Thread Soumya Koduri
Hi Shyam/Niels, To re-iterate the issues: a) at present, when two clients access the same file, we send a 'cache_invalidation' upcall notification to the first client in the fop cbk path of the second client. This may affect brick latency, especially for directories (where there are more chances of

Re: [Gluster-devel] Upcall state + Data Tiering

2015-04-20 Thread Soumya Koduri
like to leverage the common solution infra for passing this extra metadata to destination. - Original Message - From: Dan Lambright dlamb...@redhat.com To: Niels de Vos nde...@redhat.com Cc: Joseph Fernandes josfe...@redhat.com, gluster Devel gluster-devel@gluster.org, Soumya Koduri skod

Re: [Gluster-devel] REMINDER: NFS-Ganesha features demo on Google Hangout in ~1 hour from now

2015-04-30 Thread Soumya Koduri
We are sorry for the inconvenience caused during the hangout session. There is a network outage at our place. We shall do the recording again and share the link sometime next week. Thanks, Soumya On 04/30/2015 06:08 PM, Niels de Vos wrote: On Wed, Apr 29, 2015 at 11:20:20AM -0400, Meghana

Re: [Gluster-devel] spurious regression failures for ./tests/basic/fops-sanity.t

2015-05-05 Thread Soumya Koduri
I consistently see this failure for one of my patches - http://review.gluster.org/#/c/10568/ - http://build.gluster.org/job/rackspace-regression-2GB-triggered/8483/consoleFull This test passed when I ran it on my workspace. Thanks, Soumya On 05/02/2015 08:00 AM, Pranith Kumar Karampuri wrote:

Re: [Gluster-devel] Upcalls Infrastructure

2015-04-16 Thread Soumya Koduri
of you are interested to work on any of those items. Would be happy to assist you :) Thanks, Soumya On 01/22/2015 02:31 PM, Soumya Koduri wrote: Hi, I have updated the feature page with more design details and the dependencies/limitations this support has. http://www.gluster.org/community

[Gluster-devel] Lease-lock Design notes

2015-04-16 Thread Soumya Koduri
Hi, Below link contains the lease-lock design notes (as per the latest discussion we had). Thanks to everyone involved (CC'ed). http://www.gluster.org/community/documentation/index.php/Features/Upcall-infrastructure#delegations.2Flease-locks Kindly go through the same and provide us your

[Gluster-devel] Upcall state + Data Tiering

2015-04-16 Thread Soumya Koduri
Hi Dan/Joseph, As part of upcall support on the server-side, we maintain certain state to notify clients of the cache-invalidation and recall-leaselk events. We have certain known limitations with Rebalance and Self-Heal. Details in the below link -

Re: [Gluster-devel] libgfapi usage issues: overwrites 'THIS' and use after free

2015-04-18 Thread Soumya Koduri
On 04/17/2015 04:51 PM, Raghavendra Talur wrote: On Friday 17 April 2015 03:53 PM, Poornima Gurusiddaiah wrote: Hi, There are two concerns in the usage of libgfapi which have been present from day one, but now with new users of libgfapi it's a necessity to fix these: 1. When libgfapi is used

Re: [Gluster-devel] GF_FOP_IPC changes

2015-06-25 Thread Soumya Koduri
On 06/25/2015 09:00 AM, Pranith Kumar Karampuri wrote: On 06/25/2015 02:49 AM, Jeff Darcy wrote: It knows which bricks are up/down. But they may not be the latest. Will that matter? AFAIK it's sufficient at this point to know which are up/down. In that case, we need two functions which

Re: [Gluster-devel] GF_FOP_IPC changes

2015-06-24 Thread Soumya Koduri
On 06/24/2015 10:14 AM, Krishnan Parthasarathi wrote: - Original Message - I've been looking at the recent patches to redirect GF_FOP_IPC to an active subvolume instead of always to the first. Specifically, these: http://review.gluster.org/11346 for DHT

Re: [Gluster-devel] Future of access-control translator ?

2015-06-10 Thread Soumya Koduri
On 06/11/2015 04:29 AM, Niels de Vos wrote: On Wed, Jun 10, 2015 at 11:42:27PM +0530, Jiffin Tony Thottan wrote: Hi, In the current implementation of the access-control translator, it takes care of the following: a.) conversion of the acl xattr to the gluster-supported posix-acl format (at the backend

[Gluster-devel] Reserving slave[0-1].cloud.gluster.org machine

2015-06-15 Thread Soumya Koduri
Hi, I would like to reserve one of the below machines to run a regression test for debugging. Please let me know if any of you are using them currently. Thanks, Soumya On 05/21/2015 10:46 PM, Justin Clift wrote: There are two extra CentOS 6 VMs online for debugging stuff with, but they're

Re: [Gluster-devel] Reserving slave[0-1].cloud.gluster.org machine

2015-06-15 Thread Soumya Koduri
Thanks to Ravishankar. We could reproduce the issue on our test machines. I shall no longer need these machines. Thanks, Soumya On 06/15/2015 11:55 AM, Soumya Koduri wrote: Hi, I would like to reserve one of the below machines to run a regression test for debugging. Please let me know if any

Re: [Gluster-devel] GF_FOP_IPC changes

2015-06-29 Thread Soumya Koduri
On 06/29/2015 08:18 PM, Niels de Vos wrote: On Wed, Jun 24, 2015 at 07:44:13PM +0530, Soumya Koduri wrote: On 06/24/2015 10:14 AM, Krishnan Parthasarathi wrote: - Original Message - I've been looking at the recent patches to redirect GF_FOP_IPC to an active subvolume instead

Re: [Gluster-devel] glusterfs+nfs-ganesha+windows2008 issues

2015-05-26 Thread Soumya Koduri
Hi Louis, AFAIK, we never tested nfs-ganesha+glusterfs using a Windows client. Maybe it would be good if you can collect and provide packet traces/cores or logs (with nfs-ganesha at least in NIV_DEBUG level) on the server side while you run these tests to debug further. Thanks, Soumya On

Re: [Gluster-devel] Implementing Flat Hierarchy for trashed files

2015-08-17 Thread Soumya Koduri
This approach sounds good. Few inputs/queries inline. On 08/17/2015 06:20 PM, Anoop C S wrote: Hi all, As we move forward, in order to fix the limitations with current trash translator we are planning to replace the existing criteria for trashed files inside trash directory with a general

Re: [Gluster-devel] Lease-lock Design notes

2015-08-24 Thread Soumya Koduri
healing etc. Request your inputs/comments. Thanks, Soumya Poornima On 07/22/2015 09:22 PM, Soumya Koduri wrote: On 07/22/2015 06:33 PM, Shyam wrote: Thanks for the responses. some comments inline Who is doing/attempting client side caching improvements for Gluster 4.0 (or before

Re: [Gluster-devel] Lease-lock Design notes

2015-07-22 Thread Soumya Koduri
on it right now. But once we have leases.md (capturing the latest design changes) ready, we planned to take inputs from him and the community. On 07/21/2015 09:20 AM, Soumya Koduri wrote: On 07/21/2015 02:49 PM, Poornima Gurusiddaiah wrote: Hi Shyam, Please find my reply inline. Regards, Poornima

Re: [Gluster-devel] Lease-lock Design notes

2015-07-21 Thread Soumya Koduri
the motivation behind the current manner of implementing leases. Yes sure, we are updating the leases.md, will send it at the earliest. Thanks, Shyam On 04/16/2015 07:37 AM, Soumya Koduri wrote: Hi, Below link contains the lease-lock design notes (as per the latest discussion we had). Thanks

[Gluster-devel] Contributing to GlusterFS: Fixing Coverity defects

2015-09-08 Thread Soumya Koduri
Hi, If you would like to contribute to GlusterFS, one of the easiest ways to get started and analyze the code is by fixing defects reported by the Coverity Scan tool. The detailed process is mentioned in [1]. To summarize, * Sign up as a member of https://scan.coverity.com/projects/987 *

Re: [Gluster-devel] Syncop usage issue

2015-09-14 Thread Soumya Koduri
On 09/14/2015 09:47 PM, Sathyendra Prabhu wrote: Hello developers, I am a glusterfs learning programmer. I am implementing a few translators. Is it possible to use 'syncop's on the server side (than using stack wind /unwinds)...?? I didn't get any help content on the topic. Please someone

[Gluster-devel] Storing pNFS related state on GlusterFS

2015-12-09 Thread Soumya Koduri
Hi, pNFS is a feature introduced as part of the NFSv4.1 protocol to allow direct client access to storage devices containing file data (in short, parallel I/O). Clients request the layout of an entire file or of a specific range. On receiving the layout information, they shall directly contact the

Re: [Gluster-devel] compound fop design first cut

2015-12-08 Thread Soumya Koduri
On 12/09/2015 11:44 AM, Pranith Kumar Karampuri wrote: On 12/09/2015 06:37 AM, Vijay Bellur wrote: On 12/08/2015 03:45 PM, Jeff Darcy wrote: On December 8, 2015 at 12:53:04 PM, Ira Cooper (i...@redhat.com) wrote: Raghavendra Gowdappa writes: I propose that we define a "compound op"

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
re ~1.8M files on this test volume. On Friday, 25 December 2015, 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be a GlusterFS API library memory leak because NFS-Ganesha also consumes a huge amount of memory while doing ordinary "

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
ave pasted below apply to gfapi/nfs-ganesha applications. Also, to resolve the nfs-ganesha issue which I had mentioned below (in case if Entries_HWMARK option gets changed), I have posted below fix - https://review.gerrithub.io/#/c/258687 Thanks, Soumya Ideas? 05.01.2016 12:31, Sou

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-05 Thread Soumya Koduri
2016, 22:52:25 EET Soumya Koduri wrote: On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote: Unfortunately, both patches didn't make any difference for me. I've patched 3.7.6 with both patches, recompiled and installed patched GlusterFS package on client side and mounted volume with ~2M of files

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-06 Thread Soumya Koduri
u have taken the latest gluster patch set #3? - http://review.gluster.org/#/c/13096/3 If you are hitting the issue even then, please provide the core if possible. Thanks, Soumya 06.01.2016 08:40, Soumya Koduri wrote: On 01/06/2016 03:53 AM, Oleksandr Natalenko wrote: OK, I've repeated th

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-08 Thread Soumya Koduri
e25d4a5a52 ganesha.conf: https://gist.github.com/9b5e59b8d6d8cb84c85d How I mount NFS share: === mount -t nfs4 127.0.0.1:/mail_boxes /mnt/tmp -o defaults,_netdev,minorversion=2,noac,noacl,lookupcache=none,timeo=100 === On Thursday, 7 January 2016, 12:06:42 EET Soumya Koduri wrote: Entries_HW

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/25/2015 08:56 PM, Oleksandr Natalenko wrote: What units is Cache_Size measured in? Bytes? It's actually (Cache_Size * sizeof_ptr) bytes. If possible, could you please run the ganesha process under valgrind? It will help in detecting leaks. Thanks, Soumya 25.12.2015 16:58, Soumya Koduri
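For reference, the cache knobs discussed in this thread live in NFS-Ganesha's configuration file; a hypothetical fragment follows (the block and parameter names vary across Ganesha versions, so treat this strictly as a sketch):

```
# Hypothetical nfs-ganesha config fragment; block/parameter names
# vary by Ganesha version.
CACHEINODE {
    # High-water mark for cached inode entries, as discussed here
    Entries_HWMark = 100000;
}
```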

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-25 Thread Soumya Koduri
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS-Ganesha also consumes huge amount of memory while doing ordinary "find . -type f" via NFSv4.2 on remote client. Here is memory usage: === root 5416 34.2 78.5

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-26 Thread Soumya Koduri
. https://gist.github.com/e4602a50d3c98f7a2766 One may see GlusterFS-related leaks here as well. On Friday, 25 December 2015, 20:28:13 EET Soumya Koduri wrote: On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote: Another addition: it seems to be GlusterFS API library memory leak because NFS

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2015-12-28 Thread Soumya Koduri
- Original Message - > From: "Pranith Kumar Karampuri" <pkara...@redhat.com> > To: "Oleksandr Natalenko" <oleksa...@natalenko.name>, "Soumya Koduri" > <skod...@redhat.com> > Cc: gluster-us...@gluster.org, gluster-devel@gluste

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 90 minutes)

2015-12-22 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone that is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

Re: [Gluster-devel] REMINDER: Gluster Bug Triage timing-poll

2015-12-22 Thread Soumya Koduri
+gluster-users On 12/22/2015 06:03 PM, Hari Gowtham wrote: Hi all, There was a poll conducted to find the timing that suits best for the people who want to participate in the weekly Gluster bug triage meeting. The result for the poll is yet to be announced but we would like to get more

[Gluster-devel] Minutes of today's Gluster Community Bug Triage meeting (22nd Dec 2015)

2015-12-22 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended the meeting. Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2015-12-22/gluster_bug_triage.2015-12-22-12.00.html Minutes (text):

[Gluster-devel] crash in '_Unwind_Backtrace () from ./lib64/libgcc_s.so.1'

2015-12-22 Thread Soumya Koduri
package on that machine to get full backtrace (as requested in [1]) and update the bug with details. Thanks, Soumya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1293594#c4 On 11/26/2015 03:07 PM, Soumya Koduri wrote: Below are the findings from the core and the logs 1) [2015-11-25 19:06

Re: [Gluster-devel] Upstream regression crash : https://build.gluster.org/job/rackspace-regression-2GB-triggered/16191/consoleFull

2015-11-26 Thread Soumya Koduri
Below are the findings from the core and the logs 1) [2015-11-25 19:06:41.592905] E [crypt.c:4298:master_set_master_vol_key] 0-patchy-crypt: FATAL: missing master key xlator_init() of crypt xlator fails, which I assume gets loaded when features.encryption is on (which the below mentioned .t

Re: [Gluster-devel] Caching support in glusterfs

2015-11-26 Thread Soumya Koduri
On 11/26/2015 03:35 PM, Avik Sil wrote: On Tuesday 24 November 2015 09:58 PM, Vijay Bellur wrote: - Original Message - From: "Avik Sil" To: gluster-devel@gluster.org Sent: Tuesday, November 24, 2015 6:47:44 AM Subject: [Gluster-devel] Caching support in

Re: [Gluster-devel] Logging improvements needed in nfs-ganesha

2015-11-24 Thread Soumya Koduri
Hi Sac, While we understand the intent of this mail, please note that most of the operations performed by the ganesha-related CLI are executed by the runner threads. AFAIK, apart from the return status, we cannot read any error messages from these threads (requesting the glusterd team to confirm that).

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
On 01/08/2016 05:04 PM, Soumya Koduri wrote: I could reproduce it while testing deep directories within the mount point. I root-caused the issue and had a discussion with Pranith to understand the purpose and recommended way of taking nlookup on inodes. I shall make changes to my existing fix and

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
On 01/13/2016 04:08 PM, Soumya Koduri wrote: On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote: Just in case, here is Valgrind output on FUSE client with 3.7.6 + API-related patches we discussed before: https://gist.github.com/cd6605ca19734c1496a4 Thanks for sharing the results. I made

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-13 Thread Soumya Koduri
Regards, Mathieu CHATEAU http://www.lotp.fr 2016-01-12 7:24 GMT+01:00 Soumya Koduri <skod...@redhat.com <mailto:skod...@redhat.com>>: On 01/11/2016 05:11 PM, Oleksandr Natalenko wrote: Brief test shows that Ganesha stopped leaking a

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-11 Thread Soumya Koduri
11.01.2016 12:26, Soumya Koduri wrote: I have made changes to fix the lookup leak in a different way (as discussed with Pranith) and uploaded them in the latest patch set #4 - http://review.gluster.org/#/c/13096/ Please check if it resolves the mem leak and hopefully doesn't result in any

Re: [Gluster-devel] Core from gNFS process

2016-01-15 Thread Soumya Koduri
On 01/14/2016 08:41 PM, Vijay Bellur wrote: On 01/14/2016 04:11 AM, Jiffin Tony Thottan wrote: On 14/01/16 14:28, Jiffin Tony Thottan wrote: Hi, The core generated when encryption xlator is enabled [2016-01-14 08:13:15.740835] E [crypt.c:4298:master_set_master_vol_key] 0-test1-crypt:

Re: [Gluster-devel] Core from gNFS process

2016-01-15 Thread Soumya Koduri
On 01/15/2016 06:52 PM, Soumya Koduri wrote: On 01/14/2016 08:41 PM, Vijay Bellur wrote: On 01/14/2016 04:11 AM, Jiffin Tony Thottan wrote: On 14/01/16 14:28, Jiffin Tony Thottan wrote: Hi, The core generated when encryption xlator is enabled [2016-01-14 08:13:15.740835] E [crypt.c

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-06-28 Thread Soumya Koduri
ccept it. 3. Kotresh will work on new changes to make sure changelog makes correct use of rpc-clnt. [1] http://review.gluster.org/#/c/13592 [2] http://review.gluster.org/#/c/1359 regards, Raghavendra. Thanks and Regards, Kotresh H R - Original Message - From: "Soumya Koduri" &l

Re: [Gluster-devel] Rebalance data migration and corruption

2016-02-08 Thread Soumya Koduri
On 02/08/2016 09:13 AM, Shyam wrote: On 02/06/2016 06:36 PM, Raghavendra Gowdappa wrote: - Original Message - From: "Raghavendra Gowdappa" To: "Sakshi Bansal" , "Susant Palai" Cc: "Gluster Devel"

Re: [Gluster-devel] Rebalance data migration and corruption

2016-02-08 Thread Soumya Koduri
On 02/09/2016 10:27 AM, Raghavendra G wrote: On Mon, Feb 8, 2016 at 4:31 PM, Soumya Koduri <skod...@redhat.com <mailto:skod...@redhat.com>> wrote: On 02/08/2016 09:13 AM, Shyam wrote: On 02/06/2016 06:36 PM, Raghavendra Gowdappa wrote: - Orig

Re: [Gluster-devel] libgfapi libvirt memory leak version 3.7.8

2016-02-11 Thread Soumya Koduri
Hi Piotr, Could you apply below gfAPI patch and check the valgrind output - http://review.gluster.org/13125 Thanks, Soumya On 02/11/2016 09:40 PM, Piotr Rybicki wrote: Hi All I have to report, that there is a mem leak latest version of gluster gluster: 3.7.8 libvirt 1.3.1 mem leak

Re: [Gluster-devel] [Gluster-users] GlusterFS v3.7.8 client leaks summary — part II

2016-02-11 Thread Soumya Koduri
b2902bba1 [10] https://gist.github.com/385bbb95ca910ec9766f [11] https://gist.github.com/685c4d3e13d31f597722 10.02.2016 15:37, Oleksandr Natalenko wrote: Hi, folks. Here go new test results regarding the client memory leak. I use v3.7.8 with the following patches: === Soumya Koduri (2):

[Gluster-devel] glusterfsd core on NetBSD (https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14139/consoleFull)

2016-02-09 Thread Soumya Koduri
Hi Emmanuel, I see a core generated in this regression run though all the tests seem to have passed. I do not have a NetBSD machine to analyze the core. Could you please take a look and let me know what the issue could have been? Thanks, Soumya

Re: [Gluster-devel] Rebalance data migration and corruption

2016-02-09 Thread Soumya Koduri
On 02/09/2016 12:30 PM, Raghavendra G wrote: Right. But if there are simultaneous access to the same file from any other client and rebalance process, delegations shall not be granted or revoked if granted even though they are operating at

Re: [Gluster-devel] glusterfsd core on NetBSD (https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14139/consoleFull)

2016-02-10 Thread Soumya Koduri
Thanks Manu. Kotresh, Is this issue related to bug1221629 as well? Thanks, Soumya On 02/10/2016 02:10 PM, Emmanuel Dreyfus wrote: On Wed, Feb 10, 2016 at 12:17:23PM +0530, Soumya Koduri wrote: I see a core generated in this regression run though all the tests seem to have passed. I do

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-01-31 Thread Soumya Koduri
ix in fuse-bridge, revisited Pranith Kumar K (1): mount/fuse: Fix use-after-free crash Soumya Koduri (3): gfapi: Fix inode nlookup counts inode: Retire the inodes from the lru list in inode_table_destroy upcall: free the xdr* allocations === With those patches we got API leaks fix

Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
-level, client/server/both. Thanks, Soumya 01.02.2016 09:54, Soumya Koduri wrote: On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote: Unfortunately, this patch doesn't help. RAM usage on "find" finish is ~9G. Here is the statedump before drop_caches: https://gist.github.com/fc1647de09

Re: [Gluster-devel] [Gluster-users] GlusterFS FUSE client leaks summary — part I

2016-02-01 Thread Soumya Koduri
On 02/01/2016 02:48 PM, Xavier Hernandez wrote: Hi, On 01/02/16 09:54, Soumya Koduri wrote: On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote: Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and glusterfs process CPU usage goes to 100% for a while. I haven't waited

Re: [Gluster-devel] Regarding default_forget/releasedir/release() fops

2016-02-23 Thread Soumya Koduri
On 02/23/2016 05:02 PM, Jeff Darcy wrote: Recently while doing some tests (which involved lots of inode_forget()), I have noticed that my log file got flooded with below messages - [2016-02-22 08:57:44.025565] W [defaults.c:2889:default_forget] (-->

[Gluster-devel] Regarding default_forget/releasedir/release() fops

2016-02-22 Thread Soumya Koduri
Hi Jeff, Recently while doing some tests (which involved lots of inode_forget()), I have noticed that my log file got flooded with below messages - [2016-02-22 08:57:44.025565] W [defaults.c:2889:default_forget] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x231)[0x7fd00f63c15d]

Re: [Gluster-devel] [Gluster-users] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/12/2016 11:27 AM, Soumya Koduri wrote: On 02/11/2016 08:33 PM, Oleksandr Natalenko wrote: And "API" test. I used custom API app [1] and did brief file manipulations through it (create/remove/stat). Then I performed drop_caches, finished API [2] and got the following Valgr

Re: [Gluster-devel] [Gluster-users] GlusterFS v3.7.8 client leaks summary — part II

2016-02-16 Thread Soumya Koduri
On 02/16/2016 08:06 PM, Oleksandr Natalenko wrote: Hmm, OK. I've rechecked 3.7.8 with the following patches (latest revisions): === Soumya Koduri (3): gfapi: Use inode_forget in case of handle objects inode: Retire the inodes from the lru list in inode_table_destroy rpc

Re: [Gluster-devel] libgfapi 3.7.8 still memory leak

2016-02-17 Thread Soumya Koduri
Hi Piotr, On 02/17/2016 08:20 PM, Piotr Rybicki wrote: Hi all. I'm trying hard to diagnose memory leaks in libgfapi access. gluster 3.7.8 For this purpose, I've created the simplest C code (basically only calling glfs_new() and glfs_fini()): #include <glusterfs/api/glfs.h> int main (int argc, char** argv) {

Re: [Gluster-devel] Review request for leases patches

2016-03-08 Thread Soumya Koduri
Hi Poornima, On 03/07/2016 11:24 AM, Poornima Gurusiddaiah wrote: Hi All, Here is the link to feature page: http://review.gluster.org/#/c/11980/ Patches can be found @:

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-03-04 Thread Soumya Koduri
Thanks and Regards, Kotresh H R - Original Message - > From: "Soumya Koduri" <skod...@redhat.com <mailto:skod...@redhat.com>> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com <mailto:khire...@red

Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-03-03 Thread Soumya Koduri
ttp://review.gluster.org/#/c/13592/ Thanks and Regards, Kotresh H R - Original Message - From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com> To: "Soumya Koduri" <skod...@redhat.com> Cc: "Raghavendra G" <raghaven...@gluster.com>, "Gl

Re: [Gluster-devel] gfapi, readdirplus and forced lookup after inode_link

2016-05-11 Thread Soumya Koduri
On 05/11/2016 06:12 PM, Raghavendra Gowdappa wrote: - Original Message - From: "Raghavendra Gowdappa" <rgowd...@redhat.com> To: "Soumya Koduri" <skod...@redhat.com> Cc: "Gluster Devel" <gluster-devel@gluster.org> Sent: Wednesday, May

Re: [Gluster-devel] gfapi, readdirplus and forced lookup after inode_link

2016-05-11 Thread Soumya Koduri
On 05/11/2016 10:17 PM, Soumya Koduri wrote: On 05/11/2016 06:12 PM, Raghavendra Gowdappa wrote: - Original Message - From: "Raghavendra Gowdappa" <rgowd...@redhat.com> To: "Soumya Koduri" <skod...@redhat.com> Cc: "Gluster Devel" <gl

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ~(in 2.5 hours)

2016-05-17 Thread Soumya Koduri
Hi, This meeting is scheduled for anyone who is interested in learning more about, or assisting with, the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC (in your

[Gluster-devel] Minutes of today's Gluster Community Bug Triage meeting (May 17 2016)

2016-05-17 Thread Soumya Koduri
Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended the meeting. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-05-17/gluster_bug_triage.2016-05-17-12.01.html Minutes (text):

Re: [Gluster-devel] [Gluster-users] Minutes of today's Gluster Community Bug Triage meeting (May 17 2016)

2016-05-17 Thread Soumya Koduri
On 05/17/2016 07:09 PM, M S Vishwanath Bhat wrote: On 17 May 2016 at 18:51, Soumya Koduri <skod...@redhat.com <mailto:skod...@redhat.com>> wrote: Hi, Please find the minutes of today's Gluster Community Bug Triage meeting below. Thanks to everyone who attended

Re: [Gluster-devel] [Gluster-users] Exporting Gluster Volume

2016-05-04 Thread Soumya Koduri
Hi Abhishek, The 'rpcinfo' output below doesn't list the 'nfsacl' protocol. That must be the reason the client is not able to set ACLs. Could you please check the log file '/var/log/glusterfs/nfs.log' for any errors logged with respect to protocol registration failures. Thanks, Soumya On 05/04/2016

Re: [Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-07-28 Thread Soumya Koduri
On 07/27/2016 02:38 AM, Vijay Bellur wrote: On 07/26/2016 05:56 AM, Soumya Koduri wrote: Hi Vijay, On 07/26/2016 12:13 AM, Vijay Bellur wrote: On 07/22/2016 08:44 AM, Soumya Koduri wrote: Hi, In certain scenarios (esp.,in highly available environments), the application may have to fail

Re: [Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-07-26 Thread Soumya Koduri
Hi Vijay, On 07/26/2016 12:13 AM, Vijay Bellur wrote: On 07/22/2016 08:44 AM, Soumya Koduri wrote: Hi, In certain scenarios (esp.,in highly available environments), the application may have to fail-over/connect to a different glusterFS client while the I/O is happening. In such cases until

Re: [Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-08-10 Thread Soumya Koduri
) on the server-side. I have updated the feature-spec[1] with the details. Comments are welcome. Thanks, Soumya [1] http://review.gluster.org/#/c/15053/3/under_review/reclaim-locks.md On 07/28/2016 07:29 PM, Soumya Koduri wrote: On 07/27/2016 02:38 AM, Vijay Bellur wrote: On 07/26/2016 05:56 AM

[Gluster-devel] Minutes from today's Gluster Community Bug Triage meeting (July 12 2016)

2016-07-12 Thread Soumya Koduri
Hi, Thanks to everyone who joined the meeting. Please find the minutes of today's Gluster Community Bug Triage meeting at the below links. Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-07-12/gluster_bug_triage.2016-07-12-12.00.html Minutes (text):

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Soumya Koduri
On 07/20/2016 12:00 PM, Soumya Koduri wrote: On 07/20/2016 11:55 AM, Ravishankar N wrote: On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote: Hi, Here is the patch for br-stub.t failures. http://review.gluster.org/14960 Thanks Soumya for root causing this. Thanks and Regards

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Soumya Koduri
On 07/20/2016 11:55 AM, Ravishankar N wrote: On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote: Hi, Here is the patch for br-stub.t failures. http://review.gluster.org/14960 Thanks Soumya for root causing this. Thanks and Regards, Kotresh H R arbiter-mount.t has failed despite

Re: [Gluster-devel] Regression failures in last 3 days

2016-07-20 Thread Soumya Koduri
On 07/20/2016 12:41 PM, Soumya Koduri wrote: On 07/20/2016 12:00 PM, Soumya Koduri wrote: On 07/20/2016 11:55 AM, Ravishankar N wrote: On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote: Hi, Here is the patch for br-stub.t failures. http://review.gluster.org/14960 Thanks Soumya

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 30 minutes)

2016-07-12 Thread Soumya Koduri
Hi all, This meeting is scheduled for anyone who is interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC (https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC

[Gluster-devel] Support to reclaim locks (posix) provided lkowner & range matches

2016-07-22 Thread Soumya Koduri
Hi, In certain scenarios (esp., in highly available environments), the application may have to fail over/connect to a different glusterFS client while the I/O is happening. In such cases, until the ping timer expires and the glusterFS server cleans up the locks held by the older glusterFS
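The reclaim condition proposed in this thread, that a lock may be re-acquired by a new client connection only when the lock owner (lkowner) and the byte range both match, can be sketched as below. All names and the data structure are assumptions for illustration, not GlusterFS's actual implementation.

```python
# Illustrative model of the proposed server-side reclaim check:
# permit reclaim only if lkowner and the locked byte range match.
from dataclasses import dataclass

@dataclass(frozen=True)
class PosixLock:
    lkowner: str   # opaque lock-owner identifier supplied by the client
    start: int     # byte-range start offset
    length: int    # byte-range length (semantics simplified here)

def can_reclaim(held: PosixLock, requested: PosixLock) -> bool:
    """Allow a failed-over client to reclaim a lock only when the
    owner and the exact byte range match the lock already held."""
    return (held.lkowner == requested.lkowner
            and held.start == requested.start
            and held.length == requested.length)

old = PosixLock(lkowner="client-A-owner-1", start=0, length=4096)
print(can_reclaim(old, PosixLock("client-A-owner-1", 0, 4096)))  # True
print(can_reclaim(old, PosixLock("client-B-owner-9", 0, 4096)))  # False
```

With such a check, the new connection can resume I/O immediately instead of waiting for the ping-timer expiry and server-side lock cleanup described above.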

Re: [Gluster-devel] [NFS-ganesha] unlink file remains in ./glusterfs/unlinks after delete file

2016-07-01 Thread Soumya Koduri
Hi, On 06/30/2016 11:56 AM, 梁正和 wrote: Hi, I'm trying to export gluster-volume by nfs-ganesha. After create --> Some I/O --> delete file from nfs mount point. The file has been moved to ./glusterfs/unlinks. There was an fd leak when a file is created using gfapi handleops (which

Re: [Gluster-devel] [NFS-ganesha] unlink file remains in ./glusterfs/unlinks after delete file

2016-07-01 Thread Soumya Koduri
FYI - "http://review.gluster.org/#/c/14840 " contains the fix for 3.7 branch. Thanks, Soumya On 07/01/2016 11:38 AM, Soumya Koduri wrote: Hi, On 06/30/2016 11:56 AM, 梁正和 wrote: Hi, I'm trying to export gluster-volume by nfs-ganesha. After create --> Some I/O --> delete fil

[Gluster-devel] **Reminder** Triaging and Updating Bug status

2016-06-28 Thread Soumya Koduri
Hi, We have noticed that many of the bugs (esp., in the recent past, the ones filed against the 'tests' component) which are being actively worked upon have neither the 'Triaged' keyword set nor the bug status (/assignee) updated appropriately. Sometimes even many of the active community members fail

Re: [Gluster-devel] Checklist for ganesha FSAL plugin integration testing for 3.9

2016-09-06 Thread Soumya Koduri
CCing gluster-devel & users MLs. Somehow they got missed in my earlier reply. Thanks, Soumya On 09/06/2016 12:19 PM, Soumya Koduri wrote: On 09/03/2016 12:44 AM, Pranith Kumar Karampuri wrote: hi, Did you get a chance to decide on the nfs-ganesha integrations tests that need to be

Re: [Gluster-devel] Gluster Developer Summit 2016 Talk Schedule

2016-09-15 Thread Soumya Koduri
On 09/16/2016 03:48 AM, Amye Scavarda wrote: On Thu, Sep 15, 2016 at 8:26 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote: On Thu, Sep 15, 2016 at 2:37 PM, Soumya Koduri <skod...@redhat.com>

Re: [Gluster-devel] Dht readdir filtering out names

2016-09-30 Thread Soumya Koduri
On 09/30/2016 10:08 AM, Pranith Kumar Karampuri wrote: Does samba/gfapi/nfs-ganesha have options to disable readdirp? AFAIK, currently there is no option to disable/enable readdirp in gfapi & nfs-ganesha (not sure about samba). But it looks like nfs-ganesha always uses readdir,
