Hi,
With Change-Id: Ieba9a7071829d51860b7c131982f12e0136b9855, dht
itransform/deitransform was improved to encode the 64-bit brick offset
along with the brick-id in the global d_off.
More details regarding this change are at:
http://review.gluster.org/#/c/4711/
But now with afr using the same
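The d_off packing described above can be sketched roughly as follows. This is an illustrative guess, not gluster's actual implementation: the function names mirror the ones mentioned, but the bit-split (8 bits of brick-id) and the macros are assumptions; see the linked review for the real code.

```c
#include <stdint.h>

/* Illustrative sketch only: pack a brick (subvolume) id into the high bits
 * of the 64-bit d_off and the brick-local offset into the low bits.
 * BRICK_BITS = 8 (up to 256 bricks) is an assumed split, not gluster's. */
#define BRICK_BITS 8
#define OFF_MASK   ((1ULL << (64 - BRICK_BITS)) - 1)

/* "itransform": brick-local offset + brick id -> global d_off */
static inline uint64_t dht_itransform_sketch(uint64_t brick_off,
                                             unsigned brick_id)
{
    return ((uint64_t)brick_id << (64 - BRICK_BITS)) | (brick_off & OFF_MASK);
}

/* "deitransform": global d_off -> brick-local offset + brick id */
static inline void dht_deitransform_sketch(uint64_t d_off,
                                           uint64_t *brick_off,
                                           unsigned *brick_id)
{
    *brick_id  = (unsigned)(d_off >> (64 - BRICK_BITS));
    *brick_off = d_off & OFF_MASK;
}
```

Any offset that fits in the low 56 bits round-trips losslessly; larger offsets would collide, which is exactly the kind of constraint the real encoding has to deal with.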
On 07/21/2014 07:33 PM, Anders Blomdell wrote:
On 2014-07-21 14:36, Soumya Koduri wrote:
On 07/21/2014 05:35 PM, Xavier Hernandez wrote:
On Monday 21 July 2014 13:53:19 Anders Blomdell wrote:
On 2014-07-21 13:49, Pranith Kumar Karampuri wrote:
On 07/21/2014 05:17 PM, Anders Blomdell
--
*From:* Soumya Koduri <skod...@redhat.com>
*Date:* Wed, Aug 27, 2014 07:42 PM
*To:* Pranith Kumar Karampuri <pkara...@redhat.com>; ABC <new360532...@qq.com>
*Cc:* Gluster Devel <gluster-devel@gluster.org>
*Subject:* Re: [Gluster-devel] glfs_creat this method hang up
Could you please share
*replicate*.node-uuid=c445c335-1d7e-4753-bd13-a83c4877083a
root 16907 17242 0 16:31 pts/0 00:00:00 grep glusterfs
Thank you,
Lixiaopo
-- Original Message --
*From:* Soumya Koduri <skod...@redhat.com>
*Sent:* Thursday, Aug 28, 2014, 4:11 PM
*To:*
for SDFS. If it is fine, then how can
I start.
Thank you Very much for your response.
On Wed, Sep 3, 2014 at 12:58 AM, Soumya Koduri <skod...@redhat.com> wrote:
Hi Samuthira,
I have documented how NFS-Ganesha is integrated with glusterfs in
the blog post below
Hi Siva,
On 11/22/2014 08:44 AM, Rudra Siva wrote:
Thanks for the response. In my case, I am trying to avoid doing the
network level lookup - since I use the same resolve only pass a null
for the attribute structure - essentially in my case, it is an atomic
multiple object read/write so I only
Hi,
This framework has been designed to maintain state in the glusterfsd
process for each of the files being accessed (including info about the
clients accessing those files) and to send notifications to the respective
glusterfs clients in case of any change in that state.
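A minimal sketch of that per-file state, under assumed names (upcall_entry_t and upcall_add_client are illustrative, not the actual upcall xlator structures):

```c
#include <stdint.h>
#include <string.h>

#define MAX_CLIENTS 8

typedef struct {
    char     client_uid[64];   /* id of the glusterfs client */
    uint64_t access_time;      /* last time this client touched the file */
} upcall_client_t;

typedef struct {
    char            gfid[16];             /* file handle */
    upcall_client_t clients[MAX_CLIENTS]; /* clients accessing this file */
    int             client_count;
} upcall_entry_t;

/* Record that 'uid' accessed the file; a state change would later trigger
 * a notification to every *other* client in the list. */
static int upcall_add_client(upcall_entry_t *e, const char *uid, uint64_t now)
{
    for (int i = 0; i < e->client_count; i++) {
        if (strcmp(e->clients[i].client_uid, uid) == 0) {
            e->clients[i].access_time = now;  /* already known, refresh */
            return 0;
        }
    }
    if (e->client_count == MAX_CLIENTS)
        return -1;                            /* table full */
    strncpy(e->clients[e->client_count].client_uid, uid,
            sizeof(e->clients[0].client_uid) - 1);
    e->clients[e->client_count].client_uid[63] = '\0';
    e->clients[e->client_count].access_time = now;
    e->client_count++;
    return 0;
}
```

The real implementation additionally ages out idle clients and, as discussed later in the thread, stores these entries in inode/fd contexts for faster lookup.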
Few of the use-cases
for notification requests).
Right. I shall add this in the feature page.
Thanks for bringing this up.
-Soumya
Shyam
On 12/12/2014 02:17 AM, Soumya Koduri wrote:
Hi,
This framework has been designed to maintain state in the glusterfsd
process for each of the files being accessed (including the clients
On 12/12/2014 11:06 PM, Niels de Vos wrote:
Hi,
I started to look into getting some form of support for ACLs in gfapi.
After a short discussion with Shyam, some investigation showed that our
current implementation of ACLs is not very portable. There definitely
seem to be issues with ACLs
On 12/15/2014 09:01 AM, Krishnan Parthasarathi wrote:
Here are a few questions that I had after reading the feature
page.
- Is there a new connection from glusterfsd (upcall xlator) to
a client accessing a file? If so, how does the upcall xlator reuse
connections when the same client
volume
info' only if we explicitly modify its value to ON/OFF.
Can you please verify if the filesystem where your Gluster bricks have
been created has been mounted with ACLs enabled.
Thanks,
Soumya
Thanks
-Peter
From: Soumya Koduri [skod...@redhat.com
Hi Peter,
Can you please try manually mounting those volumes using any/other nfs
client and check if you are able to perform write operations. Also
please collect the gluster nfs log while doing so.
Thanks,
Soumya
On 01/22/2015 08:18 AM, Peter Auyeung wrote:
Hi,
We have been having 5
Hi,
I have updated the feature page with more design details and the
dependencies/limitations this support has.
http://www.gluster.org/community/documentation/index.php/Features/Upcall-infrastructure#Dependencies
Kindly check the same and provide your inputs.
Few of them which may be
.
* store Upcall entries in inode/fd_ctxt for faster lookup.
Thanks,
Soumya
On 01/22/2015 02:31 PM, Soumya Koduri wrote:
Hi,
I have updated the feature page with more design details and the
dependencies/limitations this support has.
http://www.gluster.org/community/documentation/index.php
will be submitted in the new patches after
addressing the proposed changes discussed in the earlier mail.
Thanks,
Soumya
On 02/19/2015 12:30 PM, Soumya Koduri wrote:
Hi,
We have recently uncovered a few issues with respect to lease_locks
support and had discussions around the same. Thanks to everyone
Hi Shyam/Niels,
To re-iterate the issues,
a) at present, when two clients access the same file, we send
'cache_invalidation' upcall notification to the first client in the fop
cbk path of the second client. This may affect brick latency esp., for
the directories (where there are more chances of
like to leverage the common solution infra for passing this extra metadata
to destination.
- Original Message -
From: Dan Lambright <dlamb...@redhat.com>
To: Niels de Vos <nde...@redhat.com>
Cc: Joseph Fernandes <josfe...@redhat.com>, Gluster Devel <gluster-devel@gluster.org>,
Soumya Koduri skod
We are sorry for the inconvenience caused during the hangout session.
There was a network outage at our place. We shall do the recording again
and share the link sometime next week.
Thanks,
Soumya
On 04/30/2015 06:08 PM, Niels de Vos wrote:
On Wed, Apr 29, 2015 at 11:20:20AM -0400, Meghana
I consistently see this failure for one of my patches -
http://review.gluster.org/#/c/10568/ -
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8483/consoleFull
This test passed when I ran it on my workspace.
Thanks,
Soumya
On 05/02/2015 08:00 AM, Pranith Kumar Karampuri wrote:
of you are interested to work on any of those items.
Would be happy to assist you :)
Thanks,
Soumya
On 01/22/2015 02:31 PM, Soumya Koduri wrote:
Hi,
I have updated the feature page with more design details and the
dependencies/limitations this support has.
http://www.gluster.org/community
Hi,
Below link contains the lease-lock design notes (as per the latest
discussion we had). Thanks to everyone involved (CC'ed).
http://www.gluster.org/community/documentation/index.php/Features/Upcall-infrastructure#delegations.2Flease-locks
Kindly go through the same and provide us your
Hi Dan/Joseph,
As part of upcall support on the server-side, we maintain certain state
to notify clients of the cache-invalidation and recall-leaselk events.
We have certain known limitations with Rebalance and Self-Heal. Details
in the below link -
On 04/17/2015 04:51 PM, Raghavendra Talur wrote:
On Friday 17 April 2015 03:53 PM, Poornima Gurusiddaiah wrote:
Hi,
There are two concerns in the usage of libgfapi which have been
present from day one, but now
with new users of libgfapi it's a necessity to fix these:
1. When libgfapi is used
On 06/25/2015 09:00 AM, Pranith Kumar Karampuri wrote:
On 06/25/2015 02:49 AM, Jeff Darcy wrote:
It knows which bricks are up/down. But they may not be the latest. Will
that matter?
AFAIK it's sufficient at this point to know which are up/down.
In that case, we need two functions which
On 06/24/2015 10:14 AM, Krishnan Parthasarathi wrote:
- Original Message -
I've been looking at the recent patches to redirect GF_FOP_IPC to an active
subvolume instead of always to the first. Specifically, these:
http://review.gluster.org/11346 for DHT
On 06/11/2015 04:29 AM, Niels de Vos wrote:
On Wed, Jun 10, 2015 at 11:42:27PM +0530, Jiffin Tony Thottan wrote:
Hi,
In the current implementation of the access-control translator, it takes
care of the following:
a.) conversion of acl xattr - gluster supported posix-acl format
(at the backend
Hi,
I would like to reserve one of the below machines to run a regression
test for debugging. Please let me know if any of you are using them
currently.
Thanks,
Soumya
On 05/21/2015 10:46 PM, Justin Clift wrote:
There are two extra CentOS 6 VM's online for debugging stuff with,
but they're
Thanks to Ravishankar. We could reproduce the issue on our test
machines. I shall no longer need these machines.
Thanks,
Soumya
On 06/15/2015 11:55 AM, Soumya Koduri wrote:
Hi,
I would like to reserve one of the below machines to run a regression
test for debugging. Please let me know if any
On 06/29/2015 08:18 PM, Niels de Vos wrote:
On Wed, Jun 24, 2015 at 07:44:13PM +0530, Soumya Koduri wrote:
On 06/24/2015 10:14 AM, Krishnan Parthasarathi wrote:
- Original Message -
I've been looking at the recent patches to redirect GF_FOP_IPC to an active
subvolume instead
Hi Louis,
AFAIK, we never tested nfs-ganesha+glusterfs using a Windows client.
Maybe it would be good if you could collect and provide packet traces/cores
or logs (with nfs-ganesha at least at NIV_DEBUG level) on the server side
while you run these tests, to debug further.
Thanks,
Soumya
On
This approach sounds good. A few inputs/queries inline.
On 08/17/2015 06:20 PM, Anoop C S wrote:
Hi all,
As we move forward, in order to fix the limitations with the current trash
translator, we are planning to replace the existing criteria for trashed
files inside trash directory with a general
healing etc.
Request your inputs/comments.
Thanks,
Soumya
Poornima
On 07/22/2015 09:22 PM, Soumya Koduri wrote:
On 07/22/2015 06:33 PM, Shyam wrote:
Thanks for the responses. Some comments inline.
Who is doing/attempting client side caching improvements for Gluster 4.0
(or before
on it right
now. But once we have leases.md (capturing latest design changes) ready,
we planned to take inputs from him and the community.
On 07/21/2015 09:20 AM, Soumya Koduri wrote:
On 07/21/2015 02:49 PM, Poornima Gurusiddaiah wrote:
Hi Shyam,
Please find my reply inline.
Regards,
Poornima
the motivation behind the current manner of implementing leases.
Yes sure, we are updating the leases.md, will send it at the earliest.
Thanks,
Shyam
On 04/16/2015 07:37 AM, Soumya Koduri wrote:
Hi,
Below link contains the lease-lock design notes (as per the latest
discussion we had). Thanks
Hi,
If you would like to contribute to GlusterFS, one of the easiest ways,
which shall also help you analyze the code, is by fixing defects reported
by the Coverity Scan tool.
The detailed process is mentioned in [1]. To summarize,
* Signup as a member of https://scan.coverity.com/projects/987
*
On 09/14/2015 09:47 PM, Sathyendra Prabhu wrote:
Hello developers,
I am a programmer learning glusterfs. I am implementing a few
translators. Is it possible to use 'syncop's on the server side (rather
than using STACK_WIND/UNWINDs)? I didn't find any help content on the
topic. Please someone
Hi,
pNFS is a feature introduced as part of NFSv4.1 protocol to allow direct
client access to storage devices containing file data (in short parallel
I/O). Clients request the layout of an entire file or of a specific range.
On receiving the layout information, they shall directly contact the
On 12/09/2015 11:44 AM, Pranith Kumar Karampuri wrote:
On 12/09/2015 06:37 AM, Vijay Bellur wrote:
On 12/08/2015 03:45 PM, Jeff Darcy wrote:
On December 8, 2015 at 12:53:04 PM, Ira Cooper (i...@redhat.com) wrote:
Raghavendra Gowdappa writes:
I propose that we define a "compound op"
re ~1.8M files on this test volume.
On Friday, December 25, 2015, 20:28:13 EET, Soumya Koduri wrote:
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote:
Another addition: it seems to be a GlusterFS API library memory leak,
because NFS-Ganesha also consumes a huge amount of memory while doing an
ordinary "
ave pasted
below apply to gfapi/nfs-ganesha applications.
Also, to resolve the nfs-ganesha issue which I had mentioned below (in
case the Entries_HWMARK option gets changed), I have posted the fix below -
https://review.gerrithub.io/#/c/258687
Thanks,
Soumya
Ideas?
05.01.2016 12:31, Sou
2016, 22:52:25 EET Soumya Koduri wrote:
On 01/05/2016 05:56 PM, Oleksandr Natalenko wrote:
Unfortunately, both patches didn't make any difference for me.
I've patched 3.7.6 with both patches, recompiled and installed patched
GlusterFS package on client side and mounted volume with ~2M of files
u have taken the latest gluster patch set
#3 ?
- http://review.gluster.org/#/c/13096/3
If you are hitting the issue even then, please provide the core if possible.
Thanks,
Soumya
06.01.2016 08:40, Soumya Koduri wrote:
On 01/06/2016 03:53 AM, Oleksandr Natalenko wrote:
OK, I've repeated th
e25d4a5a52
ganesha.conf:
https://gist.github.com/9b5e59b8d6d8cb84c85d
How I mount NFS share:
===
mount -t nfs4 127.0.0.1:/mail_boxes /mnt/tmp -o
defaults,_netdev,minorversion=2,noac,noacl,lookupcache=none,timeo=100
===
On Thursday, January 7, 2016, 12:06:42 EET, Soumya Koduri wrote:
Entries_HW
On 12/25/2015 08:56 PM, Oleksandr Natalenko wrote:
What units Cache_Size is measured in? Bytes?
It's actually (Cache_Size * sizeof_ptr) bytes. If possible, could you
please run the ganesha process under valgrind? It will help in detecting leaks.
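The "(Cache_Size * sizeof_ptr)" conversion can be made concrete with a tiny helper. Illustrative only: the 32768 value in the usage note below is an assumed Cache_Size, not ganesha's default.

```c
#include <stddef.h>
#include <stdint.h>

/* Convert ganesha's Cache_Size (a count of pointer-sized slots) into bytes,
 * per the "(Cache_Size * sizeof_ptr)" note above. */
static uint64_t cache_size_bytes(uint64_t cache_size, size_t sizeof_ptr)
{
    return cache_size * (uint64_t)sizeof_ptr;
}
```

For example, cache_size_bytes(32768, 8) gives 262144 bytes (256 KiB) on a 64-bit host.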
Thanks,
Soumya
25.12.2015 16:58, Soumya Koduri
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote:
Another addition: it seems to be a GlusterFS API library memory leak,
because NFS-Ganesha also consumes a huge amount of memory while doing an
ordinary "find . -type f" via NFSv4.2 on a remote client. Here is memory
usage:
===
root 5416 34.2 78.5
.
https://gist.github.com/e4602a50d3c98f7a2766
One may see GlusterFS-related leaks here as well.
On Friday, December 25, 2015, 20:28:13 EET, Soumya Koduri wrote:
On 12/24/2015 09:17 PM, Oleksandr Natalenko wrote:
Another addition: it seems to be a GlusterFS API library memory leak,
because NFS
- Original Message -
> From: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> To: "Oleksandr Natalenko" <oleksa...@natalenko.name>, "Soumya Koduri"
> <skod...@redhat.com>
> Cc: gluster-us...@gluster.org, gluster-devel@gluste
Hi all,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
+gluster-users
On 12/22/2015 06:03 PM, Hari Gowtham wrote:
Hi all,
There was a poll conducted to find the timing that suits best for the people
who want to participate
in the weekly Gluster bug triage meeting. The result for the poll is yet to be
announced but we would
like to get more
Hi,
Please find the minutes of today's Gluster Community Bug Triage meeting
below. Thanks to everyone who attended the meeting.
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-12-22/gluster_bug_triage.2015-12-22-12.00.html
Minutes (text):
package on that machine to get full backtrace (as
requested in [1]) and update the bug with details.
Thanks,
Soumya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1293594#c4
On 11/26/2015 03:07 PM, Soumya Koduri wrote:
Below are the findings from the core and the logs
1) [2015-11-25 19:06
Below are the findings from the core and the logs
1) [2015-11-25 19:06:41.592905] E
[crypt.c:4298:master_set_master_vol_key] 0-patchy-crypt: FATAL: missing
master key
xlator_init() of the crypt xlator fails; I assume it gets loaded when
features.encryption is on (which the below-mentioned .t
On 11/26/2015 03:35 PM, Avik Sil wrote:
On Tuesday 24 November 2015 09:58 PM, Vijay Bellur wrote:
- Original Message -
From: "Avik Sil"
To: gluster-devel@gluster.org
Sent: Tuesday, November 24, 2015 6:47:44 AM
Subject: [Gluster-devel] Caching support in
Hi Sac,
While we understand the intent of this mail, please note that most of
the operations performed by the ganesha-related CLI are executed by the
runner threads. AFAIK, apart from the return status, we cannot read any
error messages from these threads (request glusterd team to confirm that).
On 01/08/2016 05:04 PM, Soumya Koduri wrote:
I could reproduce it while testing deep directories within the mount
point. I root-caused the issue and had a discussion with Pranith to
understand the purpose and the recommended way of taking nlookup on inodes.
I shall make changes to my existing fix and
On 01/13/2016 04:08 PM, Soumya Koduri wrote:
On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote:
Just in case, here is Valgrind output on FUSE client with 3.7.6 +
API-related patches we discussed before:
https://gist.github.com/cd6605ca19734c1496a4
Thanks for sharing the results. I made
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-12 7:24 GMT+01:00 Soumya Koduri <skod...@redhat.com>:
On 01/11/2016 05:11 PM, Oleksandr Natalenko wrote:
Brief test shows that Ganesha stopped leaking a
11.01.2016 12:26, Soumya Koduri wrote:
I have made changes to fix the lookup leak in a different way (as
discussed with Pranith) and uploaded them in the latest patch set #4
- http://review.gluster.org/#/c/13096/
Please check if it resolves the mem leak and hopefully doesn't result
in any
On 01/14/2016 08:41 PM, Vijay Bellur wrote:
On 01/14/2016 04:11 AM, Jiffin Tony Thottan wrote:
On 14/01/16 14:28, Jiffin Tony Thottan wrote:
Hi,
The core generated when encryption xlator is enabled
[2016-01-14 08:13:15.740835] E
[crypt.c:4298:master_set_master_vol_key] 0-test1-crypt:
On 01/15/2016 06:52 PM, Soumya Koduri wrote:
On 01/14/2016 08:41 PM, Vijay Bellur wrote:
On 01/14/2016 04:11 AM, Jiffin Tony Thottan wrote:
On 14/01/16 14:28, Jiffin Tony Thottan wrote:
Hi,
The core generated when encryption xlator is enabled
[2016-01-14 08:13:15.740835] E
[crypt.c
ccept it.
3. Kotresh will work on new changes to make sure changelog makes correct
use of rpc-clnt.
[1] http://review.gluster.org/#/c/13592
[2] http://review.gluster.org/#/c/1359
regards,
Raghavendra.
Thanks and Regards,
Kotresh H R
- Original Message -
From: "Soumya Koduri" &l
On 02/08/2016 09:13 AM, Shyam wrote:
On 02/06/2016 06:36 PM, Raghavendra Gowdappa wrote:
- Original Message -
From: "Raghavendra Gowdappa"
To: "Sakshi Bansal" , "Susant Palai"
Cc: "Gluster Devel"
On 02/09/2016 10:27 AM, Raghavendra G wrote:
On Mon, Feb 8, 2016 at 4:31 PM, Soumya Koduri <skod...@redhat.com> wrote:
On 02/08/2016 09:13 AM, Shyam wrote:
On 02/06/2016 06:36 PM, Raghavendra Gowdappa wrote:
- Orig
Hi Piotr,
Could you apply below gfAPI patch and check the valgrind output -
http://review.gluster.org/13125
Thanks,
Soumya
On 02/11/2016 09:40 PM, Piotr Rybicki wrote:
Hi All
I have to report that there is a mem leak in the latest version of gluster.
gluster: 3.7.8
libvirt 1.3.1
mem leak
b2902bba1
[10] https://gist.github.com/385bbb95ca910ec9766f
[11] https://gist.github.com/685c4d3e13d31f597722
10.02.2016 15:37, Oleksandr Natalenko wrote:
Hi, folks.
Here go new test results regarding client memory leak.
I use v3.7.8 with the following patches:
===
Soumya Koduri (2):
Hi Emmanuel,
I see a core generated in this regression run though all the tests seem
to have passed. I do not have a netbsd machine to analyze the core.
Could you please take a look and let me know what the issue could have been?
Thanks,
Soumya
On 02/09/2016 12:30 PM, Raghavendra G wrote:
Right. But if there are simultaneous accesses to the same file from
any other client and the rebalance process, delegations shall not be
granted, or shall be revoked if granted, even though they are operating at
Thanks Manu.
Kotresh,
Is this issue related to bug 1221629 as well?
Thanks,
Soumya
On 02/10/2016 02:10 PM, Emmanuel Dreyfus wrote:
On Wed, Feb 10, 2016 at 12:17:23PM +0530, Soumya Koduri wrote:
I see a core generated in this regression run though all the tests seem to
have passed. I do
ix in fuse-bridge, revisited
Pranith Kumar K
(1):
mount/fuse: Fix use-after-free crash
Soumya Koduri (3):
gfapi: Fix inode nlookup counts
inode: Retire the inodes from the lru
list in inode_table_destroy
upcall: free the xdr* allocations
===
With those patches we got API leaks fix
-level, client/server/both.
Thanks,
Soumya
01.02.2016 09:54, Soumya Koduri wrote:
On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote:
Unfortunately, this patch doesn't help.
RAM usage on "find" finish is ~9G.
Here is statedump before drop_caches: https://gist.github.com/
fc1647de09
On 02/01/2016 02:48 PM, Xavier Hernandez wrote:
Hi,
On 01/02/16 09:54, Soumya Koduri wrote:
On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:
Wait. It seems to be my bad.
Before unmounting I do drop_caches (2), and glusterfs process CPU usage
goes to 100% for a while. I haven't waited
On 02/23/2016 05:02 PM, Jeff Darcy wrote:
Recently while doing some tests (which involved lots of inode_forget()),
I have noticed that my log file got flooded with below messages -
[2016-02-22 08:57:44.025565] W [defaults.c:2889:default_forget] (-->
Hi Jeff,
Recently while doing some tests (which involved lots of inode_forget()),
I have noticed that my log file got flooded with below messages -
[2016-02-22 08:57:44.025565] W [defaults.c:2889:default_forget] (-->
/usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x231)[0x7fd00f63c15d]
On 02/12/2016 11:27 AM, Soumya Koduri wrote:
On 02/11/2016 08:33 PM, Oleksandr Natalenko wrote:
And "API" test.
I used custom API app [1] and did brief file manipulations through it
(create/remove/stat).
Then I performed drop_caches, finished API [2] and got the following
Valgr
On 02/16/2016 08:06 PM, Oleksandr Natalenko wrote:
Hmm, OK. I've rechecked 3.7.8 with the following patches (latest
revisions):
===
Soumya Koduri (3):
gfapi: Use inode_forget in case of handle objects
inode: Retire the inodes from the lru list in inode_table_destroy
rpc
Hi Piotr,
On 02/17/2016 08:20 PM, Piotr Rybicki wrote:
Hi all.
I'm trying hard to diagnose memory leaks in libgfapi access.
gluster 3.7.8
For this purpose, I've created the simplest C code (basically only calling
glfs_new() and glfs_fini()):
#include <glusterfs/api/glfs.h>
int main (int argc, char** argv) {
    glfs_t *fs = glfs_new("testvol"); /* volume name is illustrative */
    if (fs) glfs_fini(fs);
    return 0;
}
Hi Poornima,
On 03/07/2016 11:24 AM, Poornima Gurusiddaiah wrote:
Hi All,
Here is the link to feature page: http://review.gluster.org/#/c/11980/
Patches can be found @:
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Soumya Koduri" <skod...@redhat.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
ttp://review.gluster.org/#/c/13592/
Thanks and Regards,
Kotresh H R
- Original Message -
From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
To: "Soumya Koduri" <skod...@redhat.com>
Cc: "Raghavendra G" <raghaven...@gluster.com>, "Gl
On 05/11/2016 06:12 PM, Raghavendra Gowdappa wrote:
- Original Message -
From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
To: "Soumya Koduri" <skod...@redhat.com>
Cc: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Wednesday, May
On 05/11/2016 10:17 PM, Soumya Koduri wrote:
On 05/11/2016 06:12 PM, Raghavendra Gowdappa wrote:
- Original Message -
From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
To: "Soumya Koduri" <skod...@redhat.com>
Cc: "Gluster Devel" <gl
Hi,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your
Hi,
Please find the minutes of today's Gluster Community Bug Triage meeting
below. Thanks to everyone who attended the meeting.
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-17/gluster_bug_triage.2016-05-17-12.01.html
Minutes (text):
On 05/17/2016 07:09 PM, M S Vishwanath Bhat wrote:
On 17 May 2016 at 18:51, Soumya Koduri <skod...@redhat.com> wrote:
Hi,
Please find the minutes of today's Gluster Community Bug Triage
meeting below. Thanks to everyone who attended
Hi Abhishek,
The 'rpcinfo' output below doesn't list the 'nfsacl' protocol. That must
be the reason the client is not able to set ACLs. Could you please check
the log file '/var/log/glusterfs/nfs.log' for any errors logged with
respect to protocol registration failures.
Thanks,
Soumya
On 05/04/2016
On 07/27/2016 02:38 AM, Vijay Bellur wrote:
On 07/26/2016 05:56 AM, Soumya Koduri wrote:
Hi Vijay,
On 07/26/2016 12:13 AM, Vijay Bellur wrote:
On 07/22/2016 08:44 AM, Soumya Koduri wrote:
Hi,
In certain scenarios (esp.,in highly available environments), the
application may have to fail
Hi Vijay,
On 07/26/2016 12:13 AM, Vijay Bellur wrote:
On 07/22/2016 08:44 AM, Soumya Koduri wrote:
Hi,
In certain scenarios (esp.,in highly available environments), the
application may have to fail-over/connect to a different glusterFS
client while the I/O is happening. In such cases until
) on the
server-side.
I have updated the feature-spec[1] with the details. Comments are welcome.
Thanks,
Soumya
[1] http://review.gluster.org/#/c/15053/3/under_review/reclaim-locks.md
On 07/28/2016 07:29 PM, Soumya Koduri wrote:
On 07/27/2016 02:38 AM, Vijay Bellur wrote:
On 07/26/2016 05:56 AM
Hi,
Thanks to everyone who joined the meeting. Please find the minutes of
today's Gluster Community Bug Triage meeting at the below links.
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-12/gluster_bug_triage.2016-07-12-12.00.html
Minutes (text):
On 07/20/2016 12:00 PM, Soumya Koduri wrote:
On 07/20/2016 11:55 AM, Ravishankar N wrote:
On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote:
Hi,
Here is the patch for br-stub.t failures.
http://review.gluster.org/14960
Thanks Soumya for root causing this.
Thanks and Regards
On 07/20/2016 11:55 AM, Ravishankar N wrote:
On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote:
Hi,
Here is the patch for br-stub.t failures.
http://review.gluster.org/14960
Thanks Soumya for root causing this.
Thanks and Regards,
Kotresh H R
arbiter-mount.t has failed despite
On 07/20/2016 12:41 PM, Soumya Koduri wrote:
On 07/20/2016 12:00 PM, Soumya Koduri wrote:
On 07/20/2016 11:55 AM, Ravishankar N wrote:
On 07/20/2016 11:51 AM, Kotresh Hiremath Ravishankar wrote:
Hi,
Here is the patch for br-stub.t failures.
http://review.gluster.org/14960
Thanks Soumya
Hi all,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
Hi,
In certain scenarios (esp.,in highly available environments), the
application may have to fail-over/connect to a different glusterFS
client while the I/O is happening. In such cases, until there is a
ping-timer expiry and the glusterFS server cleans up the locks held by the
older glusterFS
Hi,
On 06/30/2016 11:56 AM, 梁正和 wrote:
Hi,
I'm trying to export a gluster volume via nfs-ganesha.
After create --> some I/O --> delete file from the nfs mount point,
the file has been moved to .glusterfs/unlink.
There was an fd leak when a file is created using gfapi handleops (which
FYI - "http://review.gluster.org/#/c/14840 " contains the fix for 3.7
branch.
Thanks,
Soumya
On 07/01/2016 11:38 AM, Soumya Koduri wrote:
Hi,
On 06/30/2016 11:56 AM, 梁正和 wrote:
Hi,
I'm trying to export gluster-volume by nfs-ganesha.
After create --> Some I/O --> delete fil
Hi,
We have noticed that many of the bugs (esp., in the recent past the ones
filed against 'tests' component) which are being actively worked upon do
not have either the 'Triaged' keyword set or the bug status(/assignee) updated
appropriately. Sometimes even many of the active community members fail
CCing gluster-devel & users MLs. Somehow they got missed in my earlier reply.
Thanks,
Soumya
On 09/06/2016 12:19 PM, Soumya Koduri wrote:
On 09/03/2016 12:44 AM, Pranith Kumar Karampuri wrote:
hi,
Did you get a chance to decide on the nfs-ganesha integrations
tests that need to be
On 09/16/2016 03:48 AM, Amye Scavarda wrote:
On Thu, Sep 15, 2016 at 8:26 AM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:
On Thu, Sep 15, 2016 at 2:37 PM, Soumya Koduri <skod...@redhat.com>
On 09/30/2016 10:08 AM, Pranith Kumar Karampuri wrote:
Does samba/gfapi/nfs-ganesha have options to disable readdirp?
AFAIK, currently there is no option to disable/enable readdirp in gfapi
& nfs-ganesha (not sure about samba). But it looks like nfs-ganesha seems
to always be using readdir,