- Original Message -
> From: "Pranith Kumar Karampuri"
> To: maintainers@gluster.org
> Sent: Tuesday, May 24, 2016 11:46:14 AM
> Subject: [Gluster-Maintainers] I am seeing nokia folks to be active in the
> community
>
> Does anyone know how they are using gluster?
- Original Message -
> From: "Oleksandr Natalenko" <oleksa...@natalenko.name>
> To: "Kaushal M" <kshlms...@gmail.com>
> Cc: "Raghavendra Gowdappa" <rgowd...@redhat.com>, maintainers@gluster.org,
> "Gluster Devel"
There seems to be a major inode leak in fuse-clients:
https://bugzilla.redhat.com/show_bug.cgi?id=1353856
We have found an RCA through code reading (though unverified, we have high
confidence in the RCA). Do we want to include this in 3.7.13?
regards,
Raghavendra.
Hi all,
Poornima has fixed a memory corruption [1] in bricks which manifested as "xdr
decoding failed" errors on clients when upcall is enabled. I think this should
go into 3.10. Hence dropping this mail.
@Poornima,
Can you create a bug for this on 3.10 and add it as a blocker for release-3.10?
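For context (not stated in the mail above): upcall notifications are normally
turned on through the cache-invalidation volume options, which is the
configuration under which the corruption was reported. A minimal sketch, with
"patchy" as a placeholder volume name:

    # enable upcall-based cache invalidation on the volume
    gluster volume set patchy features.cache-invalidation on
    gluster volume set patchy features.cache-invalidation-timeout 600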
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Michael Scherer"
> Cc: "Amar Tumballi" , "GlusterFS Maintainers"
> , "Gluster Devel"
>
> Sent: Friday,
All,
Got some personal work and will not be able to attend today's meeting. Will
sync up later with meeting minutes.
regards,
Raghavendra
Hi all,
In sync with our cleanup efforts, I think the following patches can be
abandoned. If any of you have concerns, please voice them and we can discuss.
https://review.gluster.org/11553
https://review.gluster.org/9623
https://review.gluster.org/5908
https://review.gluster.org/11734/
Hi all,
Each one of you has been mentioned as a Peer for one or more components that I
maintain [1] and is relatively new to the component. So, we need to come up
with ways of transferring knowledge effectively, enabling you to take
independent decisions on the issues concerned. Some of
- Original Message -
> From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> To: "Nigel Babu" <nig...@redhat.com>
> Cc: "GlusterFS Maintainers" <maintainers@gluster.org>, "Gluster Devel"
> <gluster-de...@glu
Thanks Nigel.
- Original Message -
> From: "Nigel Babu" <nig...@redhat.com>
> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
> Cc: "GlusterFS Maintainers" <maintainers@gluster.org>, "Gluster Devel"
> <gluster-de...@glu
A gentle reminder.
@everyone,
Do we have a process to be followed in this scenario?
regards,
Raghavendra
- Original Message -
> From: "Zhang Huan"
> To: "Raghavendra G"
> Cc: "GlusterFS Maintainers" , "Gluster Devel"
> , "Kaushal Madappa"
>
> Sent: Tuesday, May 30,
I missed providing the context. This activity is part of the cleanup drive
we've started. Amar had a list of older patches at [1].
[1] https://hackmd.io/GYIwhsEMYOwLQGYAMBOATHALCmm4oEZgBTOGYgVgA5qxiqqY0g==#
All,
What is the process to abandon patches owned by others? There are two parts to
this question (a sketch of Gerrit's CLI for abandoning follows this mail):
* Permissions - whom should I contact to abandon (as I can't abandon others'
patches)?
* Process - e.g., should the owner be notified, should we initiate a discussion
with them, etc.?
regards,
Raghavendra
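For reference, Gerrit's ssh CLI can abandon a change when the caller has the
required permissions. A minimal sketch; the username, ssh port, change number
and patchset are placeholders, not taken from this thread:

    # abandon change 12345 (patchset 1) with a note to the owner
    ssh -p 29418 <username>@review.gluster.org gerrit review --abandon \
        --message "'Abandoning as part of the stale-patch cleanup; please restore if still relevant.'" \
        12345,1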
the actual dates/schedule if this time is comfortable for the majority. I am
open to alternative time slots too.
regards,
Raghavendra
- Original Message -
> From: "Shyam Ranganathan"
> To: "Gluster Devel" , "GlusterFS Maintainers"
>
> Sent: Thursday, January 25, 2018 9:49:51 PM
> Subject: Re: [Gluster-Maintainers] [Gluster-devel] Release 4.0:
All,
I am trying to come up with content for release notes for 4.0 summarizing
performance impact. Can you point me to patches/documentation/issues/bugs
that could impact performance in 4.0? Better still, if you can give me a
summary of changes having a performance impact, it would really be
notes out shortly, suggest taking it to direct emails.
>
I added individual owners to the CC list. For some reason, they are not
reflected in the CC list, but I guess they would've received direct mails.
- amye
On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> I am facing a different issue on the softserve machines. The fuse mount
> itself is failing.
> I tried the day before yesterday to debug the geo-rep failures. I discussed
> with Raghu, but could not root-cause it.
>
On Fri, Aug 3, 2018 at 4:01 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi Du/Poornima,
>
> I was analysing bitrot and geo-rep failures and I suspect there is a bug
> in some perf xlator that was one of the causes. I was seeing the following
> behaviour in a few runs.
>
> 1. Geo-rep
On Thu, Aug 9, 2018 at 9:59 AM, Nigel Babu wrote:
> I would trust tooling that prevents merges rather than good faith.
>
+1
> I have worked on projects where we trust good faith, but still enforce that
> with tooling [1]. It's highly likely for one or two committers to be unaware
> of an ongoing
Failure is tracked by bz:
https://bugzilla.redhat.com/show_bug.cgi?id=1615096
Earlier, this test did the following things on M0 and M1, two mounts of the
same volume (a minimal sketch of these steps follows below):
1. create the file M0/testfile
2. open an fd on M0/testfile
3. remove the file from M1: M1/testfile
4. echo "data" >>
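Since step 4 is cut off above, here is a minimal bash sketch of the intended
flow, assuming $M0 and $M1 are two fuse mounts of the same volume and that the
final step writes through the fd opened in step 2:

    # 1. create the file via the first mount
    echo "initial" > "$M0/testfile"
    # 2. hold an open fd (fd 5 is arbitrary) on the same file
    exec 5>>"$M0/testfile"
    # 3. unlink the file via the second mount
    rm -f "$M1/testfile"
    # 4. write through the still-open fd; this should succeed even though the
    #    path is gone, since the fd keeps the inode alive
    echo "data" >&5
    exec 5>&-   # close the fd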
Failure of this test is tracked by bz
https://bugzilla.redhat.com/show_bug.cgi?id=1608158.
I was trying to debug regression failures on [1] and observed that
split-brain-resolution.t was failing consistently.
=
TEST 45 (line 88): 0 get_pending_heal_count patchy
The initial RCA, pointing out that commit 7131de81f72dda0ef685ed60d0887c6e14289b8c
caused the issue, was done by Nithya. The following was the conversation:
With the latest master, I created a single brick volume and some files
inside it.
[root@rhgs313-6 ~]# umount -f /mnt/fuse1; mount -t glusterfs
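The quoted command is cut off; a rough bash sketch of that kind of
reproduction, assuming a single-brick volume named patchy and a fuse mount at
/mnt/fuse1 (brick path and file names are illustrative, not from the original
mail):

    # create and start a single-brick volume
    gluster volume create patchy $(hostname):/bricks/patchy/brick force
    gluster volume start patchy
    # fuse-mount it and create some files
    mount -t glusterfs $(hostname):/patchy /mnt/fuse1
    touch /mnt/fuse1/file{1..10}
    # force-unmount and remount, as in the quoted command
    umount -f /mnt/fuse1
    mount -t glusterfs $(hostname):/patchy /mnt/fuse1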
ltins - Xavi and Mohit
* Improvements to glusterfind - Niklas Hambüchen, Milind and Aravinda V K
* Modification of Quick-read to consume upcall notifications - Poornima
* Exposing trickling-writes in write-behind - Csaba and Shreyas
* Changes to Purging landfill directory in storage/posix - Shrey
On Fri, Mar 2, 2018 at 11:01 AM, Ravishankar N
wrote:
>
> On 03/02/2018 10:11 AM, Ravishankar N wrote:
>
>> + Anoop.
>>
>> It looks like clients on the old (3.12) nodes are not able to talk to the
>> upgraded (4.0) node. I see messages like these on the old clients:
>>
>>
Proposing an effort to add metrics to various perf xlators. Following are
links to the relevant github issues:
https://github.com/gluster/glusterfs/issues/422
https://github.com/gluster/glusterfs/issues/423
https://github.com/gluster/glusterfs/issues/424
https://github.com/gluster/glusterfs/issues/425
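The linked issues aren't quoted here; for context, counters added inside perf
xlators typically surface in client statedumps. A rough illustration of
capturing one, assuming a fuse client for a volume named patchy (names are
placeholders):

    # find the fuse client process for the volume and ask it for a statedump
    pid=$(pgrep -f 'glusterfs.*patchy' | head -n 1)
    kill -USR1 "$pid"
    # dumps normally appear under /var/run/gluster/
    ls -lt /var/run/gluster/ | head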
I would like to turn features.sdfs on by default starting from 4.1. Details
can be found in issue:
https://github.com/gluster/glusterfs/issues/421
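Until the default changes, the option can be toggled per volume. A minimal
sketch, with "patchy" as a placeholder volume name:

    # enable serialized directory fops (sdfs) on a volume
    gluster volume set patchy features.sdfs on
    # and to turn it back off
    gluster volume set patchy features.sdfs off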
On Wed, Oct 10, 2018 at 8:30 PM Shyam Ranganathan
wrote:
> The following options were added post 4.1 and first appear in 5.0. They
> were added as part of bug fixes, and hence looking at github issues to
> track them as enhancements did not catch them.
>
> We
On Thu, Oct 11, 2018 at 9:16 PM Krutika Dhananjay
wrote:
>
>
> On Thu, Oct 11, 2018 at 8:55 PM Shyam Ranganathan
> wrote:
>
>> So we are through with a series of checks and tasks on release-5 (like
>> ensuring all backports to other branches are present in 5, upgrade
>> testing, basic
On Thu, Mar 21, 2019 at 4:16 PM Atin Mukherjee wrote:
> All,
>
> In the last few releases of glusterfs, with stability as a primary theme
> of the releases, there have been lots of changes around code optimization,
> with the expectation that such changes will help gluster provide better
client crashing every few days with 'Failed to
> dispatch handler'
> Assignee: Du
> @du, we are still waiting to get
> https://review.gluster.org/c/glusterfs/+/22189 merged, right?
>
Yes. I'll be refreshing the patch today.
> Shyam
> On 2/13/19 4:09 AM, Raghavendra Gowda
>>
>> All patches against the bug are merged, but the bug remains in POST state,
>> as none of the patches claim that the issue "Fixes" the reported problem.
>>
>> Are we awaiting more patches for the same? Du/Milind/Nithya?
>>
>
> The crash fixes - this
I just found a fix for https://bugzilla.redhat.com/show_bug.cgi?id=1674412.
Since it's a deadlock, I am wondering whether this should be in 6.0. What do
you think?
On Tue, Mar 5, 2019 at 11:47 PM Shyam Ranganathan
wrote:
> Hi,
>
> Release-6 was to be an early March release, and due to finding
On Wed, Feb 13, 2019 at 2:24 PM Nithya Balachandran
wrote:
> Adding Raghavendra G and Milind who are working on the patches so they can
> update on when they should be ready.
>
> On Fri, 8 Feb 2019 at 20:26, Shyam Ranganathan
> wrote:
>
>> Hi,
>>
>> There have been several crashes and issues