- Original Message -
> From: "Raghavendra Gowdappa"
> To: "Shyam"
> Cc: gluster-devel@gluster.org
> Sent: Tuesday, May 19, 2015 11:49:56 AM
> Subject: Re: [Gluster-devel] Moratorium on new patch acceptance
- Original Message -
> From: "Raghavendra Gowdappa"
> To: "Shyam"
> Cc: gluster-devel@gluster.org
> Sent: Tuesday, May 19, 2015 11:46:19 AM
> Subject: Re: [Gluster-devel] Moratorium on new patch acceptance
- Original Message -
> From: "Shyam"
> To: gluster-devel@gluster.org
> Sent: Tuesday, May 19, 2015 6:13:06 AM
> Subject: Re: [Gluster-devel] Moratorium on new patch acceptance
On Tuesday 19 May 2015 06:13 AM, Shyam wrote:
On 05/18/2015 07:05 PM, Shyam wrote:
On 05/18/2015 03:49 PM, Shyam wrote:
On 05/18/2015 10:33 AM, Vijay Bellur wrote:
The etherpad did not call out ./tests/bugs/distribute/bug-1161156.t,
which did not have an owner, so I took a stab at it and below are
the results.
I also think the failure in ./tests/bugs/quota/bug-1038598.t is th
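The .t files named above are TAP-emitting shell scripts run through Gluster's harness (prove, via run-tests.sh). As a standalone illustration of that mechanism only (this is not a real Gluster test, and the file name is made up), the shape of such a test can be sketched as:

```shell
#!/bin/sh
# Sketch: Gluster's tests/*.t regression tests emit TAP (Test Anything
# Protocol) output, which the harness consumes. This standalone stand-in
# mimics that shape; it is NOT a real Gluster test.

tmp_t=$(mktemp /tmp/tap-sketch.XXXXXX)   # hypothetical stand-in for a .t file

cat > "$tmp_t" <<'EOF'
echo "1..2"                                            # plan: two checks
[ -d /tmp ] && echo "ok 1 /tmp exists"   || echo "not ok 1 /tmp exists"
[ -w /tmp ] && echo "ok 2 /tmp writable" || echo "not ok 2 /tmp writable"
EOF

# In the real tree this would be: prove -vf tests/bugs/<area>/<bug>.t
output=$(sh "$tmp_t")
echo "$output"
rm -f "$tmp_t"
```

A failing check prints `not ok N`, which is what the harness counts when it reports a test as failed.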
On 05/18/2015 10:33 AM, Vijay Bellur wrote:
On 05/16/2015 03:34 PM, Vijay Bellur wrote:
I will send daily status updates from Monday (05/18) about this so that
we are clear about where we are and what needs to be done to remove this
moratorium. Appreciate your help in having a clean set of regression
tests going forward!
Here is the issue:
Locking on a volume fails with the following error:
[2015-05-18 09:47:56.038463] E
[glusterd-syncop.c:562:_gd_syncop_mgmt_lock_cbk] 0-management: Could not
find peer with ID 70e65fb9-cc9d-16ba-a4f4-5fb90100
[2015-05-18 09:47:56.038527] E [glusterd-syncop.c:111:gd_collate_e
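A quick way to triage such failures is to pull the cluster-lock errors out of the glusterd log. The sketch below is self-contained: it builds a small sample log from the first (complete) line quoted above plus one clearly made-up informational line, then filters it; on a real node you would grep the actual glusterd log instead.

```shell
#!/bin/sh
# Sketch: scan a glusterd log for cluster-lock errors like the one above.
# The sample file here stands in for the real log on a node
# (commonly somewhere under /var/log/glusterfs/ -- an assumption).

log=$(mktemp /tmp/glusterd-log.XXXXXX)
cat > "$log" <<'EOF'
[2015-05-18 09:47:50.000000] I [sample.c:1:sample_fn] 0-management: informational line (made up for this sketch)
[2015-05-18 09:47:56.038463] E [glusterd-syncop.c:562:_gd_syncop_mgmt_lock_cbk] 0-management: Could not find peer with ID 70e65fb9-cc9d-16ba-a4f4-5fb90100
EOF

# Keep only lock-callback errors / unknown-peer errors:
matches=$(grep -E '_gd_syncop_mgmt_lock_cbk|Could not find peer' "$log")
echo "$matches"
rm -f "$log"
```

The unknown peer ID in the output is the lead to chase: compare it against the UUIDs in `gluster peer status` on each node.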
> > ./tests/bugs/glusterd/bug-974007.t
> I looked at the core generated by this test and it turned out to be a mem
> pool corruption. I will continue to investigate this and keep you posted.
Thank you. It looks like we have another generic memory-management problem
that has surfaced more than once.
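For cores like the one mentioned above, a first-pass triage is to pull a backtrace with gdb. The binary name and core-file naming below are assumptions (adjust for the actual build, e.g. the install path used by the regression setup); the script simply reports when no core is present.

```shell
#!/bin/sh
# Sketch: print a backtrace for each core file found in the current
# directory. The glusterfsd binary name and the core/core.* naming
# convention are assumptions about the local setup.

found=0
for core in core core.*; do
    [ -f "$core" ] || continue
    found=1
    echo "=== $core ==="
    # -batch: run the listed commands and exit; 'bt' prints the
    # crashing thread's stack
    gdb -batch -ex "bt" glusterfsd "$core" 2>/dev/null
done

if [ "$found" -eq 0 ]; then
    echo "no core files found in $(pwd)"
fi
```

For suspected mem-pool corruption, the frames around the allocator in that backtrace are usually where to start reading.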
On 18 May 2015 20:03, "Vijay Bellur" wrote:
>
> On 05/16/2015 03:34 PM, Vijay Bellur wrote:
>
>>
>> I will send daily status updates from Monday (05/18) about this so that
>> we are clear about where we are and what needs to be done to remove this
>> moratorium. Appreciate your help in having a clean set of regression
>> tests going forward!
On 05/16/2015 03:34 PM, Vijay Bellur wrote:
I will send daily status updates from Monday (05/18) about this so that
we are clear about where we are and what needs to be done to remove this
moratorium. Appreciate your help in having a clean set of regression
tests going forward!
We have made
Hi All,
GlusterFS 3.7.0 RPMs for *RHEL, CentOS, Fedora* and packages for *Debian*
are available at download.gluster.org [1].
[1] http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.0/
--Humble
On Thu, May 14, 2015 at 2:49 PM, Vijay Bellur wrote:
>
> Hi All,
>
> I am happy to announce t
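On CentOS/RHEL, one way to consume these packages is a yum repo file pointing at [1]. The directory layout in the baseurl below is an assumption extrapolated from the announcement link, not something stated in the thread; verify the real paths under download.gluster.org before use.

```shell
#!/bin/sh
# Sketch: write a yum .repo file for the 3.7.0 packages. On a real host
# this file would live in /etc/yum.repos.d/. The EPEL.repo/epel-N layout
# is an assumption; gpgcheck is disabled only to keep the sketch short --
# real deployments should verify package signatures.

repo_file=$(mktemp /tmp/glusterfs-370.repo.XXXXXX)

cat > "$repo_file" <<'EOF'
[glusterfs-3.7.0]
name=GlusterFS 3.7.0
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.0/EPEL.repo/epel-$releasever/$basearch/
gpgcheck=0
enabled=1
EOF

cat "$repo_file"
```

After dropping the file into /etc/yum.repos.d/, `yum install glusterfs-server` would pull from this repo.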
On 17 May 2015, at 13:36, Vijay Bellur wrote:
> On 05/17/2015 02:32 PM, Vijay Bellur wrote:
>> [Adding gluster-devel]
>> On 05/16/2015 11:31 PM, Niels de Vos wrote:
>>> On Sat, May 16, 2015 at 06:32:00PM +0200, Niels de Vos wrote:
It seems that many failures of the regression tests (at least
On 05/18/2015 02:15 PM, Emmanuel Dreyfus wrote:
> On Mon, May 18, 2015 at 01:48:52AM -0400, Krishnan Parthasarathi wrote:
>> I am not sure why volume-status isn't working.
>
> My understanding is that glusterd considers a lock is held by the
> NFS component, while it is not started.
No that's no
On Mon, May 18, 2015 at 01:48:52AM -0400, Krishnan Parthasarathi wrote:
> I am not sure why volume-status isn't working.
My understanding is that glusterd considers a lock is held by the
NFS component, while it is not started.
--
Emmanuel Dreyfus
m...@netbsd.org
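Since the suspicion above is that the lock is attributed to a gluster NFS server that never started, a first check is whether that process is actually up on the node. The pgrep pattern below is an assumption about how the process is named; on a live cluster `gluster volume status <volname> nfs` would give the authoritative answer.

```shell
#!/bin/sh
# Sketch: check whether a gluster NFS server process is running locally.
# The 'glusterfs.*nfs' command-line pattern is an assumption about the
# process naming on the affected nodes.

if pgrep -f 'glusterfs.*nfs' >/dev/null 2>&1; then
    nfs_state="running"
else
    nfs_state="not running"
fi
echo "gluster NFS server: $nfs_state"
```

If the process is not running yet glusterd still reports its lock as held, that mismatch is consistent with the stale-lock theory above.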
Thanks Avra for the confirmation.
Regards,
Shubhendu
On 05/18/2015 02:04 PM, Avra Sengupta wrote:
With this option, the volume will be created and explicitly mounted on
all the nodes, which are currently a part of the cluster. Please note
that new nodes added to the cluster will not have the meta volume
mounted explicitly.
With this option, the volume will be created and explicitly mounted on
all the nodes, which are currently a part of the cluster. Please note
that new nodes added to the cluster will not have the meta volume
mounted explicitly.
So in a case where the console tries to use the volume from a peer
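The meta volume Avra describes is mounted at a fixed path on each node that was in the cluster when it was created, so a node-local sanity check could look like the sketch below. The mount point /run/gluster/shared_storage is an assumption about the feature's default, not something stated in this thread.

```shell
#!/bin/sh
# Sketch: verify the meta (shared storage) volume is mounted on this node.
# Mount point is an assumed default; nodes added to the cluster later
# will fail this check until the volume is mounted on them explicitly.

mountpoint=/run/gluster/shared_storage

if grep -qs "$mountpoint" /proc/mounts; then
    meta_state="mounted"
else
    meta_state="not mounted"
fi
echo "meta volume at $mountpoint: $meta_state"
```

A console that wants to use the volume from an arbitrary peer would need to run a check like this (and mount if needed) before relying on the path.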