On 11/25/2014 06:45 PM, Xavier Hernandez wrote:
On 11/25/2014 02:25 PM, Emmanuel Dreyfus wrote:
On Tue, Nov 25, 2014 at 01:42:21PM +0100, Xavier Hernandez wrote:
It seems to fail only on NetBSD. I'm not sure what priority it has; Emmanuel
is trying to create a regression test for new patches.
- Original Message -
From: Xavier Hernandez xhernan...@datalab.es
To: Emmanuel Dreyfus m...@netbsd.org
Cc: Raghavendra Gowdappa rgowd...@redhat.com, Gluster Devel
gluster-devel@gluster.org
Sent: Wednesday, November 26, 2014 2:05:58 PM
Subject: Re: Wrong behavior on fsync of
Hi All,
We are planning to change the volume status command to show the RDMA port for
tcp,rdma volumes. We have four output designs in mind; they are:
1) Modify the Port column to TCP,RDMA Ports
Eg:
Status of volume: xcube
Gluster process    TCP,RDMA Port    Online    Pid
Hi Luis,
We worked with Jens Axboe for a little while to try to merge things, but then
just got busy testing distributed file systems as opposed to raw storage.
We had an email in 2012 from
I encountered a couple of segfaults when modifying the sample configuration
file.
I've thought to
On 11/26/2014 08:19 AM, Mohammed Rafi K C wrote:
This is indeed a misuse. A very similar bug used to exist in io-threads,
but we moved to using pthread_cond there a while ago.
Anand Avati av...@gluster.org wrote:
To fix this problem we could use a pthread_mutex/pthread_cond pair + a
boolean flag in place of the misused mutex. Or, we could just declare
gd_op_sm_lock as a synclock_t to achieve the same result.
http://review.gluster.org/9197 passed regression and is
Emmanuel,
Could you explain which sequence of function calls lead to
mutex lock and mutex unlock being called by different threads?
(POSIX leaves unlocking a mutex from a thread that does not hold it undefined.)
Meanwhile, I am trying to find one such sequence to understand
the problem better.
FWIW, glusterd_do_replace_brick is injecting an event into
the state machine
Thanks Emmanuel. At around the same time we managed to find the sequence of
function calls that could lead to this. Since the rpc program handling
LOCK/STAGE/COMMIT/UNLOCK requests from other peers invokes the corresponding
handler function in a synctask, I am inclined to use synclock_t in
On Thu, Nov 27, 2014 at 01:42:33AM -0500, Krishnan Parthasarathi wrote: