Re: [Gluster-devel] Wrong behavior on fsync of md-cache ?

2014-11-26 Thread Xavier Hernandez
On 11/25/2014 06:45 PM, Xavier Hernandez wrote: On 11/25/2014 02:25 PM, Emmanuel Dreyfus wrote: On Tue, Nov 25, 2014 at 01:42:21PM +0100, Xavier Hernandez wrote: It seems to fail only in NetBSD. I'm not sure what priority it has. Emmanuel is trying to create a regression test for new patches

Re: [Gluster-devel] Wrong behavior on fsync of md-cache ?

2014-11-26 Thread Raghavendra Gowdappa
- Original Message - From: Xavier Hernandez xhernan...@datalab.es To: Emmanuel Dreyfus m...@netbsd.org Cc: Raghavendra Gowdappa rgowd...@redhat.com, Gluster Devel gluster-devel@gluster.org Sent: Wednesday, November 26, 2014 2:05:58 PM Subject: Re: Wrong behavior on fsync of

[Gluster-devel] Need suggestion to change output of volume status command

2014-11-26 Thread Mohammed Rafi K C
Hi All, We are planning to change the volume status command to show the RDMA port for tcp,rdma volumes. We have four output designs in mind; they are: 1) Modify the Port column as TCP,RDMA Ports. Eg: Status of volume: xcube Gluster process  TCP,RDMA Port  Online  Pid
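
As a rough mock-up of what design 1 might render (the hostname, brick path, port numbers and pid below are invented for illustration, not taken from the proposal):

    Status of volume: xcube
    Gluster process                      TCP,RDMA Port   Online  Pid
    ------------------------------------------------------------------
    Brick node1:/bricks/xcube-b1         49152,49153     Y       1234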

Re: [Gluster-devel] Open source SPC-1 Workload IO Pattern

2014-11-26 Thread Michael O'Sullivan
Hi Luis, We worked with Jens Axboe for a little bit to try and merge things but then just got busy testing distributed file systems as opposed to raw storage. We had an email in 2012 from … I encountered a couple of segfaults when modifying the sample configuration file. I've thought to

Re: [Gluster-devel] Need suggestion to change output of volume status command

2014-11-26 Thread Shyam
On 11/26/2014 08:19 AM, Mohammed Rafi K C wrote: Hi All, We are planning to change the volume status command to show the RDMA port for tcp,rdma volumes. We have four output designs in mind; they are: 1) Modify the Port column as TCP,RDMA Ports. Eg: Status of volume: xcube Gluster process

Re: [Gluster-devel] pthread_mutex misusage in glusterd_op_sm

2014-11-26 Thread Anand Avati
This is indeed a misuse. A very similar bug used to be there in io-threads, but we moved to using pthread_cond there a while ago. To fix this problem we could use a pthread_mutex/pthread_cond pair + a boolean flag in place of the misused mutex. Or, we could just declare gd_op_sm_lock
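
A minimal sketch of the mutex/cond/flag idea Avati describes (hypothetical type and helper names, not the actual GlusterFS patch): the trio behaves like a binary semaphore that any thread may release, which POSIX forbids for a plain pthread_mutex_t.

    #include <pthread.h>

    typedef struct {
            pthread_mutex_t mutex;
            pthread_cond_t  cond;
            int             busy;   /* the boolean flag */
    } sm_lock_t;

    static sm_lock_t gd_op_sm_lock = {
            PTHREAD_MUTEX_INITIALIZER,
            PTHREAD_COND_INITIALIZER,
            0
    };

    /* Acquire: wait until the flag is clear, then set it. */
    static void
    sm_lock (sm_lock_t *l)
    {
            pthread_mutex_lock (&l->mutex);
            while (l->busy)
                    pthread_cond_wait (&l->cond, &l->mutex);
            l->busy = 1;
            pthread_mutex_unlock (&l->mutex);
    }

    /* Release: may legally be called from a thread other than
     * the one that called sm_lock(). */
    static void
    sm_unlock (sm_lock_t *l)
    {
            pthread_mutex_lock (&l->mutex);
            l->busy = 0;
            pthread_cond_signal (&l->cond);
            pthread_mutex_unlock (&l->mutex);
    }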

Re: [Gluster-devel] pthread_mutex misusage in glusterd_op_sm

2014-11-26 Thread Emmanuel Dreyfus
Anand Avati av...@gluster.org wrote: To fix this problem we could use a pthread_mutex/pthread_cond pair + a boolean flag in place of the misused mutex. Or, we could just declare gd_op_sm_lock as a synclock_t to achieve the same result. http://review.gluster.org/9197 passed regression and is

Re: [Gluster-devel] pthread_mutex misusage in glusterd_op_sm

2014-11-26 Thread Krishnan Parthasarathi
Emmanuel, Could you explain which sequence of function calls leads to mutex lock and mutex unlock being called by different threads? Meanwhile, I am trying to find one such sequence to understand the problem better. FWIW, glusterd_do_replace_brick is injecting an event into the state machine
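
Incidentally, an error-checking mutex makes such cross-thread unlocks visible at runtime instead of silent undefined behavior. A small standalone demo (hypothetical, not GlusterFS code):

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static pthread_mutex_t lock;

    static void *
    unlocker (void *arg)
    {
            /* This thread does not own the mutex: with
             * PTHREAD_MUTEX_ERRORCHECK, unlock fails with EPERM. */
            int ret = pthread_mutex_unlock (&lock);
            printf ("unlock from other thread: %s\n", strerror (ret));
            return NULL;
    }

    int
    main (void)
    {
            pthread_mutexattr_t attr;
            pthread_t t;

            pthread_mutexattr_init (&attr);
            pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_ERRORCHECK);
            pthread_mutex_init (&lock, &attr);

            pthread_mutex_lock (&lock);     /* main thread owns it */
            pthread_create (&t, NULL, unlocker, NULL);
            pthread_join (t, NULL);         /* prints "Operation not permitted" */
            pthread_mutex_unlock (&lock);
            return 0;
    }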

Re: [Gluster-devel] pthread_mutex misusage in glusterd_op_sm

2014-11-26 Thread Krishnan Parthasarathi
Thanks Emmanuel. At about the same time we managed to find the sequence of function calls that could lead to this. Since the rpc program handling LOCK/STAGE/COMMIT/UNLOCK requests from other peers invokes the corresponding handler function in a synctask, I am inclined to use synclock_t in
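
For reference, the synclock variant would look something like the sketch below (assuming the synclock_init/synclock_lock/synclock_unlock API from libglusterfs syncop.h of that era; the wrapper function names are invented here and exact signatures may differ). A synclock tracks ownership by synctask rather than by thread, so acquire and release may run on different worker threads:

    #include "syncop.h"

    static synclock_t gd_op_sm_lock;

    int
    gd_op_sm_lock_init (void)
    {
            return synclock_init (&gd_op_sm_lock);
    }

    void
    gd_op_sm_enter (void)
    {
            synclock_lock (&gd_op_sm_lock);
    }

    void
    gd_op_sm_leave (void)
    {
            synclock_unlock (&gd_op_sm_lock);
    }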

Re: [Gluster-devel] pthread_mutex misusage in glusterd_op_sm

2014-11-26 Thread Emmanuel Dreyfus
On Thu, Nov 27, 2014 at 01:42:33AM -0500, Krishnan Parthasarathi wrote: Thanks Emmanuel. At about the same time we managed to find the sequence of function calls that could lead to this. Since the rpc program handling LOCK/STAGE/COMMIT/UNLOCK requests from other peers invokes the