Re: [Gluster-devel] Developer Documentation for datastructures in gluster

2014-07-16 Thread Kaushal M
I'll take up documenting the options framework. I'd like to take up graph
and dict, if Jeff doesn't mind.

Also, I think we should be aiming to document the complete API
provided by these components instead of just the data structure. That
would be more helpful to everyone IMO.
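
To illustrate the difference, here is a minimal sketch of the kind of
usage an API-level document for dict would walk through. It builds
against libglusterfs; the call names are as found in
libglusterfs/src/dict.h of this era, but treat the exact signatures as
illustrative rather than authoritative:

/* Basic dict usage: create, set, get, unref. Builds against
 * libglusterfs; signatures per libglusterfs/src/dict.h (illustrative). */
#include "dict.h"

int
example (void)
{
        dict_t *opts = dict_new ();   /* refcounted key/value store */
        char   *val  = NULL;
        int     ret  = -1;

        if (!opts)
                goto out;

        ret = dict_set_str (opts, "transport", "tcp");
        if (ret)
                goto out;

        ret = dict_get_str (opts, "transport", &val);
        if (ret)
                goto out;
        /* val points at the stored string, owned by the dict */

        ret = 0;
out:
        if (opts)
                dict_unref (opts);    /* drop our reference */
        return ret;
}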

~kaushal

On Wed, Jul 16, 2014 at 11:21 AM, Raghavendra Gowdappa
rgowd...@redhat.com wrote:
 syncop-framework is not listed here. I would like to take that up. Also, if
 nobody is willing to pick up the runner framework, I can handle that too.

 - Original Message -
 From: Krutika Dhananjay kdhan...@redhat.com
 To: Pranith Kumar Karampuri pkara...@redhat.com
 Cc: Gluster Devel gluster-devel@gluster.org
 Sent: Wednesday, July 16, 2014 10:41:28 AM
 Subject: Re: [Gluster-devel] Developer Documentation for datastructures in gluster

 Hi,

 I'd like to pick up timer and call-stub.

 -Krutika




 From: Pranith Kumar Karampuri pkara...@redhat.com
 To: Gluster Devel gluster-devel@gluster.org
 Sent: Tuesday, July 15, 2014 4:39:39 PM
 Subject: [Gluster-devel] Developer Documentation for datastructures in
 gluster

 hi,
 Please respond if you guys volunteer to add documentation for any
 of the following things that are not already taken.

 client_t - pranith
 integration with statedump - pranith
 mempool - Pranith

 event-history + circ-buff - Raghavendra Bhat
 inode - Raghavendra Bhat

 call-stub
 fd
 iobuf
 graph
 xlator
 option-framework
 rbthash
 runner-framework
 stack/frame
 strfd
 timer
 store
 gid-cache (source is heavily documented)
 dict
 event-poll

 Pranith
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel


 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Developer Documentation for datastructures in gluster

2014-07-16 Thread Niels de Vos
On Tue, Jul 15, 2014 at 04:39:39PM +0530, Pranith Kumar Karampuri wrote:
 hi,
   Please respond if you guys volunteer to add documentation for
 any of the following things that are not already taken.
 
 client_t - pranith
 integration with statedump - pranith
 mempool - Pranith
 
 event-history + circ-buff - Raghavendra Bhat
 inode - Raghavendra Bhat
 
 call-stub
 fd
 iobuf
 graph
 xlator
 option-framework
 rbthash
 runner-framework
 stack/frame
 strfd
 timer
 store

I'll take the store part.

Niels

 gid-cache (source is heavily documented)
 dict
 event-poll
 
 Pranith
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Developer Documentation for datastructures in gluster

2014-07-16 Thread Pranith Kumar Karampuri


On 07/16/2014 11:57 AM, Kaushal M wrote:

I'll take up documenting the options framework. I'd like to take up graph
and dict, if Jeff doesn't mind.

Also, I think we should be aiming to document the complete API
provided by these components instead of just the data structure. That
would be more helpful to everyone IMO.

Yes. Will keep that in mind while writing the documentation :-)

Pranith


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glustershd status

2014-07-16 Thread Emmanuel Dreyfus
Harshavardhana har...@harshavardhana.net wrote:

 It's pretty much the same on FreeBSD; I didn't spend much time debugging
 it. Let me do it right away and let you know what I find.

Right. Once you have this one, I have Linux-specific truncate and
md5sum replacements to contribute. I am not sending them now since I
cannot test them.


-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Fwd: Re: can not build glusterfs3.5.1 on solaris because of missing sys/cdefs.h

2014-07-16 Thread Anand Avati
Copying gluster-devel@

Thanks for reporting this, Michael. I guess we need to forward-port that
old change. Can you please send out a patch to Gerrit?

Thanks!

On 7/16/14, 2:36 AM, 马忠 wrote:
 Hi Avati,
 
    I tried to build the latest glusterfs 3.5.1 on Solaris 11.1, but it
 stopped because of a missing sys/cdefs.h.
 
 I checked the ChangeLog and found that commit
 a5301c874f978570187c3543b0c3a4ceba143c25 had once solved such a problem
 in the obsolete file libglusterfsclient/src/libglusterfsclient.h. I
 don't understand why it appeared again in the later-added file
 api/src/glfs.h. Can you give me any suggestions about this problem?
 Thanks.
 
 --
 
 [root@localhost glusterfs]# git show a5301c874f978570187c3543b0c3a4ceba143c25
 
 commit a5301c874f978570187c3543b0c3a4ceba143c25
 Author: Anand V. Avati av...@amp.gluster.com
 Date:   Mon May 18 17:24:16 2009 +0530
 
     workaround for not including sys/cdefs.h -- including sys/cdefs.h
     breaks build on solaris and other platforms
 
 diff --git a/libglusterfsclient/src/libglusterfsclient.h b/libglusterfsclient/src/libglusterfsclient.h
 index 1c2441b..5376985 100755
 --- a/libglusterfsclient/src/libglusterfsclient.h
 +++ b/libglusterfsclient/src/libglusterfsclient.h
 @@ -20,7 +20,22 @@
  #ifndef _LIBGLUSTERFSCLIENT_H
  #define _LIBGLUSTERFSCLIENT_H
 -#include <sys/cdefs.h>
 +#ifndef __BEGIN_DECLS
 +#ifdef __cplusplus
 +#define __BEGIN_DECLS extern "C" {
 +#else
 -
 
 root@solaris:~/glusterfs-3.5.1# gmake
 
 gmake[3]: Entering directory `/root/glusterfs-3.5.1/api/src'
   CC libgfapi_la-glfs.lo
 In file included from glfs.c:50:
 glfs.h:41:23: sys/cdefs.h: No such file or directory
 In file included from glfs.c:50:
 glfs.h:57: error: syntax error before "struct"
 In file included from glfs.c:51:
 glfs-internal.h:57: error: syntax error before "struct"
 gmake[3]: *** [libgfapi_la-glfs.lo] Error 1
 gmake[3]: Leaving directory `/root/glusterfs-3.5.1/api/src'
 gmake[2]: *** [all-recursive] Error 1
 gmake[2]: Leaving directory `/root/glusterfs-3.5.1/api'
 gmake[1]: *** [all-recursive] Error 1
 gmake[1]: Leaving directory `/root/glusterfs-3.5.1'
 gmake: *** [all] Error 2
 
 --
 
 Thanks in advance,
 
  Michael
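
For reference, since the diff in the quote above is truncated: the
conventional complete form of this portability guard - reconstructed
here from the well-known idiom, so verify against commit a5301c87
before forward-porting it to api/src/glfs.h - looks like this:

/* Portable stand-in for the __BEGIN_DECLS/__END_DECLS macros that
 * sys/cdefs.h provides on glibc systems, for platforms (Solaris and
 * others) that lack that header. Reconstructed from the common idiom;
 * check the actual commit before reusing. */
#ifndef __BEGIN_DECLS
#ifdef __cplusplus
#define __BEGIN_DECLS extern "C" {
#define __END_DECLS   }
#else
#define __BEGIN_DECLS
#define __END_DECLS
#endif
#endif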
 



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glustershd status

2014-07-16 Thread Harshavardhana
So here is what I found - long email, please bear with me.

Looks like the management daemon and these other daemons

(e.g. brick, nfs-server, gluster self-heal daemon)

work in a non-blocking manner, notifying the Gluster management daemon
when they are available and when they are not. This is done through a
notify() callback mechanism.

A registered notify() handler is supposed to call setter() functions
which update the state of the notified instance within the Gluster
management daemon.

Taking the self-heal daemon as an example:

conf->shd->online is the primary value which should be set during this
notify callback, where the self-heal daemon informs the Gluster
management daemon of its availability - this happens during an
RPC_CLNT_CONNECT event.

During this event glusterd_nodesvc_set_online_status() sets all the
necessary state online/offline.
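
To make the mechanism concrete, here is a minimal, self-contained
sketch of the notify()/setter pattern described above. The event names
mirror gluster's rpc-client events, but the types and the
set_online_status() helper are simplified stand-ins, not glusterd's
actual code:

#include <stdio.h>
#include <stdbool.h>

typedef enum { RPC_CLNT_CONNECT, RPC_CLNT_DISCONNECT } rpc_clnt_event_t;

struct nodesvc { const char *name; bool online; };

/* setter: records the daemon's availability inside the management daemon */
static void
set_online_status (struct nodesvc *svc, bool status)
{
        svc->online = status;
        printf ("%s is now %s\n", svc->name, status ? "online" : "offline");
}

/* notify() callback: invoked when the rpc connection to the daemon
 * changes state; it translates the event into a status update */
static int
notify (struct nodesvc *svc, rpc_clnt_event_t event)
{
        switch (event) {
        case RPC_CLNT_CONNECT:
                set_online_status (svc, true);   /* daemon became reachable */
                break;
        case RPC_CLNT_DISCONNECT:
                set_online_status (svc, false);  /* daemon went away */
                break;
        }
        return 0;
}

int
main (void)
{
        struct nodesvc shd = { "self-heal daemon", false };
        /* until the first CONNECT event is delivered, 'volume status'
         * would report the daemon offline - the window described below */
        notify (&shd, RPC_CLNT_CONNECT);
        return 0;
}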

What happens on FreeBSD/NetBSD is that this notify event doesn't occur
at all, for some odd reason - there is in fact a first notify() event,
but it sets the value to offline, i.e. status == 0 (gf_boolean_t ==
_gf_false).

In fact this is true on Linux as well - there is just a smaller time
window. Observe the output below: run 'volume status' immediately after
a 'volume start' event.

# gluster volume status
Status of volume: repl
Gluster process                         Port    Online  Pid
--
Brick 127.0.1.1:/d/backends/brick1  49152   Y   29082
Brick 127.0.1.1:/d/backends/brick2  49153   Y   29093
NFS Server on localhost N/A N   N/A
Self-heal Daemon on localhost   N/A N   N/A

Task Status of Volume repl
--
There are no active volume tasks

Both of these commands are one second apart:

# gluster volume status
Status of volume: repl
Gluster process                         Port    Online  Pid
--
Brick 127.0.1.1:/d/backends/brick1  49152   Y   29082
Brick 127.0.1.1:/d/backends/brick2  49153   Y   29093
NFS Server on localhost 2049Y   29115
Self-heal Daemon on localhost   N/A Y   29110

Task Status of Volume repl
--
There are no active volume tasks

So the change does happen, but sadly it doesn't happen on non-Linux
platforms. My general speculation is that this is related to
poll()/epoll() - I have to debug this further.

In fact, restarting the Gluster management daemon fixes these issues,
which is understandable :-)

On Wed, Jul 16, 2014 at 9:41 AM, Emmanuel Dreyfus m...@netbsd.org wrote:
 Harshavardhana har...@harshavardhana.net wrote:

  It's pretty much the same on FreeBSD; I didn't spend much time debugging
  it. Let me do it right away and let you know what I find.

 Right. Once you have this one, I have Linux-specific truncate and
 md5sum replacements to contribute. I am not sending them now since I
 cannot test them.


 --
 Emmanuel Dreyfus
 http://hcpnet.free.fr/pubz
 m...@netbsd.org



-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] spurious regression failures again!

2014-07-16 Thread Joseph Fernandes
Hi Avra,

Just clarifying things here:

1) When testing with the setup provided by Justin, I found that the only
place where bug-1112559.t failed was after a failure of mgmt_v3-locks.t in
the previous regression run. The mail attached to my previous mail was just
an OBSERVATION and NOT an INFERENCE that the failure of mgmt_v3-locks.t was
the root cause of the bug-1112559.t failure. I am NOT jumping the gun and
making any statement/conclusion here; it's just an OBSERVATION. And thanks
for the clarification on why mgmt_v3-locks.t is failing.

2) I agree with you that the cleanup script needs to kill all gluster*
processes, and it's also true that the port range used by gluster for
bricks is unique. But bug-1112559.t fails only because of the
unavailability of a port to start the snap brick. This suggests that some
process (gluster or non-gluster) is still using the port (see the
port-check sketch after point 3).

3) Finally, it is not true that bug-1112559.t always fails on its own:
looking into the links you provided, there are cases with previous failures
of other test cases on the same test machine (slave26). By this I am not
claiming that those failures are the root cause of the bug-1112559.t
failure; as stated earlier, it is a notable OBSERVATION (keeping in mind
point 2 about ports and cleanup).
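
To illustrate what the unavailability of a port looks like, here is a
minimal, self-contained C check - illustrative only, not part of the
test suite - that reports whether a given port can still be bound:

/* Tries to bind the port a snap brick would use; if bind() fails the
 * port is still held by some process. Illustrative only. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int
port_is_free (int port)
{
        struct sockaddr_in addr;
        int sock = socket (AF_INET, SOCK_STREAM, 0);
        if (sock < 0)
                return -1;

        memset (&addr, 0, sizeof (addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl (INADDR_ANY);
        addr.sin_port        = htons (port);

        int ret = bind (sock, (struct sockaddr *) &addr, sizeof (addr));
        close (sock);
        return ret == 0;  /* 1 = free, 0 = still in use */
}

int
main (void)
{
        int port = 49152;  /* first port of the usual brick range */
        int avail = port_is_free (port);
        if (avail < 0)
                perror ("socket");
        else
                printf ("port %d is %s\n", port, avail ? "free" : "in use");
        return 0;
}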

I have run nearly 30 runs on slave30, and bug-1112559.t failed only once
(as stated in point 1). I am continuing to run more runs. The only problem
is that the bug-1112559.t failure is spurious and there is no deterministic
way of reproducing it.

Will keep all posted about the results.

Regards,
Joe



- Original Message -
From: Avra Sengupta aseng...@redhat.com
To: Joseph Fernandes josfe...@redhat.com, Pranith Kumar Karampuri 
pkara...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, Varun Shastry 
vshas...@redhat.com, Justin Clift jus...@gluster.org
Sent: Wednesday, July 16, 2014 1:03:21 PM
Subject: Re: [Gluster-devel] spurious regression failures again!

Joseph,

I am not sure I understand how this is affecting the spurious failure of
bug-1112559.t. As per the mail you attached, and according to your
analysis, bug-1112559.t fails because a cleanup hasn't happened properly
after a previous test case failed - and in your case there was a crash as
well.

Now, out of all the times bug-1112559.t has failed, most of the time it is
the only test case failing and there isn't any crash. Below are the
regression runs that Pranith had sent for the same.

http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull

http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull

http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull

http://build.gluster.org/job/rackspace-regression-2GB/543/console

In all of the above, bug-1112559.t is the only test case that fails, and
there is no crash.

So what I fail to understand is this: if this particular test case fails
independently as well as with other test cases, how can we conclude that
some other test case is not cleaning up properly and that this is the
reason for bug-1112559.t failing?

mgmt_v3-locks.t fails because glusterd takes more time to register a node
going down, and hence peer status doesn't return what the test case
expects. It's a race. The test case ends with a cleanup routine, like
every other test case, that kills all gluster and glusterfsd processes
which might be using any brick ports. So could you please explain how, or
which, process still uses the brick ports that the snap bricks are trying
to use, leading to the failure of bug-1112559.t?

Regards,
Avra

On 07/15/2014 09:57 PM, Joseph Fernandes wrote:
 Just pointing out,

 2) tests/basic/mgmt_v3-locks.t - Author: Avra
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/375/consoleFull

 This is a similar kind of error to the one I saw in my testing of the
 spurious failure of tests/bugs/bug-1112559.t.

 Please refer the attached mail.

 Regards,
 Joe



 - Original Message -
 From: Pranith Kumar Karampuri pkara...@redhat.com
 To: Joseph Fernandes josfe...@redhat.com
 Cc: Gluster Devel gluster-devel@gluster.org, Varun Shastry 
 vshas...@redhat.com
 Sent: Tuesday, July 15, 2014 9:34:26 PM
 Subject: Re: [Gluster-devel] spurious regression failures again!


 On 07/15/2014 09:24 PM, Joseph Fernandes wrote:
 Hi Pranith,

 Could you please share the links to the console output of the failures?
 Added them inline. Thanks for the reminder :-)

 Pranith
 Regards,
 Joe

 - Original Message -
 From: Pranith Kumar Karampuri pkara...@redhat.com
 To: Gluster Devel gluster-devel@gluster.org, Varun Shastry 
 vshas...@redhat.com
 Sent: Tuesday, July 15, 2014 8:52:44 PM
 Subject: [Gluster-devel] spurious regression failures again!

 hi,
    We have 4 tests failing once in a while, causing problems:
 1) tests/bugs/bug-1087198.t - Author: Varun