Re: [Gluster-devel] glupy test failing
On Friday 20 June 2014 09:44 PM, Justin Clift wrote:
> On 20/06/2014, at 3:49 PM, Vijay Bellur wrote:
>> Side-effect of merging this patch [1]. Have reverted the change to let regression tests pass.
>
> That seems to have fixed it.
>
> + Justin

Yeah. It has been fixed. Thanks :)

Regards,
Raghavendra Bhat

--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] 3.5.1 beta 2 Sanity tests
On Fri, Jun 20, 2014 at 4:01 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
> On 06/19/2014 11:32 PM, Justin Clift wrote:
>> On 19/06/2014, at 6:55 PM, Benjamin Turner wrote:
>>> I went through these a while back and removed anything that wasn't valid for GlusterFS. This test was passing on 3.4.59 when it was released. I am thinking it may have something to do with a sym link to the same directory bz I found a while back? Idk, I'll get it sorted tomorrow.
>>>
>>> I got this sorted; I needed to add a sleep between the file create and the link. I ran through it manually and it worked every time, so it took me a few goes to think of the timing issue. I didn't need this on 3.4.0.59; is there anything that needs investigating?
>>
>> Any ideas? :)
>
> Nope :-(

Ok, no problem. I was unable to repro outside the script, and I tried everything I could think of. I am just going to leave the sleep 1s in there and keep an eye on these moving forward. Thanks guys!

-b

> Pranith
>
>> + Justin
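The workaround Benjamin describes can be sketched roughly as below. This is a minimal illustration, not the actual sanity test: the file names are made up, and the 1-second pause mirrors the "sleep 1s" he added between the file create and the hard link.

```shell
# Illustrative sketch of the sleep-before-link workaround (file names
# are hypothetical, not from the real sanity test).
src=/tmp/linktest-src
rm -f "$src" "$src.link"
touch "$src"
sleep 1   # the pause that avoided the intermittent link failure in the thread
ln "$src" "$src.link"
```

After this runs, both names refer to the same inode (link count 2), which is what the sanity test presumably verifies.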
Re: [Gluster-devel] glupy test failing
On 20/06/2014, at 3:49 PM, Vijay Bellur wrote:
> Side-effect of merging this patch [1]. Have reverted the change to let regression tests pass.

That seems to have fixed it.

+ Justin
[Gluster-devel] glupy test failing
Hi,

I am seeing the glupy.t test failing in some test cases. It is failing on my local machine as well (with latest master). Is it a genuine failure or a spurious one?

/tests/features/glupy.t (Wstat: 0 Tests: 6 Failed: 2)
Failed tests: 2, 6

As per the log file of the fuse mount done in the test case, this is the error:

[2014-06-20 14:15:53.038826] I [MSGID: 100030] [glusterfsd.c:1998:main] 0-glusterfs: Started running glusterfs version 3.5qa2 (args: glusterfs -f /d/backends/glupytest.vol /mnt/glusterfs/0)
[2014-06-20 14:15:53.059484] E [glupy.c:2382:init] 0-vol-glupy: Python import failed
[2014-06-20 14:15:53.059575] E [xlator.c:425:xlator_init] 0-vol-glupy: Initialization of volume 'vol-glupy' failed, review your volfile again
[2014-06-20 14:15:53.059587] E [graph.c:322:glusterfs_graph_init] 0-vol-glupy: initializing translator failed
[2014-06-20 14:15:53.059595] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
[2014-06-20 14:15:53.060045] W [glusterfsd.c:1182:cleanup_and_exit] (--> 0-: received signum (0), shutting down
[2014-06-20 14:15:53.060090] I [fuse-bridge.c:5561:fini] 0-fuse: Unmounting '/mnt/glusterfs/0'.
[2014-06-20 14:19:01.867378] I [MSGID: 100030] [glusterfsd.c:1998:main] 0-glusterfs: Started running glusterfs version 3.5qa2 (args: glusterfs -f /d/backends/glupytest.vol /mnt/glusterfs/0)
[2014-06-20 14:19:01.897158] E [glupy.c:2382:init] 0-vol-glupy: Python import failed
[2014-06-20 14:19:01.897241] E [xlator.c:425:xlator_init] 0-vol-glupy: Initialization of volume 'vol-glupy' failed, review your volfile again
[2014-06-20 14:19:01.897252] E [graph.c:322:glusterfs_graph_init] 0-vol-glupy: initializing translator failed
[2014-06-20 14:19:01.897260] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
[2014-06-20 14:19:01.897635] W [glusterfsd.c:1182:cleanup_and_exit] (--> 0-: received signum (0), shutting down
[2014-06-20 14:19:01.897677] I [fuse-bridge.c:5561:fini] 0-fuse: Unmounting '/mnt/glusterfs/0'.
Regards,
Raghavendra Bhat
Re: [Gluster-devel] glupy test failing
On 20/06/2014, at 3:43 PM, Raghavendra Bhat wrote:
> I am seeing glupy.t test being failed in some testcases. It is failing in my local machine as well (with latest master). Is it a genuine failure or a spurious one?
>
> /tests/features/glupy.t (Wstat: 0 Tests: 6 Failed: 2)
> Failed tests: 2, 6

Yeah, we're discussing it on gluster-devel at the moment too. It seems to be a real failure, but not caused by your CR.

We're guessing it's due to libgfapi being removed from the master branch last night. The removal of it also removed the __init__.py file, which glupy relies on as well. The fix will probably be to have the glupy installation Makefile create the __init__.py file now instead.

+ Justin
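The proposed fix (having the install step create the missing __init__.py) could look roughly like the sketch below. The directory path here is a placeholder, not the real glupy install location, and the actual change would live in the glupy Makefile's install rule.

```shell
# Hypothetical sketch of the fix discussed above: recreate the
# __init__.py that the libgfapi removal took with it, so the glupy
# Python package can be imported again. PYDIR is a stand-in path.
PYDIR=/tmp/glupy-demo/site-packages/gluster
mkdir -p "$PYDIR"
# An empty __init__.py is enough for Python to treat the dir as a package.
: > "$PYDIR/__init__.py"
```

With the package marker in place, the "Python import failed" error in glupy.c's init should no longer trigger for this reason.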
Re: [Gluster-devel] glupy test failing
On 06/20/2014 08:13 PM, Raghavendra Bhat wrote:
> Hi, I am seeing glupy.t test being failed in some testcases. It is failing in my local machine as well (with latest master). Is it a genuine failure or a spurious one?
>
> /tests/features/glupy.t (Wstat: 0 Tests: 6 Failed: 2)
> Failed tests: 2, 6
>
> [...]

Side-effect of merging this patch [1]. Have reverted the change to let regression tests pass.

Note to self: Double check for verified +1 votes by regression tests before merging.

Thanks,
Vijay

[1] http://review.gluster.org/7920
Re: [Gluster-devel] Build up of xfsmount dirs... ;)
Hi Vijay,

Can you please take this patch in? http://review.gluster.org/#/c/8138/

Thanks & Regards,
Rajesh

----- Original Message -----
> From: "Justin Clift"
> To: "Rajesh Joseph"
> Cc: "Gluster Devel"
> Sent: Friday, June 20, 2014 7:49:57 PM
> Subject: Re: Build up of xfsmount dirs... ;)
>
> No worries. :)
>
> + Justin
>
> On 20/06/2014, at 2:59 PM, Rajesh Joseph wrote:
> > Thanks Justin for letting me know. It's clearly a bug in the implementation. I will send a patch to fix this. Meanwhile if it is causing any test failures or other issues then you can have a temporary fix to delete them in the script.
> >
> > Thanks & Regards,
> > Rajesh
> >
> > [...]
Re: [Gluster-devel] Fwd: Fwd: Bug#751888: glusterfs-server: creating symlinks generates errors
On 06/20/2014 03:05 PM, Ravishankar N wrote:
> On 06/20/2014 06:26 PM, Matteo Checcucci wrote:
>> Control: forwarded -1 https://bugzilla.redhat.com/show_bug.cgi?id=454
>>
>> On 06/20/2014 07:44 AM, Ravishankar N wrote:
>> [...]
>
> Yes, just sent a patch for review on master: http://review.gluster.org/#/c/8135/
> Once it gets accepted, will back-port it to the 3.5 branch
> -Ravi

Hi Ravishankar,

thanks again. I am looking forward to seeing it back-ported and integrated in the debian package.

Bye
Matteo
Re: [Gluster-devel] Fwd: Fwd: Bug#751888: glusterfs-server: creating symlinks generates errors
Control: forwarded -1 https://bugzilla.redhat.com/show_bug.cgi?id=454

On 06/20/2014 07:44 AM, Ravishankar N wrote:
> Hi Matteo,
> Thanks for the reproducer. I've filed a bug report here: https://bugzilla.redhat.com/show_bug.cgi?id=454
> Feel free to add yourself to the CC List to get notified of the fix.

Thanks a lot. Have you already reproduced the bug?
Re: [Gluster-devel] Build up of xfsmount dirs... ;)
No worries. :)

+ Justin

On 20/06/2014, at 2:59 PM, Rajesh Joseph wrote:
> Thanks Justin for letting me know. It's clearly a bug in the implementation. I will send a patch to fix this. Meanwhile if it is causing any test failures or other issues then you can have a temporary fix to delete them in the script.
>
> Thanks & Regards,
> Rajesh
>
> [...]
Re: [Gluster-devel] Regarding regression failure in rackspace-regression-2GB machine
KP,

One way to view relevant information from the core would be as follows.

1) Before loading the core, you need to tell gdb to look for shared objects in a different path, using 'set solib-search-path'. For us this translates to:

set solib-search-path ./lib:./lib/glusterfs:./lib/glusterfs/3.5git:./lib/glusterfs/3.5git/auth:./lib/glusterfs/3.5git/rpc-transport:./lib/glusterfs/3.5git/xlator:./lib/glusterfs/3.5git/xlator/cluster:./lib/glusterfs/3.5git/xlator/debug:./lib/glusterfs/3.5git/xlator/encryption:./lib/glusterfs/3.5git/xlator/features:./lib/glusterfs/3.5git/xlator/features/glupy:./lib/glusterfs/3.5git/xlator/mgmt:./lib/glusterfs/3.5git/xlator/mount:./lib/glusterfs/3.5git/xlator/nfs:./lib/glusterfs/3.5git/xlator/performance:./lib/glusterfs/3.5git/xlator/protocol:./lib/glusterfs/3.5git/xlator/storage:./lib/glusterfs/3.5git/xlator/system:./lib/glusterfs/3.5git/xlator/testing:./lib/glusterfs/3.5git/xlator/testing/features:./lib/glusterfs/3.5git/xlator/testing/performance:./lib/ocf:./lib/ocf/resource.d:./lib/ocf/resource.d/glusterfs:./lib/pkgconfig:./lib/python2.6:./lib/python2.6/site-packages:./lib/python2.6/site-packages/gluster

I know the above is more than what you need to debug glusterd (which is the currently cored process), but as you do not know which binary crashed, it is better to add them all from a more generic standpoint. This would be done by starting a vanilla gdb session and entering the command there before opening the core.

NOTE: here my pwd (i.e. the resolution of . in ./lib/...) is where I untarred the core tarball and //build/install/ so that it searches all shared objects from the jenkins system that was bundled into the tar file.

2) Now load the core file in gdb as follows (or by other known methods):

(gdb) target exec ./sbin/glusterd
(gdb) target core ./cores/core.1077
(gdb) bt

...and other commands, and enjoy for the most part!
Shyam

----- Original Message -----
> From: "Krishnan Parthasarathi"
> To: "Pranith Kumar Karampuri"
> Cc: "Gluster Devel"
> Sent: Thursday, June 19, 2014 5:52:42 AM
> Subject: [Gluster-devel] Regarding regression failure in rackspace-regression-2GB machine
>
> Pranith,
>
> The core's backtrace [1] is not 'analysable'. It doesn't show function names and displays "()?" for all the frames across all threads. It would be helpful if we had the glusterd logs corresponding to the cluster.rc setup. These logs are missing too.
>
> thanks,
> Krish
>
> [1] - glusterd core file can be found here - http://build.gluster.org/job/rackspace-regression-2GB/250/consoleFull
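Rather than typing the giant solib-search-path string by hand, it can be assembled from a directory list. This is a small sketch with an illustrative subset of the paths from the message above, not the full list:

```shell
# Sketch: build a colon-separated solib-search-path from a list of
# directories. The directories shown are a small illustrative subset.
build_solib_path() {
    local path="" d
    for d in "$@"; do
        path="${path:+$path:}$d"   # append with ':' except for the first entry
    done
    printf '%s\n' "$path"
}

build_solib_path ./lib ./lib/glusterfs ./lib/glusterfs/3.5git/xlator
# prints: ./lib:./lib/glusterfs:./lib/glusterfs/3.5git/xlator
```

The result could then be fed to gdb non-interactively, e.g. `gdb -ex "set solib-search-path $(build_solib_path ...)" -ex 'target exec ./sbin/glusterd' -ex 'target core ./cores/core.1077' -ex bt`.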
Re: [Gluster-devel] Build up of xfsmount dirs... ;)
Thanks Justin for letting me know. It's clearly a bug in the implementation. I will send a patch to fix this. Meanwhile, if it is causing any test failures or other issues, then you can have a temporary fix to delete them in the script.

Thanks & Regards,
Rajesh

----- Original Message -----
> From: "Justin Clift"
> To: "Rajesh Joseph"
> Cc: "Gluster Devel"
> Sent: Friday, June 20, 2014 7:07:06 PM
> Subject: Build up of xfsmount dirs... ;)
>
> Hi Rajesh,
>
> Looking at the regression testing boxes this morning, they all have several thousand /tmp/xfsmount* dirs on them.
>
> Seems like they're coming from this:
>
> f1705e2d (Rajesh Joseph 2014-06-05 10:00:33 +0530 3988) char template [] = "/tmp/xfsmountXX";
>
> (it's the only mention in the source for /tmp/xfsmount...)
>
> Could it be the case that the directory gets made, but never gets cleaned up? If so, is there a feasible way to clean it up after use?
>
> We *can* just change the regression test scripting to nuke /tmp/xfsmount* after each run. But wondering if it's something Gluster should take care of itself. :)
>
> Regards and best wishes,
>
> Justin Clift
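The temporary script-side fix mentioned above (deleting the stale directories after each run) might look like this sketch. The demo directories simulate the leftover mkdtemp() output; the pattern is assumed to match the template in the snapshot code:

```shell
# Sketch of the temporary cleanup discussed above: remove leftover
# /tmp/xfsmount* directories after a regression run.
# The two mkdir calls just simulate stale leftovers for demonstration.
mkdir -p /tmp/xfsmount_demo1 /tmp/xfsmount_demo2
find /tmp -maxdepth 1 -type d -name 'xfsmount*' -exec rm -rf {} +
```

The longer-term fix, of course, is for the code that calls mkdtemp() to remove the directory itself once it is done with it.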
[Gluster-devel] Build up of xfsmount dirs... ;)
Hi Rajesh,

Looking at the regression testing boxes this morning, they all have several thousand /tmp/xfsmount* dirs on them.

Seems like they're coming from this:

f1705e2d (Rajesh Joseph 2014-06-05 10:00:33 +0530 3988) char template [] = "/tmp/xfsmountXX";

(it's the only mention in the source for /tmp/xfsmount...)

Could it be the case that the directory gets made, but never gets cleaned up? If so, is there a feasible way to clean it up after use?

We *can* just change the regression test scripting to nuke /tmp/xfsmount* after each run. But wondering if it's something Gluster should take care of itself. :)

Regards and best wishes,

Justin Clift
Re: [Gluster-devel] Fwd: Fwd: Bug#751888: glusterfs-server: creating symlinks generates errors
On 06/20/2014 06:26 PM, Matteo Checcucci wrote:
> Control: forwarded -1 https://bugzilla.redhat.com/show_bug.cgi?id=454
>
> On 06/20/2014 07:44 AM, Ravishankar N wrote:
>> Hi Matteo,
>> Thanks for the reproducer. I've filed a bug report here: https://bugzilla.redhat.com/show_bug.cgi?id=454
>> Feel free to add yourself to the CC List to get notified of the fix.
>
> Thanks a lot. Have you already reproduced the bug?

Yes, just sent a patch for review on master: http://review.gluster.org/#/c/8135/
Once it gets accepted, will back-port it to the 3.5 branch.

-Ravi
Re: [Gluster-devel] tests and umount
On 06/18/2014 10:49 PM, Pranith Kumar Karampuri wrote:
> On 06/16/2014 09:08 PM, Pranith Kumar Karampuri wrote:
>> On 06/16/2014 09:00 PM, Jeff Darcy wrote:
>>>> I see that most of the tests are doing umount and these may fail sometimes because of EBUSY etc. I am wondering if we should change all of them to umount -l. Let me know if you foresee any problems. If no one has any objections I will send out a patch tomorrow for this.
>>>
>>> I think I'd try "umount -f" first. Using -l too much can cause an accumulation of zombie mounts. When I'm hacking around on my own, I sometimes have to do "umount -f" twice but that's always sufficient.
>>
>> Cool, I will do some kind of EXPECT_WITHIN with umount -f, maybe 5 times, just to be on the safer side.
>
> I submitted http://review.gluster.com/8104 for one of the tests as it is failing frequently. Will do the next round later.

http://review.gluster.org/8117 fixes the rest.

Pranith
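The retry idea discussed above (repeated "umount -f" attempts rather than a single lazy "umount -l") might be sketched as below. This is an illustration, not the actual EXPECT_WITHIN helper from the test framework; the function name and retry count are made up:

```shell
# Hypothetical sketch: retry "umount -f" a few times before giving up,
# instead of reaching for "umount -l" and risking zombie mounts.
force_umount() {
    mnt=$1
    tries=${2:-5}
    i=1
    while [ "$i" -le "$tries" ]; do
        # -f forces the unmount; EBUSY may clear on a later attempt
        umount -f "$mnt" 2>/dev/null && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```

A test would then call something like `force_umount $M0` where plain `umount` used to be.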
Re: [Gluster-devel] 3.5.1 beta 2 Sanity tests
On 06/19/2014 11:32 PM, Justin Clift wrote:
> On 19/06/2014, at 6:55 PM, Benjamin Turner wrote:
>> I went through these a while back and removed anything that wasn't valid for GlusterFS. This test was passing on 3.4.59 when it was released. I am thinking it may have something to do with a sym link to the same directory bz I found a while back? Idk, I'll get it sorted tomorrow.
>>
>> I got this sorted; I needed to add a sleep between the file create and the link. I ran through it manually and it worked every time, so it took me a few goes to think of the timing issue. I didn't need this on 3.4.0.59; is there anything that needs investigating?
>
> Any ideas? :)

Nope :-(

Pranith

> + Justin