From glusterd log:
[2016-08-31 07:54:24.817811] E [run.c:191:runner_log] 
(-->/build/install/lib/glusterfs/3.9dev/xlator/mgmt/glusterd.so(+0xe1c30) 
[0x7f1a34ebac30] 
-->/build/install/lib/glusterfs/3.9dev/xlator/mgmt/glusterd.so(+0xe1794) 
[0x7f1a34eba794] -->/build/install/lib/libglusterfs.so.0(runner_log+0x1ae) 
[0x7f1a3fa15cea] ) 0-management: Failed to execute script: 
/var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=patchy 
--first=yes --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
[2016-08-31 07:54:24.819166]:++++++++++ 
G_LOG:./tests/basic/afr/root-squash-self-heal.t: TEST: 20 1 afr_child_up_status 
patchy 0 ++++++++++

The above is triggered by a "volume start force". I checked the brick logs, and 
the killed brick had started successfully.
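
For context, the failing assertion polls until the AFR child comes up. Below is a 
minimal sketch of that EXPECT_WITHIN-style retry loop; `expect_within` and 
`child_up` are hypothetical stand-ins (the real test framework helpers are 
`EXPECT_WITHIN` and `afr_child_up_status`, which queries the client xlator's 
view of the brick):

```shell
#!/bin/sh
# Sketch of an EXPECT_WITHIN-style poll: rerun a check command until its
# output matches the expected value or the timeout (in seconds) expires.
expect_within() {
    timeout=$1; expected=$2; shift 2
    i=0
    while [ "$i" -lt "$timeout" ]; do
        actual=$("$@")
        if [ "$actual" = "$expected" ]; then
            echo "OK"
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    echo "FAIL: got '$actual', wanted '$expected'"
    return 1
}

# Stand-in for afr_child_up_status; the real helper inspects the volume's
# client connection state. Here it simply reports the child as up.
child_up() { echo 1; }

expect_within 5 1 child_up
```

The spurious failure means this kind of loop exhausted `$PROCESS_UP_TIMEOUT` 
before the check ever returned "1", even though the brick log shows the brick 
came up, so the delay is likely on the client-reconnect side rather than in the 
brick start itself.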

Links to failures:
 https://build.gluster.org/job/centos6-regression/429/console
 https://build.gluster.org/job/netbsd7-regression/358/consoleFull


Thanks,
Susant

----- Original Message -----
> From: "Susant Palai" <spa...@redhat.com>
> To: "gluster-devel" <gluster-devel@gluster.org>
> Sent: Thursday, 1 September, 2016 12:13:01 PM
> Subject: [Gluster-devel] spurious failures for ./tests/basic/afr/root-squash-self-heal.t
> 
> Hi,
>  $subject is failing spuriously for one of my patches.
> One of the test case is: EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1"
> afr_child_up_status $V0 0
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
