Sent from Samsung Galaxy S4

On 13 Jun 2015 13:15, "Raghavendra Talur" <[email protected]> wrote:
>
> On Sat, Jun 13, 2015 at 1:00 PM, Atin Mukherjee <[email protected]> wrote:
>>
>> Sent from Samsung Galaxy S4
>> On 13 Jun 2015 12:58, "Anand Nekkunti" <[email protected]> wrote:
>> >
>> > Hi All,
>> > Rebalance is not working in a single-node cluster environment (the
>> > current test framework). I am getting an error in the test below; it
>> > seems rebalance has not been ported to the current cluster test
>> > framework.
>>
>> Could you pinpoint which test case fails and what you see in the logs?
>>
>> > cleanup;
>> > TEST launch_cluster 2;
>> > TEST $CLI_1 peer probe $H2;
>> >
>> > EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers
>> >
>> > $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0
>> > EXPECT 'Created' volinfo_field $V0 'Status';
>> >
>> > $CLI_1 volume start $V0
>> > EXPECT 'Started' volinfo_field $V0 'Status';
>> >
>> > #Mount FUSE
>> > TEST glusterfs -s $H1 --volfile-id=$V0 $M0;
>> >
>> > TEST mkdir $M0/dir{1..4};
>> > TEST touch $M0/dir{1..4}/files{1..4};
>> >
>> > TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1
>> >
>> > TEST $CLI_1 volume rebalance $V0 start
>> >
>> > EXPECT_WITHIN 60 "completed" CLI_1_rebalance_status_field $V0
>> >
>> > $CLI_2 volume status $V0
>> > EXPECT 'Started' volinfo_field $V0 'Status';
>> >
>> > cleanup;
>> >
>> > Regards,
>> > Anand.N
>
> If it is a crash of glusterd when you do rebalance start, it is because
> of FORTIFY_FAIL in libc. Here is the patch that Susant has already sent:
> http://review.gluster.org/#/c/11090/
>
> You can verify that it is the same crash by checking the core in gdb;
> a SIGABRT would be raised after strncpy.

AFAIR Anand tried it in mainline and that fix was already in place. I think this is something different.

> --
> Raghavendra Talur
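To illustrate the failure mode Raghavendra describes, here is a minimal sketch that triggers a glibc FORTIFY abort from strncpy and then inspects the resulting core in gdb. Everything in it is illustrative, not taken from the glusterd source: the file name fortify_demo.c, the 8-byte buffer, and the core-file location all depend on your system configuration.

# Sketch only: reproduce a _FORTIFY_SOURCE abort from strncpy, then look
# at the core the way you would for the glusterd crash.
cat > fortify_demo.c <<'EOF'
#include <string.h>

int main(void)
{
        char dst[8];
        const char src[] = "longer than eight bytes";

        /* The size argument exceeds sizeof(dst). With -O2 and
         * -D_FORTIFY_SOURCE=2, glibc routes this through __strncpy_chk,
         * which detects the overflow and calls abort(), so the process
         * dies with SIGABRT right after the strncpy. */
        strncpy(dst, src, sizeof(src));
        return 0;
}
EOF

gcc -O2 -D_FORTIFY_SOURCE=2 -o fortify_demo fortify_demo.c
ulimit -c unlimited
./fortify_demo    # prints "*** buffer overflow detected ***" and aborts

# The core may land elsewhere (e.g. via systemd-coredump) depending on the
# system; adjust the path accordingly. A FORTIFY abort shows
# abort()/__fortify_fail/__chk_fail frames above the strncpy call in the
# backtrace -- the same signature to look for in the glusterd core.
gdb ./fortify_demo ./core -ex 'bt' -ex 'quit'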
_______________________________________________
Gluster-devel mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-devel
