Hi,

It looks like the setup has been cleaned up? Volume 'patchy' seems to be gone now.
-Krutika

----- Original Message -----
> From: "Justin Clift" <jus...@gluster.org>
> To: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Ravishankar N" <ravishan...@redhat.com>, "Krutika Dhananjay" <kdhan...@redhat.com>, "Anuradha Talur" <ata...@redhat.com>
> Sent: Thursday, February 26, 2015 5:11:20 PM
> Subject: Re: Regression host hung on tests/basic/afr/split-brain-healing.t
>
> On 26 Feb 2015, at 08:57, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
> > On 02/26/2015 02:54 AM, Justin Clift wrote:
> >> Anyone have an interest in a regression test VM that's (presently) hung on
> >> tests/basic/afr/split-brain-healing.t? Likely to be a spurious error.
> >>
> >> I can either reboot the VM and put it back into service, or I can leave it
> >> for someone to log into and figure out why it's hung.
> >>
> >> Trying to decide which way to go. :)
> > Justin,
> > I copied others who are working on afr as well so that they can take a
> > look.
>
> Thanks. It's slave28, and this is where it's hung (just checked):
>
> root 17491     1  0 Feb25 ?  S   0:00 sudo -E bash -x /opt/qa/regression.sh
> root 17492 17491  0 Feb25 ?  S   0:00  \_ bash -x /opt/qa/regression.sh
> root 17499 17492  0 Feb25 ?  S   0:00      \_ /bin/bash ./run-tests.sh
> root 17514 17499  0 Feb25 ?  S   0:00          \_ /usr/bin/perl /usr/bin/prove -rf --timer ./tests
> root  1749 17514  0 Feb25 ?  S   0:00              \_ /bin/bash ./tests/basic/afr/split-brain-healing.t
> root  2906  1749  0 Feb25 ?  Sl  0:06                  \_ gluster --mode=script --wignore volume heal patchy split-brain bigger-file /file1
> root  2915  2906  0 Feb25 ?  Sl  0:02                      \_ /build/install/sbin/glfsheal patchy bigger-file /file1
>
> You guys are welcome to log into it and "do stuff". It has the normal
> jenkins ssh auth for our Rackspace slaves.
>
> Please let me know when it's no longer needed, so I can put it back into
> service. :)
>
> + Justin
>
> --
> GlusterFS - http://www.gluster.org
> An open source, distributed file system scaling to several
> petabytes, and handling thousands of clients.
>
> My personal twitter: twitter.com/realjustinclift
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel