Can you please verify whether this issue still exists in the master HEAD? There have been quite a few locking changes/fixes recently, and this may be a specific instance of one of those issues.
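In case it helps, something like the following should be enough to rebuild from master HEAD and re-run your loop. This is only a rough sketch; the repository URL, configure flags, server name and mount point are assumptions, so adjust them to your environment:

  # rough sketch (assumed URL/paths): build and install glusterfs from master HEAD
  git clone https://github.com/gluster/glusterfs.git
  cd glusterfs
  ./autogen.sh
  ./configure --enable-debug    # debug build keeps backtraces readable
  make && sudo make install

  # restart the management daemon and the volume, then remount and re-run the test loop
  sudo glusterd
  gluster volume stop gfs1
  gluster volume start gfs1
  mount -t glusterfs <server>:/gfs1 /xmail/gfs1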
Avati

On Thu, Jan 24, 2013 at 12:02 AM, Song <glus...@163.com> wrote:

> Hi,
>
> Recently, glusterfs hangs when we do stress testing. To find the reason,
> we wrote a test shell script.
>
> We run the test script on 5 servers at the same time. After a while, all
> of the test programs hang.
> Executing the command "cd /xmail/gfs1/scl_test/001" also hangs.
>
> The test shell script:
>
> for((i=1;i<=100;i++));
> do
>     rmdir /xmail/gfs1/scl_test/001
>     if [ "$?" == "0" ];
>     then
>         echo "delete dir success"
>     fi
>
>     mkdir /xmail/gfs1/scl_test/001
>     if [ "$?" == "0" ];
>     then
>         echo "create dir success"
>     fi
>
>     echo "1111" >>/xmail/gfs1/scl_test/001/001.txt
>     echo "2222" >>/xmail/gfs1/scl_test/001/002.txt
>     echo "3333" >>/xmail/gfs1/scl_test/001/003.txt
>
>     rm -rf /xmail/gfs1/scl_test/001/001.txt
>     rm -ff /xmail/gfs1/scl_test/001/002.txt
>     rm -rf /xmail/gfs1/scl_test/001/003.txt
> done
>
> "/xmail/gfs1" is the native mount point of gluster volume gfs1.
>
> Gluster volume info is as below:
>
> [root@d181 glusterfs]# gluster volume info
>
> Volume Name: gfs1
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 30 x 3 = 90
> Transport-type: tcp
>
> Please help me, thanks!
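If the hang still reproduces on master HEAD, a statedump taken while the clients are stuck would show whether an entrylk/inodelk is blocked on some brick. A minimal sketch, assuming the default statedump directory (the location varies between releases; older builds write to /tmp):

  # take a statedump of all bricks of gfs1 while the hang is in progress
  gluster volume statedump gfs1 all
  # then look for blocked entry/inode locks in the brick dumps
  grep -E "entrylk|inodelk" /var/run/gluster/*.dump.*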
_______________________________________________ Gluster-devel mailing list Gluster-devel@nongnu.org https://lists.nongnu.org/mailman/listinfo/gluster-devel