I have a volume that is distributed and replicated. While deleting a directory structure on the mounted volume, I also restarted the GlusterFS daemon on one of the replicated servers. After the "rm -rf" command completed, it complained that it couldn't delete a directory because it wasn't empty. But from the perspective of the mounted volume it appeared empty. Looking at the individual bricks, though, I could see that there were files remaining in this directory.

My question: what is the proper way to correct this problem and bring the volume back to a consistent state? I've tried using the "ls -alR" command on the mounted volume to force a self-heal, but for some reason this always causes the volume to become unresponsive from every client after 10 minutes or so.
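For reference, what I'm running amounts to the following (the mount point and file here are illustrative stand-ins; on the real system this is run against the GlusterFS client mount, where the recursive stat traversal is what triggers the replicate translator to heal each entry):

```shell
# Stand-in for the GlusterFS client mount point (illustrative only).
MOUNT=$(mktemp -d)
touch "$MOUNT/example-file"

# Recursively list the tree; stat-ing every entry from a client mount
# is what prompts the self-heal of out-of-sync files on 3.0.x.
ls -alR "$MOUNT" > /dev/null
```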

Some clients/servers are running version 3.0.4 while the others are running 3.0.5.

Thanks!

Steve

--
Steven M. Wilson, Systems and Network Manager
Markey Center for Structural Biology
Purdue University
(765) 496-1946

_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users