Paul Eggert <[EMAIL PROTECTED]> wrote:

> Vincent Lefevre <[EMAIL PROTECTED]> writes:
>
>> The problem here is the NFS *client*, isn't it? And I'm not sure this
>> is really a bug of the NFS client, given that NFS works asynchronously.
>
> No, you're right, I don't see a bug in the GNU/Linux NFS client here.
> The original workaround has to do with Mac OS X; see
>
>   http://lists.gnu.org/archive/html/bug-coreutils/2006-09/msg00359.html
>   http://lists.gnu.org/archive/html/bug-coreutils/2006-09/msg00368.html
>
> and it's the workaround that is running afoul of the GNU/Linux NFS
> client. If we could get rid of the workaround somehow, we wouldn't
> have a problem on GNU/Linux NFS. (But then I suspect we would have a
> problem on buggy Mac OS X platforms, whether the bug is in the NFS
> client or the NFS server; I don't know the details.)
Vincent,

Would you please try changing this definition (from src/remove.c)

  CONSECUTIVE_READDIR_UNLINK_THRESHOLD = 10

to e.g., 200 or 2000, and see if that lets you run "rm -r ..." with no
errors? (A sketch of the edit appears below.)

If this turns out to be a general problem, there's always the
possibility of rewriting rm to use a different strategy: read all of a
directory's entries into malloc'd storage up front, rather than
reopening and rereading the directory after processing each
sub-directory. I hesitate to mention this (since I have so little time
now), but a couple of months ago I implemented a prototype that does
exactly that. From what I recall, it's about twice as fast on
directories with very many entries, but 10x slower on the degenerate
hierarchy a/a/a/a/a/a/... There is almost certainly room for
optimization to mitigate that 10x penalty, but this would be a big and
potentially disruptive change, so I'm in no hurry. (A rough sketch of
that strategy also appears below.)
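For reference, the edit I'm suggesting would look roughly like this;
the surrounding context is from memory, so check it against your copy
of src/remove.c:

  /* In src/remove.c: raise the threshold at which rm's close/reopen
     workaround kicks in.  The value 2000 is just for this experiment.  */
  enum
  {
    CONSECUTIVE_READDIR_UNLINK_THRESHOLD = 2000  /* was 10 */
  };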
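In case it helps to picture the other strategy, here is a minimal
sketch of the read-everything-up-front approach. This is not the
prototype itself; the function name read_dir_entries is made up for
illustration:

  #include <dirent.h>
  #include <stdlib.h>
  #include <string.h>

  /* Read every entry of DIR_NAME (except "." and "..") into a
     malloc'd, NULL-terminated array of malloc'd names.
     Return NULL on failure.  */
  static char **
  read_dir_entries (char const *dir_name)
  {
    DIR *dirp = opendir (dir_name);
    if (dirp == NULL)
      return NULL;

    size_t n_used = 0;
    size_t n_alloc = 64;
    char **names = malloc (n_alloc * sizeof *names);
    if (names == NULL)
      {
        closedir (dirp);
        return NULL;
      }

    struct dirent *dp;
    while ((dp = readdir (dirp)) != NULL)
      {
        if (strcmp (dp->d_name, ".") == 0
            || strcmp (dp->d_name, "..") == 0)
          continue;

        /* Grow the array, keeping room for the NULL terminator.  */
        if (n_used + 1 >= n_alloc)
          {
            char **tmp = realloc (names, 2 * n_alloc * sizeof *names);
            if (tmp == NULL)
              goto fail;
            names = tmp;
            n_alloc *= 2;
          }

        names[n_used] = strdup (dp->d_name);
        if (names[n_used] == NULL)
          goto fail;
        n_used++;
      }

    names[n_used] = NULL;
    closedir (dirp);
    return names;

  fail:
    while (n_used--)
      free (names[n_used]);
    free (names);
    closedir (dirp);
    return NULL;
  }

With the full list in hand, the removal code could unlink files and
recurse into sub-directories straight from the array, never rewinding
or reopening the parent. The cost is memory proportional to the
directory's size, with one such array live per level of recursion,
which is presumably part of why the degenerate a/a/a/... case suffers.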