> Sten Eriksson <[EMAIL PROTECTED]> wrote:
>> When executing "rm -rf" on a directory-tree that is really deep, rm
>> segfaults. After reading rm.c and remove.c it's clear that the stack
>> that is used to push and pop CWD onto breaks (sooner or later).

Thanks for that report!
I reproduced the problem like this, using the latest version of rm:

  perl -e '$i=0; do {mkdir "z",0700; chdir "z"} until (++$i == 22000)'
  rm -r z
  Segmentation fault

FYI, the same procedure makes Solaris 8's /bin/rm segfault, though on
that system I had to make the tree a little deeper.

If you actually run the above commands, be sure to remove the resulting
tree; otherwise it'll probably cause trouble with backup programs,
nightly find/updatedb runs, and the like.  Here's one way:

  perl -e 'while (1) {chdir "z" or last};' \
    -e 'while (1) {chdir ".." && rmdir "z" or last}'

The problem is that at such a great depth rm overflows its stack
but doesn't detect or report the failure.
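
To make that concrete, here is a minimal sketch (not the actual
remove.c code) of a depth-first removal that recurses once per
directory level; with a tree 22000 levels deep, those nested calls
exhaust the process's stack:

  #include <dirent.h>
  #include <string.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Remove NAME (a file or directory) relative to the current
     directory.  Each directory level costs one stack frame, so a
     sufficiently deep tree overflows the stack.  */
  static int
  remove_tree (const char *name)
  {
    struct stat st;
    if (lstat (name, &st) != 0)
      return -1;
    if (!S_ISDIR (st.st_mode))
      return unlink (name);

    DIR *dir = opendir (name);
    if (dir == NULL)
      return -1;
    if (chdir (name) != 0)
      {
        closedir (dir);
        return -1;
      }
    struct dirent *e;
    while ((e = readdir (dir)) != NULL)
      if (strcmp (e->d_name, ".") != 0 && strcmp (e->d_name, "..") != 0)
        remove_tree (e->d_name);    /* recursion: one frame per level */
    closedir (dir);
    return chdir ("..") == 0 ? rmdir (name) : -1;
  }

  int
  main (int argc, char **argv)
  {
    return (argc == 2 && remove_tree (argv[1]) == 0) ? 0 : 1;
  }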

The general fix will be to make GNU rm (and others, like mv, cp, find,
tar, etc.) detect that they've blown their stack and to give a proper
diagnostic.
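
For illustration, one way to produce such a diagnostic (a sketch only,
assuming POSIX sigaltstack/sigaction, not necessarily what will go
into fileutils) is to install a SIGSEGV handler that runs on an
alternate signal stack, since the ordinary stack is useless once it
has overflowed:

  #include <signal.h>
  #include <string.h>
  #include <unistd.h>

  /* A dedicated stack for the handler; the overflowed main stack
     cannot be used.  SIGSTKSZ would normally size this.  */
  static char alt_stack[1 << 16];

  static void
  segv_handler (int sig)
  {
    /* Only async-signal-safe functions may be used here.  A real
       implementation would also check that the faulting address lies
       near the stack before blaming a stack overflow.  */
    static char const msg[] = "rm: stack overflow -- tree too deep\n";
    (void) sig;
    write (STDERR_FILENO, msg, sizeof msg - 1);
    _exit (2);
  }

  /* Call once at startup, before any deep recursion begins.  */
  static int
  install_overflow_handler (void)
  {
    stack_t ss;
    ss.ss_sp = alt_stack;
    ss.ss_size = sizeof alt_stack;
    ss.ss_flags = 0;
    if (sigaltstack (&ss, NULL) != 0)
      return -1;

    struct sigaction act;
    memset (&act, 0, sizeof act);
    act.sa_handler = segv_handler;
    act.sa_flags = SA_ONSTACK;      /* deliver SIGSEGV on alt_stack */
    sigemptyset (&act.sa_mask);
    return sigaction (SIGSEGV, &act, NULL);
  }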

Eventually I'll probably write something that can remove much
deeper trees, though still within the limit that a set of
`active' directory dev/inode pairs fits in virtual memory.
The set is necessary to do it safely -- otherwise, the program
could potentially be subverted to remove files that were not
intended to be removed.
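
As a rough illustration of that safety check (a sketch using names of
my own, not an existing fileutils interface), the idea is to record a
directory's device/inode pair before descending into it, and to verify
the pair again after the chdir ("..") on the way back up:

  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/stat.h>

  struct dev_ino { dev_t dev; ino_t ino; };

  /* Record the identity of the current directory before descending
     into one of its subdirectories.  */
  static int
  record_active_dir (struct dev_ino *di)
  {
    struct stat st;
    if (lstat (".", &st) != 0)
      return -1;
    di->dev = st.st_dev;
    di->ino = st.st_ino;
    return 0;
  }

  /* After chdir (".."), make sure we are back in the directory that
     was recorded.  If not, directories have been moved underneath us,
     and continuing could remove files that were never meant to be
     removed.  */
  static int
  verify_active_dir (struct dev_ino const *di)
  {
    struct stat st;
    if (lstat (".", &st) != 0)
      return -1;
    if (st.st_dev != di->dev || st.st_ino != di->ino)
      {
        fprintf (stderr, "rm: directory changed unexpectedly; aborting\n");
        return -1;
      }
    return 0;
  }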
