Paul Eggert <[EMAIL PROTECTED]> wrote:
> Matthew Woehlke <[EMAIL PROTECTED]> writes:
>
>> It does actually fail. In fact, from the mountains of diff output it
>> looks like 'sort' *may* be completely failing to output anything, or
>> at least failing to output large chunks of what it is supposed to. I
>> can re-run with VERBOSE, but it's going to be a *big* log, should I
>> bz2 it and post that?
>
> No thanks, please don't bother. I can reproduce the problem on my
> Debian stable host as follows:
>
>   cd tests/misc
>   ulimit -u 64
>   make check-TESTS TESTS=sort-compress
>
> The exact value "64" depends on how many other processes I have; if I
> make it too small the test has unrelated problems, and if I make it
> too large the test succeeds.
>
> I'll try to take a look at it when I have more time.
Some quick tests show sort using a maximum of 21 or 22 processes, even
with contrived options (-S 1k) like those in the test script. That is not
lightweight, but remember that this code normally comes into play only
for very large inputs (well, also for medium-sized inputs on systems with
very little RAM). In those cases you can use --compress-program= (with an
empty argument) to disable compression altogether.

Even so, sort needs a better fall-back for when it fails to fork a
decompression process (failing to start a compression process is not a
big problem). If it cannot fork, it should simply revert to decompressing
each stream itself as it reads it. This is possible (easy, even) with
gzip or bzip2, and is part of the reason I want to limit the list of
compressors to programs like those that provide the required procedural
interfaces.

It would also be nice if sort could calculate how many processes it is
allowed to fork, and then ensure that it never uses more than, say, 25%
of that number of process slots.

_______________________________________________
Bug-coreutils mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/bug-coreutils
