On Thu, Jan 18, 2007 at 10:06:45PM +0100, Jim Meyering wrote:
> One more thing to consider:
> 
>     In your proposed patch, a failed fork evokes an immediate exit.
>     In that case, it may be worthwhile to retry, or even to revert
>     to using gzip directly.
> 
> Consider sort's merge phase, and imagine having 40 compressed files,
> each hooked to a process running gzip -d.  It's not hard to imagine
> an environment in which some of those fork calls would fail.

That's a thought, although libz only works with gzip (as you said), and
a fallback like that would add more complexity (roughly my original LZO
patch and this new one combined).  I don't think we'll have 40
instances of gzip -d running: we should need at most one compression
process and NMERGE (16) decompression processes at any one time.
Retrying fork when it fails is a good idea, though, and I've already
added that since reading your mail.
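
Roughly what I have in mind for the retry (a minimal sketch, not the
actual patch; the retry count, the one-second back-off, and the
fork_with_retry name are only illustrative):

  #include <errno.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Try fork a few times before giving up; a transient EAGAIN
     (process-table or memory pressure) may clear once other
     children have exited.  */
  static pid_t
  fork_with_retry (void)
  {
    pid_t pid = -1;
    int attempt;

    for (attempt = 0; attempt < 4; attempt++)
      {
        pid = fork ();
        if (0 <= pid || (errno != EAGAIN && errno != ENOMEM))
          break;
        sleep (1);   /* back off briefly, then try again */
      }
    return pid;      /* -1 if every attempt failed */
  }

The caller would still handle a final -1 the way the current patch
does, so only the transient failures change behavior.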


