On Sat, 02 Jan 2010 14:47:26 EST erik quanstrom <[email protected]>  wrote:
> 
> my beef with xargs is only that it is used as an excuse
> for not fixing exec in unix.  it's also used to bolster the
> "that's a rare case" argument.

I often do something like the following:

  find . -type f <condition> | xargs grep -l <pattern> | xargs <command>
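A concrete instance of that pipeline, with placeholder pattern and command filled in. The `-print0`/`-0` flags are GNU/BSD extensions (not POSIX) that keep file names containing spaces intact through the first pipe; the directory and file names are made up for the demonstration:

```shell
# Set up a tiny tree so the pipeline has something to chew on.
mkdir -p /tmp/pipedemo
printf 'TODO: fix this\n' > /tmp/pipedemo/a.c
printf 'nothing here\n'   > /tmp/pipedemo/b.c

# Stage 1: the walk.  Stage 2: filter to files matching the pattern.
# Stage 3: run the real command on just those files.
find /tmp/pipedemo -type f -name '*.c' -print0 |
    xargs -0 grep -l 'TODO' |
    xargs wc -l

rm -rf /tmp/pipedemo
```

Each stage runs as its own process and no stage ever holds the whole file list in memory.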

If by "fixing exec in unix" you mean allowing something like

  <command> $(grep -l <pattern> $(find . -type f <condition>))

then <command> would take far too long even to get started,
and could eat up a lot of memory or run out of it entirely.
On a 2+ year old MacBookPro, "find -x /" takes 4.5 minutes
for 1.6M files and needs 155MB just to hold the paths.  My
11 year old machine has 64MB and over a million files on a
rather slow disk; your solution would run out of space on
it.  Granted, I should upgrade to a more balanced system,
but mechanisms should keep working even when the system
isn't optimal.  At least xargs gives me that choice.
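A small synthetic sketch of that difference (the names are generated, not real files): command substitution hands every name to one exec, while xargs streams the same names in bounded batches, so its memory use stays constant no matter how long the stream gets.

```shell
# 100000 made-up names, roughly 1.1MB of arguments in total.
seq 100000 | sed 's/^/file-/' > /tmp/names.txt

# Command substitution would build one giant argument list:
#   <command> $(cat /tmp/names.txt)    # may fail: Argument list too long

# xargs caps the bytes handed to each exec with -s, invoking the
# command as many times as it takes; each invocation of echo here
# prints one line, so the line count is the number of execs.
xargs -s 4096 echo < /tmp/names.txt | wc -l

rm -f /tmp/names.txt
```

The count printed is well above 1: xargs quietly split the stream into hundreds of small execs instead of one enormous one.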

Basically this is just streams programming for arguments
instead of data.  Ideally all the args would be taken from
a stream (specifying them on the command line would be just
a convenience), but it is too late for that.  Unix commands
often grow a -r option to walk a file tree, but it would
have been nicer to factor the tree walk out.  Then you
could do things like a breadth-first walk, and every
command would benefit.
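To make the factoring concrete, here is one sketch of a breadth-first walk as its own stage, glued to any consumer with a pipe. The bfs function, the demo directory, and the file names are all illustrative, and `-mindepth`/`-maxdepth` are GNU/BSD extensions rather than POSIX:

```shell
# Walk a tree level by level: emit everything at depth 0, then depth 1,
# and so on, stopping at the first empty level.
bfs() {
    depth=0
    while :
    do
        level=$(find "$1" -mindepth "$depth" -maxdepth "$depth")
        [ -n "$level" ] || break
        printf '%s\n' "$level"
        depth=$((depth + 1))
    done
}

# Demo: shallow entries come out before deeper ones.
mkdir -p /tmp/walkdemo/a/b
touch /tmp/walkdemo/shallow /tmp/walkdemo/a/b/deep
bfs /tmp/walkdemo                                # breadth-first order
bfs /tmp/walkdemo | xargs -n 10 echo >/dev/null  # any consumer works
rm -rf /tmp/walkdemo
```

Because the walk is a separate process writing paths to a pipe, swapping depth-first for breadth-first changes nothing downstream, which is exactly the benefit of factoring it out of each command.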
