Peter Rundle wrote:

> Maybe, but... given that rm is the command in this case, the time to
> remove the files from disk is orders of magnitude greater than the time
> to load and execute the program.
You'd be surprised - benchmark it. (Hint: you're adding the overhead of
bringing up rm for *every* *single* file. It doesn't matter that removing
the files from disk takes longer - you're adding time to every single
cycle.)

> Also what happens if find returns a million file names? xargs can't put
> them all on a single command line else the dreaded "too many args" error
> will occur. So xargs must have some smarts in it to call the command
> multiple times, passing it 1,2,...N args at a time?
>
> So it obviously breaks the args up into chunks and invokes the command
> multiple times in any case.

That's actually the main point of xargs. It just happens to do things
faster by reducing the number of times it invokes the target command. If
you want to do the same thing to 2000 files, in the majority of cases
you're better off doing it 10 times to 200 files than 2000 times to 1
file. Of course, you can also specify how many files xargs will operate
on in one invocation, if there is some arbitrary limit you must adhere to.

Also, you would have to write something fiendishly clever to win a SLUG
Shell Scripting Smackdown that included find -exec.

- Jeff

-- 
LinuxWorldExpo: Johannesburg, South Africa
http://www.linuxworldexpo.co.za/

"Gah. Out of coffee. Shall think whilst auto-caffeinating." - Telsa Gwynne

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
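The difference described above can be sketched as a quick benchmark. The
scratch directory, the file count of 500, and the `-print0`/`-0` flags
(to handle names with whitespace safely) are illustrative assumptions,
not details from the original thread:

```shell
#!/bin/sh
# Illustrative benchmark: one rm process per file vs. batched rm.
# Directory and file count are made up for demonstration purposes.
dir=$(mktemp -d)
cd "$dir"

for i in $(seq 1 500); do touch "file$i.tmp"; done

# One rm invocation per file: a fork/exec cycle for every single name.
time find . -name '*.tmp' -exec rm {} \;

for i in $(seq 1 500); do touch "file$i.tmp"; done

# Batched: xargs packs as many names as fit on each command line,
# so rm is started only a handful of times for the same 500 files.
time find . -name '*.tmp' -print0 | xargs -0 rm

# To cap the batch size explicitly (e.g. 200 names per rm invocation):
#   find . -name '*.tmp' -print0 | xargs -0 -n 200 rm
```

Both variants leave the directory empty; the batched pipeline simply gets
there with far fewer process startups.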
