On 8/21/07, Kevin King <[EMAIL PROTECTED]> wrote:
[snip]

> cd /ud/TEST/_PH_
> find . -mtime +90 -exec rm {} \;
> With "find" you're working with one file at a time so you should never hit
> the limit.
>

Yes, however using find + xargs is more efficient than executing rm once
for each individual file found:

from http://www.unixreview.com/documents/s=8274/sam0306g/ ...

The modern UNIX OS seems to have solved the problem of the *find* command
overflowing the command-line buffer. However, using the *find -exec* command
is still troublesome. It's better to do this:

# remove all files with a txt extension
find . -type f -name "*.txt" -print|xargs rm

than this:

find . -type f -name "*.txt" -exec rm {} \; -print

Controlling the call to *rm* with *xargs* is more efficient than having the
*find* command execute *rm* for each object found.
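A rough sketch of the xargs approach in action (the temp directory and
file names here are made up for illustration; -print0/-0 are GNU and BSD
extensions, not strictly POSIX, but they make the pipeline safe for file
names containing spaces or newlines, which plain -print | xargs is not):

    tmp=$(mktemp -d)
    touch "$tmp/a.txt" "$tmp/b b.txt" "$tmp/keep.log"

    # xargs batches many file names into each rm invocation,
    # instead of fork/exec-ing rm once per file as -exec rm {} \; does
    find "$tmp" -type f -name "*.txt" -print0 | xargs -0 rm

    ls "$tmp"    # only keep.log remains

On systems without -print0, the POSIX form `find ... -exec rm {} +`
(note the `+` instead of `\;`) gets you the same batching behavior.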

/Scott
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/