On 3/7/06, Andrew Lentvorski <[EMAIL PROTECTED]> wrote:
> Michael O'Keefe wrote:
> >>>>> The larger problem is that we shouldn't be using a "filename".
> >>>> Hans Reiser (of Namesys) would agree with you.
> >>> What are the proposed alternatives ?
> >> Queries primarily.
> >
> > Wouldn't that make a filename just an alias for 'select inode from
> > filesystem where type = "text document" and size > 2MB and mtime < 5min' ?
> >
> >> I have no objection to keeping a filename.
> >>
> >> However, I note that I type the following idiom:
> >>
> >> find . -type f -exec grep <someregex> {} /dev/null \;
> >> I want the ability to search files. *FAST*.
> >
> > And yet you use -exec rather than xargs ?
>
> Yes.
>
> Because I don't want to wait for the *entire* recursive list of files to
> be built before starting the grep. How do I get around that with xargs?
>
> Quite often I find what I need and I abort the command.
Doesn't xargs(1) accept the output of find(1) as it arrives, and
ship it off to grep(1) in suitably sized batches without waiting for
find(1) to finish?
In any case you must have a _lot_ of files, or be very fast on the ^C key.
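For what it's worth, xargs(1) does read stdin incrementally, but by
default it collects arguments until the command-line limit (or EOF)
before the first exec; forcing tiny batches with -n makes the early
dispatch visible. A minimal sketch, with echo standing in for grep:

```shell
# With -n 1, xargs hands each name off as soon as it arrives, so
# "first" is echoed a full second before "second" shows up on stdin.
{ echo first; sleep 1; echo second; } | xargs -n 1 echo
```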
Sample experiment with /usr/include, looking for a moderately rare word.
[EMAIL PROTECTED] include]$ pwd
/usr/include
[EMAIL PROTECTED] include]$ find . -type f | wc -l
7081
[EMAIL PROTECTED] include]$ time find . -type f -exec grep -i largefile64 {}
/dev/null \; > /tmp/find_largefile64
real 0m9.111s
user 0m2.611s
sys 0m6.420s
[EMAIL PROTECTED] include]$ time find . -type f -print | xargs grep -i
largefile64 /dev/null > /tmp/find_largefile64
real 0m0.227s
user 0m0.073s
sys 0m0.177s
[EMAIL PROTECTED] include]$
The overhead of using the "exec" option to find(1) rather than
xargs(1) is about a factor of 40 here. YMMV.
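The difference is mostly fork/exec overhead: -exec spawns one grep per
file, while xargs packs as many filenames as fit onto each grep command
line. A small illustration, again with echo standing in for grep:

```shell
# Default batching: all three names go to one echo invocation,
# which prints them on a single line.
printf 'a\nb\nc\n' | xargs echo
# -n 1 mimics find -exec: one invocation per argument, three lines out.
printf 'a\nb\nc\n' | xargs -n 1 echo
```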
carl
--
carl lowenstein marine physical lab u.c. san diego
[EMAIL PROTECTED]
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list