Andrew Sackville-West wrote:
On Sun, Sep 09, 2007 at 05:35:12PM -0400, Marty wrote:
Andrew Sackville-West wrote:
On Sun, Sep 09, 2007 at 04:23:42PM -0400, Marty wrote:
The following script seems to run abnormally slowly on a 400 MHz Sarge
system, getting only about one iteration per second in the while loop.
It extracts md5sums from a 180k Packages file and makes an indices file.
I've narrowed down the slowdown to the lines in the while loop starting
with "search=..."
how have you determined this?
I checked the output rate by outputting to stdout (instead of piping to gzip
after the "done" statement). I also timed it with the "time" command.
But that only tells you how long it takes to iterate through the loop
to get to the gzip command, not how much time is spent in each
statement.
Something like:
while read inputline
do echo "input line is " $inputline
search=`grep...`
echo "search is " $search
if...
echo "we got a good search"
fi
...
done
so that you can see how long is actually spent on the creation of a
value for $search, how long in the if comparison, and so forth. You
stated above that the slowdown is in the "search=" lines, and I'm
curious to know how you determined that.
You're right. I should have added that I commented out the lines in question,
and by adding a few echo statements, determined that the script seems to execute
orders of magnitude more quickly without those lines. Once I determined that
the "search=..." lines were consuming most of the time, I tried various versions
of that line, but I couldn't narrow it down to a single command in the line.
or did you do
time search=`grep...`
which could give meaningful output too.
No. That's just how I determined the rate of the output lines.
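For what it's worth, wrapping just the command substitution in bash's `time` keyword, as suggested above, would separate the cost of the grep from the rest of the loop. A minimal sketch, where the Packages and wanted.txt file names and the grep pattern are stand-ins for the real script's inputs:

```shell
#!/bin/bash
# Sketch: time only the command substitution on each iteration.
# The `time` keyword prints its per-statement report to stderr,
# so the loop's normal output on stdout stays readable.
while read -r inputline; do
    echo "input line is $inputline"
    time search=$(grep -m1 "$inputline" Packages)
    echo "search is $search"
done < wanted.txt
```

Redirecting stderr to a file (`2> timings.txt`) keeps the per-statement timings separate for later inspection.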
...
FTR, you may do much better using something like awk to do this,
though I'm no script master, just an observation.
I tried awk instead of cut, with no dramatic change.
Sorry, I meant replacing the whole operation with awk, which, while a
little heavy, might be a better solution than instantiating a whole
bunch of greps and cuts over and over. But that's just a guess, and
your solution should certainly work pretty easily. I see nothing
that would cause it to take a full second per iteration. My brief
testing (granted, on a much more powerful system) scrolled the output
right by, faster than I could hope to read. There was definitely no
noticeable delay anywhere in the process.
That's surprising. I just tried on a 2.8 GHz system, and it required 0.102 s per
output line (about 18 s total). Although that's 10X faster than the 400 MHz
system, it still seems slow.
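For reference, a one-pass awk rewrite along the lines suggested above might look like the following sketch. It assumes the standard Packages stanza layout, where a Filename: line precedes each MD5sum: line; the exact indices format the original script emits is unknown, so the output here is just "md5sum  filename" pairs:

```shell
# Hypothetical sketch: read the 180k Packages file once, with no
# per-line grep/cut processes, and emit "md5sum  filename" pairs.
awk '/^Filename:/ { file = $2 }
     /^MD5sum:/   { print $2 "  " file }' Packages | gzip > indices.gz
```

Because awk performs the whole transformation in a single process, the per-iteration fork/exec cost that tends to dominate a shell loop full of greps and cuts disappears.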