Here... let me drag us even further off course.
On Wed, 28 Apr 1999, Shane Kerr wrote:
> Sorry for the off-topic post, but I did feel the need to defend myself.
> :)
>
> [snip]
>
> kerr@bang [~/work]time tr -d '\15' < shani.tmp2 > /dev/null
>
> real 0m2.925s
> user 0m2.603s
> sys 0m0.283s
> kerr@bang [~/work]time perl -pe 's/\r$//' < shani.tmp2 > /dev/null
>
> real 0m6.255s
> user 0m5.872s
> sys 0m0.321s
> kerr@bang [~/work]
>
> I wouldn't say that taking 6 seconds to convert a 5 Mbyte file is "slow".
> While it is twice as slow as the tr program, that's only a 3 second
> difference.
> [snip]
Also, in your defense, one needs to consider the load-time difference
between getting perl in core and kicking versus getting tr in core and
kicking.
It's quite possible that tr takes, say, 0.25s to load before it can start
executing, while perl takes maybe 3.0s. So perl's initial overhead is
greater, but once it's executing it may run on par with (or even faster
than) tr. That's speculation, but it would be interesting to look at a bit
more closely.
You'd want to run the same test on a negligibly sized file (0 or 1 lines)
to approximate the load time. (Sort of like finding the tare mass of a
beaker in chem lab.) Then you'd retime the two conversions on a
significantly larger file (20-30 MB, maybe), subtract the approximated
load times, and figure out the performance ratio between the two
effective processing times.
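The tare-weight procedure above might look something like this in the shell
(the file names and the 20 MB figure are just made up for illustration;
exact `time` output format varies by shell):

```shell
# Step 1: approximate each command's startup cost on an empty input.
: > empty.txt
time tr -d '\r'         < empty.txt > /dev/null
time perl -pe 's/\r$//' < empty.txt > /dev/null

# Step 2: build a large CRLF test file (~20 MB) and time the real work.
yes 'hello, world' | head -c 20000000 | awk '{printf "%s\r\n", $0}' > big.txt
time tr -d '\r'         < big.txt > stripped.txt
time perl -pe 's/\r$//' < big.txt > /dev/null

# Step 3: subtract the step-1 times from the corresponding step-2 times,
# then compare the two "effective processing" times.
# Sanity check: the converted output should contain no carriage returns.
if grep -q "$(printf '\r')" stripped.txt; then
    echo 'CRs remain'
else
    echo 'no CRs left'
fi
```

Averaging several runs of each step would smooth out caching effects,
since the second run of either command will find its binary already in
the buffer cache.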
Of course tr is more practical for smaller files, but that doesn't mean
perl's translation facility can't be just as efficient.
-brian.
---
Brian "JARAI" Chase | http://world.std.com/~bdc/ | VAXZilla LIVES!!!