Ben Laurie wrote:
Brian Pane wrote:
Building upon Cliff's formulas, here's another idea
for doing faster conversions of the current apr_time_t
format to seconds:
What we want is t/1000000
What we can do easily is t/1048576
But what can we add to t/1048576 to approximate t/1000000?
If I solve for 'C' in
t/1000000 = t/1048576 + t/C
i.e. 1/C = 1/1000000 - 1/1048576, so C = (1048576 * 1000000) / (1048576 - 1000000),
I get C = ~21,586,297
That's not a power of 2, but what if we use 2^24 (~16.8M) as an
approximation:
seconds = (t >> 20) + (t >> 24)
That probably isn't accurate enough (the sum works out to t * 17/2^24,
which runs about 1.3% high), but you get the basic idea:
sum a couple of t/(2^n) terms to approximate t/1000000.
What do you think?
I think you're all nuts. Are you seriously saying we compute time
stuff often enough to care how long it takes?
Yes. The product of:
frequency of time manipulation * cost of time manipulation
is high enough to make 64-bit division one of the top 5 most
expensive functions in the httpd. (See Bill Stoddard's [EMAIL PROTECTED]
posts on performance profiling for some examples.)
This result doesn't mean that time manipulation is inherently
an expensive part of the httpd, though; rather, it means that
we're using a time representation that's mismatched to the
needs of the application.
--Brian