A fine summary of the situation.

On Friday, July 12, 2002, at 12:42 PM, William A. Rowe, Jr. wrote:
I. We represent all time quanta in the same scale throughout APR.  That
   scale is in microseconds.

Which is goodness, because we don't ever have to go back to docs and ask, "Does that function take seconds or apr time?"

It's easy to keep track of that within APR. What is hard is dealing with both APR and other libraries within the code of APR users, since the rest of the universe thinks in seconds. The accessor functions were a significant improvement.
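
[For reference, the seconds boundary is exactly what the apr_time.h
accessor macros cover; the two wrapper functions below are invented
for illustration:]

    #include <time.h>
    #include "apr_time.h"   /* apr_time_t: microseconds since the epoch */

    /* Cross the APR/libc boundary through the accessors instead of
     * open-coding t / 1000000 at every call site. */
    static time_t to_posix_seconds(apr_time_t t)
    {
        return (time_t)apr_time_sec(t);    /* APR time -> seconds */
    }

    static apr_time_t from_posix_seconds(time_t secs)
    {
        return apr_time_from_sec(secs);    /* seconds -> APR time */
    }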

II. Performance is an issue: we are attempting to reclaim CPU cycles lost
converting, especially between seconds and microseconds, both internally
and externally (by other apps).

And everyone agrees we want this as fast as possible, without introducing bugs due to [whatever sort of] programmer confusion.

Yes, +1 on applying the binary microseconds patch -- it's far better than the current state.

III. The existing name is an issue to Roy and others who are confused by the
similarity between apr_time_t and time_t (in the ANSI/POSIX definitions.)

And I agree it's an obstacle to 1) porting old code to APR, and 2) folks quickly
getting comfortable with APR, when they are just learning the library.

I haven't been confused by it. I just have a better memory of the number of times that you have patched various aspects of httpd and apr-util due to people confusing them, or simply changing the interface using mass find/replace. Most of those bugs were introduced by the authors of APR, so I fail to see how "just learning" applies.

IV. Without sacrificing resolution, I put forward a proposal that we use
a binary representation of microseconds. Mr. Stoddard has determined
that the binary representation we presented does reduce the overall cpu
instructions and clock cycles in httpd request processing, as expected.

This has two benefits. Scalar math operations simply work; computing deltas doesn't require additional carry operations. And seconds can be grabbed quickly with a binary shift, so there is no huge integer division to contend with. It's an all-around performance win.
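
[A sketch of that arithmetic, assuming the patch packs whole seconds
above bit 20 (one second = 2^20 binary microseconds); all names here
are invented:]

    #include "apr.h"   /* apr_int64_t */

    #define EX_BUSEC_SEC_BITS 20   /* hypothetical: 1 sec = 2^20 busec */

    /* Deltas are a plain scalar subtract -- no carry/borrow step --
     * and whole seconds fall out with a shift instead of a divide. */
    static apr_int64_t ex_elapsed_secs(apr_int64_t start_busec,
                                       apr_int64_t end_busec)
    {
        apr_int64_t delta = end_busec - start_busec;
        return delta >> EX_BUSEC_SEC_BITS;
    }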

However, it's mildly confusing to work with, without the macros.  Those
macros need to be thoroughly vetted for range and overflow errors, etc.

The casts should be removed and the interval time really should have the same size as epoch time.

V. Aaron and others submit that we should change the name of the type
if we change the scale, to ensure our APR library users aren't tripped up
by casual msec = t / 1000 computations in their existing code. This just
happens to coincide with Roy's concerns in (III.) above.

And with (III.) above, it just makes good sense to pick new names for this
new type, IF we are going to have a contract with the programmers about
the representation. We can have compatibility macros until the old symbols
are deprecated, and Aaron and others who are concerned with catching all
instances of the old usage can disable the compat macros.
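
[Something along these lines, using the names proposed under (VIII.)
below; the guard macro is invented for illustration:]

    #ifndef APR_TIME_DISABLE_COMPAT        /* hypothetical guard */
    /* The old spellings map onto the new type until they are formally
     * deprecated; a build that defines the guard turns every stale use
     * into a compile error instead. */
    #define apr_time_t          apr_time_busec_t
    #define apr_interval_time_t apr_span_busec_t
    #endif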

Er, that would have been a good idea, had it been deployed earlier.

VI. Brian and others have asked that we have an undefined scalar value
[with no contract to the users about its representation.] Roy and others
object, due to overflow and range considerations, and binary compatibility
considerations [as it's all in macros that aren't updated by new binaries].

I really don't see a win here. Why have no contract? We aren't hiding the
definitions within accessor functions behind an opaque type. There is NO
type safety when you use C scalars for a type.


And the code can never be a binary drop-in replacement: the time manipulation
is all compiled into the user's code from the macros; it isn't buried safely
inside of APR.

I just don't believe in partially-implemented ADTs. It needs to be either abstract to hide implementation details or written in stone such that the implementation details are guaranteed across platforms. One or the other is okay with me, but not both.

VII. Ryan and others submit that we need two types, in fact; one absolute
measure (from epoch 1.1.1970) and one 'interval' or 'delta' that represents
a span, rather than a time. This is the case in APR today.

And we all agree here. The key words, time and span, make the most sense
after I considered it. span is fairly well adopted in the C/C++/STL world.
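
[For reference, both are plain microsecond scalars in apr_time.h today:]

    typedef apr_int64_t apr_time_t;          /* absolute: usec since 1.1.1970 */
    typedef apr_int64_t apr_interval_time_t; /* relative: a span in usec      */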


VIII. From all of the above came the original discussion of naming. Ryan and
others believe we should not change the name of the type, whatsoever.
The sub-argument is between a strongly defined name contract, e.g.
busec in the identifier, or a completely ambiguous name with no contract
of the scale's unit.

And I stand by strongly named, intuitive names that warn you not to just pass around seconds. That leaves us with something like:

apr_time_busec_t
apr_span_busec_t

which conveys that the span is measured in buseconds.

+1

I'm suggesting the time/span before busec so we don't have to go about
renaming EVERY SINGLE apr_time_fn() to a new type.  Can we all live
with apr_time_fn() functions addressing apr_time_busec_t values?  Or do
we have to go to the extreme of renaming these all apr_time_busec_fn()?

*shrug*

IX. Roy's original comments yesterday went back to item (II.) above, and
reintroduced to the optimization discussion the options of either passing
seconds to those APR functions that don't need precision, and/or replacing
our time definition with a structure (separate seconds and useconds fields).
These options were debated/voted upon several times before on the APR list.

To the idea of both a seconds and a fine-resolution time type in APR... I say
no friggin way. All of us have introduced bugs in code at one time or another
by mixing up our seconds, mseconds and useconds, no?


One scale for time in APR is sufficient. That's the way NSPR went as well, IIRC.

To the other idea of going back to a structure, that has a huge performance
penalty when you need to compute deltas. Simple add/subtract becomes
10+ cpu instructions. That's the main objection Dean and I make.
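
[For comparison, a struct delta needs the borrow step spelled out; the
struct and function names below are invented:]

    #include "apr.h"

    typedef struct { apr_int32_t sec; apr_int32_t usec; } ex_timepair_t;

    static ex_timepair_t ex_delta(ex_timepair_t a, ex_timepair_t b)
    {
        ex_timepair_t d;
        d.sec  = b.sec  - a.sec;
        d.usec = b.usec - a.usec;
        if (d.usec < 0) {          /* borrow from the seconds field */
            d.usec += 1000000;
            d.sec  -= 1;
        }
        return d;                  /* vs. one subtract for a scalar */
    }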

Bah! That's nothing compared to the multiply and divides. Once those are gone, I can live with it either way.

However, if you take a little wrapper [dunno how portable this is] you get:
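
[The original snippet isn't reproduced in the archive; a bitfield
overlay along these lines illustrates the idea, with invented names:]

    /* Bitfield layout is implementation-defined -- which is the
     * portability problem called out in the reply below. */
    typedef union {
        apr_int64_t whole;           /* the scalar busec value        */
        struct {
            apr_int64_t frac : 20;   /* fraction, 1/1048576-sec units */
            apr_int64_t sec  : 44;   /* whole seconds                 */
        } parts;
    } ex_busec_view_t;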

It isn't portable. In any case, macros are better than bitfields because you only want to do the type conversion once when coding at the lesser precision level.

The point I'm making, however, is that a busec representation is very close,
cpu-wise, to a 2-int structure, while dropping the number of cpu cycles
required to do basic addition and subtraction.


Only the usec/msec/nsec into busec computation becomes expensive
(that is a shift then a divide). busec into those units is a multiply then
a shift, which is much faster. Since we regularly obtain something other
than usec (such as 100ns units on NT, or msec for timeouts), this really
isn't a penalty that will cost us often, and it's one we already pay much
more frequently today than we will with the new semantics.
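
[Concretely, with the 20-bit layout assumed above and invented macro
names:]

    /* usec -> busec: shift up, then the costly 64-bit divide. */
    #define EX_USEC_TO_BUSEC(us) ((((apr_int64_t)(us)) << 20) / 1000000)

    /* busec -> usec: a multiply then a shift -- cheap by comparison.
     * Note the multiply can exceed the 64-bit range for large absolute
     * times: exactly the overflow vetting (IV.) calls for. */
    #define EX_BUSEC_TO_USEC(bu) ((((apr_int64_t)(bu)) * 1000000) >> 20)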

How about approximate conversion functions for those who don't care about the gain or loss of a few microseconds? The poll implementation is one example.
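
[One way such an accessor might look, name invented: treat 2^10 busec
as one millisecond, which a poll timeout won't notice:]

    /* One shift instead of a multiply + shift; overstates the msec
     * count by ~2.4% (1024 vs. 1048.576 busec per true msec). */
    #define EX_BUSEC_TO_MSEC_APPROX(bu) ((bu) >> 10)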

....Roy