On Fri, 23 Feb 2001, Roy T. Fielding wrote:

> On Fri, Feb 23, 2001 at 03:01:04PM -0500, Bill Stoddard wrote:
> > > Doesn't this remove the advantage of using apr_time_t's?  The whole point
> > > of that format was that we were using microseconds instead of seconds.  If
> > > that is our goal, then why don't we just change what an apr_time_t is?
> > >
> >
> > apr_time_t should maintain 1 microsecond resolution (there are legit uses for
> > it).  Reducing resolutions should be a runtime or compile time option for
> > folks who want speed at the expense of function.
>
> I was going to bring this up later, but what the heck...  I would much
> prefer a time structure that used time_t seconds and int microseconds
> as separate components rather than mixing all of these 100000s through
> the code.  The fact is that almost every use (all except poll/select)
> use seconds as their primary time format, and those system calls that
> do use microseconds prefer them as a separate parameter.

That's how it was originally.  It was changed to the current model not long
after the original code was committed.  One of the problems with keeping
seconds and microseconds as separate fields is that platforms other than
Unix don't have the same reliance on seconds.  I believe Windows counts
time in 100-nanosecond intervals.

Ryan

_______________________________________________________________________________
Ryan Bloom                              [EMAIL PROTECTED]
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------