Hi,

On Wed, 1 Aug 2007, Linus Torvalds wrote:

> So I think it would be entirely appropriate to
> 
>  - do something that *approximates* microseconds.
> 
>    Using microseconds instead of nanoseconds would likely allow us to do 
>    32-bit arithmetic in more areas, without any real overflow.

The basic problem is that one needs a number of bits (at least 16) for 
normalization, which limits the time range one can work with. This means 
that 32 bits leave room for only about 1 millisecond resolution; the 
remainder could maybe be saved and reused later.
So AFAICT using micro- or nanosecond resolution doesn't make much 
computational difference.
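
To make the bit budget concrete, here is a rough sketch (plain userspace 
C, not kernel code; the 16-bit normalization width is the assumption 
from above):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* Assume 16 of the 32 bits are reserved for a
		 * normalization/weight factor, so only 16 bits remain
		 * for the time delta before a 32-bit multiply overflows. */
		const unsigned norm_bits = 16;
		const uint32_t max_delta = UINT32_MAX >> norm_bits; /* 65535 units */

		/* Maximum span before overflow, per choice of base unit. */
		printf("nanoseconds : %u ns (~%.3f ms)\n", max_delta, max_delta / 1e6);
		printf("microseconds: %u us (~%.3f s)\n",  max_delta, max_delta / 1e6);
		printf("milliseconds: %u ms (~%.1f s)\n",  max_delta, max_delta / 1e3);
		return 0;
	}

With 16 bits eaten by normalization, both nano- and microsecond units 
overflow a 32-bit value within a scheduling period (~65 us / ~65 ms), 
while millisecond units give about 65 seconds of range.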

bye, Roman