J.P. Delport wrote:
For Linux, play with this:
#ifndef WIN32
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/// High resolution timers for Linux.
#define NSEC_PER_SEC 1000000000LL

inline uint64_t timespec_to_ns(const struct timespec *ts)
{
    return ((uint64_t) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
}

/// Returns nanoseconds since the epoch.
inline uint64_t currentTimeNanoSec()
{
    struct timespec timestamp_ts;
    //clock_gettime(CLOCK_MONOTONIC, &timestamp_ts);
    clock_gettime(CLOCK_REALTIME, &timestamp_ts);
    return timespec_to_ns(&timestamp_ts);
}

/// Returns the clock resolution in nanoseconds.
inline uint64_t clockResNanoSec()
{
    struct timespec timestamp_ts;
    //clock_getres(CLOCK_MONOTONIC, &timestamp_ts);
    clock_getres(CLOCK_REALTIME, &timestamp_ts);
    return timespec_to_ns(&timestamp_ts);
}

#if 0
int main(void)
{
    printf("%" PRIu64 ", %" PRIu64 "\n", currentTimeNanoSec(), clockResNanoSec());
    printf("%" PRIu64 ", %" PRIu64 "\n", currentTimeNanoSec(), clockResNanoSec());
}
#endif
#endif // WIN32
Interesting. I had compared different ways of doing timing on Linux some
time ago and was certain that clock_gettime() didn't work for me.
But now that I've tested it again, it at least seems to provide enough precision.
Ah well, I'll probably never figure out what happened :)
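For reference, a minimal sketch of one way to check what clock_gettime()
actually delivers: print the resolution reported for CLOCK_MONOTONIC and the
measured elapsed time of a short busy loop (the loop is just a placeholder
workload). Assumes a Linux system with a C99 compiler; older glibc versions
need -lrt at link time.

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000LL

static uint64_t to_ns(const struct timespec *ts)
{
    return ((uint64_t) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
}

int main(void)
{
    struct timespec res, start, end;
    volatile uint64_t sink = 0;

    /* Reported resolution of the monotonic clock. */
    clock_getres(CLOCK_MONOTONIC, &res);

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (uint64_t i = 0; i < 1000000; ++i)  /* placeholder workload */
        sink += i;
    clock_gettime(CLOCK_MONOTONIC, &end);

    printf("resolution: %llu ns, elapsed: %llu ns\n",
           (unsigned long long) to_ns(&res),
           (unsigned long long) (to_ns(&end) - to_ns(&start)));
    return 0;
}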
Thanks,
Paul