> This is not an easy problem. See our most recent discussion at
Thanks for the small test program.
I tested it on my MacBook Pro and gettimeofday() was way faster than time().
The clock_gettime() used by Hari Babu's clock_gettime_1.patch in the
mailing thread apparently doesn't work on OS X.
Instead, I tested the OS X-specific mach_absolute_time(), which was the fastest:

    gcc -Wall -O2 -o time-timing-calls -DUSE_MACH_ABSOLUTE_TIME time-timing-calls.c
    gcc -Wall -O2 -o time-timing-calls -DUSE_GETTIMEOFDAY time-timing-calls.c

where the mach_absolute_time() variant replaces the timing call with:

    tv = mach_absolute_time();
> I'm prepared to consider an argument that wait timing might have weaker
> requirements than EXPLAIN ANALYZE (which certainly needs to measure short
> durations) but you didn't actually make that argument.
I can see why timing overhead is a problem in EXPLAIN ANALYZE,
and that would of course be a great thing to fix.
However, I'm not sure I fully understand how it can be that much of a
problem in pgstat_report_wait_start()?
As far as I can tell from reading the source code,
pgstat_report_wait_start() is only entered when a process is about to wait?
Is it not likely that the time spent waiting will vastly exceed the
extra cost of the gettimeofday() call?
Is it really a typical real-life scenario that processes can be
waiting extremely often for extremely short periods of time,
where the timing overhead would be significant?
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)