> This is not an easy problem.  See our most recent discussion at
> https://www.postgresql.org/message-id/flat/31856.1400021891%40sss.pgh.pa.us

Thanks for the small test program.
I tested it on my MacBook Pro and gettimeofday() was way faster than time().

The clock_gettime() call used by Hari Babu's clock_gettime_1.patch in
that mailing thread apparently doesn't work on OS X.

Instead, I tested the OS X-specific mach_absolute_time(), which was the fastest:

gcc -Wall -O2 -o time-timing-calls -DUSE_MACH_ABSOLUTE_TIME time-timing-calls.c
time ./time-timing-calls
real 0m16.806s
user 0m16.781s
sys 0m0.012s

gcc -Wall -O2 -o time-timing-calls -DUSE_GETTIMEOFDAY time-timing-calls.c
time ./time-timing-calls
real 0m35.466s
user 0m35.062s
sys 0m0.393s

Code:

#ifdef __MACH__
#include <mach/mach_time.h>     /* for mach_absolute_time() */
#endif

#ifdef USE_MACH_ABSOLUTE_TIME
        /* Raw Mach ticks; convert via mach_timebase_info() if nanoseconds are needed. */
        uint64_t tv;
        tv = mach_absolute_time();
#endif
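
The full time-timing-calls.c isn't reproduced here, but a minimal
self-contained sketch along the same lines would look roughly like this
(the iteration count and the dummy accumulator are my assumptions, not
necessarily what the original program does):

/*
 * Sketch of a timing-call benchmark in the spirit of time-timing-calls.c.
 *
 * Build/run, as above:
 *   gcc -Wall -O2 -o time-timing-calls -DUSE_MACH_ABSOLUTE_TIME time-timing-calls.c
 *   gcc -Wall -O2 -o time-timing-calls -DUSE_GETTIMEOFDAY time-timing-calls.c
 *   time ./time-timing-calls
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <sys/time.h>
#ifdef __MACH__
#include <mach/mach_time.h>
#endif

#define NCALLS 1000000000UL     /* assumed iteration count */

int
main(void)
{
        uint64_t        dummy = 0;
        unsigned long   i;

        for (i = 0; i < NCALLS; i++)
        {
#if defined(USE_MACH_ABSOLUTE_TIME)
                dummy += mach_absolute_time();
#elif defined(USE_GETTIMEOFDAY)
                struct timeval tv;

                gettimeofday(&tv, NULL);
                dummy += tv.tv_usec;
#else
                dummy += time(NULL);
#endif
        }

        /* Print the accumulator so the compiler can't optimize the loop away. */
        printf("%llu\n", (unsigned long long) dummy);
        return 0;
}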

> I'm prepared to consider an argument that wait timing might have weaker
> requirements than EXPLAIN ANALYZE (which certainly needs to measure short
> durations) but you didn't actually make that argument.

I can see why timing overhead is a problem in EXPLAIN ANALYZE and in
other places, and that would of course be a great thing to fix.

However, I'm not sure I fully understand how it can be that much of a
problem in pgstat_report_wait_start().

As far as I can tell from reading the source code,
pgstat_report_wait_start() is only entered when a process is about to wait?

Is it not likely that the time spent waiting will vastly exceed the
extra cost of the gettimeofday() call?

Is it really a typical real-life scenario for processes to wait
extremely often for extremely short periods of time, such that the
timing overhead would be significant?
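
To make the question concrete, here is an illustrative sketch only --
not PostgreSQL's actual pgstat_report_wait_start()/end() code; the
names wait_start_time and accumulate_wait_time are assumptions made for
the example -- of where the extra gettimeofday() calls would sit
relative to the wait itself:

#include <sys/time.h>

static struct timeval wait_start_time;          /* per-backend, assumed */

static inline void
sketch_report_wait_start(void)
{
        gettimeofday(&wait_start_time, NULL);   /* one extra call per wait */
        /* ... record the wait event as today ... */
}

static inline void
sketch_report_wait_end(void)
{
        struct timeval  now;
        long            elapsed_us;

        gettimeofday(&now, NULL);
        elapsed_us = (now.tv_sec - wait_start_time.tv_sec) * 1000000L
                   + (now.tv_usec - wait_start_time.tv_usec);

        /* accumulate_wait_time(current_wait_event, elapsed_us); -- assumed */
        (void) elapsed_us;
}

The two gettimeofday() calls bracket an actual wait, which I would
expect to normally last far longer than the calls themselves.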

