> Can we change the default setting of track_io_timing to on?

+1 for better observability by default.

> I can't imagine a lot of people who care much about its performance impact 
> will be running the latest version of PostgreSQL on ancient/weird systems 
> that have slow clock access. (And the few who do can just turn it off for 
> their system).
> For systems with fast user-space clock access, I've never seen this setting 
> being turned on make a noticeable dent in performance.  Maybe I just never 
> tested enough in the most adverse scenario (which I guess would be a huge FS 
> cache, a small shared buffers, and a high CPU count with constant churning of 
> pages that hit the FS cache but miss shared buffers--not a system I have 
> handy to do a lot of tests with.)

Coincidentally, I have some quick notes on measuring the impact of changing 
the "clocksource" on Linux 5.10.x (real syscall vs the vDSO optimization) on 
PostgreSQL 13.x, as input to the discussion. The thing is that the slow "xen" 
implementation is the default (at least on AWS i3 with Amazon Linux 2), 
apparently because with the faster TSC/RDTSC-based clocksources time can 
potentially drift backwards, e.g. during possible VM live migration. I 
haven't found a better way to see what happens under the hood than strace 
and/or timing a huge number of calls. Of course this only shows the impact 
on PostgreSQL as a whole (with track_io_timing=on), not the isolated 
difference between track_io_timing=on and off. IMHO the better insight (in 
EXPLAIN ANALYZE, autovacuum) is worth more than this potential degradation 
when using slow clocksources.
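
For context, testclock is essentially just a tight gettimeofday() loop. A 
minimal sketch (the loop count is an assumption chosen to roughly match the 
timings below; the original program is not attached):

/*
 * testclock.c - time a huge number of gettimeofday() calls.
 * Build: gcc -O2 -o testclock testclock.c, then run under time(1).
 * On a vDSO-backed clocksource (tsc) there should be almost no %sys
 * component; on a slow one (xen) each call falls back to a real syscall.
 */
#include <sys/time.h>

int main(void)
{
    struct timeval tv;
    long i;

    for (i = 0; i < 1000000000L; i++)   /* assumed loop count */
        (void) gettimeofday(&tv, NULL);
    return 0;
}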

With /sys/bus/clocksource/devices/clocksource0/current_clocksource=xen 
(the default on most AWS instances; ins.pgb = a simple INSERT into a table 
with only a PK, filled from a sequence):
# time ./testclock # 10e9 calls of gettimeofday()
real    0m58.999s
user    0m35.796s
sys     0m23.204s

//pgbench 
    transaction type: ins.pgb
    scaling factor: 1
    query mode: simple
    number of clients: 8
    number of threads: 2
    duration: 100 s
    number of transactions actually processed: 5511485
    latency average = 0.137 ms
    latency stddev = 0.034 ms
    tps = 55114.743913 (including connections establishing)
    tps = 55115.999449 (excluding connections establishing)

With /sys/bus/clocksource/devices/clocksource0/current_clocksource=tsc:
# time ./testclock # 10e9 calls of gettimeofday()
real    0m2.415s
user    0m2.415s
sys     0m0.000s # XXX: note: userland-only workload, no %sys part

//pgbench:
    transaction type: ins.pgb
    scaling factor: 1
    query mode: simple
    number of clients: 8
    number of threads: 2
    duration: 100 s
    number of transactions actually processed: 6190406
    latency average = 0.123 ms
    latency stddev = 0.035 ms
    tps = 61903.863938 (including connections establishing)
    tps = 61905.261175 (excluding connections establishing)

In addition, what could be done here - if that XXX note holds true on more 
platforms - is to time a batch of gettimeofday() calls during startup, 
measure it via getrusage(), and log a warning suggesting a check of the OS 
clock implementation if the calls take relatively long and/or the %sys part 
is > 0. I don't know what to suggest for the potential time-going-backwards 
issue, but flipping track_io_timing=on doesn't feel like something that will 
make stuff crash, so again I think it is a good idea.
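
A minimal standalone sketch of such a startup check (the batch size and the 
warning threshold are made-up placeholders, not a concrete proposal):

/*
 * clockcheck.c - time a batch of gettimeofday() calls and inspect the
 * system-CPU share via getrusage().  Sketch only: ncalls and the
 * 100 ns/call threshold are illustrative assumptions.
 */
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

static double tv_sec(struct timeval tv)
{
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    const long      ncalls = 1000000;   /* assumed batch size */
    struct rusage   ru0, ru1;
    struct timeval  tv, t0, t1;
    long            i;
    double          wall, sys;

    getrusage(RUSAGE_SELF, &ru0);
    gettimeofday(&t0, NULL);
    for (i = 0; i < ncalls; i++)
        (void) gettimeofday(&tv, NULL);
    gettimeofday(&t1, NULL);
    getrusage(RUSAGE_SELF, &ru1);

    wall = tv_sec(t1) - tv_sec(t0);
    sys = tv_sec(ru1.ru_stime) - tv_sec(ru0.ru_stime);

    /* A fast vDSO clocksource keeps %sys at ~0; real syscalls show up
     * as system time. */
    if (sys > 0.0 || wall / ncalls > 100e-9)
        fprintf(stderr, "WARNING: gettimeofday() looks slow "
                "(%.0f ns/call, %.3f s sys); consider checking "
                "the OS clocksource\n",
                wall / ncalls * 1e9, sys);
    return 0;
}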

-Jakub Wartak.

