> Setting "track_io_timing = on" should measure the time spent doing I/O
> more accurately.
I do see I/O timings after enabling this. They show that 96.5% of the long
queries' time is spent on I/O. If I subtract the I/O time from the total, I
get ~1.4 s for 5000 rows, which is the SAME for both ranges once I adjust the
segment borders accordingly (to match ~5000 rows). Only the I/O time differs,
and it differs significantly.
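For reference, a minimal sketch of how I obtain such timings (the table name
and id range below are placeholders, not my actual schema):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS)
      SELECT *
      FROM big_table                          -- hypothetical table
      WHERE id BETWEEN 1000000 AND 1005000;   -- ~5000 rows

With track_io_timing enabled, the plan nodes report "I/O Timings: read=..."
next to "Buffers: shared hit=... read=...", so the I/O portion can be
subtracted from each node's total time.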

> One problem with measuring read speed that way is that "buffers read" can
> mean "buffers read from storage" or "buffers read from the file system
> cache",
I understand, that's why I conducted experiments with drop_caches.
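For completeness, the cache-dropping procedure I mean is the usual one
(a sketch, run as root; the service name is an assumption and may differ on
your system; restarting PostgreSQL also clears shared_buffers):

    systemctl stop postgresql          # discard shared_buffers as well
    sync                               # flush dirty pages first
    echo 3 > /proc/sys/vm/drop_caches  # drop page cache, dentries, inodes
    systemctl start postgresql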

> but you say you observe a difference even after dropping the cache.
No, I am saying I see NO significant difference (within measurement error)
between running "with caches" and running after dropping the caches. And this
is explainable, I think. Since I read almost all the data from the huge table
sequentially, no cache can hold that much data, so it cannot significantly
influence the results. And while the PK index *could* be cached (in theory),
I think its data is being evicted from the buffers by the bulkier JSONB data.
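If it helps, the buffer contents can be checked directly with the
pg_buffercache extension (a sketch; which relations show up is of course
specific to my data):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- Which relations currently occupy the most shared buffers?
    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;

If the PK index barely shows up there after a run while the table's JSONB
pages dominate, that would support the displacement guess.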

Vlad
