On Fri, Feb 13, 2026 at 6:42 AM Hannu Krosing <[email protected]> wrote:
> I haven't looked at the code here yet, but when using plain rdtsc on
> modern CPUs one sees much more overhead from just the fact that the
> code is there than from calling the rdtsc instruction, and the
> overhead can vary by orders of magnitude based on how complex the work
> is that is timed.

If I understand you correctly, your comment would refer to this
function in 0003:

static inline instr_time
pg_get_ticks_fast(void)
{
#if defined(__x86_64__)
        if (likely(use_tsc))
        {
                instr_time      now;

                now.ticks = __rdtsc();
                return now;
        }
#endif

        return pg_get_ticks_system(); /* clock_gettime on POSIX */
}

I agree that this code is more complex than just getting the time via
clock_gettime / vDSO, but in practice it does lower the timing
overhead, and so far I have not seen it regress when testing against
the system clock on x86-64 instead (i.e. not going through the likely
branch).
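
For illustration, here is a rough standalone microbenchmark (this is
not pg_test_timing; the loop count, the volatile sink and the build
flags are just illustrative choices) that compares the per-call cost
of __rdtsc() against clock_gettime() on x86-64:

/* Build with something like: gcc -O2 -o timing_sketch timing_sketch.c */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <x86intrin.h>

#define LOOPS 10000000

/* nanoseconds between two CLOCK_MONOTONIC readings */
static double
elapsed_ns(struct timespec start, struct timespec end)
{
        return (end.tv_sec - start.tv_sec) * 1e9 +
                (end.tv_nsec - start.tv_nsec);
}

int
main(void)
{
        struct timespec start, end, ts;
        volatile uint64_t sink = 0;

        /* cost of reading the TSC directly */
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < LOOPS; i++)
                sink += __rdtsc();
        clock_gettime(CLOCK_MONOTONIC, &end);
        printf("__rdtsc():       %.1f ns/call\n",
               elapsed_ns(start, end) / LOOPS);

        /* cost of going through clock_gettime / vDSO */
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < LOOPS; i++)
        {
                clock_gettime(CLOCK_MONOTONIC, &ts);
                sink += ts.tv_nsec;
        }
        clock_gettime(CLOCK_MONOTONIC, &end);
        printf("clock_gettime(): %.1f ns/call\n",
               elapsed_ns(start, end) / LOOPS);

        return (int) (sink & 1);        /* keep sink live */
}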

If you have a concern here, it would be helpful to have a specific
example where this behaves slower or measures incorrectly.

> I discovered this when I timed the (then-)new dead tid lookups in the
> Vacuum in Pg 17 and saw significantly larger overhead per lookup when
> the lookups themselves were slower, i.e. a case where the lookups were
> done in random order (the index was created on a column filled with
> random())

If you can share an example of what you tested here in the past, I'd
also be happy to take a look at it to understand better.

> So while just a tight loop of N million rdtsc calls will give you the
> lower limit, it is likely not very representative of actual overhead.

I agree that a tight loop by itself could be scheduled differently on
the CPU than regular code paths, so our tests could be skewed if
that's all we're looking at. That's why we're doing the combined
testing of EXPLAIN ANALYZE on a problematic COUNT(*) query and
pg_test_timing in this thread.
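
To illustrate the point about context, here is another rough
standalone sketch (again not from the patch set; the array size,
access pattern and loop counts are arbitrary) that contrasts the
tight-loop number with the apparent per-call cost of __rdtsc() when
it is interleaved with cache-missing work:

/* Build with something like: gcc -O2 -o overhead_sketch overhead_sketch.c */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <x86intrin.h>

#define N     (1 << 24)         /* 16M slots (~128 MB), larger than typical L3 */
#define LOOPS 10000000

int
main(void)
{
        uint64_t   *arr = malloc((size_t) N * sizeof(uint64_t));
        volatile uint64_t sink = 0;
        uint64_t        t0, t1;
        double          tight, plain, timed;

        if (arr == NULL)
                return 1;
        for (long i = 0; i < N; i++)
                arr[i] = (uint64_t) i;

        /* 1. tight loop: the "lower limit" number */
        t0 = __rdtsc();
        for (long i = 0; i < LOOPS; i++)
                sink += __rdtsc();
        t1 = __rdtsc();
        tight = (double) (t1 - t0) / LOOPS;

        /* 2. random-order reads with no timing inside the loop */
        t0 = __rdtsc();
        for (long i = 0; i < N; i++)
                sink += arr[(i * 2654435761UL) & (N - 1)];
        t1 = __rdtsc();
        plain = (double) (t1 - t0) / N;

        /* 3. the same reads, plus one __rdtsc() per iteration */
        t0 = __rdtsc();
        for (long i = 0; i < N; i++)
        {
                sink += arr[(i * 2654435761UL) & (N - 1)];
                sink += __rdtsc();
        }
        t1 = __rdtsc();
        timed = (double) (t1 - t0) / N;

        printf("tight loop:        %.1f ticks per __rdtsc()\n", tight);
        printf("random reads:      %.1f ticks/iter\n", plain);
        printf("reads + __rdtsc(): %.1f ticks/iter (apparent overhead %.1f)\n",
               timed, timed - plain);

        free(arr);
        return (int) (sink & 1);        /* keep sink live */
}

The interesting part is not the absolute numbers, but how far the last
line ends up from the tight-loop figure on a given machine.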

Thanks,
Lukas

-- 
Lukas Fittl

