Hi,

On 1/23/23 21:30, Andres Freund wrote:
> That's been the case since my first post in the thread :). Mainly, it seems
> easier to detect underflow cases during subtraction that way. And the factor
> of 2 in range doesn't change a whole lot.

I just realized it the other day :).

> If you have time to look at the pg_test_timing part, it'd be
> appreciated. That's a bit larger, and nobody looked at it yet. So I'm a bit
> hesitant to push it.
>
> I haven't yet pushed the pg_test_timing (nor its small prerequisite)
> patch.

> I've attached those two patches. Feel free to include them in your series if
> you want, then the CF entry (and thus cfbot) makes sense again...

I'll include them in my new patch set and also have a careful look at them.

I reviewed the prerequisite patch, which introduces INSTR_TIME_SET_SECONDS(), as well as the pg_test_timing patch itself. Here are my comments:

- The prerequisite patch looks good to me.

- By default, the test query in the pg_test_timing doc runs serially. What about adding SET max_parallel_workers_per_gather = 0 to make sure it really always does, e.g. on a system with non-default settings for parallel_tuple_cost / parallel_setup_cost? Otherwise, the numbers will be much flakier.

- Why have you added a case distinction for diff == 0? Have you encountered this case? If so, how? Maybe add a comment.

- To further reduce the loop overhead we could call INSTR_TIME_SET_CURRENT() multiple times per iteration. But then again: why do we actually care about the per-loop time? Why not instead sum up diff and divide by the number of iterations, to exclude all the overhead in the first place? See the sketch after this list for what I mean.

- In the computation of the per-loop time in nanoseconds you can now use INSTR_TIME_GET_NANOSEC() instead of INSTR_TIME_GET_DOUBLE() * NS_PER_S; see the second snippet below the sketch.
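
To illustrate the sum-up-diff idea, here is a minimal, untested sketch. The function name, the variable names, and the loop condition are made up; only the INSTR_TIME_* macros are the real ones from instr_time.h:

#include "postgres_fe.h"
#include "portability/instr_time.h"

/*
 * Sketch: sum up the per-iteration diffs and divide by the iteration
 * count at the end, instead of reporting individual per-loop times.
 */
static double
average_ns_per_loop(int64 duration_ns)
{
    instr_time  prev;
    instr_time  cur;
    instr_time  diff;
    int64       total_ns = 0;
    int64       loop_count = 0;

    INSTR_TIME_SET_CURRENT(prev);
    while (total_ns < duration_ns)
    {
        INSTR_TIME_SET_CURRENT(cur);

        /* diff = cur - prev */
        diff = cur;
        INSTR_TIME_SUBTRACT(diff, prev);

        total_ns += INSTR_TIME_GET_NANOSEC(diff);
        prev = cur;
        loop_count++;
    }

    /* the division amortizes the loop's bookkeeping overhead */
    return (double) total_ns / loop_count;
}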
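
And for the nanoseconds point, assuming the patch currently computes something along these lines (the variable names are my guesses):

    ns_per_loop = INSTR_TIME_GET_DOUBLE(total_time) * NS_PER_S / loop_count;

it could now simply be:

    ns_per_loop = (double) INSTR_TIME_GET_NANOSEC(total_time) / loop_count;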

The rest looks good to me. The rebased patches are part of the patch set I sent out yesterday in reply to another mail in this thread.

--
David Geier
(ServiceNow)


