On 12/21/21 02:01, Tom Lane wrote:
> Tomas Vondra <tomas.von...@enterprisedb.com> writes:
>> OK, I did a quick test with two very simple benchmarks - simple select
>> from a sequence, and 'pgbench -N' on scale 1. Benchmark was on current
>> master, patched means SEQ_LOG_VALS was set to 1.
>
> But ... pgbench -N doesn't use sequences at all, does it?
>
> Probably inserts into a table with a serial column would constitute a
> plausible real-world case.


D'oh! For some reason I thought pgbench had a sequence on the history
table, but clearly I was mistaken. There was another thinko too: after
inspecting pg_waldump output I realized "SEQ_LOG_VALS 1" actually logs
only every 2nd increment, so to log every value it should have been
"SEQ_LOG_VALS 0".
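To illustrate the off-by-one, here's a toy model of the log_cnt
bookkeeping (a deliberate simplification of what nextval_internal()
does, not the actual code - it ignores caching and crash semantics):

```python
# Toy model of the sequence WAL-logging cadence. Each WAL record covers
# the current value plus SEQ_LOG_VALS more, so with SEQ_LOG_VALS = 1 a
# record is emitted only on every 2nd nextval() call.
def count_wal_records(seq_log_vals, ncalls):
    """Count WAL records emitted over ncalls nextval() calls."""
    log_cnt = 0      # values still covered by the last WAL record
    records = 0
    for _ in range(ncalls):
        if log_cnt == 0:
            records += 1
            log_cnt = seq_log_vals + 1
        log_cnt -= 1
    return records
```

With seq_log_vals=1 this logs on 5 of 10 calls; with seq_log_vals=0 it
logs on every call, which is what the benchmark needed.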

So I repeated the test fixing SEQ_LOG_VALS, and doing the pgbench with a table like this:

  create table test (a serial, b int);

and a script doing

  insert into test (b) values (1);

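For reference, the runs were driven with a custom pgbench script,
something along these lines (the file name, duration, thread counts
and database name here are my guesses, not the exact invocation):

```shell
# write the single-statement script
cat > insert.sql <<'EOF'
insert into test (b) values (1);
EOF

# -n skips vacuuming the standard pgbench tables (we don't use them)
pgbench -n -f insert.sql -c 1 -j 1 -T 60 test
pgbench -n -f insert.sql -c 4 -j 4 -T 60 test
```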
The results look like this:

1) select nextval('s');

     clients          1         4
    ------------------------------
     master       39533    124998
     patched       3748      9114
    ------------------------------
     diff          -91%      -93%


2) insert into test (b) values (1);

     clients          1         4
    ------------------------------
     master        3718      9188
     patched       3698      9209
    ------------------------------
     diff            0%        0%

So the nextval() results are a bit worse than in the previous run,
which (thanks to the thinko) still skipped WAL-logging for half the
nextval calls. The roughly -90% regression is about what I'd expect,
given we now generate about 32x more WAL (and have to wait for commit).
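Just to spell out how the "diff" rows above were computed - it's the
relative throughput change, rounded to whole percent (the insert
workload diffs come out as ~0% the same way, within noise):

```python
def diff_pct(master_tps, patched_tps):
    """Relative throughput change of patched vs. master, in percent."""
    return round((patched_tps / master_tps - 1) * 100)

# nextval('s') benchmark, 1 and 4 clients
print(diff_pct(39533, 3748))    # -91
print(diff_pct(124998, 9114))   # -93
```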

But results for the more realistic insert workload are about the same as before (i.e. no measurable difference). Also kinda expected, because those transactions have to wait for WAL anyway.

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company