The culprit I found is "bgwriter", which is basically doing nothing to
prevent the coming checkpoint IO storm, even though there would be ample
time to write the accumulating dirty pages so that the checkpoint would
find mostly clean buffers and complete quickly. Indeed, at the end of the
500-second throttled test, "pg_stat_bgwriter" says:
Are you doing pg_stat_reset_shared('bgwriter') after running pgbench -i?
You don't want your steady-state stats polluted by the bulk load.
Yes, I did.
buffers_checkpoint = 19046
buffers_clean = 2995
Out of curiosity, what does buffers_backend show?
buffers_backend = 157
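For a sense of proportion, the three counters above can be combined to see
which path is doing the writing. A quick back-of-the-envelope sketch in
Python, using just the values reported above:

```python
# Buffer-write counters from pg_stat_bgwriter at the end of the run.
buffers_checkpoint = 19046  # written by the checkpointer
buffers_clean = 2995        # written by bgwriter
buffers_backend = 157       # written directly by backends

total = buffers_checkpoint + buffers_clean + buffers_backend

for name, n in [("checkpointer", buffers_checkpoint),
                ("bgwriter", buffers_clean),
                ("backend", buffers_backend)]:
    print(f"{name}: {n} buffers ({100 * n / total:.1f}%)")
# checkpointer ends up with roughly 86% of all buffer writes
```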
In any event, this almost certainly is a red herring.
Possibly. It is pretty easy to reproduce, though.
Whichever of the three paths is used to write out the buffers, it is the
checkpointer that is responsible for fsyncing them, and that is where your
drama is almost certainly occurring. Writing out via one path rather than
another isn't going to change things, unless you change the fsync.
Well, I partially agree. ISTM that the OS does not need to wait for the
fsync to start writing pages: if it is receiving one minute of buffer
writes at 50 writes per second, I would have thought that some scheduler
would start handling the flow before the fsync... So I thought that having
bgwriter write the buffers would help, but maybe there is a better way.
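The volumes involved here are small, which is the point: a minimal sketch
of the arithmetic, assuming the default PostgreSQL block size of 8 kB and
the write rate quoted above:

```python
# At 50 dirty-buffer writes per second, the OS sees a steady trickle it
# could easily flush in the background, rather than a burst at fsync time.
BLOCK_SIZE = 8 * 1024        # bytes per buffer page (default 8 kB)
writes_per_second = 50       # dirty-buffer writes per second (from the test)
window = 60                  # one minute of accumulation, in seconds

pages = writes_per_second * window
bytes_total = pages * BLOCK_SIZE
print(f"{pages} pages ≈ {bytes_total / 1024 / 1024:.1f} MiB per minute")
print(f"steady rate ≈ {writes_per_second * BLOCK_SIZE / 1024:.0f} KiB/s")
# 3000 pages ≈ 23.4 MiB per minute, i.e. about 400 KiB/s
```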
Also, are you familiar with checkpoint_completion_target, and what is it
set to?
The default, 0.5. Moving it to 0.9 seems to worsen the situation.
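For reference, checkpoint_completion_target tells the checkpointer to pace
its writes so they finish after that fraction of the checkpoint interval.
A rough sketch, assuming time-triggered checkpoints with the default
checkpoint_timeout of 5 minutes (WAL-triggered checkpoints are ignored for
simplicity):

```python
# Pacing of the checkpoint write phase: writes are spread over
# target * checkpoint_timeout seconds, then fsync and idle until the
# next checkpoint is due.
checkpoint_timeout = 300  # seconds (the default, 5 min)

for target in (0.5, 0.9):
    write_window = target * checkpoint_timeout
    idle = checkpoint_timeout - write_window
    print(f"target={target}: writes spread over {write_window:.0f} s, "
          f"then {idle:.0f} s until the next checkpoint")
```

With 0.9 the same amount of dirty data is written more slowly over a
longer window, which explains why raising it changes the shape of the IO
but not necessarily the fsync storm at the end.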
Sent via pgsql-hackers mailing list (email@example.com)