Hello Jeff,

>> The culprit I found is "bgwriter", which is basically doing nothing to
>> prevent the coming checkpoint IO storm, even though there would be ample
>> time to write out the accumulating dirty pages, so that the checkpoint
>> would find a mostly clean field and pass in a blink. Indeed, at the end
>> of the 500-second throttled test, "pg_stat_bgwriter" says:

> Are you doing pg_stat_reset_shared('bgwriter') after running pgbench -i?

Yes, I did.

> You don't want your steady-state stats polluted by the bulk load.

Sure!

>>   buffers_checkpoint = 19046
>>   buffers_clean = 2995

> Out of curiosity, what does buffers_backend show?

   buffers_backend = 157
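For perspective, here is a quick back-of-the-envelope split of the three write paths from the counters above (just arithmetic on the reported values):

```python
# Buffer-write counters reported by pg_stat_bgwriter at the end of
# the 500-second throttled run (values from above).
buffers_checkpoint = 19046  # written by the checkpointer
buffers_clean = 2995        # written by the bgwriter
buffers_backend = 157       # written directly by backends

total = buffers_checkpoint + buffers_clean + buffers_backend

for name, count in [("checkpointer", buffers_checkpoint),
                    ("bgwriter", buffers_clean),
                    ("backends", buffers_backend)]:
    print(f"{name}: {count} buffers ({100.0 * count / total:.1f}%)")
# checkpointer: 19046 buffers (85.8%)
# bgwriter: 2995 buffers (13.5%)
# backends: 157 buffers (0.7%)
```

So about 86% of all dirty buffers are left for the checkpointer to write, which is the point above.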

> In any event, this almost certainly is a red herring.

Possibly! It is pretty easy to reproduce... can anyone try?

> Whichever of the three ways is being used to write out the buffers, it is
> the checkpointer that is responsible for fsyncing them, and that is where
> your drama is almost certainly occurring. Writing out with one path
> rather than another isn't going to change things, unless you change the
> fsync.

Well, ISTM that the OS does not need to wait for an fsync to start writing
pages: if it has received one minute of buffer writes at 50 writes per
second, some scheduler should start handling the flow somewhere... So if
the bgwriter were to write out the buffers it would help, but maybe there
is a better way.
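To put rough numbers on that accumulation (a sketch, assuming PostgreSQL's default 8 kB block size):

```python
# Dirty-page backlog accumulating between checkpoints, using the
# rates mentioned above and the default 8 kB block size.
writes_per_second = 50
seconds = 60          # "one minute of buffer writes"
page_size_kb = 8

dirty_pages = writes_per_second * seconds
backlog_mb = dirty_pages * page_size_kb / 1024

print(f"{dirty_pages} dirty pages ~ {backlog_mb:.1f} MB")
# 3000 dirty pages ~ 23.4 MB
```

That is a trivial amount for the OS to trickle out over a minute, rather than getting it all at once at checkpoint time.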

> Also, are you familiar with checkpoint_completion_target, and what is it
> set to?

The default 0.5. Moving it to 0.9 seems to worsen the situation.
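For reference, checkpoint_completion_target spreads the checkpoint's write phase over that fraction of the checkpoint interval; with the default checkpoint_timeout of 5 minutes, the two values tried give (a sketch):

```python
# How long the checkpoint write phase is spread out, for the two
# checkpoint_completion_target values tried above (default
# checkpoint_timeout of 300 s assumed, time-triggered checkpoints).
checkpoint_timeout_s = 300

for target in (0.5, 0.9):
    window_s = checkpoint_timeout_s * target
    print(f"target={target}: writes spread over {window_s:.0f} s")
# target=0.5: writes spread over 150 s
# target=0.9: writes spread over 270 s
```

So 0.9 spreads the same writes over a longer window, yet here it made things worse rather than better.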

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers