Andres Freund <and...@2ndquadrant.com> wrote:

> I don't think I actually found any workload where the bgwriter
> actually wrote out a relevant percentage of the necessary pages.

I had one at Wisconsin Courts.  The database we targeted with
logical replication from the 72 circuit court databases (plus a
few others) took its writes on a six-connection pool, with about
20 to (at peaks) hundreds of transactions per second modifying
the database (the average transaction involving about 20
modifying statements with potentially hundreds of affected rows),
plus maybe 2000 to 3000 queries per second on a 30-connection
pool.  It wrote about one-third each of the dirty buffers from
checkpoints, the background writer, and backends needing to read
a page.  I shared my numbers with Greg, who I believe used them
as one of his examples of how to tune memory, checkpoints, and
the background writer, so you might want to check with him if
you want more detail.
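
In case it helps anyone trying to reproduce that sort of split:
the proportions can be pulled from pg_stat_bgwriter.  A rough
sanity check (assuming a release where that view still carries
all three counters, and that stats weren't reset mid-run) might
look like:

    -- share of buffer writes done by checkpoints, the bgwriter,
    -- and backends, since the last stats reset
    SELECT buffers_checkpoint, buffers_clean, buffers_backend,
           round(100.0 * buffers_checkpoint
                 / nullif(buffers_checkpoint + buffers_clean
                          + buffers_backend, 0), 1) AS pct_checkpoint,
           round(100.0 * buffers_clean
                 / nullif(buffers_checkpoint + buffers_clean
                          + buffers_backend, 0), 1) AS pct_bgwriter,
           round(100.0 * buffers_backend
                 / nullif(buffers_checkpoint + buffers_clean
                          + buffers_backend, 0), 1) AS pct_backend
      FROM pg_stat_bgwriter;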

Of course, we set bgwriter_lru_maxpages = 1000 and
bgwriter_lru_multiplier = 4, and kept shared_buffers at 2GB, to
hit that.  Without the reduced shared_buffers and the more
aggressive bgwriter, we hit the problem of writes overwhelming
the RAID controller's cache and causing everything in the
database to "freeze" until it cleared some cache space.

I'm not saying this invalidates your general argument; just that
such cases do exist.  Hopefully this data point is useful.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company