While testing the buffer ring patch, I noticed that bulk inserts with both INSERT and COPY pin and unpin the buffer they insert into for every tuple. That means the usage_count of all those buffers is bumped up to 5. That's gotta be bad if you run a COPY concurrently with other activity, because those buffers become very hard for the clock sweep to evict. It also affects tables like TPC-C's order_line, where tuples are always inserted and updated in groups.
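To see why the counts saturate, here's a toy model of the buffer usage_count mechanics (capped at BM_MAX_USAGE_COUNT = 5, each pin incrementing it) -- the class and names are illustrative, not the real bufmgr API:

```python
# Toy model of PostgreSQL's per-buffer usage_count, which is capped at
# BM_MAX_USAGE_COUNT (5). Every PinBuffer() bumps it; the clock sweep
# decrements it when looking for a victim. Illustrative only.

BM_MAX_USAGE_COUNT = 5

class Buffer:
    def __init__(self):
        self.usage_count = 0
    def pin(self):
        if self.usage_count < BM_MAX_USAGE_COUNT:
            self.usage_count += 1

# Say ~200 tuples fit on one heap page. Pinning per tuple, as heap_insert
# does today, saturates the count almost immediately:
page = Buffer()
for _ in range(200):
    page.pin()     # heap_insert pins the target page...
    # ...inserts one tuple, then unpins it

print(page.usage_count)   # -> 5

# Pinning once for the whole batch would leave usage_count at 1:
page2 = Buffer()
page2.pin()
for _ in range(200):
    pass               # insert tuples while still holding the pin
print(page2.usage_count)  # -> 1
```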

To demonstrate:

postgres=# TRUNCATE foo;
TRUNCATE TABLE
postgres=# COPY foo FROM '/tmp/foo.data';
COPY 1000
postgres=# SELECT c.relname, bufferid, relblocknumber, isdirty, usagecount
           FROM pg_buffercache bc, pg_class c
           WHERE c.relfilenode = bc.relfilenode AND c.relname = 'foo';
 relname | bufferid | relblocknumber | isdirty | usagecount
---------+----------+----------------+---------+------------
 foo     |   105078 |              4 | f       |          5
 foo     |   105079 |              3 | f       |          5
 foo     |   105080 |              2 | f       |          5
 foo     |   105081 |              1 | f       |          5
 foo     |   105082 |              0 | f       |          5
(5 rows)

A fix for COPY will fall naturally out of the buffer ring patch, but not for INSERT.

A more general fix would be to somehow keep the last insertion page pinned across calls to heap_insert.
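As a sketch of what that might look like, here's a toy model of a state object that caches the pinned target buffer between inserts and only unpins when the target page changes. The BulkInsertState name and all the helpers are hypothetical, not the real heapam code:

```python
# Hedged sketch: keep the last insertion target pinned across inserts,
# unpinning only when moving to a new page. All names are illustrative.

BM_MAX_USAGE_COUNT = 5

class Buffer:
    def __init__(self, blockno):
        self.blockno = blockno
        self.usage_count = 0
        self.pins = 0
    def pin(self):
        self.pins += 1
        if self.usage_count < BM_MAX_USAGE_COUNT:
            self.usage_count += 1
    def unpin(self):
        self.pins -= 1

class BulkInsertState:
    """Caches the currently pinned target buffer between inserts."""
    def __init__(self):
        self.cur_buf = None
    def get_buffer(self, blockno, pool):
        if self.cur_buf is not None and self.cur_buf.blockno == blockno:
            return self.cur_buf          # same page: reuse the pin
        if self.cur_buf is not None:
            self.cur_buf.unpin()         # switching pages: drop old pin
        buf = pool.setdefault(blockno, Buffer(blockno))
        buf.pin()
        self.cur_buf = buf
        return buf

# Simulate a 1000-row bulk insert, ~200 tuples per page (5 pages):
pool = {}
state = BulkInsertState()
TUPLES_PER_PAGE = 200
for i in range(1000):
    state.get_buffer(i // TUPLES_PER_PAGE, pool)
state.cur_buf.unpin()                    # done: release the last pin

# Each page was pinned exactly once, so usage_count stays at 1, not 5:
print(sorted(b.usage_count for b in pool.values()))  # -> [1, 1, 1, 1, 1]
```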

  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
