Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>>                         imola-336       imola-337       imola-340
>> writes by checkpoint      38302           30410           39529
>> writes by bgwriter       350113         2205782         1418672
>> writes by backends      1834333          265755          787633
>> writes total            2222748         2501947         2245834
>> allocations             2683170         2657896         2699974
>>
>> It looks like Tom's idea is not a winner; it leads to more writes than
>> necessary.

> The incremental number of writes is not that large; only about 10% more.
> The interesting thing is that those "extra" writes must represent
> buffers that were re-touched after their usage_count went to zero, but
> before they could be recycled by the clock sweep.  While you'd certainly
> expect some of that, I'm surprised it is as much as 10%.  Maybe we need
> to play with the buffer allocation strategy some more.
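
To illustrate what "re-touched after their usage_count went to zero" means
here, below is a toy sketch of the clock sweep. This is not the real
bufmgr.c code; ToyBuffer, touch_buffer() and the constants are invented for
the example. The point is that a buffer can sit at usage_count == 0 for a
while before the sweep hand comes back around to it, and any access in that
window bumps the count back up, so a write issued eagerly the moment the
count hit zero turns out to be wasted:

    /*
     * Toy model of the clock sweep, only to show the "re-touched after
     * reaching zero" window; not the actual bufmgr.c logic.
     */
    #include <stdbool.h>

    #define NBUFFERS        1024
    #define MAX_USAGE_COUNT 5

    typedef struct
    {
        int  usage_count;   /* bumped on access, decremented by the sweep */
        bool dirty;
    } ToyBuffer;

    static ToyBuffer buffers[NBUFFERS];
    static int       sweep_hand = 0;

    /*
     * A backend touches a buffer.  If this happens after usage_count has
     * already fallen to zero but before the sweep hand returns, an eager
     * write done "because it hit zero" was wasted work.
     */
    static void
    touch_buffer(int buf)
    {
        if (buffers[buf].usage_count < MAX_USAGE_COUNT)
            buffers[buf].usage_count++;
    }

    /*
     * Find a victim buffer to recycle: decrement usage_count as the hand
     * passes, and only take a buffer whose count is zero at the moment the
     * hand reaches it.  The caller writes it out first if it's dirty.
     */
    static int
    clock_sweep_victim(void)
    {
        for (;;)
        {
            int        this_buf = sweep_hand;
            ToyBuffer *buf = &buffers[this_buf];

            sweep_hand = (sweep_hand + 1) % NBUFFERS;

            if (buf->usage_count == 0)
                return this_buf;

            buf->usage_count--;
        }
    }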

> The very small difference in NOTPM among the three runs says that either
> this whole area is unimportant, or DBT2 isn't a good test case for it;
> or maybe that there's something wrong with the patches?

The small difference in NOTPM is because the I/O still wasn't saturated even with 10% extra writes.

I ran more tests with a higher number of warehouses, and there the extra writes do start to show up in the response times. See tests 341-344: http://community.enterprisedb.com/bgwriter/.

I have scheduled a test with the moving average method as well; we'll see how that fares.
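
For reference, the moving average method amounts to something like the
following sketch: smooth the recent allocation rate over the last few
bgwriter rounds, and have the bgwriter clean roughly that many buffers ahead
of the sweep hand. The names, the smoothing constant and the multiplier below
are placeholders for illustration, not what the patch actually uses:

    /*
     * Sketch of the moving-average estimate, with invented names and
     * constants; the real patch may differ in its details.
     */
    static float smoothed_alloc = 0;

    static int
    buffers_to_clean_this_round(int recent_alloc)
    {
        const float smoothing_factor = 16.0f;  /* decay over ~16 rounds */
        const float lru_multiplier = 2.0f;     /* clean a bit ahead of need */

        /* exponential moving average of allocations per bgwriter round */
        smoothed_alloc += (recent_alloc - smoothed_alloc) / smoothing_factor;

        return (int) (smoothed_alloc * lru_multiplier);
    }

The smoothing should keep the bgwriter from chasing short allocation spikes
while still tracking the sustained allocation rate.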

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
