On 3/13/07 2:37 AM, "Simon Riggs" <[EMAIL PROTECTED]> wrote:

>> We're planning a modification that I think you should consider: when there
>> is a sequential scan of a table larger than the size of shared_buffers, we
>> are allowing the scan to write through the shared_buffers cache.
> Write? For which operations?

I'm actually just referring to the sequential scan writing *into* the shared
buffers cache -- sorry for the confusing "write through" phrasing :-)

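To make the idea concrete: the point of letting a big scan bypass (or only lightly touch) shared_buffers is that hot pages survive. Here is a toy simulation of a ring-buffer scan strategy -- all names and the ring size are invented for illustration, this is not PostgreSQL source:

```python
# Toy simulation: a sequential scan of a table larger than the cache
# recycles a small private ring of buffers instead of evicting the
# whole shared cache. Hypothetical sketch, not PostgreSQL's actual code.
from collections import OrderedDict

class BufferCache:
    def __init__(self, size):
        self.size = size
        self.pages = OrderedDict()          # page id -> contents, LRU order

    def read(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)    # cache hit: mark recently used
            return
        if len(self.pages) >= self.size:
            self.pages.popitem(last=False)  # evict least recently used
        self.pages[page] = object()

def seq_scan_with_ring(cache, table_pages, ring_size=32):
    """Scan table_pages, but recycle a private ring of ring_size buffers
    so the scan occupies at most ring_size slots of the shared cache."""
    ring = []
    for page in table_pages:
        if len(ring) >= ring_size:
            evicted = ring.pop(0)
            cache.pages.pop(evicted, None)  # recycle our own buffer first
        cache.read(page)
        ring.append(page)

cache = BufferCache(size=100)
hot = [f"hot{i}" for i in range(50)]
for p in hot:
    cache.read(p)                           # warm the cache with hot pages

seq_scan_with_ring(cache, [f"scan{i}" for i in range(1000)])
survivors = sum(1 for p in hot if p in cache.pages)
print(survivors)                            # all 50 hot pages survive
```

Without the ring, the 1000-page scan would blow every hot page out of the 100-slot cache; with it, the scan never occupies more than 32 slots.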
> I was thinking to do this for bulk writes also, but it would require
> changes to bgwriter's cleaning sequence. Are you saying to write say ~32
> buffers then fsync them, rather than letting bgwriter do that? Then
> allow those buffers to be reused?
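The "write ~32 buffers then fsync them" idea in the question can be sketched as a batched flush -- one fsync covers a whole batch, after which those buffers could be reused. Block and batch sizes here are assumptions for illustration:

```python
# Hedged sketch of batched buffer flushing: write dirty pages in groups
# of ~32 and fsync once per group, rather than syncing per buffer or
# waiting on bgwriter. Helper name and sizes are invented.
import os
import tempfile

BLOCK = 8192          # PostgreSQL's default page size
BATCH = 32            # buffers written before each fsync

def flush_in_batches(fd, dirty_pages):
    """Write dirty pages in batches; one fsync per batch, then the
    buffers in that batch are safe to reuse."""
    fsyncs = 0
    for i in range(0, len(dirty_pages), BATCH):
        for page in dirty_pages[i:i + BATCH]:
            os.write(fd, page)
        os.fsync(fd)          # one sync covers the whole batch
        fsyncs += 1
    return fsyncs

pages = [bytes([n % 256]) * BLOCK for n in range(100)]
fd, path = tempfile.mkstemp()
batches = flush_in_batches(fd, pages)
os.close(fd)
os.unlink(path)
print(batches)  # 100 pages in batches of 32 -> 4 fsync calls
```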

Off topic, but we think we've just found the reasons for the abysmal heap
insert performance of pgsql and are working on a fix for that as well.  It
involves two main things: the ping-ponging seeks used to extend a relfile,
and the bgwriter not flushing aggressively enough.  We're hoping to move the
net heap insert rate from 12 MB/s for a single stream to something more like
100 MB/s per stream, but it may take a week to get some early results and
find out if we're on the right track.  We've been wrong on this before ;-)
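One plausible shape of the relfile-extension fix (purely my illustration -- the post doesn't say what the actual fix is) is extending the file several blocks at a time instead of one block per insert, so each extension costs one write call rather than many interleaved ones:

```python
# Hypothetical illustration of relation extension granularity: extending
# one block at a time means one write (and potential seek) per block,
# while extending in larger zeroed chunks amortizes that cost.
# Chunk size and helper name are invented for this sketch.
import os
import tempfile

BLOCK = 8192

def extend_by_blocks(fd, nblocks, chunk=1):
    """Extend the file by nblocks, writing `chunk` zeroed blocks per call.
    Returns the number of write calls issued."""
    calls = 0
    zeros = b"\x00" * (BLOCK * chunk)
    written = 0
    while written < nblocks:
        n = min(chunk, nblocks - written)
        os.write(fd, zeros[:BLOCK * n])
        written += n
        calls += 1
    return calls

fd, path = tempfile.mkstemp()
one_at_a_time = extend_by_blocks(fd, 64, chunk=1)   # 64 separate writes
batched = extend_by_blocks(fd, 64, chunk=16)        # 4 larger writes
os.close(fd)
os.unlink(path)
print(one_at_a_time, batched)  # 64 vs 4
```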

- Luke   
