I’m interested in many of the issues that were discussed in this thread. Was
this patch ever wrapped up (I can’t find it in any CF), or did this thread die out?
On Aug 6, 2013, at 12:18 AM, Amit Kapila <amit.kap...@huawei.com> wrote:
> On Friday, June 28, 2013 6:20 PM Robert Haas wrote:
>> On Fri, Jun 28, 2013 at 12:52 AM, Amit Kapila <amit.kap...@huawei.com> wrote:
>>> Currently it wakes up based on the bgwriter_delay config parameter, which
>>> is by default 200ms, so you mean we should think of waking up bgwriter
>>> based on allocations and the number of elements left in the freelist?
>> I think that's what Andres and I are proposing, yes.
>>> As per my understanding, a summarization of the points raised by you
>>> which this patch should address to have a bigger win:
>>> 1. Bgwriter needs to be improved so that it can help in reducing usage
>>>    count and finding the next victim buffer
>>>    (run the clock sweep and add buffers to the free list).
> I think one way to handle it is that while moving buffers to the freelist,
> if we find that there are not enough buffers (>= high watermark) which have
> zero usage count, then move through the buffer list and reduce usage counts.
> The important question here is how many times we should circulate the
> buffer list to reduce usage counts. Currently I have kept it proportional
> to the number of times it failed to move enough buffers to the freelist.
>>> 2. SetLatch for bgwriter (wakeup bgwriter) when the number of elements
>>>    in the freelist falls below a threshold.
>> Check. The way to do this is to keep a variable in shared memory in
>> the same cache line as the spinlock protecting the freelist, and
>> update it when you update the free list.
> Added a new variable freelistLatch in BufferStrategyControl.
>>> 3. Split the workdone globallock (BufFreelistLock) in StrategyGetBuffer
>>>    (a spinlock for the freelist, and an lwlock for the clock sweep).
> Added a new variable freelist_lck in BufferStrategyControl which will be
> used to protect the freelist.
> BufFreelistLock will still be used to protect the clock sweep part of
> StrategyGetBuffer.
>>> 4. Separate processes for writing dirty buffers and moving buffers to
>>>    the freelist.
>> I think this part might be best pushed to a separate patch, although I
>> agree we probably need it.
>>> 5. Bgwriter needs to be more aggressive; the logic by which it decides
>>>    how many buffers it needs to process needs to be improved.
>> This is basically overlapping with points already made. I suspect we
>> could just get rid of bgwriter_delay, bgwriter_lru_maxpages, and
>> bgwriter_lru_multiplier altogether. The background writer would just
>> have a high and a low watermark. When the number of buffers on the
>> freelist drops below the low watermark, the allocating backend sets
>> the latch and bgwriter wakes up and begins adding buffers to the
>> freelist. When the number of buffers on the free list reaches the
>> high watermark, the background writer goes back to sleep. Some
>> experimentation might be needed to figure out what values are
>> appropriate for those watermarks. In theory this could be a
>> configuration knob, but I suspect it's better to just make the system
>> tune it right automatically.
> Currently in the patch I have used a low watermark of 1/6 and a high
> watermark of 1/3 of NBuffers.
> The values are hardcoded for now, but I will change them to GUCs or
> #defines.
> As far as I can tell there is no way to find the number of buffers on the
> freelist, so I have added one more variable to maintain it.
> Initially I thought I could use the existing variables firstFreeBuffer and
> lastFreeBuffer to calculate it, but once buffers are moved to the
> freelist these don't give an exact count.
> The main doubt here is: what if, after traversing all buffers, we don't
> find enough buffers to meet the high watermark?
> Currently I just move out of the buffer-moving loop and try to reduce
> usage counts, as explained in point 1.
>>> 6. There can be contention around buffer mapping locks, but we can
>>>    focus on it later.
>>> 7. Cacheline bouncing around the buffer header spinlocks; is there
>>>    anything we can do to reduce this?
>> I think these are points that we should leave for the future.
> This is just a WIP patch. I have kept the older code in comments. I need
> to further refine it and collect performance data.
> I have prepared one script (perf_buff_mgmt.sh) to collect performance data
> for different shared buffers / scale factor / number of clients settings.
> Top-level points which still need to be taken care of:
> 1. Choose an optimistically-used buffer in StrategyGetBuffer(). Refer to
> Simon's patch.
> 2. Don't bump the usage count every time a buffer is pinned. I got this
> idea when reading the archives about improvements in this area.
> With Regards,
> Amit Kapila.
Sent via pgsql-hackers mailing list (email@example.com)