On Thu, Aug 28, 2014 at 7:11 AM, Amit Kapila amit.kapil...@gmail.com wrote:
I have updated the patch to address the feedback. Main changes are:
1. For populating freelist, have a separate process (bgreclaimer)
instead of doing it by bgwriter.
2. Autotune the low and high threshold values for
On Thu, Aug 28, 2014 at 4:41 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
I have yet to collect data under varying loads; however, I have
collected performance data for 8GB shared buffers which shows
reasonably good performance and scalability.
I think the main part left for this patch is
On Wed, Sep 3, 2014 at 9:45 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Aug 28, 2014 at 4:41 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
I have yet to collect data under varying loads; however, I have
collected performance data for 8GB shared buffers which shows
reasonably
On Tue, Aug 26, 2014 at 11:10 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Amit Kapila amit.kapil...@gmail.com writes:
On Tue, Aug 5, 2014 at 9:21 PM, Robert Haas robertmh...@gmail.com wrote:
I think you should get rid of BufFreelistLock completely and just
decide that freelist_lck will protect
On Tue, Aug 26, 2014 at 10:53 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Today, while working on updating the patch to improve locking
I found that, since we are now going to have a new process, we need
a separate latch in StrategyControl to wake up that process.
Another point is I think it
On Tue, Aug 5, 2014 at 9:21 PM, Robert Haas robertmh...@gmail.com wrote:
Incidentally, while I generally think your changes to the locking regimen
in StrategyGetBuffer() are going in the right direction, they need
significant cleanup. Your patch adds two new spinlocks, freelist_lck and
Amit Kapila amit.kapil...@gmail.com writes:
On Tue, Aug 5, 2014 at 9:21 PM, Robert Haas robertmh...@gmail.com wrote:
I think you should get rid of BufFreelistLock completely and just
decide that freelist_lck will protect everything: the freeNext links, plus
everything in StrategyControl except
On Tue, Aug 26, 2014 at 8:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Amit Kapila amit.kapil...@gmail.com writes:
Another point is I think it will be better to protect
StrategyControl->completePasses with victimbuf_lck rather than
freelist_lck, as when we are going to update it we will already
On 2014-08-13 09:51:58 +0530, Amit Kapila wrote:
Overall, the main changes required in patch as per above feedback
are:
1. add an additional counter for the number of those
allocations not satisfied from the free list, with a
name like buffers_alloc_clocksweep.
2. Autotune the low and high
On Wed, Aug 13, 2014 at 4:25 PM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-08-13 09:51:58 +0530, Amit Kapila wrote:
Overall, the main changes required in patch as per above feedback
are:
1. add an additional counter for the number of those
allocations not satisfied from the
On 2014-08-06 15:42:08 +0530, Amit Kapila wrote:
On Tue, Aug 5, 2014 at 9:21 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 5, 2014 at 4:43 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
This essentially removes BgWriterDelay, but it's still mentioned in
BgBufferSync(). Looking
On Wed, Aug 13, 2014 at 2:32 AM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-08-06 15:42:08 +0530, Amit Kapila wrote:
On Tue, Aug 5, 2014 at 9:21 PM, Robert Haas robertmh...@gmail.com
wrote:
This essentially removes BgWriterDelay, but it's still mentioned in
BgBufferSync().
On Tue, Aug 5, 2014 at 9:21 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jun 5, 2014 at 4:43 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
This essentially removes BgWriterDelay, but it's still mentioned in
BgBufferSync(). Looking further, I see that with the patch applied,
On Wed, Aug 6, 2014 at 6:12 AM, Amit Kapila amit.kapil...@gmail.com wrote:
If I'm reading this right, the new statistic is an incrementing counter
where, every time you update it, you add the number of buffers currently on
the freelist. That makes no sense.
I think using 'number of buffers
On Thu, Jun 5, 2014 at 4:43 AM, Amit Kapila amit.kapil...@gmail.com wrote:
I have improved the patch by making the following changes:
a. Improved the bgwriter logic to log for xl_running_xacts info and
removed the hibernate logic as bgwriter will now work only when
there is scarcity of
On Mon, Jun 9, 2014 at 9:33 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Sun, Jun 8, 2014 at 7:21 PM, Kevin Grittner kgri...@ymail.com wrote:
Backend processes related to user connections still
performed about 30% of the writes, and this work shows promise
toward bringing that down,
On Sun, Jun 8, 2014 at 9:51 AM, Kevin Grittner kgri...@ymail.com wrote:
Amit Kapila amit.kapil...@gmail.com wrote:
I have improved the patch by making the following changes:
a. Improved the bgwriter logic to log for xl_running_xacts info and
removed the hibernate logic as bgwriter will now
Amit Kapila amit.kapil...@gmail.com wrote:
I have improved the patch by making the following changes:
a. Improved the bgwriter logic to log for xl_running_xacts info and
removed the hibernate logic as bgwriter will now work only when
there is a scarcity of buffers in the free list. Basic idea
On Sun, Jun 8, 2014 at 7:21 PM, Kevin Grittner kgri...@ymail.com wrote:
Amit Kapila amit.kapil...@gmail.com wrote:
I have improved the patch by making the following changes:
a. Improved the bgwriter logic to log for xl_running_xacts info and
removed the hibernate logic as bgwriter will now
On Thu, May 15, 2014 at 11:11 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Data with LWLOCK_STATS
--
BufMappingLocks
PID 7245 lwlock main 38: shacq 41117 exacq 34561 blk 36274 spindelay 101
PID 7310 lwlock main 39: shacq 40257 exacq 34219 blk
On Fri, May 16, 2014 at 10:51 AM, Amit Kapila amit.kapil...@gmail.com wrote:
             Thrds (64)   Thrds (128)
HEAD              45562         17128
HEAD + 64         57904         32810
V1 + 64          105557         81011
HEAD + 128        58383         32997
V1 + 128         110705        114544
I haven't actually reviewed the code, but this sort of thing seems like
good
On Fri, May 16, 2014 at 7:51 AM, Amit Kapila amit.kapil...@gmail.com wrote:
shared_buffers= 8GB
scale factor = 3000
RAM - 64GB
             Thrds (64)   Thrds (128)
HEAD              45562         17128
HEAD + 64         57904         32810
V1 + 64          105557         81011
HEAD + 128        58383         32997
V1 + 128         110705        114544
shared_buffers= 8GB
scale
On Sat, May 17, 2014 at 6:29 AM, Peter Geoghegan p...@heroku.com wrote:
On Fri, May 16, 2014 at 7:51 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
shared_buffers= 8GB
scale factor = 3000
RAM - 64GB
I'm having a little trouble following this. These figures are transactions
per second for
On Sat, May 17, 2014 at 6:02 AM, Robert Haas robertmh...@gmail.com wrote:
I haven't actually reviewed the code, but this sort of thing seems like
good evidence that we need your patch, or something like it. The fact that
the patch produces little performance improvement on its own (though it
As mentioned previously, I am interested in improving shared
buffer eviction, especially by reducing contention around
BufFreelistLock, and I would like to share my progress on that
work.
The test used for this work is mainly the case when all the
data doesn't fit in shared buffers, but does fit in