Subject: [HACKERS] StrategyGetBuffer optimization, take 2

On Mon, Aug 5, 2013 at 11:49 AM, Merlin Moncure mmonc...@gmail.com wrote:
My $company recently acquired another postgres-based $company and
migrated all their server operations into our datacenter. Upon
completing the move, the newly migrated database server started
experiencing huge load spikes.

*) Environment description:
Postgres 9.2.4
RHEL 6
32 cores
virtualized

*) What I think is happening:
I think we are again getting burned by getting de-scheduled while
holding the free list lock. I've been chasing this problem for a long
time now (for example, see: [...])
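
For context, the path being blamed is the buffer allocator's clock
sweep, which in 9.2 runs entirely under one LWLock. The sketch below is
a simplified, self-contained rendering of that shape, with a pthread
mutex standing in for the BufFreelistLock LWLock; names and structure
are approximations for illustration, not the actual bufmgr.c code:

    /* Simplified sketch: all buffer allocation funnels through one lock. */
    #include <pthread.h>

    #define NBUFFERS 16384

    typedef struct BufferDesc { int refcount; int usage_count; } BufferDesc;

    static BufferDesc BufferDescriptors[NBUFFERS];
    static int nextVictimBuffer;    /* the clock hand */
    static pthread_mutex_t BufFreelistLock = PTHREAD_MUTEX_INITIALIZER;

    /*
     * Every backend that needs a page runs the sweep while holding
     * BufFreelistLock.  If the OS preempts the holder mid-sweep (easy
     * on an oversubscribed VM), every other allocating backend queues
     * up behind the lock: the "load spike" behavior described above.
     */
    BufferDesc *
    StrategyGetBuffer_sketch(void)
    {
        BufferDesc *buf;

        pthread_mutex_lock(&BufFreelistLock);   /* global choke point */
        for (;;)
        {
            buf = &BufferDescriptors[nextVictimBuffer];
            if (++nextVictimBuffer >= NBUFFERS)
                nextVictimBuffer = 0;           /* wrap the clock hand */

            if (buf->refcount == 0)
            {
                if (buf->usage_count > 0)
                    buf->usage_count--;         /* give it another lap */
                else
                    break;                      /* found a victim */
            }
        }
        pthread_mutex_unlock(&BufFreelistLock);
        return buf;
    }

The patch excerpted below attacks both halves of that choke point: the
per-buffer checks inside the loop, and the lock protecting the
freelist itself.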

optimization 2: refcount is examined during buffer allocation without
a lock. if it's not 0, buffer is assumed pinned (even though it may
not in fact be) and sweep continues
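
A hypothetical sketch of that check, on the reading that a nonzero
refcount means "assume pinned and keep sweeping": C11 atomics model
the unlocked peek, and a pthread mutex stands in for the per-buffer
header spinlock. A stale read is tolerable because an apparent victim
is always rechecked under the lock before it is actually evicted:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct
    {
        atomic_int      refcount;    /* peeked at without the lock */
        int             usage_count;
        pthread_mutex_t hdr_lock;    /* stand-in for the header spinlock */
    } BufDesc;

    /*
     * Unlocked peek first: buffers that look pinned are skipped without
     * touching the lock, so only apparent victims pay for it.  At worst
     * a stale read skips one usable buffer for a lap, or locks a buffer
     * that turns out to be pinned; the recheck catches both.
     */
    static bool
    buffer_is_victim(BufDesc *buf)
    {
        bool victim;

        if (atomic_load_explicit(&buf->refcount,
                                 memory_order_relaxed) != 0)
            return false;            /* assume pinned; sweep continues */

        pthread_mutex_lock(&buf->hdr_lock);     /* authoritative recheck */
        victim = (atomic_load(&buf->refcount) == 0 &&
                  buf->usage_count == 0);
        pthread_mutex_unlock(&buf->hdr_lock);
        return victim;
    }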

optimization 4: remove free list lock (via Jeff Janes). This is the
other optimization: one backend will no longer be able to shut down
buffer allocation

On Mon, Aug 5, 2013 at 11:40 AM, Andres Freund and...@anarazel.de wrote:
> optimization 4: remove free list lock (via Jeff Janes). [...]
I think splitting off the actual freelist checking into a spinlock
makes quite a bit of sense.
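
What "splitting off the actual freelist checking into a spinlock"
could look like, as a sketch only: hypothetical names, the freelist
modeled as a simple intrusive list, and pthread_spin_init() assumed to
have run at startup. The idea is that the freelist pop is a few
instructions, so the window in which the holder can be descheduled is
tiny and waiters never sleep in the kernel, while the clock sweep, the
part that can run for a long time, stays outside this lock entirely:

    #include <pthread.h>
    #include <stddef.h>

    typedef struct FreeBuf { struct FreeBuf *next; } FreeBuf;

    static FreeBuf *firstFreeBuffer;            /* freelist head */
    static pthread_spinlock_t freelist_lock;    /* replaces the LWLock here */

    static FreeBuf *
    pop_freelist(void)
    {
        FreeBuf *buf;

        /* short critical section: pointer swap only, no sweeping */
        pthread_spin_lock(&freelist_lock);
        buf = firstFreeBuffer;
        if (buf != NULL)
            firstFreeBuffer = buf->next;
        pthread_spin_unlock(&freelist_lock);

        return buf;     /* NULL means: fall back to the clock sweep */
    }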

On optimization 2, another reply:
> optimization 2: refcount is examined during buffer allocation without
> a lock. [...]
+1. I think this should not lead to many problems, since a lost update
cannot, IMO, lead to a disastrous result. At most, a [...]

On Wed, Aug 7, 2013 at 7:40 AM, Merlin Moncure mmonc...@gmail.com wrote:
I agree; at least then it's not unambiguously better. If you (in
effect) swap all contention on allocation from a lwlock to a spinlock
it's not clear if you're improving things; it would have to be proven
and I'm trying [...] I have some very strong evidence that the
problem [...]

On Wed, Aug 7, 2013 at 12:07 PM, Andres Freund and...@2ndquadrant.com wrote:
I don't think the unlocked increment of nextVictimBuffer is a good idea
though. nextVictimBuffer jumping over NBuffers under concurrency seems
like a recipe for disaster to me. At the very, very least it will need
a good wad of comments [...]
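
To make the hazard concrete: with a bare unlocked nextVictimBuffer++,
two backends can both race past the wraparound check, leaving the hand
pointing beyond the descriptor array. One hypothetical way to keep an
unlocked hand safe, sketched here and not taken from the patch under
discussion, is a monotonic atomic counter reduced modulo NBuffers only
at the point of use:

    #include <stdatomic.h>
    #include <stdint.h>

    #define NBUFFERS 16384

    /*
     * Monotonically increasing 64-bit tick; the clock hand is derived
     * from it by modulo.  Unlike "if (++nextVictimBuffer >= NBuffers)
     * nextVictimBuffer = 0" done without a lock, concurrent ticks can
     * never leave the derived index outside [0, NBUFFERS), because the
     * reduction happens on each reader's private copy of the counter.
     */
    static atomic_uint_fast64_t victim_counter;

    static uint32_t
    clock_sweep_tick(void)
    {
        uint64_t tick = atomic_fetch_add_explicit(&victim_counter, 1,
                                                  memory_order_relaxed);
        return (uint32_t) (tick % NBUFFERS);
    }

Even in that shape the concern stands: the counter's meaning and its
wraparound behavior are exactly the kind of invariants that need the
"good wad of comments".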

On Wed, Aug 14, 2013 at 7:00 PM, Merlin Moncure mmonc...@gmail.com wrote:
Performance testing this patch is a real bugaboo for me; the VMs I have
to work with are too unstable to give useful results :-(. Need to
scrounge up a donor box somewhere...

In reply:
I offered a server or two to the community a while ago but I don't
think [...]

And separately:
While doing performance tests in this [...]

On Mon, Aug 19, 2013 at 5:02 PM, Jeff Janes jeff.ja...@gmail.com wrote:
My concern is how we can ever move this forward. If we can't recreate
it on a test system, and you probably won't be allowed to push
experimental patches to the production system, what's left?
Also, if the kernel is [...]