Performance testing this patch is a real bugaboo for me; the VMs I have to
work with are too unstable to give useful results :-(.  Need to scrounge up
a donor box somewhere...


On Tue, Aug 13, 2013 at 12:26 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:

> Merlin Moncure wrote:
> On Wed, Aug 7, 2013 at 11:52 PM, Amit Kapila
> <amit(dot)kapila(at)huawei(dot)com> wrote:
> >>> -----Original Message-----
> >>> From: pgsql-hackers-owner(at)postgresql(dot)org [mailto:pgsql-hackers-
> >>> owner(at)postgresql(dot)org] On Behalf Of Merlin Moncure
> >>> Sent: Thursday, August 08, 2013 12:09 AM
> >>> To: Andres Freund
> >>> Cc: PostgreSQL-development; Jeff Janes
> >>> Subject: Re: [HACKERS] StrategyGetBuffer optimization, take 2
> >>>
> >>> On Wed, Aug 7, 2013 at 12:07 PM, Andres Freund
> <andres(at)2ndquadrant(dot)com>
> >>> wrote:
> >>> > On 2013-08-07 09:40:24 -0500, Merlin Moncure wrote:
>
> >>> I have some very strong evidence that the problem is coming out of the
> >>> buffer allocator.  Exhibit A is that vlad's presentation of the
> >>> problem was on a read only load (if not allocator lock, then what?).
> >>> Exhibit B is that lowering shared buffers to 2gb seems to have (so
> >>> far, 5 days in) fixed the issue.  This problem shows up on fast
> >>> machines with fast storage and lots of cores.  So what I think is
> >>> happening is that usage_count starts creeping up faster than it gets
> >>> cleared by the sweep with very large buffer settings which in turn
> >>> causes the 'problem' buffers to be analyzed for eviction more often.
> >
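
For anyone following along, here is a rough standalone simulation of the
effect described above -- it is not the real freelist.c code, and the buffer
count, starting usage_counts, and eviction rule are just assumptions for
illustration -- showing how a saturated usage_count inflates the work the
sweep has to do before it can evict anything:

    /*
     * Toy simulation of the clock sweep, not the actual PostgreSQL code.
     * Assume every buffer has been pinned enough to hit the usage_count
     * ceiling (5 in stock builds), then count how many buffer headers the
     * sweep touches before the first one becomes evictable.
     *
     * Build/run: cc -O2 sweep_sim.c -o sweep_sim && ./sweep_sim
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_USAGE_COUNT 5

    int
    main(void)
    {
        int     nbuffers = 262144;          /* ~2GB of 8K buffers */
        int    *usage = malloc(sizeof(int) * nbuffers);
        long    inspected = 0;
        int     hand = 0;

        /* pretend a hot read-only workload pushed everything to the max */
        for (int i = 0; i < nbuffers; i++)
            usage[i] = MAX_USAGE_COUNT;

        /* sweep until one buffer becomes evictable */
        for (;;)
        {
            inspected++;
            if (usage[hand] == 0)
                break;                      /* found a victim */
            usage[hand]--;                  /* decrement and keep sweeping */
            hand = (hand + 1) % nbuffers;
        }

        printf("buffers inspected before first eviction: %ld\n", inspected);
        free(usage);
        return 0;
    }

With everything at the ceiling, the sweep decrements roughly 5 * NBuffers
headers before the first eviction, and that cost scales with shared_buffers,
which is consistent with the problem only showing up at large settings.
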
> >>   Yes, one idea discussed previously is to not increase the usage
> >> count every time a buffer is pinned.
> >>   I am also working on some optimizations in a similar area, which you
> >> can refer to here:
> >>
> >> http://www.postgresql.org/message-id/006e01ce926c$c7768680$56639380$@kapila@huawei.com
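
Just to make sure the usage_count idea is clear, something along these lines?
This is only a hypothetical sketch (ToyBufferDesc, last_bump_pass, and the
once-per-sweep-pass rule are all made up here, not taken from the patch
linked above), where a buffer gets credited at most once per sweep pass no
matter how many times it is pinned in between:

    /* Hypothetical illustration only -- not the code from the linked patch. */
    #include <stdio.h>
    #include <stdint.h>

    #define MAX_USAGE_COUNT 5

    typedef struct ToyBufferDesc
    {
        uint16_t    usage_count;
        uint32_t    last_bump_pass;     /* sweep pass of the last bump */
    } ToyBufferDesc;

    static uint32_t current_sweep_pass = 1;    /* advanced by the sweep */

    static void
    pin_buffer(ToyBufferDesc *buf)
    {
        /* credit the buffer at most once per sweep pass, not on every pin */
        if (buf->last_bump_pass != current_sweep_pass &&
            buf->usage_count < MAX_USAGE_COUNT)
        {
            buf->usage_count++;
            buf->last_bump_pass = current_sweep_pass;
        }
    }

    int
    main(void)
    {
        ToyBufferDesc buf = {0, 0};

        for (int i = 0; i < 1000; i++)
            pin_buffer(&buf);           /* many pins within a single pass */

        printf("usage_count after 1000 pins in one pass: %u\n",
               (unsigned) buf.usage_count);    /* prints 1, not 5 */
        return 0;
    }

That way a buffer pinned a million times between sweep passes still only
costs the sweep a single decrement, instead of sitting at the ceiling.
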
>
> > yup -- just took a quick look at your proposed patch.  You're attacking
> > the 'freelist' side of buffer allocation, whereas my stripped-down patch
> > addresses issues with the clocksweep.  I think this is a good idea, but
> > more than I wanted to get into personally.
>
> > Good news is that both patches should essentially bolt on together
> > AFAICT.
>
> True, I think so too, as both are trying to reduce contention in the same
> area.
>
> >  I propose we do a bit of consolidation of performance testing
> > efforts and run tests with patches A, B, and A+B in various scenarios.  I
> > have a 16-core VM (4GB RAM) that I can test with and want to start
> > with, say, a 2GB database / 1GB shared_buffers high-concurrency test and
> > see how it burns in.  What do you think?
>
> I think this can mainly benefit workloads with large data sets and shared
> buffers (> 10G).  Last year I also ran a few tests with similar ideas but
> didn't see much gain with smaller shared buffers.
>
> >  Are you at a point where we can
> > run some tests?
>
> Not now, but I will try to run them before/during the next CF.
>
>
> With Regards,
> Amit Kapila.
> EnterpriseDB: http://www.enterprisedb.com
>
