On Tue, Apr 5, 2016 at 1:04 PM, Andres Freund <and...@anarazel.de> wrote:
> On 2016-04-05 12:14:35 -0400, Robert Haas wrote:
>> On Tue, Apr 5, 2016 at 11:30 AM, Andres Freund <and...@anarazel.de> wrote:
>> > On 2016-04-05 20:56:31 +0530, Amit Kapila wrote:
>> >> This fluctuation started appearing after commit 6150a1b0, which we
>> >> discussed in another thread; a colleague of mine is working on a
>> >> patch to revert it on current HEAD so we can compare the results.
>> > I don't see what that buys us. That commit is a good win on x86...
>> Maybe. But I wouldn't be surprised to find out that that is an
>> overgeneralization. Based on some results Mithun Cy showed me this
>> morning, I think some of the enormous run-to-run fluctuation we're
>> seeing is due to NUMA effects. On some runs, two frequently accessed
>> structures land on the same NUMA node; on other runs they end up on
>> different nodes, and then everything sucks. I don't think we fully
>> understand what's going on here yet - and I think we're committing
>> changes in this area awfully quickly - but I see no reason to believe
>> that x86 is immune to such effects. They may just happen in different
>> scenarios than the ones we see on POWER.
> I'm not really following - we were talking about 6150a1b0 ("Move buffer
> I/O and content LWLocks out of the main tranche.") made four months
> ago. Afaics the atomic buffer pin patch is a pretty clear win on both
> ppc and x86?
The point is that the testing Amit's team is doing can't answer that
question one way or the other. 6150a1b0 destabilized performance on our
test systems so completely that benchmarking subsequent patches is
extremely difficult.
Sent via pgsql-hackers mailing list (email@example.com)