On Tue, Sep 23, 2014 at 7:42 PM, Andres Freund <and...@2ndquadrant.com> wrote:
>> It will actually be far worse than that, because we'll acquire and
>> release the spinlock for every buffer over which we advance the clock
>> sweep, instead of just once for the whole thing.
>
> I said double, because we already acquire the buffer header's spinlock
> every tick.

Oh, good point.
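
To make the arithmetic concrete, here is a toy, self-contained sketch (not
the actual freelist.c/bufmgr.c code; every name in it is invented) of the
pattern we're talking about: each clock-sweep tick already takes the buffer
header's spinlock, so protecting the hand with its own spinlock adds a second
acquire/release per buffer we advance over, i.e. roughly double the locked
operations per tick:

/*
 * Toy illustration only -- not PostgreSQL code.  Names (Buffer, clock_hand,
 * etc.) are made up.  It shows the locking pattern under discussion: one
 * spinlock acquire/release per tick for the buffer header, plus (in the
 * variant being questioned) a second one just to advance the clock hand.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NBUFFERS 128

typedef struct
{
    atomic_bool hdr_lock;       /* stand-in for the buffer header spinlock */
    int         usage_count;
} Buffer;

static Buffer       buffers[NBUFFERS];
static atomic_bool  clock_hand_lock;    /* the extra, per-tick lock */
static int          clock_hand = 0;

static void
spin_acquire(atomic_bool *lock)
{
    while (atomic_exchange_explicit(lock, true, memory_order_acquire))
        ;                       /* spin until we grab it */
}

static void
spin_release(atomic_bool *lock)
{
    atomic_store_explicit(lock, false, memory_order_release);
}

/* One clock-sweep tick: two lock/unlock pairs instead of one. */
static Buffer *
clock_sweep_tick(void)
{
    int         victim;
    Buffer     *buf;

    /* Extra spinlock just to advance the hand -- the cost being debated. */
    spin_acquire(&clock_hand_lock);
    victim = clock_hand;
    clock_hand = (clock_hand + 1) % NBUFFERS;
    spin_release(&clock_hand_lock);

    /* This buffer-header spinlock is already taken on every tick today. */
    buf = &buffers[victim];
    spin_acquire(&buf->hdr_lock);
    if (buf->usage_count > 0)
    {
        buf->usage_count--;
        spin_release(&buf->hdr_lock);
        return NULL;            /* keep sweeping */
    }
    spin_release(&buf->hdr_lock);
    return buf;                 /* candidate victim */
}

int
main(void)
{
    Buffer     *victim = NULL;

    for (int i = 0; i < NBUFFERS; i++)
        buffers[i].usage_count = 1;
    buffers[3].usage_count = 0;         /* make buffer 3 an easy victim */

    while (victim == NULL)
        victim = clock_sweep_tick();
    printf("victim = buffer %ld\n", (long) (victim - buffers));
    return 0;
}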

>> > Let me try to quantify that.
>>
>> Please do.
>
> I've managed to find a ~1.5% performance regression. But the setup was
> plain absurd. COPY ... FROM /tmp/... BINARY; of large bytea datums into
> a fillfactor 10 table with the column set to PLAIN storage. With the
> resulting table size chosen so it's considerably bigger than s_b, but
> smaller than the dirty writeback limit of the kernel.
>
> That's perfectly reasonable.
>
> I can think of a couple other cases, but they're all similarly absurd.

Well, it's not insane to worry about such things, but if you can only
manage 1.5% on such an extreme case, I'm encouraged.  This is killing
us on OLTP workloads, and fixing that is a lot more important than a
couple percent on an extreme case.

> Ah. My guess is that most of the time will probably actually be spent in
> the lwlock's spinlock, not the lwlock putting itself to sleep.

Ah, OK.
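
For anyone not steeped in lwlock.c, the shape Andres is describing looks
roughly like the toy sketch below (again, invented names, not the real code):
the lwlock's shared state sits behind an internal spinlock, so when many
backends hammer the same lwlock, the time goes into spinning on that mutex
while inspecting and updating the state, not into the eventual semaphore
sleep.

/* Toy sketch, not lwlock.c.  Contention concentrates on lock->mutex. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static void
spin_acquire(atomic_bool *lock)
{
    while (atomic_exchange_explicit(lock, true, memory_order_acquire))
        ;                       /* spin until we grab it */
}

static void
spin_release(atomic_bool *lock)
{
    atomic_store_explicit(lock, false, memory_order_release);
}

typedef struct
{
    atomic_bool mutex;          /* internal spinlock guarding fields below */
    bool        held;           /* is the lock held exclusively? */
    int         nwaiters;       /* backends queued; real code sleeps on a
                                 * semaphore after queueing */
} ToyLWLock;

/* Returns true if we got the lock, false if the caller must go to sleep. */
static bool
toy_lwlock_acquire_or_queue(ToyLWLock *lock)
{
    bool        got_it;

    spin_acquire(&lock->mutex);     /* this is where the time goes */
    if (!lock->held)
    {
        lock->held = true;
        got_it = true;
    }
    else
    {
        lock->nwaiters++;           /* real code would then block */
        got_it = false;
    }
    spin_release(&lock->mutex);
    return got_it;
}

int
main(void)
{
    ToyLWLock   lock = {0};

    printf("first acquire:  %d\n", toy_lwlock_acquire_or_queue(&lock));
    printf("second acquire: %d (would sleep; %d waiter(s))\n",
           toy_lwlock_acquire_or_queue(&lock), lock.nwaiters);
    return 0;
}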

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

