On Tue, Nov 7, 2017 at 2:36 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Peter Geoghegan <p...@bowt.ie> writes:
>> My point is only that it's worth considering that this factor affects
>> how representative your sympathetic case is. It's not clear how many
>> PageIndexMultiDelete() calls are from opportunistic calls to
>> _bt_vacuum_one_page(), how important that subset of calls is, and so
>> on. Maybe it doesn't matter at all.
>
> According to the perf measurements I took earlier, essentially all the
> compactify_tuple calls in this test case are from PageRepairFragmentation
> (from heap_page_prune), not PageIndexMultiDelete.

For a workload with high contention (e.g., lots of updates that follow
a Zipfian distribution), a lot of important cleanup has to occur within
_bt_vacuum_one_page(), with an exclusive buffer lock held. It may be
that making PageIndexMultiDelete() faster pays off disproportionately
well there, but I'd only expect to see that at higher client counts
with lots of contention -- workloads that we still do quite badly on
(note that we have never done well here, even prior to commit
2ed5b87f9 -- Yura showed this at one point).
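
To make that concrete, here is roughly the shape of the opportunistic
cleanup path I mean -- a simplified sketch of the _bt_vacuum_one_page()
logic, paraphrased rather than quoted (the real code routes the delete
through _bt_delitems_delete() so that it is WAL-logged):

    OffsetNumber deletable[MaxOffsetNumber];
    int          ndeletable = 0;
    Page         page = BufferGetPage(buffer);
    BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);
    OffsetNumber offnum;
    OffsetNumber maxoff = PageGetMaxOffsetNumber(page);

    /* caller already holds an exclusive lock on the buffer */
    for (offnum = P_FIRSTDATAKEY(opaque);
         offnum <= maxoff;
         offnum = OffsetNumberNext(offnum))
    {
        ItemId itemid = PageGetItemId(page, offnum);

        /* items that index scans marked LP_DEAD can be removed */
        if (ItemIdIsDead(itemid))
            deletable[ndeletable++] = offnum;
    }

    if (ndeletable > 0)
        PageIndexMultiDelete(page, deletable, ndeletable);

All of that happens while everyone else interested in the page waits
on the buffer lock, which is why shaving cycles off
PageIndexMultiDelete()/compactify_tuples() there might matter more
than the raw call counts suggest.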

It's possible that this work influenced Yura in some way.

When Postgres Pro did some benchmarking of this at my request, we saw
that the bloat got really bad past a certain client count. IIRC there
was a clear point at around 32 or 64 clients where TPS nosedived,
presumably because cleanup could not keep up. This was a 128-core box,
or something like that, so you'll probably have difficulty recreating
it with what's at hand.

-- 
Peter Geoghegan

