On 7 March 2017 at 17:31, David Rowley <david.row...@2ndquadrant.com> wrote:
> On 2 March 2017 at 16:06, Amit Kapila <amit.kapil...@gmail.com> wrote:
>> Few comments on the patch:
>> 1.
>> +/*
>> + * Number of buckets to split RecoveryLockTable into.
>> + * This must be a power of two.
>> + */
>> On what basis did you decide to keep the lock table size as 1024, is
>> it just a guess, if so, then also adding a comment to indicate that
>> you think this is sufficient for current use cases seems better.
>> However, if it is based on some data, then it would be more
>> appropriate.

I've benchmarked using the test case from my previous email, running a
perf record -g on the startup process during each test.

I used various different sized hash tables, in the range of 32 to 8192
elements. The results are below:

@32 +   28.55%    25.20%  postgres  postgres           [.] StandbyReleaseLocks
@64 +   16.07%    14.65%  postgres  postgres           [.] StandbyReleaseLocks
@128 +    9.15%     8.07%  postgres  postgres           [.] StandbyReleaseLocks
@256 +    5.80%     5.00%  postgres  postgres           [.] StandbyReleaseLocks
@512      2.99%     2.63%  postgres  postgres           [.] StandbyReleaseLocks
@1024     1.74%     1.58%  postgres  postgres           [.] StandbyReleaseLocks
@2048     1.26%     1.14%  postgres  postgres           [.] StandbyReleaseLocks
@4096     0.89%     0.83%  postgres  postgres           [.] StandbyReleaseLocks
@8192     0.99%     0.93%  postgres  postgres           [.] StandbyReleaseLocks

Returns seem to diminish beyond 1024 elements, so 512 or 1024 may in
fact be the best size. Of course, I'm only testing this on my i7
laptop; these numbers may vary on larger hardware with more
concurrent transactions running.
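For what it's worth, the power-of-two requirement on the bucket count
lets the bucket index be computed with a single bitmask rather than a
modulo. A minimal sketch of that idea (the names here are mine, not
necessarily what the patch uses):

```c
#include <assert.h>
#include <stdint.h>

#define N_RECOVERY_LOCK_BUCKETS 1024    /* must be a power of two */

/*
 * Map a transaction id to a bucket index.  The bitmask is equivalent
 * to xid % N_RECOVERY_LOCK_BUCKETS, but only because the bucket
 * count is a power of two.
 */
static inline uint32_t
recovery_lock_bucket(uint32_t xid)
{
    return xid & (N_RECOVERY_LOCK_BUCKETS - 1);
}
```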

>> 2.
>> @@ -607,6 +627,8 @@ StandbyAcquireAccessExclusiveLock(TransactionId
>> xid, Oid dbOid, Oid relOid)
>>  {
>>   xl_standby_lock *newlock;
>>   LOCKTAG locktag;
>> + size_t pidx;
>> +
>> Comment on top of this function needs some change, it still seems to
>> refer to RelationLockList.
> Will fix that.

Fixed in the attached.

>> 4.
>> In some cases, it might slow down the performance where you have to
>> traverse the complete hash table like StandbyReleaseOldLocks, however,
>> I think those cases will be less relative to other lock release calls
>> which are called at every commit, so probably this is okay.
> Yes, that may be true. Perhaps making the table smaller would help
> bring that back to what it was, or perhaps the patch needs rethought
> to use bloom filters to help identify if we need to release any locks.
> Opinions on that would be welcome. Meanwhile I'll go off and play with
> bloom filters to see if I can knock some percentage points of the perf
> report for StandbyReleaseLocks().

So I quickly realised that a bloom filter is impossible here: there's
no way to unmark a bit when a lock is released, since we cannot know
whether the same bit marks another transaction's locks. I gave up on
that idea and instead added a RecoveryLockCount variable that keeps
track of the number of locks held. This may also speed up cases where
a transaction has many subtransactions but holds no AEL locks, as we
can fast-path out of StandbyReleaseLockTree() without looping over
each sub-xid trying to release locks that don't exist.
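The shape of that fast path is roughly the following; this is a
hedged sketch with hypothetical names (standby_release_lock_tree is
my stand-in), not the patch's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical counter tracking how many AccessExclusiveLocks are
 * currently held on behalf of recovered transactions.  Incremented
 * on acquire, decremented on release.
 */
static uint32_t RecoveryLockCount = 0;

/*
 * Sketch of the fast path: when no locks are held at all, skip the
 * per-subxid hash table probes entirely.  Returns true if the slow
 * path (probing the hash table) would have been taken.
 */
static bool
standby_release_lock_tree(uint32_t xid, int nsubxids,
                          const uint32_t *subxids)
{
    if (RecoveryLockCount == 0)
        return false;       /* nothing held; avoid looping over subxids */

    /* ... otherwise probe the hash table for xid and each subxid ... */
    (void) xid;
    (void) nsubxids;
    (void) subxids;
    return true;
}
```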

I've attached an updated patch.

 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachment: recovery_lock_v2.patch

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)