On Tue, 06 Jan 2026 at 16:23, Heikki Linnakangas <[email protected]> wrote:
> On 30/12/2025 14:37, Andrey Borodin wrote:
>> Hi hackers,
>> Following up on the Discord discussion about the PROCLOCK hash table
>> being
>> a "weird allocator" that we never actually use for lookups - I took a stab at
>> replacing it with a simpler partitioned free list approach as was suggested.
>> I was doing this mostly to educate myself on Lock Manager internals.
>> The current implementation uses LockMethodProcLockHash purely as an
>> allocator.
>> We never do hash lookups by key; we only allocate entries, link them to
>> the lock's procLocks list, and free them later. Using a full hash table
>> for this adds unnecessary complexity and maybe even overhead (I did not
>> measure this).
>> The attached patch replaces this with:
>> - ProcLockArray: a fixed-size array of all PROCLOCK structs, allocated
>>   at startup
>> - ProcLockFreeList: partitioned free lists, one per lock partition, to
>>   reduce contention
>> - ProcLockAlloc/Free: simple push/pop operations on the free lists
>> - PROCLOCK lookup: linear traversal of lock->procLocks (see
>>   LockRefindAndRelease() and FastPathGetRelationLockEntry())
>> The last point bothers me most. It seems these traversals are expected
>> to be short, but I'm not 100% sure.
>
> Hmm, yeah the last point contradicts the premise that the hash table
> is used purely as an allocator. It *is* used for lookups, and you're
> replacing them with linear scans. That doesn't seem like an
> improvement.
>
> - Heikki

I tested the patch on a Loongson 3C6000/D system with 128 vCPUs using
BenchmarkSQL 5.0 (100 warehouses, 100 clients).

Here are the results (three runs per configuration):

| build   | tpmC      | tpmTotal  |
|---------|-----------|-----------|
| master  | 248199.09 | 551387.46 |
|         | 243660.35 | 541902.31 |
|         | 244418.30 | 542867.57 |
| patched | 247330.65 | 549949.25 |
|         | 242953.79 | 539620.65 |
|         | 237883.19 | 528491.66 |

Not sure if this is useful, but throwing it out there.

-- 
Regards,
Japin Li
ChengDu WenWu Information Technology Co., Ltd.
