> On 27 Mar 2021, at 01:26, Thomas Munro <thomas.mu...@gmail.com> wrote:
>
> On Sat, Mar 27, 2021 at 4:52 AM Andrey Borodin <x4...@yandex-team.ru> wrote:
>> Some thoughts on HashTable patch:
>> 1. Can we allocate bigger hashtable to reduce probability of collisions?
>
> Yeah, good idea, might require some study.
In the long run this table is always filled with nslots entries. But the keys
will usually be consecutive numbers (the current working set of CLOG\Multis\etc),
so in a happy hashing scenario collisions will only appear on occasional random
backward jumps. I think size = nslots * 2 will already produce results that
cannot be improved significantly.
This also mirrors the original growth strategy SH_GROW(tb, tb->size * 2).
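A minimal sketch of what I mean, assuming the dynahash-based version of the
patch; the table name, "SlruMappingTableEntry" and shared->num_slots are only
illustrative and may not match the patch's actual identifiers:

    /*
     * Hypothetical sizing of the SLRU page mapping table (dynahash flavour).
     * The identifiers here are assumptions for the sake of the example.
     */
    HASHCTL     info;
    HTAB       *mapping;

    memset(&info, 0, sizeof(info));
    info.keysize = sizeof(int);                     /* pageno */
    info.entrysize = sizeof(SlruMappingTableEntry);

    /* twice as many buckets as SLRU slots, so collisions stay rare */
    mapping = ShmemInitHash("SLRU page mapping",
                            shared->num_slots * 2,  /* init_size */
                            shared->num_slots * 2,  /* max_size */
                            &info,
                            HASH_ELEM | HASH_BLOBS);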
>> 2. Can we use specialised hashtable for this case? I'm afraid hash_search()
>> does comparable number of CPU cycles as simple cycle from 0 to 128. We could
>> inline everything and avoid hashp->hash(keyPtr, hashp->keysize) call. I'm
>> not insisting on special hash though, just an idea.
>
> I tried really hard to not fall into this rabbit h.... [hack hack
> hack], OK, here's a first attempt to use simplehash,
> Andres's
> steampunk macro-based robinhood template
Sounds magnificent.
> that we're already using for
> several other things
I could not find many tests to be sure that we do not break anything...
> , and murmurhash which is inlineable and
> branch-free.
I think pageno is a hash already. Why hash it any further? And pages accessed
together will have smaller access times due to co-location.
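Something like this is what I have in mind; purely a sketch, where everything
except simplehash's own knobs (the SH_* macros and the header) is a made-up
name for illustration:

    /*
     * Sketch: let simplehash use the page number itself as the hash value
     * instead of murmurhash32().
     */
    #define SH_PREFIX            slru_mapping
    #define SH_ELEMENT_TYPE      SlruMappingTableEntry   /* must carry a status byte */
    #define SH_KEY_TYPE          int
    #define SH_KEY               pageno
    #define SH_HASH_KEY(tb, key) ((uint32) (key))        /* identity hash */
    #define SH_EQUAL(tb, a, b)   ((a) == (b))
    #define SH_SCOPE             static inline
    #define SH_DECLARE
    #define SH_DEFINE
    #include "lib/simplehash.h"

Consecutive pagenos then land in consecutive buckets, which is exactly the
co-location effect above; whether that clustering hurts the linear probing
would of course need measuring.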
> I had to tweak it to support "in-place" creation and
> fixed size (in other words, no allocators, for use in shared memory).
We really need a test to know what happens when this structure runs out of
memory, as you mentioned below. What would be an appropriate place for
simplehash tests?
> Then I was annoyed that I had to add a "status" member to our struct,
> so I tried to fix that.
Indeed, sizeof(SlruMappingTableEntry) == 9 seems strange. Will simplehash align
it well?
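For illustration, my guess at the layout (not necessarily what the patch has):

    /* 4 + 4 + 1 declared bytes... */
    typedef struct SlruMappingTableEntry
    {
        int     pageno;     /* key */
        int     slotno;
        char    status;     /* simplehash bookkeeping */
    } SlruMappingTableEntry;

    /*
     * ...but since simplehash keeps entries in a plain array of
     * SH_ELEMENT_TYPE, the compiler pads the struct to a multiple of the
     * int alignment (12 bytes on typical platforms), so every element stays
     * aligned; the 9 is just the sum of the declared fields.
     */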
Thanks!
Best regards, Andrey Borodin.