On 12/19/14, 5:13 PM, Tom Lane wrote:
> Jim Nasby <jim.na...@bluetreble.com> writes:
>> On 12/18/14, 5:00 PM, Jim Nasby wrote:
>>> 2201582 20 -- Mostly LOCALLOCK and Shared Buffer
>> Started looking into this; perhaps https://code.google.com/p/fast-hash/ would
>> be worth looking at, though it requires uint64.
>> It also occurs to me that we're needlessly shoving a lot of 0's into the hash
>> input by using RelFileNode and ForkNumber. RelFileNode includes the tablespace
>> Oid, which is pointless here because relid is unique per-database. We also have
>> very few forks and typically care about very few databases. If we crammed dbid
>> and ForkNum together that gets us down to 12 bytes, which at minimum saves us
>> the trip through the case logic. I suspect it also means we could eliminate one
>> of the mix() calls.
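The packing idea above could be sketched roughly like this (the struct and helper names are made up for illustration, not PostgreSQL's actual definitions; note that a real patch would also have to cope with database OIDs using all 32 bits):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical 12-byte lookup key: dbNode and forkNum share one word.
 * PostgreSQL has only a handful of forks (main, FSM, visibility map,
 * init), so a few high bits of the dbNode word are enough for it.
 */
typedef struct PackedBufKey
{
	uint32_t	db_and_fork;	/* fork number in top 3 bits, dbid below */
	uint32_t	relNode;		/* relation OID */
	uint32_t	blockNum;		/* block number */
} PackedBufKey;

static uint32_t
pack_db_fork(uint32_t dbNode, uint32_t forkNum)
{
	/*
	 * Assumes dbNode fits in 29 bits; real OIDs can use the full 32,
	 * so an actual patch would need a different split or a check.
	 */
	return (forkNum << 29) | (dbNode & 0x1FFFFFFF);
}
```

That yields a 12-byte key instead of the 20-byte BufferTag, which is the saving being talked about here.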
> I don't see this working. The lock table in shared memory can surely take
> no such shortcuts. We could make a backend's locallock table omit fields
> that are predictable within the set of objects that backend could ever
> lock, but (1) this doesn't help unless we can reduce the tag size for all
> LockTagTypes, which we probably can't, and (2) having the locallock's tag
> be different from the corresponding shared tag would be a mess too.
> I think dealing with (2) might easily eat all the cycles we could hope to
> save from a smaller hash tag ... and that's not even considering the added
> logical complexity and potential for bugs.
I think we may be thinking different things here...
I'm not suggesting we change BufferTag or BufferLookupEnt; clearly we can't
simply throw away any of the fields I was talking about (well, except possibly
tablespace ID; AFAICT that's completely redundant for searching because relid
is unique per-database).
What I am thinking is not using all of those fields in their raw form to
calculate the hash value. IE: something analogous to:
hash_any(SharedBufHash, (((rot(forkNum, 2) | dbNode) ^ relNode) << 32) | blockNum)
perhaps that actual code wouldn't work, but I don't see why we couldn't do
something similar... am I missing something?
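A compilable version of that sketch might look like the following (taking rot() to be a plain 32-bit left rotate; this is purely illustrative, not working PostgreSQL code):

```c
#include <assert.h>
#include <stdint.h>

/* Left-rotate a 32-bit value by k bits (k in 1..31); stands in for rot(). */
static uint32_t
rot32(uint32_t x, int k)
{
	return (x << k) | (x >> (32 - k));
}

/*
 * Collapse the interesting tag fields into a single 64-bit value, which
 * could then be fed to hash_any() (or any other mixer) instead of the
 * full 20-byte BufferTag.
 */
static uint64_t
combine_buf_tag(uint32_t forkNum, uint32_t dbNode,
				uint32_t relNode, uint32_t blockNum)
{
	uint32_t	hi = (rot32(forkNum, 2) | dbNode) ^ relNode;

	return ((uint64_t) hi << 32) | blockNum;
}
```

Whether the specific combination above distributes well enough is exactly the open question; the point is only that the hash input shrinks from 20 bytes to 8.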
> Switching to a different hash algorithm could be feasible, perhaps.
> I think we're likely stuck with Jenkins hashing for hashes that go to
> disk, but hashes for dynahash tables don't do that.
Yeah, I plan on testing the performance of fast-hash for HASH_BLOBS just to see
how it compares.
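One rough way to set up such a comparison is below. The 64-bit FNV-1a here is only a stand-in mixer I'm using for illustration (fast-hash or hash_any would slot into the same harness); it times hashing a 20-byte BufferTag-sized key against the reduced 12-byte key:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <time.h>

/* 64-bit FNV-1a: simple byte-at-a-time hash used purely as a stand-in. */
static uint64_t
fnv1a64(const void *key, size_t len)
{
	const unsigned char *p = key;
	uint64_t	h = 14695981039346656037ULL;

	for (size_t i = 0; i < len; i++)
	{
		h ^= p[i];
		h *= 1099511628211ULL;
	}
	return h;
}

/*
 * Hash `iters` slightly-varying keys of `keylen` bytes (keylen <= 20).
 * Returns an xor of the results so the compiler can't discard the work,
 * and reports elapsed clock ticks through *ticks.
 */
static uint64_t
bench_hash(size_t keylen, long iters, clock_t *ticks)
{
	unsigned char key[20] = {0};
	uint64_t	sink = 0;
	clock_t		t0 = clock();

	for (long i = 0; i < iters; i++)
	{
		key[0] = (unsigned char) i;
		key[1] = (unsigned char) (i >> 8);
		sink ^= fnv1a64(key, keylen);
	}
	*ticks = clock() - t0;
	return sink;
}
```

Calling bench_hash(20, N, &t20) versus bench_hash(12, N, &t12) for a large N gives a first-order feel for how much the smaller key buys, independent of which hash function ultimately wins.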
Why would we be stuck with Jenkins hashing for on-disk data? pg_upgrade, or
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)