> > When done that way, you're going to see a lot of index B-tree
> > fragmentation even with DCE 1.1 (ISO/IEC 11578:1996) time-based UUIDs
> > as described above. With random (version 4) or hash-based (version 3
> > or 5) UUIDs there's nothing that can be done to improve the situation,
> > obviously.
> Is this based on empirical results or just a theory? I'm asking because
> it's actually a common technique to reverse the natural index key to
> construct basically exactly this situation -- for performance reasons.
> The idea is that low order bits have higher cardinality and that that
> can *improve* btree performance by avoiding contention.
> I'm not sure how much I believe in the effectiveness of that strategy
> myself or for that matter whether it's universally applicable or only
> useful in certain types of loads.
> I'm not saying you're wrong, but I'm not sure it's a simple open-and-shut
> case either.
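The locality argument above can be illustrated with a small, self-contained sketch (pure Python, no database involved): track where each new key would land in a sorted sequence. Random version-4 UUIDs hit arbitrary positions, while a monotonically increasing key always appends at the right edge, which is the access pattern a B-tree handles cheaply. The key counts here are arbitrary illustration values.

```python
import bisect
import uuid

def insert_positions(keys):
    """For each key, record its relative landing spot in the sorted
    sequence so far (0.0 = leftmost, 1.0 = rightmost/append)."""
    sorted_keys = []
    positions = []
    for k in keys:
        i = bisect.bisect(sorted_keys, k)
        positions.append(i / len(sorted_keys) if sorted_keys else 1.0)
        sorted_keys.insert(i, k)
    return positions

# Random (version 4) UUIDs scatter inserts across the whole key space.
random_pos = insert_positions(str(uuid.uuid4()) for _ in range(1000))

# A monotonically increasing key (fixed-width hex so lexicographic order
# matches numeric order) always lands at the right edge.
seq_pos = insert_positions(f"{n:032x}" for n in range(1000))

print(sum(1 for p in seq_pos if p == 1.0))     # every sequential insert appends
print(sum(1 for p in random_pos if p == 1.0))  # almost no random insert does
```

In a real B-tree the scattered pattern translates into touching (and eventually splitting) leaf pages all over the index, which is the fragmentation being discussed.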
What tipped me off to this was the same problem occurring in SQL Server,
where changing the behavior gave them a much lower I/O load with
replication (which utilizes UUIDs).
A blog post from an MS developer at http://tinyurl.com/2xy5jn talks
about how this change allowed a tighter index and avoided random
searches on the b-tree, significantly reducing I/O.
A performance analysis at http://tinyurl.com/2ysora has a table
comparing using an integer, UUID and sequential UUID (when the system
orders UUIDs sequentially by time, like SQL Server already does).
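For concreteness, here is a minimal sketch of one way to generate time-ordered UUIDs: put a millisecond timestamp in the high bits and fill the rest with random bits, so values sort roughly by creation time. This is an illustration only, not SQL Server's NEWSEQUENTIALID() algorithm, and it deliberately ignores the RFC version/variant bit fields.

```python
import os
import time
import uuid

def sequential_uuid():
    """Sketch of a time-ordered UUID: 48-bit millisecond timestamp in
    the high bits, 80 random bits below.  Values generated later sort
    larger (at millisecond granularity), so index inserts cluster at
    the rightmost leaf pages instead of scattering."""
    ts = int(time.time() * 1000) & 0xFFFFFFFFFFFF   # 48 bits of time
    rand = int.from_bytes(os.urandom(10), "big")    # 80 random bits
    return uuid.UUID(int=(ts << 80) | rand)

a = sequential_uuid()
time.sleep(0.01)
b = sequential_uuid()
# Later creation => larger UUID, so an index on the column grows at one end.
print(a < b)
```

Two UUIDs created in the same millisecond still order randomly with respect to each other, but that is contained within a single "time bucket" rather than spread across the whole key space.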
Obviously this is not SQL Server we're dealing with, but I can see many
of the same issues being unavoidable and equally impactful, since we
both use the same index data structure.
That being said, I'm going to perform tests today or this weekend on
different loads to see how PostgreSQL would be affected by this change.
I'll be very interested to see the results of random b-tree searches on
every insert vs the contention from sequentially generated UUIDs.
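One way to reason about what those tests should show, before running anything against PostgreSQL itself, is a toy buffer-cache model: replay the sequence of leaf pages each insert pattern would touch through a small LRU cache and compare miss rates as a rough proxy for read I/O. The page count and cache size below are made-up parameters, and this is nothing like PostgreSQL's actual buffer manager.

```python
import random
from collections import OrderedDict

def cache_miss_rate(page_ids, cache_size):
    """Replay a sequence of page touches through a tiny LRU cache and
    return the fraction of touches that miss (a crude proxy for I/O)."""
    cache = OrderedDict()
    misses = 0
    n = 0
    for p in page_ids:
        n += 1
        if p in cache:
            cache.move_to_end(p)       # LRU hit: refresh recency
        else:
            misses += 1
            cache[p] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return misses / n

PAGES = 10_000      # pretend the index has 10k leaf pages (illustration)
INSERTS = 50_000
random.seed(0)

# Random UUIDs: each insert lands on an arbitrary leaf page.
rand_rate = cache_miss_rate(
    (random.randrange(PAGES) for _ in range(INSERTS)), cache_size=100)

# Sequential UUIDs: inserts walk the rightmost pages in order.
seq_rate = cache_miss_rate(
    (i * PAGES // INSERTS for i in range(INSERTS)), cache_size=100)

print(rand_rate, seq_rate)  # random pattern misses far more often
```

With a cache covering only 1% of the pages, the random pattern misses almost every time while the sequential pattern only misses when it first moves onto a new page, which matches the intuition behind the reported SQL Server numbers.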