Thanks for the reply.
On Oct 2, 2015, at 3:26 PM, Jim Nasby wrote:
> I'm not really following here... the size of an index is determined by the
> number of tuples in it and the average width of each tuple. So as long as
> you're using the same size of data type, 18 vs 1 sequence won't change the
> size of your indexes.
That's exactly what I'm concerned with -- the general distribution of the
numbers, which affects the average size/length of each key.
Taking a uniform distribution as an example, the maximum key width grows by two
decimal digits (and the average width by about a digit and a half):
Since we have ~18 object types, the primary keys in each might range from 1 to
9,999,999.
Using a shared sequence, the keys for the same dataset would range from 1 to
roughly 180,000,000.
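Just to sanity-check that arithmetic, the average digit count is easy to
measure in psql (a rough sketch; the second query scans ~180M rows, so it
takes a while):

    -- Average decimal width with per-table sequences (keys 1..9,999,999):
    -- comes out to roughly 6.9 digits.
    SELECT avg(length(g::text)) AS avg_digits
    FROM generate_series(1, 9999999) AS g;

    -- Average width with one shared sequence across ~18 tables
    -- (keys 1..~180,000,000): roughly 8.4 digits.
    SELECT avg(length(g::text)) AS avg_digits
    FROM generate_series(1, 179999999) AS g;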
The tables are highly interrelated, and each may fkey onto 2-4 others... so I'm
a bit wary of this change. But if it works for others, I'm fine with that!
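For concreteness, the shared-sequence setup I'm describing would look roughly
like this (table names are made up):

    -- One global sequence feeding every object table, so ids are unique
    -- across all ~18 object types.
    CREATE SEQUENCE global_id_seq;

    CREATE TABLE article (
        id bigint PRIMARY KEY DEFAULT nextval('global_id_seq'),
        title text NOT NULL
    );

    CREATE TABLE photo (
        id bigint PRIMARY KEY DEFAULT nextval('global_id_seq'),
        caption text,
        -- one of the 2-4 fkeys each table tends to carry
        article_id bigint REFERENCES article(id)
    );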
> Sequences are designed to be extremely fast to assign. If you ever did find a
> single sequence being a bottleneck, you could always start caching values in
> each backend. I think it'd be hard (if not impossible) to turn a single
> global sequence into a real bottleneck.
I don't think so either, but everything I've read has been theoretical -- so I
was hoping someone here could give a "yeah, no issue!" from experience.
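For reference, I assume the per-backend caching Jim mentions is the sequence's
CACHE parameter, which hands each session a block of values up front:

    -- Each backend grabs 50 values at a time rather than hitting the
    -- sequence on every insert; unused cached values show up as gaps.
    ALTER SEQUENCE global_id_seq CACHE 50;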
The closest thing to production experience I found was via the BDR plugin (the
only relevant thing that came up while searching), and there were anecdotal
accounts of sequences becoming bottlenecks -- but those stemmed from BDR's own
code, which pre-generates allowable sequence ids on each node.