Simon Riggs wrote:
> I agree with much of your post, though this particular point caught my
> eye. If you'll forgive me for jumping on an isolated point in your post:

No problem.

> Multi-table indexes sound like a good solution until you consider how
> big they would be. The reason we "need" a multi-table index is because
> we are using partitioning, which we wouldn't be doing unless the data
> was fairly large. So the index is going to be (num partitions *
> fairly-large) in size, which means it's absolutely enormous. Adding and
> dropping partitions also becomes a management nightmare, so overall
> multi-table indexes look unusable to me. Multi-table indexes also remove
> the possibility of loading data quickly, then building an index on the
> data, then adding the table as a partition - both the COPY and the
> CREATE INDEX would be slower with a pre-existing multi-table index.

I agree. (And thanks to TOAST, we never have very wide tables with relatively few rows, right? I'm thinking of something like pictures stored in bytea columns.)

> My hope is to have a mechanism to partition indexes or recognise that
> they are partitioned, so that a set of provably-distinct unique indexes
> can provide the exact same functionality as a single large unique index,
> just without the management nightmare.

Uhm... I don't quite get what you mean by "provably-distinct unique indexes".

As long as the leading columns of an index cover all of the partitioning columns, there is no problem: you can reduce the multi-table index to simple per-table indexes and use the partitioning rule set to decide which index to query.
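A sketch of that reduction, using the inheritance-based partitioning we have today (all table and column names here are made up for illustration):

```sql
-- Parent table, range-partitioned by log_date via child tables.
CREATE TABLE measurements (log_date date, id int, payload text);

CREATE TABLE measurements_2007_01 (
    CHECK (log_date >= DATE '2007-01-01' AND log_date < DATE '2007-02-01')
) INHERITS (measurements);

-- Per-table index whose leading column is the partitioning column.
-- With constraint exclusion enabled, a query like
--   SELECT ... WHERE log_date = DATE '2007-01-15' AND id = 42
-- touches only this one child table and its index, which is
-- equivalent to probing one slice of a big multi-table index.
CREATE INDEX measurements_2007_01_idx
    ON measurements_2007_01 (log_date, id);
```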

But how do you create a (unique) index on columns completely different from the partitioning key?
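To make the difficulty concrete (again with made-up names, continuing the measurements example): per-table unique indexes on a column other than the partitioning key only enforce uniqueness within each child table, not across the whole partitioned set:

```sql
-- Two partitions of measurements, ranged on log_date as before.
CREATE UNIQUE INDEX measurements_2007_01_id
    ON measurements_2007_01 (id);
CREATE UNIQUE INDEX measurements_2007_02_id
    ON measurements_2007_02 (id);

-- Both inserts succeed, because neither child table ever sees the
-- other's rows -- so id is not globally unique after all:
INSERT INTO measurements_2007_01 VALUES (DATE '2007-01-15', 1, 'a');
INSERT INTO measurements_2007_02 VALUES (DATE '2007-02-15', 1, 'b');
```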


