Which brings us back to the original issue. If I decide to stick with
the current implementation and not improve our existing partitioning
mechanisms to scale to 100,000 partitions, I could do something like: ...

There is a point where you can leave the selection of the correct rows
to normal ...

Zeugswetter Andreas ADI SD [EMAIL PROTECTED] writes:
> I'd say that that point currently is well below 2000 partitions for
> all common db systems.
I think it will depend heavily on the type of queries you're talking about.
Postgres's constraint_exclusion is a linear search and does quite a bit of ...
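For readers following along, here is a minimal sketch of the mechanism
being discussed (the table and constraint names are mine, not from the
thread): with inheritance-based partitioning, each child table carries a
CHECK constraint, and when constraint_exclusion is on the planner tests
the query's WHERE clause against every child's constraint one by one,
which is the linear search referred to above.

-- begin SQL --
SET constraint_exclusion = on;

CREATE TABLE events (id BIGINT, payload TEXT);

-- Each child holds one slice of the key space.
CREATE TABLE events_0 (CHECK (id >= 0    AND id < 1000)) INHERITS (events);
CREATE TABLE events_1 (CHECK (id >= 1000 AND id < 2000)) INHERITS (events);

-- The planner compares "id = 42" against each child's CHECK constraint
-- in turn (linear in the number of children) and drops events_1.
EXPLAIN SELECT * FROM events WHERE id = 42;
-- end SQL --

With a handful of children that per-child proof is cheap; with 100,000
of them it is 100,000 proofs per query, which is why the partition count
matters here.
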
Josh,
I think what you are suggesting is something like this:
-- begin SQL --
core=# CREATE TABLE temp_x( x_id BIGINT PRIMARY KEY,
core(#   x_info VARCHAR(16) NOT NULL DEFAULT 'x_info');
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"temp_x_pkey" for table "temp_x"
CREATE TABLE
core=# CREATE ...
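The session above is cut off, so what follows is only my guess at the
direction it was headed, based on the rest of the thread: child tables
inheriting temp_x, with a trigger routing inserts to the matching child.
Every name below (the children, temp_x_route, the 256-way split) is
hypothetical.

-- begin SQL --
-- Hypothetical continuation: split temp_x 256 ways on x_id.
CREATE TABLE temp_x_0 (CHECK (x_id % 256 = 0)) INHERITS (temp_x);
CREATE TABLE temp_x_1 (CHECK (x_id % 256 = 1)) INHERITS (temp_x);
-- ... and so on up to temp_x_255

-- Route inserts on the parent into the matching child.
-- (EXECUTE ... USING requires PostgreSQL 8.4 or later.)
CREATE OR REPLACE FUNCTION temp_x_route() RETURNS trigger AS $$
BEGIN
    EXECUTE 'INSERT INTO temp_x_' || (NEW.x_id % 256)
         || ' SELECT ($1).*' USING NEW;
    RETURN NULL;  -- cancel the insert into the bare parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER temp_x_insert
    BEFORE INSERT ON temp_x
    FOR EACH ROW EXECUTE PROCEDURE temp_x_route();
-- end SQL --

One caveat worth flagging: with modulus CHECK constraints like these,
constraint exclusion only prunes when the query repeats the same
expression (e.g. WHERE x_id = 42 AND x_id % 256 = 42 % 256), which is
part of why range-style constraints are the more common idiom.
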
Jason,
> Which brings us back to the original issue. If I decide to stick with
> the current implementation and not improve our existing partitioning
> mechanisms to scale to 100,000 partitions, I could do something like:
> Maintain 2 copies of the parent table (partitioned by 256).
> Inherit from ...
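Read literally, "maintain 2 copies of the parent table (partitioned by
256)" might look like the sketch below; the message is cut off before
the details, so the names and the UNION ALL view are my own illustration
of one plausible reading, not the poster's actual design.

-- begin SQL --
-- Two parallel parents, each split into 256 inherited children.
CREATE TABLE proteins_a (p_id BIGINT, p_data TEXT);
CREATE TABLE proteins_b (p_id BIGINT, p_data TEXT);

CREATE TABLE proteins_a_0 (CHECK (p_id % 256 = 0)) INHERITS (proteins_a);
-- ... proteins_a_1 through proteins_a_255, and likewise under proteins_b

-- Present the two copies as one logical table.
CREATE VIEW proteins AS
    SELECT * FROM proteins_a
    UNION ALL
    SELECT * FROM proteins_b;
-- end SQL --

Keeping each tree at 256 children would, per the numbers upthread, keep
the planner's linear exclusion check affordable for each parent.
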
Jason,
Aside from running into a known bug with too many triggers when creating
gratuitous indices on these tables, I feel as if it may be possible to do
what I want without breaking everything. But then again, am I taking too
many liberties with technology that maybe didn't have use cases like ...

I am building up a schema for storing a bunch of data about proteins,
which on a certain level can be modelled with quite simple tables. The
problem is that the database I am building needs to house lots of it:
10 TB and growing, with one table in particular threatening to top 1 TB.
In the case of ...