I think what you are suggesting is something like this:
-- begin SQL --
core=# CREATE TABLE temp_x( x_id BIGINT PRIMARY KEY, x_info VARCHAR(16) NOT NULL DEFAULT 'x_info');
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "temp_x_pkey" for table "temp_x"
core=# CREATE TABLE temp_y( y_id BIGINT PRIMARY KEY, y_info VARCHAR(16) NOT NULL DEFAULT 'y_info');
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "temp_y_pkey" for table "temp_y"
core=# CREATE TABLE temp_xy() INHERITS (temp_x, temp_y);
core=# \d temp_xy
 Column |         Type          |                  Modifiers
--------+-----------------------+----------------------------------------------
 x_id   | bigint                | not null
 x_info | character varying(16) | not null default 'x_info'::character varying
 y_id   | bigint                | not null
 y_info | character varying(16) | not null default 'y_info'::character varying
-- end SQL --
The problem is that what I really want to do is something like this:
-- begin SQL --
core=# CREATE TABLE temp_xx() INHERITS (temp_x, temp_x);
ERROR: inherited relation "temp_x" duplicated
-- end SQL --
The issue is that the relations are in fact reflexive and, due to the sheer
size of the data I'm trying to warehouse, I'd rather not keep more than one
copy of the parent data around.
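One workaround I can imagine is sketched below. Note the table and column
names (temp_x2, x2_id, x2_info) are hypothetical: if the second parent reused
temp_x's column names, PostgreSQL would merge the same-named columns from both
parents into a single column in the child, so the duplicated parent would have
to rename its columns to keep both sides of the relation distinct:
-- begin SQL --
core=# CREATE TABLE temp_x2( x2_id BIGINT PRIMARY KEY,
                             x2_info VARCHAR(16) NOT NULL DEFAULT 'x_info');
core=# CREATE TABLE temp_xx() INHERITS (temp_x, temp_x2);
core=# \d temp_xx
-- temp_xx now carries x_id, x_info, x2_id, x2_info: one column set per side.
-- end SQL --
Of course, that means physically maintaining two copies of the parent data,
which is exactly the overhead I was hoping to avoid.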
I'm sort of thinking aloud here, but based on what you've told me, I guess
I'm left having to choose which direction I want to search in, since the
domains and ranges are theoretically the same. On the other hand, perhaps I
could absorb the overhead and just keep two copies of the parent tables
around. The relation table is on the order of 300-500x as large as the
parent tables, and that multiplier is expected to stay relatively constant.
Which brings us back to the original issue. If I decide to stick with the
current implementation and not "improve our existing partitioning
mechanisms to scale to 100,000 partitions", I could do something like:
1. Maintain two copies of the parent table (each partitioned by 256).
2. Inherit from both into a relation table.
Does this get me out of the woods with the query planner? It doesn't seem
like it would, necessarily.
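For context on that planner question, here is a minimal sketch of how
constraint exclusion prunes inheritance partitions (the rel/rel_p* names are
hypothetical, and the ranges are stand-ins for an actual partitioning scheme):
-- begin SQL --
core=# CREATE TABLE rel( x_id BIGINT, y_id BIGINT );
core=# CREATE TABLE rel_p0( CHECK ( x_id >= 0    AND x_id < 1000 ) ) INHERITS (rel);
core=# CREATE TABLE rel_p1( CHECK ( x_id >= 1000 AND x_id < 2000 ) ) INHERITS (rel);
core=# SET constraint_exclusion = on;
core=# EXPLAIN SELECT * FROM rel WHERE x_id = 42;
-- The plan should scan only rel and rel_p0, since rel_p1's CHECK rules out
-- x_id = 42. But each partition's constraint is proved one at a time, which
-- is why planning cost grows linearly with the number of partitions.
-- end SQL --
With two full copies of the parent tables, that linear proof cost would apply
to both inheritance trees, so the partition-count ceiling Josh describes below
would presumably bite twice as hard.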
On 8/11/07, Josh Berkus <[EMAIL PROTECTED]> wrote:
> > Aside from running into a known bug with "too many triggers" when creating
> > gratuitous indices on these tables, I feel as if it may be possible to do
> > what I want without breaking everything. But then again, am I taking too
> > many liberties with technology that maybe didn't have use cases like this
> > one in mind?
> Well, you're pushing PostgreSQL partitioning further than it's currently
> designed to go. Right now our range exclusion becomes too costly to be useful
> somewhere around 300 to 1000 partitions (depending on CPU and other factors)
> because the constraints are checked linearly.
> To make your scheme work, you'd need to improve our existing partitioning
> mechanisms to scale to 100,000 partitions. It would also help you to
> implement multiple inheritance so that you could have a partition which
> belonged to two masters. I'd be very interested in seeing you do so, of
> course, but this may be more hacking than you had in mind.
> Josh Berkus
> PostgreSQL @ Sun
> San Francisco
Programmer/Analyst IV - Database Administration
UCLA-DOE Institute for Genomics & Proteomics
Howard Hughes Medical Institute
611 C.E. Young Drive East | Tel: (310) 206-3907
105 Boyer Hall, Box 951570 | Fax: (310) 206-3914
Los Angeles, CA 90095. USA | Mail: [EMAIL PROTECTED]