Hi,

NikhilS wrote:
The following things are TODOs:

iv) Auto generate rules using the checks mentioned for the partitions, to
handle INSERTs/DELETEs/UPDATEs to navigate them to the appropriate child.
Note that checks specified directly on the master table will get inherited
automatically.

Am planning to do the above by using the check constraint specified for each partition. This constraint's raw_expr field ends up becoming the whereClause
for the rule specific to that partition.
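
By way of illustration (table and column names are made up here, this is just how I imagine such a generated rule would look), a partition created with a check constraint like

    CREATE TABLE orders_p2 (
        CHECK (id >= 10000 AND id < 20000)
    ) INHERITS (orders);

would get an INSERT rule along the lines of

    CREATE RULE orders_insert_p2 AS
        ON INSERT TO orders
        WHERE (NEW.id >= 10000 AND NEW.id < 20000)
        DO INSTEAD INSERT INTO orders_p2 VALUES (NEW.*);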

I appreciate your efforts, but I'm not sure this has been discussed enough. There seem to be two ideas floating around:

 - you are heading for automating the current kludge, which involves
   creating partitions and constraints by hand. AFAICT, you want to
   support list and range partitioning.

 - Simon Riggs has proposed partitioning functions, which could easily
   handle any type of partitioning (hash, list, range and any mix of
   those).

Neither proposal has much to do with the missing multi-table indices. It's clear to me that we have to implement those someday anyway.

AFAICT, the first proposal does not ease the task of writing correct constraints, i.e. of making sure that each row ends up in exactly one partition. The second would.

But the second proposal makes it hard for the planner to choose the right partitions, i.e. if you request a range of ids, the planner would have to query the partitioning function for every possible value. The first variant could use constraint exclusion for that.
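
With the hypothetical orders/orders_p2 setup sketched above, that boils down to:

    -- constraint exclusion lets the planner skip partitions whose CHECK
    -- constraint contradicts the query's WHERE clause
    SET constraint_exclusion = on;
    EXPLAIN SELECT * FROM orders WHERE id BETWEEN 12000 AND 12999;
    -- only orders_p2 should remain in the plan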

Neither of the two has gone as far as thinking about switching from one partitioning rule set to another. That gets especially hard if you consider database restarts during re-partitioning.


Here are some thoughts I have come up with recently. This is all about how to partition, not about how to implement multi-table indices. Sorry if this got somewhat longish. And no, this is certainly not for 8.3 ;-)

I don't like partitioning rules that leave open questions, i.e. where there are values for which the system does not have an answer (and would have to fall back to a default) or, even worse, where it could give multiple correct answers. Given that premise, I see only two basic partitioning types:

 - splits: those can be used for what's commonly known as list and range
   partitioning. If you want customers A-M to end up on partition 1 and
   customers N-Z on partition 2 you would split between M and N. (That
   way, the system would still know what to do with a customer name
   beginning with an @ sign, for example. The only requirement for a
   split is that the underlying data type supports comparison
   operators.)

 - modulo: I think this is commonly known as hash partitioning. It
   requires an integer input, possibly by hashing, and calculates the
   remainder of a division by n. That should give an equal distribution
   among n partitions.

Besides the expression to work on, a split always needs one argument, the split point, and divides into two buckets. A modulo splits into two or more buckets and needs the divisor as an argument.

Of course, these two types can be combined. I like to think of these combinations as trees. Let me give you a simple example:

                         table customers
                               |
                               |
                       split @ name >= 'N'
                        /               \
                       /                 \
                     part1              part2



A combination of the two would look like:

                         table invoices
                               |
                               |
                       split @ id >= 50000
                        /               \
                       /                 \
              hash(id) modulo 3         part4
                 /    |     \
                /     |      \
             part1   part2   part3


Knowledge of these trees would allow the planner to choose more wisely, i.e. given a comparative condition (WHERE id > 100000) it could check the splits in the partitioning tree and scan only the partitions that can contain matching rows. Likewise with an equality condition (WHERE id = 1234).
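
To sketch what evaluating the invoices tree amounts to (hashint4() is PostgreSQL's built-in hash function for int4; using it and abs() for the modulo node is just an assumption), the partition for a given id would be something like:

    -- hypothetical: which partition a row with id = 1234 belongs to
    SELECT CASE
             WHEN 1234 >= 50000 THEN 'part4'        -- split node
             ELSE CASE abs(hashint4(1234)) % 3      -- modulo node
                    WHEN 0 THEN 'part1'
                    WHEN 1 THEN 'part2'
                    ELSE 'part3'
                  END
           END;

A condition like WHERE id > 100000 is already decided at the split node, so the modulo node and the partitions below it never have to be considered.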

Since this is a more precise definition of the partitioning rules, the planner would not have to check the constraints of all partitions, as the current constraint exclusion feature does. It might even be likely that querying this partitioning tree and then scanning the single-table index will be faster than an index scan on a multi-table index. At least, I cannot see why it should be any slower.

Such partitioning rule sets would allow us to re-partition by adding a split node on top of the tree. The split point would have to advance in step with the progress of moving rows between the partitions, so that the database is always in a consistent state with regard to partitioning.
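
A single re-partitioning step could then look roughly like this (purely hypothetical partition names, and hand-waving over where the split point is actually stored):

    -- one step towards a final split @ id >= 50000; rows with id >= 51000
    -- have already been moved, so the effective split point is currently 51000
    BEGIN;
        INSERT INTO invoices_new
            SELECT * FROM invoices_old WHERE id >= 50000 AND id < 51000;
        DELETE FROM invoices_old WHERE id >= 50000 AND id < 51000;
        -- ...and, within the same transaction, lower the stored split point
        -- to 50000, so that a restart leaves split point and data consistent
    COMMIT;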

Additionally, it's easy to figure out when no rows, or only a few, need to be moved, e.g. when adding a split @ id >= 1000 to a table which only has ids < 1000.



I believe that this is a well defined partitioning rule set, which gives the planner more information than a partitioning function ever could. And it is less of a foot-gun than hand-written constraints, because it does not allow the user to specify illegal partitioning rules (i.e. it is always guaranteed that every row ends up in exactly one partition).

Of course, it's far more work than either of the above proposals, but maybe we can get there step by step? Maybe NikhilS's proposal can be seen as a step towards such a beast?

Feedback of any form is very welcome.

Regards

Markus

