On Thu, Jun 29, 2017 at 3:52 PM, Amit Khandekar <amitdkhan...@gmail.com> wrote:
> So to conclude, I think, we can do this:
>
> Scenario 1:
> Only one partitioned table: the root; all the rest are leaf partitions.
> In this case, it is definitely efficient to just check the root
> partition key, which will be sufficient.
>
> Scenario 2:
> There are a few non-leaf partitioned tables (3-4):
> Open those tables, and follow the 2nd approach above: if we don't find
> any updated partition keys in any of them, well and good. If we do
> find some, fall back to approach 3: for each of the update result
> rels, use the new rd_partcheckattrs bitmap to determine whether it
> uses any of the updated columns. This would be faster than pulling up
> attrs from the quals the way it was done in the patch.
I think we should just have the planner figure out a list of which columns are partitioning columns, either for the named relation or some descendant, and set a flag if that set of columns overlaps the set of columns updated. At execution time, update tuple routing is needed if either that flag is set or some partition included in the plan has a BR UPDATE trigger. Attached is a draft patch implementing that approach.

This could be made more accurate. Suppose table foo is partitioned by a, and some but not all of its partitions are subpartitioned by b. If it so happens that, in a query which updates only b, constraint exclusion eliminates all of the partitions that are subpartitioned by b, it would be unnecessary to enable update tuple routing (unless BR UPDATE triggers are present), but this patch will not figure that out. I don't think that optimization is critical for the first version of this feature; there will be a limited number of users with asymmetrical subpartitioning setups, and if one of them has an idea how to improve this without hurting anything else, they are free to contribute a patch. Other optimizations are possible too, but I don't see any of them as critical either.

I don't think the approach of building a hash table to figure out which result rels have already been created is a good one. That, too, feels like something the planner should figure out, with the executor simply implementing what the planner decided. I haven't figured out exactly how that should work yet, but it seems like it ought to be doable.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers