On Fri, Jul 27, 2018 at 3:17 AM, Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> wrote:
> Apart from the complexity, there's also a possibility that this
> skipping will actually reduce efficiency in normal cases. Consider
> a case where A and B have exactly matching partitions. The current
> partition matching algorithm compares any given range/bound only
> once, so we have an O(n) algorithm. If we use a binary search,
> however, every range comparison costs O(log n), giving an
> O(n log n) algorithm with unnecessary comparisons during the binary
> search. The current algorithm is always O(n), whereas binary search
> would be O(n log n) with a possibility of sub-O(n) complexity in
> some cases. I would go for an algorithm which has a consistent time
> complexity and which is efficient in normal cases, rather than
> worrying about some cases which are not practical.
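[Editor's note: the trade-off quoted above can be sketched in miniature. This is a toy illustration in Python, not the actual PostgreSQL partition-matching code; the bound lists and function names are invented for the example. Both functions pair up two sorted lists of partition bounds, but the merge-style pass touches each bound a constant number of times (O(n)), while the binary-search variant pays a log factor per bound (O(n log n)) even when the two sides match exactly.]

```python
from bisect import bisect_left

def match_bounds_merge(a, b):
    """Single merge-style pass over two sorted bound lists: O(n).

    Each bound is compared at most a constant number of times.
    Returns (bound_from_a, bound_from_b) pairs, with None where a
    bound has no partner on the other side.
    """
    i = j = 0
    pairs = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            pairs.append((a[i], b[j]))
            i += 1
            j += 1
        elif a[i] < b[j]:
            pairs.append((a[i], None))   # extra partition on side A
            i += 1
        else:
            pairs.append((None, b[j]))   # extra partition on side B
            j += 1
    pairs.extend((x, None) for x in a[i:])
    pairs.extend((None, y) for y in b[j:])
    return pairs

def match_bounds_bsearch(a, b):
    """Binary-search variant: O(log n) per bound, O(n log n) overall.

    When a and b match exactly -- the normal case discussed in the
    thread -- every lookup still pays the log factor, which is the
    inefficiency Ashutosh is pointing out.
    """
    pairs = []
    for x in a:
        k = bisect_left(b, x)
        pairs.append((x, b[k]) if k < len(b) and b[k] == x else (x, None))
    for y in b:
        k = bisect_left(a, y)
        if not (k < len(a) and a[k] == y):
            pairs.append((None, y))
    return pairs
```

For exactly matching or nearly matching bound lists the two functions produce the same pairing; the merge pass just gets there with fewer comparisons.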
Yeah, I think that's a good point. The normal case here will be that the partition bounds are equal, or that there are a few extra partitions on one side that don't exist on the other. We don't want other cases to be crazily inefficient, but I suspect that in practice, if the partition bounds aren't pretty close to matching up exactly, we're going to end up unable to do a partition-wise join at all. It's not very likely that somebody has partitioned two tables in randomly different ways, decides to join them anyway, and then it turns out that the partition bounds happen to be compatible enough that this algorithm works.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company