> On further testing of this patch I found another case where it shows a
> regression: the time taken with the patch is around 160 secs and
> without it is 125 secs.
> Another minor thing to note is that planning time is almost doubled
> with this patch. I understand that this is for scenarios with really
> big 'big data', so it may not be a serious issue in such cases, but
> it'd be good if we can keep an eye on this so that it doesn't exceed
> the computational bounds for a really large number of tables.
Right, planning time would be proportional to the number of partitions,
at least in the first version. We may improve upon it later.
> Please find attached the .out file to check the output I witnessed,
> and let me know if any more information is required.
> Schema and data were similar to the previously shared schema, with
> the addition of more data for this case. Parameter settings used were:
> work_mem = 1GB
> random_page_cost = seq_page_cost = 0.1
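For reference, those settings (as quoted above) can be applied per session like this; `random_page_cost` and `seq_page_cost` are standard PostgreSQL planner GUCs:

```sql
-- Session-level settings used for the reported runs
SET work_mem = '1GB';
SET random_page_cost = 0.1;
SET seq_page_cost = 0.1;
```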
The patch does not introduce any new costing model. It costs the
partition-wise join as the sum of the costs of the joins between
partitions. The method for creating the paths for joins between
partitions is the same as that for creating paths for joins between
regular tables, and the method for collecting paths across
partition-wise joins is the same as that for collecting paths across
child base relations. So there is a good chance that the costing for
joins between partitions has a problem which is showing up here. Some
special handling for regular tables versus child tables may be the
root cause, but I have not seen that kind of code so far.
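To make the costing approach described above concrete, here is a minimal sketch (not PostgreSQL source; the function name and cost figures are made up for illustration) of costing a partition-wise join as the sum of its child-join costs:

```python
def partitionwise_join_cost(child_join_costs):
    """Sketch: the partition-wise join is costed as the sum of the
    costs of the per-partition joins, with no extra model on top."""
    return sum(child_join_costs)

# Hypothetical per-partition join costs for a 4-partition table pair.
child_costs = [120.0, 95.5, 130.2, 88.3]
print(partitionwise_join_cost(child_costs))  # 434.0
```

Any systematic error in the per-partition join costs therefore accumulates directly into the total, which is why a problem in the existing join costing could surface only at this scale.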
Can you please provide the outputs of the individual partition joins?
If the plans for the joins between partitions are the same as the ones
chosen within the partition-wise join, we may need to fix the existing
join costing.