On Tue, Mar 21, 2017 at 7:41 AM, Ashutosh Bapat
<ashutosh.ba...@enterprisedb.com> wrote:
> On Mon, Mar 20, 2017 at 10:17 PM, Ashutosh Bapat
> <ashutosh.ba...@enterprisedb.com> wrote:
>>> On further testing of this patch, I found another case where it
>>> shows a regression: the time taken with the patch is around 160
>>> seconds, and without it, around 125 seconds.
>>> Another minor thing to note is that planning time is almost double
>>> with this patch. I understand this is aimed at scenarios with really
>>> big data, so it may not be a serious issue there, but it would be
>>> good to keep an eye on it so that it doesn't become prohibitive for
>>> a really large number of tables.
>> Right, planning time would be proportional to the number of partitions
>> at least in the first version. We may improve upon it later.
>>> Please find the attached .out file to check the output I observed,
>>> and let me know if any more information is required.
>>> The schema and data were similar to the previously shared schema,
>>> with the addition of more data for this case. The parameter settings
>>> used were:
>>> work_mem = 1GB
>>> random_page_cost = seq_page_cost = 0.1
> This doesn't look good. Why did you set both these costs to the same value?

That's a perfectly reasonable configuration if the data is in memory
or on a medium with fast random access, like an SSD.
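For reference, a minimal sketch of applying the quoted settings in a
psql session (values taken directly from the settings quoted above):

    -- Tell the planner that random and sequential page reads cost the
    -- same, which is reasonable when the data is cached or on an SSD.
    SET random_page_cost = 0.1;
    SET seq_page_cost = 0.1;
    -- Allow large in-memory sorts and hashes, as in the reported test.
    SET work_mem = '1GB';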

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
