On 09/24/2015 07:04 PM, Tom Lane wrote:
> Tomas Vondra <tomas.von...@2ndquadrant.com> writes:
>> But what about computing the expected number of batches, but always
>> starting execution assuming no batching? Only if we actually fill
>> work_mem would we start batching, using the expected number of batches.
> Hmm. You would likely be doing the initial data load with a "too
> small" numbuckets for single-batch behavior, but if you successfully
> loaded all the data then you could resize the table at little
> penalty. So yeah, that sounds like a promising approach for cases
> where the initial rowcount estimate is far above reality.
I don't understand the comment about a "too small" numbuckets - isn't
using a smaller numbuckets exactly the point of the proposed limit? The
batching is merely a consequence of how bad the over-estimate is.
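
To make that concrete, the sizing I have in mind looks roughly like the
sketch below. This is only illustrative - it's not the actual
ExecChooseHashTableSize() code, and the cap value is made up:

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative constants - the cap is an assumption, not a real GUC. */
#define NTUP_PER_BUCKET   1
#define MAX_NBUCKETS      (1U << 22)

static uint32_t next_pow2(uint32_t n)
{
    uint32_t p = 1;

    while (p < n)
        p <<= 1;
    return p;
}

static void
choose_sizes(double est_rows, double tuple_width, double work_mem_bytes,
             uint32_t *nbuckets, uint32_t *nbatch)
{
    double inner_bytes = est_rows * tuple_width;
    double wanted = est_rows / NTUP_PER_BUCKET;

    /* nbuckets follows the estimate only up to the cap, so an insane
     * over-estimate cannot blow up the bucket array ... */
    *nbuckets = next_pow2((uint32_t) (wanted > MAX_NBUCKETS ? MAX_NBUCKETS
                                                            : wanted));

    /* ... and the over-estimate instead shows up as a larger nbatch. */
    *nbatch = 1;
    if (inner_bytes > work_mem_bytes)
        *nbatch = next_pow2((uint32_t) ceil(inner_bytes / work_mem_bytes));
}

int main(void)
{
    uint32_t nbuckets, nbatch;

    /* 100M estimated rows, 64-byte tuples, 4MB work_mem */
    choose_sizes(100e6, 64, 4.0 * 1024 * 1024, &nbuckets, &nbatch);
    printf("nbuckets = %u, nbatch = %u\n", nbuckets, nbatch);
    return 0;
}

So even a wildly inflated row estimate only drives nbatch up; the bucket
array itself stays bounded by the limit.
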
> But I kinda thought we did this already, actually.
I don't think so - I believe we haven't modified this aspect at all. It
may not have been as pressing thanks to NTUP_PER_BUCKET=10 in the past.
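
FWIW the flow I'm thinking of would look something like this - again
only a sketch with made-up helper and field names, not the actual
MultiExecHash() code:

#include <stddef.h>

#define NTUP_PER_BUCKET 1        /* as above, illustrative */

typedef struct HashState
{
    size_t nbuckets;         /* kept at the capped value during the load */
    size_t nbatch;           /* stays 1 until work_mem actually fills up */
    size_t planned_nbatch;   /* nbatch computed from the planner estimate */
    size_t mem_used;
    size_t work_mem;
    size_t ntuples;
} HashState;

static void
insert_tuple(HashState *hs, size_t tuple_bytes)
{
    hs->mem_used += tuple_bytes;
    hs->ntuples++;

    if (hs->nbatch == 1 && hs->mem_used > hs->work_mem)
    {
        /* We really do exceed work_mem, so only now switch to the
         * number of batches we expected at plan time. */
        hs->nbatch = hs->planned_nbatch;
        /* ... repartition the tuples loaded so far, spill the other
         * batches to temp files, etc. (omitted) ... */
    }
}

static void
finish_load(HashState *hs)
{
    if (hs->nbatch == 1)
    {
        /* Everything fit into work_mem - grow the bucket array to
         * match the real tuple count, which is cheap compared to
         * the load itself. */
        size_t wanted = hs->ntuples / NTUP_PER_BUCKET + 1;

        while (hs->nbuckets < wanted)
            hs->nbuckets <<= 1;
        /* ... reallocate the array and re-bucket in-memory tuples ... */
    }
}

The point being that an over-estimate costs us nothing up front - the
planned nbatch just sits unused unless work_mem really fills up, and the
resize at the end of the load fixes the "too small" nbuckets.
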
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services