On Thu, 2007-05-03 at 08:01 +0100, Simon Riggs wrote:
> On Wed, 2007-05-02 at 23:59 +0100, Heikki Linnakangas wrote:
> > Umm, you naturally have just one entry per relation, but we were talking 
> > about how many entries the table needs to hold. Your patch had a 
> > hard-coded value of 1000, which is quite arbitrary.
> We need to think of the interaction with partitioning here. People will
> ask whether we would recommend that individual partitions of a large
> table should be larger/smaller than a particular size, to allow these
> optimizations to kick in.
> My thinking is that database designers would attempt to set partition
> size larger than the sync scan limit, whatever it is. That means:
> - they wouldn't want the limit to vary when cache increases, so we *do*
> need a GUC to control the limit. My suggestion now would be
> large_scan_threshold, since it affects both caching and synch scans.
> - so there will be lots of partitions, so a hardcoded limit of 1000
> would not be sufficient. A new GUC, or a link to an existing one, is
> probably required.
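
If a GUC along the lines Simon suggests were added, its use might look
like this (the name, unit, and default below are purely illustrative;
no such setting exists yet):

    # hypothetical postgresql.conf entry
    large_scan_threshold = 256MB    # scans of relations larger than this
                                    # are treated as "large": eligible for
                                    # synchronized scanning and cache-bypass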

That's a very good point. I don't know how much we can do to fix it now,
though, because it has interactions with the planner too: ideally the
planner would scan the relations in a UNION ALL in an order that depends
on other concurrent queries. I think this will require more thought.

To scale to a larger number of relations being scanned concurrently, I
could take Heikki's recommendation and use a dynamic hash table instead
of the fixed-size array.

        Jeff Davis
