Rod Taylor wrote:
> On Fri, Feb 25, 2011 at 14:26, Alvaro Herrera <> wrote:
> > Excerpts from Rod Taylor's message of vie feb 25 14:03:58 -0300 2011:
> >
> > > How practical would it be for analyze to keep a record of response
> > > times for given sections of a table as it randomly accesses them, and
> > > generate some kind of a map of expected response times for the pieces
> > > of data it is analysing?
> >
> > I think what you want is random_page_cost that can be tailored per
> > tablespace.
> >
> >
> Yes, that can certainly help, but it does nothing toward finding typical
> hot spots or cached sections of the table and sending that information to
> the planner.
> Between ANALYZE's random sampling and perhaps some metric gathered during
> the actual I/O of queries, we should be able to determine and record which
> pieces of data tend to be hot (in cache, or otherwise readily available)
> and which tend not to be.
> If the planner knew that the value "1" tends to have a much lower fetch
> cost than any other value in the table (because it is cached or otherwise
> readily available), it could choose a plan better suited to that.
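[The sampling idea above could be sketched roughly like this: time random block reads, as ANALYZE's sampling pass might, aggregate mean latency per region of the table, and flag regions that come back fast as likely cached. All names here are hypothetical; this is an illustration of the idea, not a proposed implementation.]

```python
import random
import time
from collections import defaultdict

def sample_block_latencies(read_block, nblocks, samples=100, regions=10):
    """Hypothetical sketch: time random block reads (as an ANALYZE-style
    sampler might) and aggregate the mean latency per region of the table."""
    region_times = defaultdict(list)
    for _ in range(samples):
        blk = random.randrange(nblocks)
        start = time.perf_counter()
        read_block(blk)                     # the actual I/O being measured
        elapsed = time.perf_counter() - start
        region_times[blk * regions // nblocks].append(elapsed)
    return {r: sum(t) / len(t) for r, t in region_times.items()}

def hot_regions(latency_map, threshold):
    """Regions whose mean latency is under the threshold are likely cached
    or otherwise cheap to fetch; a planner could cost them accordingly."""
    return {r for r, avg in latency_map.items() if avg < threshold}
```

The resulting map is exactly the kind of per-section cost information the planner currently has no way to receive.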

Well, one idea I have always had is feeding things the executor finds
back to the optimizer for use in planning future queries.  One argument
against that is that a planned query might run against different data
behavior than the executor saw in the past, but we know whether the
optimizer is planning something for immediate execution or for later
execution, so we could use executor stats only when planning for
immediate execution.

  Bruce Momjian  <>

  + It's impossible for everything to be true. +

Sent via pgsql-hackers mailing list