Greg Stark wrote:
> Josh Berkus <josh@agliodbs.com> writes:
> > That's not really practical. There are currently 5 major query tuning
> > parameters, not counting the memory adjustments which really can't be left
> > out. You can't realistically test all combinations of 6 variables.

> I don't think it would be very hard at all actually.
> [snip]
> What's needed is for the explain plan to total up the costing penalties
> independently. So the result would be something like
>
> 1000 * random_page_cost + 101 * sequential_page_cost + 2000 * index_tuple_cost
> + ...
>
> In other words a tuple like <1000,101,2000,...>
>
> And explain analyze would produce the above tuple along with the resulting
> time.
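
Just to check I'm reading that right: each plan would carry its raw work
counts, and the single cost figure EXPLAIN prints today is just that tuple
dotted with the current settings. A quick sketch of what I mean (parameter
names taken from your example, counts and settings invented):

    # One plan's cost decomposition -- the counts are made up for illustration.
    coefficients = {
        "random_page_cost":      1000,   # random page fetches
        "sequential_page_cost":   101,   # sequential page fetches
        "index_tuple_cost":      2000,   # index tuples examined
        # ... one entry per cost parameter
    }

    # Current settings (again, purely illustrative values).
    settings = {
        "random_page_cost":      4.0,
        "sequential_page_cost":  1.0,
        "index_tuple_cost":      0.005,
    }

    # Today's single cost estimate is just the dot product of the two.
    total_cost = sum(n * settings[name] for name, n in coefficients.items())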

> Some program would have to gather these values from the log or stats data,
> assemble them into a large linear system, and solve for values that minimize
> the divergence from the observed times.
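
The fitting step sounds tractable too: given one such tuple plus the measured
runtime for each of a batch of queries, it's an ordinary least-squares
problem. A rough sketch, assuming numpy and with the observations invented:

    import numpy as np

    # One row per query: the coefficient tuple EXPLAIN would report
    # (random_page, sequential_page, index_tuple, ...).  Values invented.
    A = np.array([
        [1000.0,  101.0, 2000.0],
        [  10.0, 5000.0,    0.0],
        [ 300.0,  250.0,  800.0],
        [2500.0,   40.0, 4000.0],
    ])

    # Measured execution times for the same queries, in milliseconds (invented).
    t = np.array([95.0, 42.0, 18.0, 230.0])

    # Find the per-unit costs that minimise the squared divergence between
    # predicted (A @ costs) and observed times.
    costs, residuals, rank, _ = np.linalg.lstsq(A, t, rcond=None)

    print(dict(zip(["random_page_cost", "sequential_page_cost",
                    "index_tuple_cost"], costs)))

(You'd presumably rescale the result so the sequential page cost comes out at
1.0, since the planner only cares about the ratios between the parameters.)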

You'd only need to log them when they diverged from the expected values
anyway. That should settle down to fairly low activity pretty quickly (or
we're wasting our time). Should they go to the stats collector rather than
the logs?
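
Something as crude as a relative-error threshold would probably do for
deciding what to log (threshold invented):

    # Log only the plans whose actual runtime strays far from the prediction.
    def worth_logging(predicted_ms, actual_ms, threshold=0.5):
        return abs(actual_ms - predicted_ms) > threshold * predicted_ms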


> (Also, currently explain analyze has overhead that makes this impractical.
> Ideally it could subtract out its overhead so the solutions would be accurate
> enough to be useful.)

Don't we only need the top-level figures though? There's no need to record
timings for each stage, just the work completed.
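
If the per-timing-call overhead is roughly constant, subtracting it back out
of the top-level figure seems simple enough, along these lines (the overhead
number is invented; you'd have to measure it on the machine in question):

    # Crude correction for EXPLAIN ANALYZE's instrumentation overhead,
    # assuming the measured time is inflated by a fixed cost per timing call.
    PER_CALL_OVERHEAD_MS = 0.0005   # invented; measure on the target box

    def corrected_runtime_ms(measured_ms, timing_calls):
        return measured_ms - timing_calls * PER_CALL_OVERHEAD_MS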


--
  Richard Huxton
  Archonet Ltd
