Tom Lane wrote:

No, they are not that easy to determine. In particular I think the idea of automatically feeding back error measurements is hopeless, because you cannot tell which parameters are wrong.

Isn't it just a matter of solving a system of equations with n variables (n being the number of parameters), where each equation describes the run time of a particular query? E.g. something like this for a sequential scan over 1000 rows with 2 operators evaluated per row that took 2 seconds (simplified so that the costs are actual timings rather than relative costs to a base value):

`1000 * sequential_scan_cost + 1000 * 2 * cpu_operator_cost = 2.0 seconds`

With a sufficient number of equations (more than n may be needed, since not all query plans use all the parameters) this system can be solved for the particular query mix that was used. E.g. a second sequential scan over 2000 rows with 1 operator per row that took 3 seconds gives

`2000 * sequential_scan_cost + 2000 * 1 * cpu_operator_cost = 3.0 seconds`

and together the two equations yield:

`sequential_scan_cost = 1 ms`
`cpu_operator_cost = 0.5 ms`
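As a sketch of the idea (not actual PostgreSQL code): collect, per observed query, the counts of each cost-model primitive and the measured run time, then fit the per-primitive costs with least squares, which also copes with an overdetermined, noisy system. The numbers below are the two example scans from above.

```python
import numpy as np

# One row per observed query: [rows scanned, operator evaluations].
# b holds the measured run times in seconds.
A = np.array([
    [1000.0, 2000.0],  # 1000 rows, 2 operators per row, took 2 s
    [2000.0, 2000.0],  # 2000 rows, 1 operator per row, took 3 s
])
b = np.array([2.0, 3.0])

# Least squares handles more equations than parameters and measurement noise;
# with exactly n independent equations it reduces to the exact solution.
params, *_ = np.linalg.lstsq(A, b, rcond=None)
seq_scan_cost, cpu_operator_cost = params
print(seq_scan_cost)      # per-row sequential scan cost, in seconds
print(cpu_operator_cost)  # per-operator cost, in seconds
```

With the two example measurements this recovers 0.001 s (1 ms) per row and 0.0005 s (0.5 ms) per operator.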

This could probably be implemented with very little overhead compared to the actual run times of the queries.

Regards,
Marinos
