Nice one, Martijn - we have an immediate need for this: one of our sizeable
queries under experimentation took 3 hours without EXPLAIN ANALYZE, then
over 20 hours with it...

- Luke 


On 5/9/06 2:38 PM, "Martijn van Oosterhout" <kleptog@svana.org> wrote:

> On Tue, May 09, 2006 at 05:16:57PM -0400, Rocco Altier wrote:
>>> - To get this close it needs to get an estimate of the sampling
>>> overhead. It does this by a little calibration loop that is run
>>> once per backend. If you don't do this, you end up assuming all
>>> tuples take the same time as tuples with the overhead, resulting in
>>> nodes apparently taking longer than their parent nodes. Incidentally,
>>> I measured the overhead to be about 3.6us per tuple per node on my
>>> (admittedly slightly old) machine.
>> 
>> Could this be deferred until the first EXPLAIN ANALYZE, so that we
>> aren't paying the calibration overhead in all backends, even the ones
>> that won't be explaining?
> 
> If you look, it's only done on the first call to InstrAlloc(), which
> should be when you run EXPLAIN ANALYZE for the first time.
> 
> In any case, the calibration is limited to half a millisecond (that's
> 500 microseconds), and it'll be less on fast machines.
> 
> Have a nice day,
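
For anyone curious, here is a rough sketch of the calibration idea Martijn
describes above. It is not the actual patch, just an illustration: the names
(calibrate_instr_overhead, ensure_instr_calibrated, SAMPLE_CAP_USEC,
instr_time_overhead) are made up, and it assumes gettimeofday()-based timing.
The idea is that on the first InstrAlloc() in a backend you time a short loop
of clock-read pairs, capped at roughly 500 microseconds, and remember the
average cost of a pair so that EXPLAIN ANALYZE can subtract it per tuple per
node.

/*
 * Illustrative sketch only; not the actual instrumentation patch.
 * Estimate the per-sample timing overhead once per backend.
 */
#include <sys/time.h>
#include <stdbool.h>

static double instr_time_overhead = 0.0;   /* seconds per start/stop pair */
static bool   instr_calibrated = false;

#define SAMPLE_CAP_USEC 500     /* stop calibrating after ~0.5 ms */

static double
elapsed_usec(const struct timeval *a, const struct timeval *b)
{
    return (b->tv_sec - a->tv_sec) * 1e6 + (b->tv_usec - a->tv_usec);
}

static void
calibrate_instr_overhead(void)
{
    struct timeval start, t1, t2;
    long        pairs = 0;

    gettimeofday(&start, NULL);
    do
    {
        /* one pair = the two clock reads a plan node does per tuple */
        gettimeofday(&t1, NULL);
        gettimeofday(&t2, NULL);
        pairs++;
    } while (elapsed_usec(&start, &t2) < SAMPLE_CAP_USEC);

    /* average cost of one start/stop pair, in seconds */
    instr_time_overhead = elapsed_usec(&start, &t2) / pairs / 1e6;
    instr_calibrated = true;
}

/*
 * Would be called from InstrAlloc(), so calibration runs only once per
 * backend, and only in backends that actually run EXPLAIN ANALYZE.
 */
void
ensure_instr_calibrated(void)
{
    if (!instr_calibrated)
        calibrate_instr_overhead();
}

Counting start/stop pairs rather than single clock reads mirrors what a plan
node does for each tuple, which is roughly what the 3.6us-per-tuple-per-node
figure above is measuring.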


