> - To get this close it needs to get an estimate of the sampling
>   overhead. It does this by a little calibration loop that is run
>   once per backend. If you don't do this, you end up assuming all
>   tuples take the same time as tuples with the overhead, resulting
>   in nodes apparently taking longer than their parent nodes.
>   Incidentally, I measured the overhead to be about 3.6us per tuple
>   per node on my (admittedly slightly old) machine.

Could this be deferred until the first EXPLAIN ANALYZE, so that we
aren't paying the overhead of the calibration in all backends, even
the ones that won't be explaining?

