On Thursday 10 February 2005 03:09 pm, Darcy Buskermolen wrote:
> On February 10, 2005 10:58 am, Tom Lane wrote:
> > Chris Kratz <[EMAIL PROTECTED]> writes:
> > > Does anyone have any idea why there would be over a 4s difference
> > > between running the statement directly and using EXPLAIN ANALYZE?
> > >
> > >  Aggregate  (cost=9848.12..9848.12 rows=1 width=0) (actual
> > > time=4841.231..4841.235 rows=1 loops=1)
> > >    ->  Seq Scan on answer  (cost=0.00..8561.29 rows=514729 width=0)
> > > (actual time=0.011..2347.762 rows=530576 loops=1)
> > >  Total runtime: 4841.412 ms
> >
> > EXPLAIN ANALYZE's principal overhead is two gettimeofday() kernel calls
> > per plan node execution, so 1061154 such calls here.  I infer that
> > gettimeofday takes about 4 microseconds on your hardware ... which seems
> > a bit slow for modern machines.  What sort of box is it?
>
> dvl reported the same thing on #postgresql some months back, and neilc
> was/is/did looking into it.  I believe he came up with a way to move the
> function call outside of the loop with no ill effects on the rest of the
> expected behavior.

That's interesting to know.  It's not a big deal; we were just curious about 
the cause of the difference, and Tom's explanation makes good sense.  We run 
into the same situation when using a profiler on an application, i.e., 
measuring incurs overhead.

-Chris

