Did you happen to take a look at the other set of queries I sent you, the ones that were showing 39k and 41k in their EXPLAIN results?  Do they show pretty much the same 'lag' from gettimeofday()?
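Re the arithmetic below: (3539.9 ms - ~770 ms) / 215552 rows works out to
roughly 12.8 usec of overhead per row, so the ~6 usec per gettimeofday()
call figure looks right.  For anyone who wants to measure the call
directly on their own box, a quick standalone loop along these lines
should do it (just a sketch, the iteration count is arbitrary):

/* gtod.c - rough timing of gettimeofday() itself.
 * Build with: cc -O2 -o gtod gtod.c
 */
#include <stdio.h>
#include <sys/time.h>

int
main(void)
{
    struct timeval start, end, tv;
    long    i, n = 1000000;
    double  usec;

    gettimeofday(&start, NULL);
    for (i = 0; i < n; i++)
        gettimeofday(&tv, NULL);    /* the call under test */
    gettimeofday(&end, NULL);

    usec = (end.tv_sec - start.tv_sec) * 1000000.0
         + (end.tv_usec - start.tv_usec);
    printf("%.3f usec per gettimeofday() call\n", usec / n);
    return 0;
}

If that prints something up in the 5-6 usec range, then the syscall itself
really is what EXPLAIN ANALYZE is paying for; well under 1 usec would
point the finger somewhere else.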
On Fri, 28 Nov 2003, Tom Lane wrote:

> > ... if it's just me mis-reading the numbers, let me
> > know ... it just "feels" off
>
> Here's what I see:
>
> > time psql -c "explain analyze SELECT count(rec_id) FROM url" 186_archives
>                                                       QUERY PLAN
> --------------------------------------------------------------------------------------------------------------------
>  Aggregate  (cost=11177.19..11177.19 rows=1 width=4) (actual time=3539.180..3539.184 rows=1 loops=1)
>    ->  Seq Scan on url  (cost=0.00..10741.55 rows=174255 width=4) (actual time=0.111..2292.234 rows=215552 loops=1)
>  Total runtime: 3539.922 ms
> (3 rows)
>
> 0.006u 0.000s 0:03.57 0.0%      0+0k 0+0io 0pf+0w
>
> So there doesn't seem to be any discrepancy between what EXPLAIN says
> and what time(1) says.  It is true that the overhead of EXPLAIN ANALYZE
> looks a bit high:
>
> > time psql -c "SELECT count(rec_id) FROM url" 186_archives
>  count
> --------
>  215169
> (1 row)
>
> 0.000u 0.004s 0:00.77 0.0%      0+0k 0+0io 0pf+0w
>
> If I'm doing the arithmetic correctly, this makes the measurement
> overhead about 13 microseconds per row.  Presumably that's almost
> entirely spent on the two gettimeofday() calls, so they are costing
> upwards of 6 microseconds apiece, which seems like a lot for a modern
> processor.  Might be worth griping to the BSD kernel folk...
>
> Still, though, I think we could be proud that we've gotten the price of
> a seqscan down to the point where a couple of gettimeofday() calls per
> row are dominating the runtime.
>
>                       regards, tom lane

----
Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]           Yahoo!: yscrappy              ICQ: 7615664