On 6/8/16 9:56 AM, Tom Lane wrote:
> Thom Brown <t...@linux.com> writes:
>> On 15 May 2014 at 19:56, Bruce Momjian <br...@momjian.us> wrote:
>>> On Tue, May 13, 2014 at 06:58:11PM -0400, Tom Lane wrote:
>>>> A recent question from Tim Kane prompted me to measure the overhead
>>>> costs of EXPLAIN ANALYZE, which I'd not checked in a while.  Things
>>>> are far worse than I thought.  On my current server (by no means
>>>> lavish hardware: Xeon E5-2609 @2.40GHz) a simple seqscan can run
>>>> at something like 110 nsec per row:

>> Did this idea die, or is it still worth considering?

> We still have a problem, for sure.  I'm not sure that there was any
> consensus on what to do about it.  Using clock_gettime(CLOCK_REALTIME)
> if available would be a straightforward change that should ameliorate
> gettimeofday()'s 1-usec-precision-limit problem; but it doesn't do
> anything to fix the excessive-overhead problem.  The ideas about the
> latter were all over the map, and none of them looked easy.
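FWIW, here's a quick standalone sketch of the comparison involved (mine,
not anything from the earlier thread), assuming a platform that has
clock_gettime(); the loop count and the use of CLOCK_MONOTONIC as the
stopwatch are just illustrative choices:

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

#define LOOPS 10000000L

/* nanoseconds between two timespecs */
static double
elapsed_ns(const struct timespec *t0, const struct timespec *t1)
{
    return (t1->tv_sec - t0->tv_sec) * 1e9 + (t1->tv_nsec - t0->tv_nsec);
}

int
main(void)
{
    struct timeval tv;
    struct timespec ts, res, t0, t1;

    /* per-call cost of gettimeofday(), which only reports microseconds */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < LOOPS; i++)
        gettimeofday(&tv, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("gettimeofday():  %.1f ns/call\n", elapsed_ns(&t0, &t1) / LOOPS);

    /* per-call cost of clock_gettime(), which reports nanoseconds */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < LOOPS; i++)
        clock_gettime(CLOCK_REALTIME, &ts);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("clock_gettime(): %.1f ns/call\n", elapsed_ns(&t0, &t1) / LOOPS);

    /* advertised resolution of the realtime clock */
    clock_getres(CLOCK_REALTIME, &res);
    printf("clock_getres():  %ld ns\n", (long) res.tv_nsec);

    return 0;
}

Build with something like "cc -O2 clockbench.c" (older glibc needs -lrt;
the file name is made up).  The point is just to put the per-call timer
cost next to the ~110 nsec/row seqscan figure quoted above, since EXPLAIN
ANALYZE takes a pair of timestamps around every row a node returns.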

> If you're feeling motivated to work on this area, feel free.

Semi-related: someone (Robert, I think) recently mentioned investigating "vectorized" executor nodes, where multiple tuples would be processed in one shot. If we had that, presumably the EXPLAIN ANALYZE penalty would be a moot point.
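To put a rough number on that amortization argument, here's a toy sketch
(again mine; the batch size and the trivial per-row "work" are made-up
stand-ins, not any real executor API): per-tuple instrumentation takes a
pair of clock reads for every row, while a batched node would take a pair
per batch, so the instrumentation cost drops by roughly the batch size.

#include <stdio.h>
#include <time.h>

#define NROWS       10000000L
#define BATCH_SIZE  1024L       /* hypothetical batch size */

/*
 * Run NROWS iterations of trivial per-row "work", taking an instrumentation
 * timestamp pair once every rows_per_timing_pair rows, and report the total
 * wall-clock time so the two strategies can be compared.
 */
static double
elapsed_ms(long rows_per_timing_pair)
{
    struct timespec outer0, outer1, t;
    volatile long sink = 0;
    long done = 0;

    clock_gettime(CLOCK_MONOTONIC, &outer0);
    while (done < NROWS)
    {
        clock_gettime(CLOCK_MONOTONIC, &t);     /* "InstrStartNode" analogue */
        for (long r = 0; r < rows_per_timing_pair; r++)
            sink += r;                          /* stand-in for real work */
        clock_gettime(CLOCK_MONOTONIC, &t);     /* "InstrStopNode" analogue */
        done += rows_per_timing_pair;
    }
    clock_gettime(CLOCK_MONOTONIC, &outer1);

    return (outer1.tv_sec - outer0.tv_sec) * 1e3 +
           (outer1.tv_nsec - outer0.tv_nsec) / 1e6;
}

int
main(void)
{
    printf("timestamp pair per tuple: %.0f ms\n", elapsed_ms(1));
    printf("timestamp pair per batch: %.0f ms\n", elapsed_ms(BATCH_SIZE));
    return 0;
}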
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461

