On Thu, Jun 9, 2016 at 12:56 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Thom Brown <t...@linux.com> writes:
>> On 15 May 2014 at 19:56, Bruce Momjian <br...@momjian.us> wrote:
>>> On Tue, May 13, 2014 at 06:58:11PM -0400, Tom Lane wrote:
>>>> A recent question from Tim Kane prompted me to measure the overhead
>>>> costs of EXPLAIN ANALYZE, which I'd not checked in awhile.  Things
>>>> are far worse than I thought.  On my current server (by no means
>>>> lavish hardware: Xeon E5-2609 @2.40GHz) a simple seqscan can run
>>>> at something like 110 nsec per row:
>
>> Did this idea die, or is it still worth considering?
>
> We still have a problem, for sure.  I'm not sure that there was any
> consensus on what to do about it.  Using clock_gettime(CLOCK_REALTIME)
> if available would be a straightforward change that should ameliorate
> gettimeofday()'s 1-usec-precision-limit problem; but it doesn't do
> anything to fix the excessive-overhead problem.  The ideas about the
> latter were all over the map, and none of them looked easy.
>
> If you're feeling motivated to work on this area, feel free.
How about using both CLOCK_REALTIME and CLOCK_REALTIME_COARSE as the
clock ids in clock_gettime(), wherever each is applicable?  The COARSE
option could be used wherever no precise timing calculation is required,
because on my laptop there is a significant performance difference
(roughly 8x) compared to CLOCK_REALTIME.  If that sounds fine, I will
try to update the code and send a patch.

Regards,
Hari Babu
Fujitsu Australia