On Tue, Aug 9, 2016 at 12:47 AM, Ilya Kosmodemiansky <
ilya.kosmodemian...@postgresql-consulting.com> wrote:

> On Mon, Aug 8, 2016 at 7:03 PM, Bruce Momjian <br...@momjian.us> wrote:
> > It seems asking users to run pg_test_timing before deploying to check
> > the overhead would be sufficient.
>
> I'm not sure. Time measurement for waits is slightly more complicated
> than time measurement for EXPLAIN ANALYZE: the right workload combined
> with a straightforward use of gettimeofday can cause huge
> overhead.


What makes you think so?  Both my thoughts and observations are the
opposite: it's way easier to get huge overhead from EXPLAIN ANALYZE than
from measuring wait events.  The current wait events are themselves quite
heavyweight, involving syscalls, context switches and so on.  In contrast,
EXPLAIN ANALYZE calls gettimeofday for very cheap operations, like
transferring a tuple from one executor node to another.
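
For reference, the per-call cost is easy to estimate yourself. Below is a
minimal sketch in the spirit of pg_test_timing (not its actual
implementation; the loop count is arbitrary): read a monotonic clock in a
tight loop and divide by the iteration count.

    /*
     * Sketch of estimating per-call timer overhead, assuming a POSIX
     * system with clock_gettime().  Build with e.g.: cc -O2 timing.c
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    static inline uint64_t
    now_ns(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t) ts.tv_sec * 1000000000u + ts.tv_nsec;
    }

    int
    main(void)
    {
        const int   loops = 10 * 1000 * 1000;
        uint64_t    start = now_ns();

        for (int i = 0; i < loops; i++)
            (void) now_ns();    /* the call being priced */

        uint64_t    elapsed = now_ns() - start;

        printf("avg per timer call: %.1f ns\n",
               (double) elapsed / loops);
        return 0;
    }

Even at a few tens of nanoseconds per call, one timer call per tuple (as
in EXPLAIN ANALYZE) adds up much faster than one per wait event, which is
already bracketed by a syscall or context switch.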


> That's why proper testing is important: if we see a
> significant performance drop with, for example, large
> shared_buffers at the same concurrency, that shows gettimeofday is
> too expensive to use. Am I correct that we do not have such accurate
> tests now?
>
>

Do you think that large shared_buffers is a kind of stress test for wait
event monitoring? If so, why?

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
