On Wed, Sep 9, 2015 at 8:36 AM, Pavel Stehule <pavel.steh...@gmail.com>
wrote:

>
>> Please find attached v4.
>>
>
> It is better
>

One important thing to notice, and it probably deserves a code comment: any
modification of the slot fields done by the info sender backend must be done
before actually sending the message via the shared memory queue (or before
detaching from it, if there is nothing to be sent).  Otherwise it is possible
that the is_processed flag will be set on a slot that has already been reused
by the receiving backend.
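
To illustrate, here is a rough sketch of the ordering I mean (only
is_processed comes from the patch; the struct layout, function and variable
names below are made up for this example, they are not the actual code):

#include "postgres.h"
#include "storage/shm_mq.h"

/* Hypothetical slot layout, just for this sketch. */
typedef struct CmdStatusSlot
{
    bool        is_processed;   /* the flag discussed above */
    /* ... whatever else the receiving backend inspects ... */
} CmdStatusSlot;

static void
send_backend_info(CmdStatusSlot *slot, shm_mq_handle *mqh,
                  const char *payload, Size len)
{
    /*
     * Update every slot field the receiver may look at *before* touching
     * the queue.  As soon as shm_mq_send() (or a detach, when there is
     * nothing to send) completes, the receiver can wake up, consume the
     * message and immediately reuse the slot, so any later write from this
     * backend could land on a slot that already belongs to someone else.
     */
    slot->is_processed = true;

    /* Only now hand the data over. */
    (void) shm_mq_send(mqh, len, payload, false);
}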

I could hit that problem rather easily while doing

select pg_cmdstatus(pid,$1) from pg_stat_activity where
pid<>pg_backend_pid() \watch 1

and running pgbench with >= 8 connections in the background.  The observed
effect is that after a few successful runs, pg_cmdstatus() never returns,
because the next signaled backend loops through the slots and doesn't find
an unprocessed one (it was already marked as processed by the backend that
responded just before it).
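
(For the record, the pgbench side needs nothing special; anything along the
lines of

pgbench -i bench && pgbench -c 8 -j 4 -T 600 bench

is enough to keep 8 backends busy; the database name and duration here are
arbitrary.)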

> Two notices:
>
> 1. The communication mechanism can be used more wide, than only for this
> purpose. We can introduce a SendInfoHook - and it can be used for any
> customer probes - memory, cpu, ...
>

Not sure if for CPU you can get any more insight than an external tool like
top(1) will provide.  For memory it might indeed be useful in some way
(memory context inspection?).

So there's certainly space for generalization.  Rename it to
pg_backend_info()?

> 2. With your support for explain of nested queries we have all what we need
> for integration auto_explain to core.
>

Well, I don't quite see how that follows.  What I'm really after is bringing
instrumentation support to this, so that not only the plan can be extracted,
but also the rows/loops/times accumulated so far during the query run.

--
Alex
