2015-09-17 11:55 GMT+02:00 Shulgin, Oleksandr <oleksandr.shul...@zalando.de>:

> On Wed, Sep 16, 2015 at 8:07 PM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
>
>>
>> 2015-09-16 16:31 GMT+02:00 Shulgin, Oleksandr <
>> oleksandr.shul...@zalando.de>:
>>
>>>
>>> I've added the timeout parameter to the pg_cmdstatus call, and more
>>> importantly to the sending side of the queue, so that one can limit the
>>> potential effect of handling the interrupt in case something goes really
>>> wrong.
>>>
>>
>> I don't think introducing a new user-visible timer is a good idea. This
>> timer should be invisible.
>>
>> My idea: send a signal and wait 1 sec, then check whether the target
>> process is still alive. Stop if it is not. Wait another 5 sec, then check
>> the target process again. Stop if that process is no longer alive, or raise
>> the warning "requested process doesn't communicate, waiting..". The final
>> cancel should be done by statement_timeout.
>>
>> what do you think about it?
>>
>
> That won't work really well with something like what I do when testing
> this patch, namely:
>
> postgres=# select pid, array(select pg_cmdstatus(pid, 1, 10)) from
> pg_stat_activity where pid<>pg_backend_pid() \watch 1
>
> while also running pgbench with the -C option (to create a new connection
> for every transaction).  When a targeted backend exits before it can handle
> the signal, the receiving process keeps waiting forever.
>
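(For completeness, the pgbench side of that test can be as simple as
"pgbench -C -c 4 -T 60 postgres"; the client count and duration are arbitrary
here, only -C, i.e. a new connection per transaction, matters.)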

no - on every timeout you have to check whether the targeted backend is still
alive, and if it is not, you cancel. It is 100% safe.
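
To make the intent concrete, here is a rough sketch of that checking loop,
written in plpgsql purely for illustration (the real logic would of course
live in the C code waiting on the message queue and would also stop as soon
as the answer arrives; the pid and the 1s/5s intervals are only placeholders
taken from the description above):

do $$
declare
  target_pid int := 12345;  -- hypothetical pid of the backend we signalled
begin
  perform pg_sleep(1);      -- wait 1 sec after sending the signal
  if not exists (select 1 from pg_stat_activity where pid = target_pid) then
    return;                 -- target backend is gone, stop waiting
  end if;
  perform pg_sleep(5);      -- wait another 5 sec
  if not exists (select 1 from pg_stat_activity where pid = target_pid) then
    return;                 -- target backend is gone, stop waiting
  end if;
  raise warning 'requested process doesn''t communicate, waiting..';
  -- from here on the final cancel is left to statement_timeout
end
$$;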


>
> The statement_timeout in this case will stop the whole select, not just the
> individual function call.  Unless you wrap it in plpgsql to set
> statement_timeout and catch QUERY_CANCELED, but then you won't be able to
> ^C the whole select.  The ability to set a (short) timeout for the function
> itself proves to be a really useful feature to me.
>

You cannot handle QUERY_CANCELED in plpgsql. Some internal timeout is needed,
but it should not be visible - any new GUC increases PostgreSQL's complexity,
and there is no reason to do that here.


>
> We can still communicate some warnings to the client when no timeout is
> specified (and actually make 0 the default value).
>
>>> What I'm now thinking about is that we could probably extend this backend
>>> communication mechanism to query the GUC values in effect in a different
>>> backend, or even to set them.  The obvious candidate to check when you see
>>> some query misbehaving would be work_mem, for example.  Or we could
>>> provide a report of all settings that were overridden in the backend's
>>> session, to the effect of running something like this:
>>>
>>> select * from pg_settings where context = 'user' and setting !=
>>> reset_val;
>>>
>>
>> This is a good idea. I have often been interested in the current settings
>> of a running query. We cannot use a simple query for this, because there is
>> no simple way to push the PID into it, but you can write a function
>> pg_settings_pid() so that the usage can look like:
>>
>> select * from pg_settings_pid(xxxx, possible other params) where ...
>>
>
> I would rather have a more general way to run *readonly* queries in the
> other backend than invent a new function for every occurrence.
>
>>> The obvious candidates to be set externally are the
>>> cmdstatus_analyze/instrument_* settings: when you decide you want to turn
>>> them on, you'd rather do that carefully for a single backend than globally
>>> per-cluster.  One can still modify postgresql.conf and then send SIGHUP to
>>> just a single backend, but I think a more direct way to alter the settings
>>> would be better.
>>>
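For reference, both of those existing options are one-liners (12345 is a
hypothetical backend pid):

-- reload the config file for every backend in the cluster:
select pg_reload_conf();
-- or, from the shell, have just one backend re-read it:
-- kill -HUP 12345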
>>
>> I am 100% for the possibility of reading the settings. But reconfiguration
>> from another process is too hot to handle - it can be made available via an
>> extension, but not via the commonly used tools.
>>
>
> It can be reserved to superuser, and nobody is forcing one to use it
> anyway. :-)
>
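Just to illustrate: restricting such a function to superusers would be a
one-liner, assuming the three-integer signature from the example above:

revoke execute on function pg_cmdstatus(int, int, int) from public;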
>>> In this light, should we rename the API to something like "backend
>>> control" instead of "command status"?  The SHOW/SET syntax could be
>>> extended to support remote setting retrieval/update.
>>>
>>
>> Prepare the API, and this functionality can be part of a reference
>> implementation in contrib.
>>
>> This patch should not have too wide a scope if it is to be finished in this
>> release cycle.
>>
>
> These are just thoughts on what could be achieved using this cross-backend
> communication mechanism, and ideas for generalizing the API.
>

ok


>
> --
> Alex
>
>
