2017-05-13 14:38 GMT+02:00 Amit Kapila <amit.kapil...@gmail.com>:

> On Wed, May 10, 2017 at 10:10 PM, Remi Colinet <remi.coli...@gmail.com>
> wrote:
> >
> > Parallel queries can also be monitored. The same mechanism is used to
> > monitor child workers, with a slight difference: the main worker requests
> > the child workers' progress directly in order to dump the whole progress
> > tree into shared memory.
> >
>
> What if there is any error in the worker (like "out of memory") while
> gathering the statistics?  It seems both for workers as well as for
> the main backend it will just error out.  I am not sure if it is a
> good idea to error out the backend or parallel worker as it will just
> end the query execution.


The handling of a progress report starts with the creation of a MemoryContext
attached to CurrentMemoryContext. Then a small amount of memory (a few KB) is
allocated. The handling of the progress report could indeed exhaust memory and
fail the backend request, but in such a situation the backend could also have
failed even without any progress request.
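
For reference, the allocation pattern is roughly as sketched below. This is
only an illustration, not the exact patch code: the context name and the
comments about the report state are made up for the example.

/*
 * Sketch only -- illustrative names, not the exact patch code.
 * The report memory lives in its own context under CurrentMemoryContext,
 * so an error (or the end of the report) releases everything at once.
 */
static void
progress_report_example(void)
{
    MemoryContext progress_cxt;
    MemoryContext oldcxt;

    progress_cxt = AllocSetContextCreate(CurrentMemoryContext,
                                         "ProgressReport",
                                         ALLOCSET_DEFAULT_SIZES);
    oldcxt = MemoryContextSwitchTo(progress_cxt);

    /* allocate the few KB of report state here, e.g. with palloc0() */

    MemoryContextSwitchTo(oldcxt);

    /* ... build and dump the progress tree ... */

    MemoryContextDelete(progress_cxt); /* frees all report memory at once */
}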


> Also, even if it is okay, there doesn't seem
> to be a way by which a parallel worker can communicate the error back
> to master backend, rather it will just exit silently which is not
> right.
>

If a child worker fails, it will not respond to the main backend's request.
The main backend will resume its execution after a 5 second timeout (a GUC
parameter could be added for this), in which case the report will only be
partially filled.
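
The waiting logic could look roughly like the sketch below. The names
(wait_for_worker_report, the done flag, the timeout constants) are
illustrative, not the actual patch code; the point is that a timeout leaves
that part of the tree empty instead of raising an error.

/*
 * Sketch only -- illustrative names.  The main backend waits for a child
 * worker to publish its part of the report and gives up after ~5 seconds.
 */
#define PROGRESS_TIMEOUT_MS   5000
#define PROGRESS_POLL_MS      100

static bool
wait_for_worker_report(volatile bool *worker_done)
{
    int elapsed_ms = 0;

    while (elapsed_ms < PROGRESS_TIMEOUT_MS)
    {
        CHECK_FOR_INTERRUPTS();

        if (*worker_done)               /* worker has written its report */
            return true;

        pg_usleep(PROGRESS_POLL_MS * 1000L);
        elapsed_ms += PROGRESS_POLL_MS;
    }

    return false;                       /* timed out: report stays partial */
}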

If the main backend fails, the requesting backend will receive a response such
as:

test=# PROGRESS 14611;
 PLAN PROGRESS
----------------
 <backend timeout>
(1 row)

test=#

and the child workers will still write their responses to shared memory. These
responses will simply not be collected by the main backend, since it has
failed.
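
For the worker side, the publication could be sketched as below. Again the
slot layout, ProgressSlot and progress_publish() are only illustrative names;
the idea is that the flag is set last, so a reader never sees the flag without
the data.

/*
 * Sketch only -- illustrative shared-memory slot, not the actual layout.
 */
typedef struct ProgressSlot
{
    int     worker_pid;
    Size    len;
    bool    done;
    char    data[FLEXIBLE_ARRAY_MEMBER];
} ProgressSlot;

static void
progress_publish(ProgressSlot *slot, const char *report, Size len)
{
    memcpy(slot->data, report, len);
    slot->len = len;
    slot->worker_pid = MyProcPid;
    pg_memory_barrier();        /* make the data visible before the flag */
    slot->done = true;
}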

Thx & regards
Remi


> --
> With Regards,
> Amit Kapila.
> EnterpriseDB: http://www.enterprisedb.com
>
