Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-11-01 Thread Robert Haas
On Thu, Nov 2, 2017 at 8:01 AM, Craig Ringer  wrote:
> That forces materialization, and I'm guessing part of Tomas's goal
> here is to prevent the need to materialize into a temp table /
> tuplestore / etc.

I get that, but if you're running a query like "SELECT * FROM
bigtable", you don't need parallel query in the first place, because a
single backend is quite capable of sending back the rows as fast as a
client can read them.  If you're running a query like "SELECT * FROM
bigtable WHERE " then that's a good use
case for parallel query, but then materializing it isn't that bad
because the result set is a lot smaller than the original table.

I am not disputing the idea that there are *some* cases where parallel
query is useful and materialization is still undesirable, of course.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-11-01 Thread Craig Ringer
On 2 November 2017 at 10:01, Robert Haas  wrote:

> I think that still leaves a fair number of scenarios to consider, and
> the error handling by itself seems pretty thorny.  Plus it's kind of a
> weird mode and, like Craig, I'm not really sure what it gets you.
> Maybe if somebody has the use case where this would help, they should
> just do:
>
> CREATE TEMP TABLE x AS SELECT * FROM t2 WHERE ...;
> DECLARE x CURSOR FOR SELECT * FROM x;

That forces materialization, and I'm guessing part of Tomas's goal
here is to prevent the need to materialize into a temp table /
tuplestore / etc.

It's not clear to me why an unbounded portal fetch, using the TCP
socket windows and buffers for flow control, isn't sufficient.

Tomas, can you explain the use case a bit more?


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-11-01 Thread Robert Haas
On Wed, Nov 1, 2017 at 3:47 AM, Tomas Vondra
 wrote:
> But maybe there's a simpler option - what if we only allow fetches from
> the PARALLEL cursor while the cursor is open? That is, this would work:
>
> BEGIN;
> ...
> DECLARE x PARALLEL CURSOR FOR SELECT * FROM t2 WHERE ...;
> FETCH 1000 FROM x;
> FETCH 1000 FROM x;
> FETCH 1000 FROM x;
> CLOSE x;
> ...
> COMMIT;
>
> but adding any other command between the OPEN/CLOSE commands would fail.
> That should close all the holes with parallel-unsafe stuff, right?

I think that still leaves a fair number of scenarios to consider, and
the error handling by itself seems pretty thorny.  Plus it's kind of a
weird mode and, like Craig, I'm not really sure what it gets you.
Maybe if somebody has the use case where this would help, they should
just do:

CREATE TEMP TABLE x AS SELECT * FROM t2 WHERE ...;
DECLARE x CURSOR FOR SELECT * FROM x;

Since e9baa5e9fa147e00a2466ab2c40eb99c8a700824, that ought to work.
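Spelling the workaround out (table and predicate names are stand-ins; the point is that the CTAS step itself can use a parallel plan, with only the leader inserting, and the cursor then scans the small materialized result serially):

```sql
BEGIN;

-- The SELECT feeding the CTAS may run with a parallel plan;
-- only the leader performs the inserts into the temp table.
CREATE TEMP TABLE matview_x AS
SELECT * FROM t2 WHERE expensive_predicate(t2.col);

-- The cursor then reads the (much smaller) materialized result serially.
DECLARE c CURSOR FOR SELECT * FROM matview_x;
FETCH 1000 FROM c;
CLOSE c;

COMMIT;
```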

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-11-01 Thread Robert Haas
On Wed, Nov 1, 2017 at 7:49 AM, Craig Ringer  wrote:
> If the client wants to fetch in chunks it can use a portal and limited
> size fetches. That shouldn't (?) be parallel-unsafe, since nothing
> else can happen in the middle anyway.

I believe sending a limited-size fetch forces serial execution
currently.  If it's true that nothing else can happen in the middle
then we could relax that, but I don't see why that should be true?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-10-31 Thread Craig Ringer
> Now, I agree this is somewhat more limited than I hoped for, but OTOH it
> still solves the issue I initially aimed for (processing large query
> results in an efficient way).

I don't quite understand this part.

We already send results to the client in a stream unless it's
something we have to materialize, in which case a cursor won't help
anyway.

If the client wants to fetch in chunks it can use a portal and limited
size fetches. That shouldn't (?) be parallel-unsafe, since nothing
else can happen in the middle anyway.

But in most cases the client can just fetch all, and let the socket
buffering take care of things, reading results only when it wants
them, and letting the server block when the windows are full.
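For illustration, the chunked alternative mentioned above looks like this at the SQL level (`bigtable` is a stand-in name); the client issues bounded fetches against an ordinary portal instead of reading one unbounded result:

```sql
-- Chunked reads through an ordinary (non-PARALLEL) cursor/portal:
BEGIN;
DECLARE c NO SCROLL CURSOR FOR SELECT * FROM bigtable;
FETCH FORWARD 1000 FROM c;   -- repeat until fewer than 1000 rows come back
CLOSE c;
COMMIT;
```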

That's not to say that SQL-level cursor support wouldn't be nice. I'm
just trying to better understand what it's solving.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-10-31 Thread Tomas Vondra
Hi,

On 10/20/2017 03:23 PM, Robert Haas wrote:
>
> ...
> 
> The main point I want to make clearly understood is that the current
> design relies on (1) functions being labeled correctly and (2) other
> dangerous code paths being unreachable because there's nothing that
> runs between EnterParallelMode and ExitParallelMode which could invoke
> them, except by calling a mislabeled function.  Your patch expands the
> vulnerability surface from "executor code that can be reached without
> calling a mislabeled function" to "any code that can be reached by
> typing an SQL command".  Just rejecting any queries that are
> parallel-unsafe probably closes a good chunk of the holes, but that
> still leaves a lot of code that's never been run in parallel mode
> before potentially now running in parallel mode - e.g. any DDL command
> you happen to type, transaction control commands, code that only runs
> when the server is idle like idle_in_transaction_timeout, cursor
> operations.  A lot of that stuff is probably fine, but it's got to be
> thought through.  Error handling might be a problem, too: what happens
> if a parallel worker is killed while the query is suspended?  I
> suspect that doesn't work very nicely at all.
> 

OK, understood and thanks for explaining what may be the possible
issues. I do appreciate that.

I still think it'd be valuable to support this, though, so I'm going to
spend more time on investigating what needs to be handled.

But maybe there's a simpler option - what if we only allow fetches from
the PARALLEL cursor while the cursor is open? That is, this would work:

BEGIN;
...
DECLARE x PARALLEL CURSOR FOR SELECT * FROM t2 WHERE ...;
FETCH 1000 FROM x;
FETCH 1000 FROM x;
FETCH 1000 FROM x;
CLOSE x;
...
COMMIT;

but adding any other command between the OPEN/CLOSE commands would fail.
That should close all the holes with parallel-unsafe stuff, right?

Of course, this won't solve the issue with error handling / killing
suspended workers (which didn't occur to me before as a possible issue
at all, so thanks for pointing that out). But that's a significantly
more limited issue to fix than all the parallel-unsafe bits.

Now, I agree this is somewhat more limited than I hoped for, but OTOH it
still solves the issue I initially aimed for (processing large query
results in an efficient way).


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-10-20 Thread Robert Haas
On Tue, Oct 17, 2017 at 12:06 PM, Tomas Vondra
 wrote:
> In general, it may be acceptable to rely on the elog() checks - which is
> pretty much what the FETCH+INSERT+SHARE example I shared in the first
> message of this thread does. I agree it's not particularly convenient,
> and perhaps we should replace it with checks while planning the queries.

Those checks are very much not comprehensive, though.  For example,
consider commit e9baa5e9fa147e00a2466ab2c40eb99c8a700824, which
allowed heap_insert() in parallel mode.  Here's the comment:

 	/*
+	 * Parallel operations are required to be strictly read-only in a parallel
+	 * worker.  Parallel inserts are not safe even in the leader in the
+	 * general case, because group locking means that heavyweight locks for
+	 * relation extension or GIN page locks will not conflict between members
+	 * of a lock group, but we don't prohibit that case here because there are
+	 * useful special cases that we can safely allow, such as CREATE TABLE AS.
+	 */
+	if (IsParallelWorker())
 		ereport(ERROR,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
+				 errmsg("cannot insert tuples in a parallel worker")));

Now, as things stand, there's no way for this code to be reached
except in the case where the inserts are targeting a new table, or at
least I hope there isn't, because that would be a bug.  But if we
applied your patch then it could be easily done: just start a parallel
cursor and then run an INSERT command.  It might take up a little work
to gin up a test case that shows this really crashing and burning, but
I'm very confident that it's possible.  I wouldn't have gone to the
trouble of installing checks to prevent inserts from running in
parallel mode if inserts were safe in parallel mode.

>> Also, I doubt this guarantees that we won't try to call
>> parallel-unsafe functions while parallel mode is active, so things
>> will just blow up in whatever way they do, maybe crashing the server
>> or rendering the database inconsistent or whatever.
>
> Probably. What happens right now is that declaring the cursor switches
> the whole transaction into parallel mode (EnterParallelMode), so if you
> do FETCH + INSERT + FETCH that will fail with elog(ERROR).

But, again, only in the limited cases that the elog() checks catch.  A
C function can be parallel-unsafe without doing anything that hits the
elog() checks; there is no way for the system to protect itself
against arbitrary C code.  The elog() checks are intended to catch the
common or likely ways of breaking the world, but arbitrary C code
can work around those checks -- if nothing else, duplicate one of the
functions that has an elog() in it, remove the elog(), and then call
it.  You just did something parallel-unsafe, unchecked!  That's an
artificial example, of course, which is not likely to occur in
practice, but I'm pretty confident that there are non-artificial
examples also.

I think this is a more or less classic kind of programming problem -
you're proposing to take a set of checks that were intended to ensure
safety under a limited set of circumstances and generalize them to a
much broader context than the one for which they were designed.  They
will not be adequate to those circumstances, and a considerable amount
of analysis will be needed to figure out where the gaps are.  If
somebody wants to do that analysis, I'm willing to review it and try
to spot any holes, but I don't really want to spend the time to go do
the whole analysis myself.

The main point I want to make clearly understood is that the current
design relies on (1) functions being labeled correctly and (2) other
dangerous code paths being unreachable because there's nothing that
runs between EnterParallelMode and ExitParallelMode which could invoke
them, except by calling a mislabeled function.  Your patch expands the
vulnerability surface from "executor code that can be reached without
calling a mislabeled function" to "any code that can be reached by
typing an SQL command".  Just rejecting any queries that are
parallel-unsafe probably closes a good chunk of the holes, but that
still leaves a lot of code that's never been run in parallel mode
before potentially now running in parallel mode - e.g. any DDL command
you happen to type, transaction control commands, code that only runs
when the server is idle like idle_in_transaction_timeout, cursor
operations.  A lot of that stuff is probably fine, but it's got to be
thought through.  Error handling might be a problem, too: what happens
if a parallel worker is killed while the query is suspended?  I
suspect that doesn't work very nicely at all.

One way to get some ideas about where the problem lies would be to
write a test patch that keeps parallel mode active at all times except
when running a query that contains something parallel-unsafe.  Then
run the regression tests and see if anything blows up.  That's not

Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-10-17 Thread Tomas Vondra


On 10/17/2017 03:16 PM, Robert Haas wrote:
> On Sat, Oct 14, 2017 at 2:14 PM, Tomas Vondra
>  wrote:
>> What I propose is to add a new cursor option (PARALLEL), which would allow
>> parallel plans for that particular user-defined cursor. Attached is an
>> experimental patch doing this (I'm sure there are some loose ends).
> 
> I think you would need to do a huge amount of additional work in
> order to actually make this robust. I believe that a fair amount of
> what goes on in parallel mode right now is checked with elog() if we
> think that it's unreachable without writing C code -- but this will
> make a lot of those things reachable, which means they would need to
> be checked some other way.

Sure, additional checks may be necessary. I've tried to come up with
examples causing crashes, and haven't succeeded. Of course, that's no
proof of correctness, so if you have an example that will crash and burn
I'd like to see it.

In general, it may be acceptable to rely on the elog() checks - which is
pretty much what the FETCH+INSERT+SHARE example I shared in the first
message of this thread does. I agree it's not particularly convenient,
and perhaps we should replace it with checks while planning the queries.

> Also, I doubt this guarantees that we won't try to call
> parallel-unsafe functions while parallel mode is active, so things
> will just blow up in whatever way they do, maybe crashing the server
> or rendering the database inconsistent or whatever.
> 

Probably. What happens right now is that declaring the cursor switches
the whole transaction into parallel mode (EnterParallelMode), so if you
do FETCH + INSERT + FETCH that will fail with elog(ERROR).

But yeah, this should probably be checked at planning time, and the
whole query should fail if the transaction is in parallel mode and the
query contains unsafe/restricted functions.
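For reference, the labels such a planning-time check would consult are the ones attached at function creation time; a sketch with made-up function and table names:

```sql
-- PARALLEL SAFE: eligible to run inside a parallel worker.
CREATE FUNCTION add_one(x int) RETURNS int
    LANGUAGE sql PARALLEL SAFE
    AS $$ SELECT x + 1 $$;

-- PARALLEL UNSAFE: forces a serial plan; here because it writes data.
CREATE FUNCTION log_and_pass(x int) RETURNS int
    LANGUAGE sql PARALLEL UNSAFE
    AS $$ INSERT INTO audit_log VALUES (x); SELECT x $$;
```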

> Possibly I'm overestimating the extent of the danger, but I don't
> think so.  You're trying to take a mechanism that was only ever meant to
> be active during the course of one query and apply it for long
> periods of time during which a user can do anything, with basically no
> upgrade of the infrastructure.  I think something like this could be
> made to work if you put a large amount of energy into it, but I think
> the patch as proposed is about the easiest 3-5% of what would actually
> be required to cover all the holes.
> 

Well, soliciting feedback like this was one of the points of sharing the
experimental patch, so thank you for that. I'm not sure if the estimate
of 3-5% is accurate, but I'm sure the patch is incomplete - which is why
I marked it as experimental, after all.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-10-17 Thread Robert Haas
On Sat, Oct 14, 2017 at 2:14 PM, Tomas Vondra
 wrote:
> What I propose is to add a new cursor option (PARALLEL), which would allow
> parallel plans for that particular user-defined cursor. Attached is an
> experimental patch doing this (I'm sure there are some loose ends).

I think you would need to do a huge amount of additional work in order
to actually make this robust.  I believe that a fair amount of what
goes on in parallel mode right now is checked with elog() if we think
that it's unreachable without writing C code -- but this will make a
lot of those things reachable, which means they would need to be
checked some other way.  Also, I doubt this guarantees that we won't
try to call parallel-unsafe functions while parallel mode is active, so
things will just blow up in whatever way they do, maybe crashing the
server or rendering the database inconsistent or whatever.

Possibly I'm overestimating the extent of the danger, but I don't
think so.  You're trying to take a mechanism that was only ever meant to
be active during the course of one query and apply it for long
periods of time during which a user can do anything, with basically no
upgrade of the infrastructure.  I think something like this could be
made to work if you put a large amount of energy into it, but I think
the patch as proposed is about the easiest 3-5% of what would actually
be required to cover all the holes.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-10-16 Thread Tomas Vondra


On 10/16/2017 01:13 PM, Amit Kapila wrote:
> On Sat, Oct 14, 2017 at 11:44 PM, Tomas Vondra
>  wrote:
>> Hi,
>>
>>
>> What I propose is to add a new cursor option (PARALLEL), which would allow
>> parallel plans for that particular user-defined cursor. Attached is an
>> experimental patch doing this (I'm sure there are some loose ends).
>>
> 
> How will this work for backward scans?
> 

It may not work, just like for regular cursors without SCROLL. And if
you specify SCROLL, then I believe a Materialize node will be added
automatically if needed, but I haven't tried.
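The distinction, sketched (t2 as in the examples elsewhere in the thread):

```sql
BEGIN;
-- Without SCROLL, backward fetches may simply error out:
DECLARE fwd NO SCROLL CURSOR FOR SELECT * FROM t2;
FETCH 10 FROM fwd;
-- FETCH BACKWARD 5 FROM fwd;  -- ERROR: cursor can only scan forward

-- With SCROLL, backward fetches are guaranteed; the planner adds a
-- Materialize node on top if the plan can't run backward natively:
DECLARE back SCROLL CURSOR FOR SELECT * FROM t2;
FETCH 10 FROM back;
FETCH BACKWARD 5 FROM back;

CLOSE fwd;
CLOSE back;
COMMIT;
```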

> 
>>
>> Regarding (2), if the user suspends the cursor for a long time, bummer.
>> The parallel workers will remain assigned, doing nothing. I don't have
>> any idea how to get around that, but I don't see how we could do better.
>>
> 
> One idea could be that whenever someone uses the PARALLEL option, we can
> fetch and store all the data locally before returning control to the
> user (something like the WITH HOLD option).
> 

I don't like that very much, as it adds unnecessary overhead. As I said
before, I'm particularly interested in cursors returning large amounts
of data, so the overhead may be quite significant.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-10-16 Thread Amit Kapila
On Sat, Oct 14, 2017 at 11:44 PM, Tomas Vondra
 wrote:
> Hi,
>
>
> What I propose is to add a new cursor option (PARALLEL), which would allow
> parallel plans for that particular user-defined cursor. Attached is an
> experimental patch doing this (I'm sure there are some loose ends).
>

How will this work for backward scans?


>
> Regarding (2), if the user suspends the cursor for a long time, bummer.
> The parallel workers will remain assigned, doing nothing. I don't have
> any idea how to get around that, but I don't see how we could do better.
>

One idea could be that whenever someone uses the PARALLEL option, we can
fetch and store all the data locally before returning control to the
user (something like the WITH HOLD option).
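For comparison, a WITH HOLD cursor today materializes its full result set into a tuplestore when the transaction commits, which is the kind of behavior being suggested:

```sql
BEGIN;
DECLARE x CURSOR WITH HOLD FOR SELECT * FROM t2;
COMMIT;            -- the whole result set is materialized here
FETCH 1000 FROM x; -- still valid after commit
CLOSE x;
```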

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




[HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-10-14 Thread Tomas Vondra
Hi,

One of the existing limitations of parallel query is that cursors
generally do not benefit from it [1]. Commit 61c2e1a95f [2] improved the
situation for cursors from procedural languages, but unfortunately for
user-defined cursors parallelism is still disabled.

For many use cases that is perfectly fine, but for applications that
need to process large amounts of data this is rather annoying. When the
result sets are large, cursors are extremely efficient - in terms of
memory consumption, for example. So applications have to choose
between the "cursor" approach (with no parallelism) and parallelism
with uncomfortably large result sets.

I believe there are two main reasons why parallelism is disabled for
user-defined cursors (or queries that might get suspended):

(1) We can't predict what will happen while the query is suspended (and
the transaction is still in "parallel mode"), e.g. the user might run
arbitrary DML which is not allowed.

(2) If the cursor gets suspended, the parallel workers would be still
assigned to it and could not be used for anything else.

Clearly, we can't solve those issues in general, so the default will
probably remain "parallelism disabled".

What I propose is to add a new cursor option (PARALLEL), which would allow
parallel plans for that particular user-defined cursor. Attached is an
experimental patch doing this (I'm sure there are some loose ends).

This does not make any of the issues go away, of course. We still
enforce "no DML while parallel operation in progress" as before, so this
will not work:

BEGIN;
DECLARE x PARALLEL CURSOR FOR SELECT * FROM t2 WHERE ...;
FETCH 1000 FROM x;
INSERT INTO t2 VALUES (1);
FETCH 1000 FROM x;
COMMIT;

but this will:

BEGIN;
DECLARE x PARALLEL CURSOR FOR SELECT * FROM t2 WHERE ...;
FETCH 1000 FROM x;
...
FETCH 1000 FROM x;
CLOSE x;
INSERT INTO t2 VALUES (1);
COMMIT;

Regarding (2), if the user suspends the cursor for a long time, bummer.
The parallel workers will remain assigned, doing nothing. I don't have
any idea how to get around that, but I don't see how we could do better.
I don't see either of these limitations as fatal.

Any opinions / obvious flaws that I missed?

regards

[1]
https://www.postgresql.org/docs/9.6/static/when-can-parallel-query-be-used.html

[2]
https://www.postgresql.org/message-id/CAOGQiiMfJ%2B4SQwgG%3D6CVHWoisiU0%2B7jtXSuiyXBM3y%3DA%3DeJzmg%40mail.gmail.com

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 76d6cf1..ffaa096 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -66,6 +66,12 @@ PerformCursorOpen(DeclareCursorStmt *cstmt, ParamListInfo params,
 		RequireTransactionChain(isTopLevel, "DECLARE CURSOR");
 
 	/*
+	 * Enable parallel plans for cursors that explicitly requested it.
+	 */
+	if (cstmt->options & CURSOR_OPT_PARALLEL)
+		cstmt->options |= CURSOR_OPT_PARALLEL_OK;
+
+	/*
 	 * Parse analysis was done already, but we still have to run the rule
 	 * rewriter.  We do not do AcquireRewriteLocks: we assume the query either
 	 * came straight from the parser, or suitable locks were acquired by
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 9689429..64f8a32 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -423,6 +423,13 @@ standard_ExecutorFinish(QueryDesc *queryDesc)
 	/* This should be run once and only once per Executor instance */
 	Assert(!estate->es_finished);
 
+	/* If this was PARALLEL cursor, do cleanup and exit parallel mode. */
+	if (queryDesc->parallel_cursor)
+	{
+		ExecShutdownNode(queryDesc->planstate);
+		ExitParallelMode();
+	}
+
 	/* Switch into per-query memory context */
 	oldcontext = MemoryContextSwitchTo(estate->es_query_cxt);
 
@@ -1085,6 +1092,18 @@ InitPlan(QueryDesc *queryDesc, int eflags)
 
 	queryDesc->tupDesc = tupType;
 	queryDesc->planstate = planstate;
+
+	/* If this was PARALLEL cursor, enter parallel mode, except in EXPLAIN-only. */
+
+	queryDesc->parallel_cursor
+		= (eflags & EXEC_FLAG_PARALLEL) && !(eflags & EXEC_FLAG_EXPLAIN_ONLY);
+
+	/*
+	 * In PARALLEL cursors we have to enter the parallel mode once, at the very
+	 * beginning (and not in ExecutePlan, as we do for execute_once plans).
+	 */
+	if (queryDesc->parallel_cursor)
+		EnterParallelMode();
 }
 
 /*
@@ -1725,7 +1744,8 @@ ExecutePlan(EState *estate,
 		if (TupIsNull(slot))
 		{
 			/* Allow nodes to release or shut down resources. */
-			(void) ExecShutdownNode(planstate);
+			if (execute_once)
+				(void) ExecShutdownNode(planstate);
 			break;
 		}
 
@@ -1772,7 +1792,8 @@ ExecutePlan(EState *estate,
 		if (numberTuples && numberTuples == current_tuple_count)
 		{
 			/* Allow nodes to release or shut down resources. */
-			(void) ExecShutdownNode(planstate);
+			if (execute_once)
+				(void) ExecShutdownNode(planstate);