On 11/04/2016 17:44, Robert Haas wrote:
> On Mon, Apr 11, 2016 at 11:27 AM, Julien Rouhaud
> <julien.rouh...@dalibo.com> wrote:
>> On 11/04/2016 15:56, tushar wrote:
>>>
>>> I am assuming parallel_degree=0 is the same as not using it at all, i.e.
>>> create table fok2(n int) with (parallel_degree=0);  = create table
>>> fok2(n int);
>>>
>>> so in this case it should have accepted the parallel seq scan.
>>>
>>
>> No, setting it to 0 means forcing it not to use parallel workers (but
>> still considering the parallel path, IIRC).
> 
> I'm not sure what the parenthesized bit means, because you can't use
> parallelism without workers.

Obvious mistake, sorry.

>  But I think I should have made the docs
> clearer, while committing this, that 0 = don't parallelize scans of
> this table.  Maybe we should go add a sentence about that.
> 

What about:

-      the setting of <xref linkend="guc-max-parallel-degree">.
+      the setting of <xref linkend="guc-max-parallel-degree">.  Setting
+      this parameter to 0 will disable parallelism for that table.

>> Even if you set a per-table parallel_degree higher than
>> max_parallel_degree, it'll be capped at max_parallel_degree.
>>
>> Then, the EXPLAIN shows that the planner assumed it would launch 9
>> workers, but only 8 were available (or perhaps needed) at runtime.
> 
> We should probably add the number of workers actually obtained to the
> EXPLAIN ANALYZE output.  That's been requested before.
> 

If it's not too late for 9.6, that would be great.

>>> postgres=# set max_parallel_degree =2624444;
>>> ERROR:  2624444 is outside the valid range for parameter
>>> "max_parallel_degree" (0 .. 262143)
>>>
>>> postgres=# set max_parallel_degree =262143;
>>> SET
>>> postgres=#
>>>
>>> postgres=# explain analyze verbose select * from abd  where n<=1;
>>> ERROR:  requested shared memory size overflows size_t
>>>
>>> if we remove the analyze keyword, the query runs successfully.
>>>
>>> Expected: wouldn't it be better to throw the error at the time of
>>> setting max_parallel_degree, if the value is not supported?
>>
>> +1
> 
> It surprises me that that request overflowed size_t.  I guess we
> should look into why that's happening.  Did you test this on a 32-bit
> system?
> 

I can reproduce the same issue on a 64-bit system. Setting
max_parallel_degree to 32768 or above raises this error:

ERROR:  could not resize shared memory segment "/PostgreSQL.44279285" to
18446744072113360072 bytes: Invalid argument

On a 32-bit system, the following assertion fails:

TRAP: FailedAssertion("!(offset < total_bytes)", File: "shm_toc.c",
Line: 192)

After some debugging with gdb, it looks like the overflow comes from:

    /* Estimate space for tuple queues. */
    shm_toc_estimate_chunk(&pcxt->estimator,
                           PARALLEL_TUPLE_QUEUE_SIZE * pcxt->nworkers);

372             shm_toc_estimate_chunk(&pcxt->estimator,
(gdb) p pcxt->estimator
$2 = {space_for_chunks = 3671712, number_of_keys = 3}
(gdb) n
374             shm_toc_estimate_keys(&pcxt->estimator, 1);
(gdb) p pcxt->estimator
$3 = {space_for_chunks = 18446744071565739680, number_of_keys = 3}
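
The cause is plain int arithmetic: PARALLEL_TUPLE_QUEUE_SIZE is 65536
and pcxt->nworkers is an int, so with 32768 workers the product is
exactly 2^31, which overflows a signed 32-bit int before it is widened
to Size. Here is a minimal standalone sketch of the problem and of the
fix, on a 64-bit system (the constant matches execParallel.c, the rest
is just for illustration):

#include <stdio.h>
#include <stddef.h>

#define PARALLEL_TUPLE_QUEUE_SIZE 65536 /* same value as execParallel.c */

int
main(void)
{
    int     nworkers = 32768;   /* smallest value triggering the error */
    size_t  space;

    /*
     * Both operands are int, so the multiplication is done in 32-bit
     * signed arithmetic: 65536 * 32768 = 2^31 wraps around (strictly
     * speaking undefined behavior) and is then sign-extended when
     * converted to size_t.
     */
    space = PARALLEL_TUPLE_QUEUE_SIZE * nworkers;
    printf("int math:  %zu\n", space);  /* 18446744071562067968 */

    /* Casting one operand first makes the whole multiplication 64-bit. */
    space = (size_t) PARALLEL_TUPLE_QUEUE_SIZE * nworkers;
    printf("Size math: %zu\n", space);  /* 2147483648 */

    return 0;
}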


The following change fixes the issue:

diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c
index 572a77b..0a5210e 100644
--- a/src/backend/executor/execParallel.c
+++ b/src/backend/executor/execParallel.c
@@ -370,7 +370,7 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers)
 
     /* Estimate space for tuple queues. */
     shm_toc_estimate_chunk(&pcxt->estimator,
-                           PARALLEL_TUPLE_QUEUE_SIZE * pcxt->nworkers);
+                           (Size) PARALLEL_TUPLE_QUEUE_SIZE * pcxt->nworkers);
     shm_toc_estimate_keys(&pcxt->estimator, 1);
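
(The cast works because Size is PostgreSQL's typedef for size_t: it
forces the multiplication into 64-bit arithmetic on 64-bit platforms
instead of wrapping in a signed 32-bit int.)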

But the query still fails with "ERROR:  out of shared memory".  That
part is not surprising: with max_parallel_degree = 262143, the tuple
queues alone need 262143 * 64 kB, i.e. about 16 GB of shared memory.

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org

