Hi Thomas,

I was testing this feature with the v20 patch set, and I got a crash while
running large joins with a small work_mem and many workers, as below.

-- Machine configuration: (d1.xlarge) CPUs: 8, RAM: 16GB, disk size: 640GB

-- postgresql.conf settings as below:
work_mem = 64kB
max_parallel_workers_per_gather = 128
max_parallel_workers = 64
enable_mergejoin = off
enable_nestloop = off
enable_hashjoin = on
force_parallel_mode = on
seq_page_cost = 0.1
random_page_cost = 0.1
effective_cache_size = 128MB
parallel_tuple_cost = 0
parallel_setup_cost = 0
parallel_synchronization_cost = 0

-- Created only one table, "lineitem", of size 93GB.
postgres=# select pg_size_pretty(pg_total_relation_size('lineitem'));
 pg_size_pretty
----------------
 93 GB
(1 row)

[centos@centos-prabhat bin]$ vi test10.sql
explain (analyze, costs off)
select  count(*) from lineitem t1 join lineitem t2 using(l_suppkey) join
lineitem t3 using(l_suppkey);
select  count(*) from lineitem t1 join lineitem t2 using(l_suppkey) join
lineitem t3 using(l_suppkey);

[centos@centos-prabhat bin]$ ./psql postgres -a -f test10.sql > test10.out

[centos@centos-prabhat bin]$ vi test10.out
explain (analyze, costs off)
select  count(*) from lineitem t1 join lineitem t2 using(l_suppkey) join
lineitem t3 using(l_suppkey);
psql:test10.sql:2: WARNING:  terminating connection because of crash of
another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
psql:test10.sql:2: server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
psql:test10.sql:2: connection to server was lost
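
If a stack trace would help, this is roughly how I could collect one; the
data-directory path and core file location below are assumptions (on CentOS
core_pattern may route the core elsewhere), so adjust them for your setup:

# enable core dumps in the shell that starts the server (assumed data dir)
ulimit -c unlimited
./pg_ctl -D /path/to/data restart

# re-run the failing script to reproduce the crash
./psql postgres -a -f test10.sql > test10.out

# print a backtrace from the core left by the crashed process
gdb --batch -ex bt ./postgres /path/to/data/core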


Could you please check whether you can reproduce this at your end?



Thanks & Regards,

Prabhat Kumar Sahu
Mob: 7758988455
Skype ID: prabhat.sahu1984

www.enterprisedb.com

On Thu, Oct 5, 2017 at 1:15 PM, Thomas Munro <thomas.mu...@enterprisedb.com>
wrote:

> On Thu, Oct 5, 2017 at 7:07 PM, Rushabh Lathia <rushabh.lat...@gmail.com>
> wrote:
> > The v20 patch set (I was trying the 0008 and 0009 patches) does not apply
> > cleanly on the latest commit; I am also getting a compilation error due to
> > refactoring in the commit below.
> >
> > commit 0c5803b450e0cc29b3527df3f352e6f18a038cc6
>
> Hi Rushabh
>
> I am about to post a new patch set that fixes the deadlock problem,
> but in the meantime here is a rebase of those two patches (numbers
> changed to 0006 + 0007).  In the next version I think I should
> probably remove that 'stripe' concept.  The idea was to spread
> temporary files over the available temporary tablespaces, but it's a
> terrible API, since you have to promise to use the same stripe number
> when opening the same name later... Maybe I should use a hash of the
> name for that instead.  Thoughts?
>
> --
> Thomas Munro
> http://www.enterprisedb.com
>
