On Wed, Jan 11, 2017 at 2:44 AM, David Rowley
<david.row...@2ndquadrant.com> wrote:
> It has come to my attention that when a user has a CONNECTION LIMIT
> set, and they make use of parallel query, that their queries can fail
> due to the connection limit being exceeded.
> Simple test case:
> postgres=# \c postgres user1
> You are now connected to database "postgres" as user "user1".
> postgres=> CREATE TABLE t1 AS (SELECT i FROM GENERATE_SERIES(1,6000000) s(i));
> SELECT 6000000
> postgres=> SET max_parallel_workers_per_gather = 2;
> postgres=> SELECT COUNT(*) FROM t1;
> ERROR:  too many connections for role "user1"
> CONTEXT:  parallel worker
> Now, as I understand it, parallel query was designed in such a way
> that nodeGather could perform all of the work in the main process in
> the event that no workers were available, and that the only
> user-visible evidence of this would be that the query would be slower
> than it otherwise would be.

This has been reported previously [1], and I have explained why such
behaviour is possible and why it can't be handled in the Gather node.

> After a little bit of looking around I see that CountUserBackends()
> does not ignore the parallel workers, and counts these as
> "CONNECTIONS". It's probably debatable whether these are
> connections or not,

I think this applies not only to parallel workers: any background
worker that uses a database connection
(BGWORKER_BACKEND_DATABASE_CONNECTION) will be counted in a similar
way.  I am not sure it is worth inventing something to treat such
background worker connections differently from backend connections.
However, I think we should document this, either under parallel query,
under background workers, or in the CREATE USER ... CONNECTION LIMIT
section.
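To illustrate the distinction being debated, here is a toy model (not
PostgreSQL source; the names Backend, is_bgworker, and
count_user_backends are invented for this sketch) of how a per-role
connection count could skip background/parallel workers so that only
true client sessions are charged against CONNECTION LIMIT:

```python
# Toy model: count "connections" per role for a CONNECTION LIMIT check,
# excluding background workers (e.g. parallel workers) spawned on the
# user's behalf. This only illustrates the idea under discussion; it is
# not how CountUserBackends() is actually implemented.

from dataclasses import dataclass

@dataclass
class Backend:
    role: str
    is_bgworker: bool  # True for parallel/background workers

def count_user_backends(backends, role):
    """Count only client connections for `role`, ignoring bgworkers."""
    return sum(1 for b in backends
               if b.role == role and not b.is_bgworker)

backends = [
    Backend("user1", False),  # the client session itself
    Backend("user1", True),   # parallel worker 1
    Backend("user1", True),   # parallel worker 2
]

# With workers excluded, user1 holds only one "connection", so a
# CONNECTION LIMIT of 1 would no longer be tripped by parallel query.
print(count_user_backends(backends, "user1"))  # -> 1
```

Whether that is the right definition of "connection" is exactly the
open question; the alternative is to keep the current counting and
simply document the behaviour.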

[1] - 

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
