Re: Please help to improve SELECT JOIN performance, EXPLAIN ANALYZE attached

2021-02-19 Thread Michael Lewis
Your explain analyze is showing an example that runs in less than 15ms, and
your logs indicate queries taking 2-3 seconds. I am missing part of the
picture here. Can you clarify?


Re: Error with pg_dump (of data), with --role

2021-02-19 Thread Ken Tanzer
On Fri, Feb 19, 2021 at 4:21 PM Tom Lane  wrote:

>
> Yeah, that would be slightly safer.  If the public schema is
> world-writable, though, you're in big trouble anyway ...
>
>
Sorry, you lost me with the last sentence.  My scenario is that public
_isn't_ world-writable.  But everyone can create their own temp objects.  So
these would be found first unless pg_temp is explicitly specified at the
end of the search path.

And I guess that's a "No" on an option to show the complete search path. :)

Thanks again for your help!

Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: Error with pg_dump (of data), with --role

2021-02-19 Thread Tom Lane
Ken Tanzer  writes:
> One thing about the search path though, regarding pg_temp.  If I add a
> SET search_path = public;
> Do I instead need to specify "public, pg_temp" to prevent pg_temp from
> (silently) being at the beginning?

Yeah, that would be slightly safer.  If the public schema is
world-writable, though, you're in big trouble anyway ...

regards, tom lane




Re: Error with pg_dump (of data), with --role

2021-02-19 Thread Ken Tanzer
On Fri, Feb 19, 2021 at 3:22 PM Tom Lane  wrote:

> Ken Tanzer  writes:
> > On Thu, Feb 18, 2021 at 8:44 PM Tom Lane  wrote:
> >> There was a security change to pg_dump a few years ago to make it
> >> put "set search_path = pg_catalog" into the dump script.  This
> >> basically means that any user-defined function in indexes, check
> >> constraints, etc is on its own to be sure that it schema-qualifies
> >> non-system names, or has a "SET search_path" clause to do that
> >> for it.
>
> > Thank you Tom for that explanation.  To follow on, I tried adding:
> > SET search_path = public;
> > to the functions, but that prevents my function from working at all:
>
> No, the way to do it is with a SET function property, like
>
> create or replace function myfunc(...) returns ... language ...
>   as $$body here$$
>   SET search_path = whatever
>   ... other function properties ...
> ;
>
> That takes care of restoring the old value on the way out of the
> function, so it's okay to use in an immutable function.
>
> I think you can plaster this property onto an existing function
> with ALTER FUNCTION, which should be less error-prone than
> repeating the whole CREATE.
>
>


Great, that works, and ALTER FUNCTION is definitely much better for me.

One thing about the search path though, regarding pg_temp.  If I add a

SET search_path = public;

Do I instead need to specify "public, pg_temp" to prevent pg_temp from
(silently) being at the beginning?
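
That is, something like this (a minimal sketch; the name and argument
list of my actual function are abbreviated here, so treat the signature
as hypothetical):

    ALTER FUNCTION has_segment_access(integer)
        SET search_path = public, pg_temp;  -- pg_temp pinned last, not first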

This seems to be what the "Writing SECURITY DEFINER Functions Safely"
section suggests (
https://www.postgresql.org/docs/9.6/sql-createfunction.html).  But pg_temp
isn't mentioned at all on the page about schemas (
https://www.postgresql.org/docs/9.6/ddl-schemas.html), so I'm a little
unclear, and I also wonder whether there are other hidden schemas in the
search path.

And along those lines, any chance of seeing something like "SHOW
search_path_complete" (or search_path_explicit) implemented?  Seems like it
could be helpful!
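
(For what it's worth, a close approximation already exists; a quick
sketch:

    SELECT current_schemas(true);

With the argument set to true, current_schemas() reports the effective
search path including the implicitly-searched schemas such as pg_catalog
and pg_temp, in the positions where they are actually searched.)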

Cheers,
Ken




-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: Error with pg_dump (of data), with --role

2021-02-19 Thread Tom Lane
Ken Tanzer  writes:
> On Thu, Feb 18, 2021 at 8:44 PM Tom Lane  wrote:
>> There was a security change to pg_dump a few years ago to make it
>> put "set search_path = pg_catalog" into the dump script.  This
>> basically means that any user-defined function in indexes, check
>> constraints, etc is on its own to be sure that it schema-qualifies
>> non-system names, or has a "SET search_path" clause to do that
>> for it.

> Thank you Tom for that explanation.  To follow on, I tried adding:
> SET search_path = public;
> to the functions, but that prevents my function from working at all:

No, the way to do it is with a SET function property, like

create or replace function myfunc(...) returns ... language ...
  as $$body here$$
  SET search_path = whatever
  ... other function properties ...
;

That takes care of restoring the old value on the way out of the
function, so it's okay to use in an immutable function.
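
For concreteness, a minimal filled-in sketch (the function name and body
are made up for illustration):

    CREATE OR REPLACE FUNCTION word_rank(w text) RETURNS integer
        LANGUAGE sql IMMUTABLE
        SET search_path = public, pg_temp  -- restored on function exit
    AS $$ SELECT length(w) $$;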

I think you can plaster this property onto an existing function
with ALTER FUNCTION, which should be less error-prone than
repeating the whole CREATE.

regards, tom lane




Please help to improve SELECT JOIN performance, EXPLAIN ANALYZE attached

2021-02-19 Thread Alexander Farber
Good evening,

I have a word game which uses PostgreSQL 13.2 and 80% of the code is
written as stored functions in PL/PgSQL or SQL.

Recently I have purchased some traffic and the number of daily games
increased from 100 to 700.

In the PostgreSQL log I have noticed that the durations of 2 particular
queries have increased, especially in the evenings:

2021-02-19 17:51:19.104 CET [68932] LOG:  duration: 2356.723 ms  execute
: SELECT words_stat_longest($1::int)
2021-02-19 17:55:23.290 CET [68602] LOG:  duration: 2326.507 ms  execute
: SELECT words_stat_longest($1::int)
2021-02-19 17:57:57.057 CET [68932] LOG:  duration: 1257.773 ms  execute
: SELECT out_uid AS uid, out_fcm AS fcm, out_apns AS apns, out_adm AS adm,
out_hms AS hms, out_social AS social, out_sid AS sid, out_body AS body
FROM words_play_game($1::int, $2::int, $3::jsonb)
2021-02-19 18:02:51.359 CET [68603] LOG:  duration: 2305.950 ms  execute
: SELECT words_stat_longest($1::int)
2021-02-19 18:08:26.130 CET [68932] LOG:  duration: 2375.713 ms  execute
: SELECT words_stat_longest($1::int)

One of the queries is actually a simple SELECT ... LIMIT 15 to find the 15
longest words played by a user (they are displayed on the player profile
page); see the sketch below.
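
A hypothetical shape for such a statement, inferred only from the schema
shown below (the real body of words_stat_longest is not in this post):

    SELECT word, score
    FROM words_scores
    WHERE uid = $1
    ORDER BY length(word) DESC, mid DESC
    LIMIT 15;

If the plan walks words_scores_length_mid_idx and filters each row on
uid, an index that leads with the user column (again hypothetical) could
serve the query directly:

    CREATE INDEX ON words_scores (uid, length(word) DESC, mid DESC);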

I have run the EXPLAIN ANALYZE here: https://explain.depesz.com/s/IcqN

It is difficult for me to understand what could be missing there; does
anybody have an idea, please?

Thank you for any input
Alex

P.S. Here are the tables involved in the SELECT query, with the
words_moves and words_geoip holding the most records:

# \d words_scores
Table "public.words_scores"
 Column |  Type   | Collation | Nullable | Default
+-+---+--+-
 mid| bigint  |   | not null |
 gid| integer |   | not null |
 uid| integer |   | not null |
 word   | text|   | not null |
 score  | integer |   | not null |
Indexes:
"words_scores_gid_idx" btree (gid)
"words_scores_length_mid_idx" btree (length(word) DESC, mid DESC)
"words_scores_uid_idx" btree (uid)
Check constraints:
"words_scores_score_check" CHECK (score >= 0)
"words_scores_word_check" CHECK (word ~ '^[А-Я]{2,}$'::text)
Foreign-key constraints:
"words_scores_gid_fkey" FOREIGN KEY (gid) REFERENCES words_games(gid)
ON DELETE CASCADE
"words_scores_mid_fkey" FOREIGN KEY (mid) REFERENCES words_moves(mid)
ON DELETE CASCADE
"words_scores_uid_fkey" FOREIGN KEY (uid) REFERENCES words_users(uid)
ON DELETE CASCADE

# \d words_moves
   Table "public.words_moves"
 Column  |           Type           | Collation | Nullable |                 Default
---------+--------------------------+-----------+----------+------------------------------------------
 mid     | bigint                   |           | not null | nextval('words_moves_mid_seq'::regclass)
 action  | text                     |           | not null |
 gid     | integer                  |           | not null |
 uid     | integer                  |           | not null |
 played  | timestamp with time zone |           | not null |
 tiles   | jsonb                    |           |          |
 score   | integer                  |           |          |
 str     | text                     |           |          |
 hand    | text                     |           |          |
 letters | character(1)[]           |           |          |
 values  | integer[]                |           |          |
Indexes:
"words_moves_pkey" PRIMARY KEY, btree (mid)
"words_moves_gid_played_idx" btree (gid, played DESC)
"words_moves_uid_action_played_idx" btree (uid, action, played)
"words_moves_uid_idx" btree (uid)
Check constraints:
"words_moves_score_check" CHECK (score >= 0)
Foreign-key constraints:
"words_moves_gid_fkey" FOREIGN KEY (gid) REFERENCES words_games(gid) ON
DELETE CASCADE
"words_moves_uid_fkey" FOREIGN KEY (uid) REFERENCES words_users(uid) ON
DELETE CASCADE
Referenced by:
TABLE "words_puzzles" CONSTRAINT "words_puzzles_mid_fkey" FOREIGN KEY
(mid) REFERENCES words_moves(mid) ON DELETE CASCADE
TABLE "words_scores" CONSTRAINT "words_scores_mid_fkey" FOREIGN KEY
(mid) REFERENCES words_moves(mid) ON DELETE CASCADE

# \d words_geoip
 Table "public.words_geoip"
 Column |   Type   | Collation | Nullable | Default
+--+---+--+-
 block  | inet |   | not null |
 lat| double precision |   |  |
 lng| double precision |   |  |
Indexes:
"words_geoip_pkey" PRIMARY KEY, btree (block)

# \d words_games
  Table "public.words_games"
  Column  |   Type   | Collation | Nullable |
   Default
--+--+---+--+--
 gid  | integer  |   | not 

Re: Error with pg_dump (of data), with --role

2021-02-19 Thread Ken Tanzer
On Thu, Feb 18, 2021 at 8:44 PM Tom Lane  wrote:

> Ken Tanzer  writes:
> > I'm not sure what you mean or are suggesting by that.  Is there something
> > I'm supposed to do to set the search path?  Is that a known bug in
> > pg_dump?  Something else?  As mentioned, there is only one schema
>
> There was a security change to pg_dump a few years ago to make it
> put "set search_path = pg_catalog" into the dump script.  This
> basically means that any user-defined function in indexes, check
> constraints, etc is on its own to be sure that it schema-qualifies
> non-system names, or has a "SET search_path" clause to do that
> for it.  While that's annoying, it's also good practice.  Functions
> that could be invoked in these contexts really ought not assume
> what search path they are called with.
>
> I do not think any of the other details you mentioned, such as
> use of --role, have any impact on this.
>
>
Thank you Tom for that explanation.  To follow on, I tried adding:

SET search_path = public;

to the functions, but that prevents my function from working at all:

pg_dump: [archiver (db)] query failed: ERROR:  SET is not allowed in a
non-volatile function
CONTEXT:  SQL function "has_segment_access" during startup
pg_dump: [archiver (db)] query was: FETCH 100 FROM _pg_dump_cursor

I can get it to work by schema-qualifying every reference within the
functions involved.
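
A minimal sketch of that approach (the real has_segment_access body is
not shown here, so the signature and the referenced table are
hypothetical):

    CREATE OR REPLACE FUNCTION public.has_segment_access(seg integer)
    RETURNS boolean LANGUAGE sql STABLE AS
    $$ SELECT EXISTS (SELECT 1
                      FROM public.segment_access sa   -- schema-qualified
                      WHERE sa.segment_id = seg
                        AND sa.role_name = current_user) $$;

With every reference qualified, the function no longer cares that
pg_dump runs it under "SET search_path = pg_catalog".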

So is the upshot of this that functions used for RLS need to either have
every reference schema-qualified, or else be marked volatile?  (At least in
order to also work with pg_dump?)  Or am I still misunderstanding or
missing something?

Cheers,
Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: Order by not working

2021-02-19 Thread Peter J. Holzer
On 2021-02-17 08:45:05 +0100, Laurenz Albe wrote:
> On Tue, 2021-02-16 at 16:11 -0600, Ron wrote:
> > SQL is only intuitive to people who've done programming... :)
> 
> SQL is quite counter-intuitive to people who have only done
> procedural programming.

Yes, different paradigm. SQL is more like a functional or logic
programming language (I often thought that Prolog would make a nice
query language for an RDBMS).

It also fell into what I like to call the COBOL trap: designing a
language so that it looks like normal English in the hope that "ordinary
people" will be able to use it. In reality that doesn't help
non-programmers much (it's still a formal language with precise
semantics, and the computer will do what you say, not what you mean), but
it makes things harder for programmers.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: PostgreSQL Replication

2021-02-19 Thread Peter J. Holzer
On 2021-02-17 12:22:50 +0300, Mutuku Ndeti wrote:
> I agree with you. Single master, with a standby replica, seems easier to
> manage. Is there a way to automatically promote the standby, when the active
> master fails?

Yes. There are several solutions. We use Patroni, which is simple to set
up and seems to "just work" (we routinely cause a failover every few
weeks and so far it has always worked fine). Patroni is quite
postgres-specific, though, so if you want to manage other applications,
too, you may want to look into more generic solutions.

> Is it feasible to have 2 instances of the application, writing onto
> the same DB, reason for two instances of the application is to allow
> for redundancy/load balancing. 

You can have any number of applications writing into the same database.
At some point you might exceed the capacity of your hardware, but that's
a function of load in general, not the number of applications.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: checkpointer and other server processes crashing

2021-02-19 Thread Peter J. Holzer
On 2021-02-15 16:15:44 -0500, Joe Abbate wrote:
> As I understand it, the PG server processes doing a SELECT are spawned as
> children of the Go process,

There's a misconception here: The server processes are children of the
postmaster process, not of the client program (whether it's written in
Go or any other language). Think about it: The database server and the
client may not even be on the same machine, so there cannot be a
parent/child relationship between client and server processes.

> so presumably if a "goroutine" dies, the associated PG process would
> die too,

A server process will terminate when the connection is closed, but it
may not notice that immediately. Moreover, since Goroutines are handled
by the Go runtime, not the Linux kernel, the connection may not be
closed just because a goroutine terminates (actually, I think the
standard API pretty much guarantuees the existence of a connection
pool).

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: checkpointer and other server processes crashing

2021-02-19 Thread Peter J. Holzer
On 2021-02-15 15:34:30 -0800, Adrian Klaver wrote:
> It probably would not hurt to figure out why this seemed to happen with the
> Python -> Go switch. Even if you can get the OOM not to kick in, the fact
> that it was kicking in would indicate you now have memory hungry processes
> that did not exist before.

Or just more of them. I could imagine that switching from
Python/Gunicorn to Go increased the number of queries that could be
in-flight at the same time.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Slow while inserting and retrieval (compared to SQL Server)

2021-02-19 Thread luis . roberto
> From: "sivapostgres" 
> To: "Benedict Holland" 
> Cc: "pgsql-general" , "Thomas Kellerer" 
> 
> Sent: Wednesday, 17 February 2021 11:09:38
> Subject: Re: Slow while inserting and retrieval (compared to SQL Server)

> So far no performance tuning done for sql server.  It works fine for the load.
> Even the express edition which is free, works better.  I don't think postgres
> will be so slow to insert such a low number of records in an empty database.
> I'll be preparing the required sql statements to insert those records in 
> pgadmin
> and see the timings, tomorrow.
> 
> On Wednesday, 17 February, 2021, 07:29:29 pm IST, Benedict Holland
> wrote:
> 
> Sql server is a 10k dollar to 1 million dollar application (or more) that is
> specifically optimized for windows and has limited to no support anywhere
> else. Postgres is free and from my experience, comes within 5% of any other
> dbms. Inserting 1 row at a time with auto commit on will be a bit slow but it
> shouldn't be noticeable. What times are you seeing if you do this with
> pgadmin4 compared to sql server? Also, have you done any performance tuning
> for postgres server? There are many documents detailing performance tuning
> your servers, like you probably did, at some point, with your sql server.
> Thanks, Ben
> On Wed, Feb 17, 2021, 8:28 AM sivapostg...@yahoo.com 
> wrote:
> 
> We use datawindows.  Datawindows will send the required DML statements to the
> database.  And it sent them in format 1.
>
> At the start of the application, Autocommit is set to True.
> Before updating any table(s), Autocommit is set to False.
> Insert/Update/Delete records; if successful, commit, else rollback.
> Autocommit is set back to True.
>
> This has been followed for decades and it's working fine with Sql server.
> Here we are trying to insert just 10 records spread across 6 tables, which is
> taking more time, or at least that's what we feel.  The similar work in SQL
> Server takes much less time < as if no wait is there >.
> On Wednesday, 17 February, 2021, 06:48:35 pm IST, Thomas Kellerer
> wrote:
> 
> sivapostg...@yahoo.com schrieb am 17.02.2021 um 13:01:
>> To populate some basic data we try to insert few records (max 4
>> records) in few tables (around 6 tables) from one window.  We feel
>> that the insert time taken is longer than the time taken while using
>> Sql Server.  We tested almost a similar window that updated the
>> similar table(s) in SQL server, which was faster.  With Postgres
>> database, we need to wait for a couple of seconds before the
>> insert/update is over, which we didn't feel in Sql Server.
> 
> 
> Are you doing single-row inserts like:
> 
>    insert into ... values (..);
>    insert into ... values (..);
>    insert into ... values (..);
>    insert into ... values (..);
> 
> or are you doing multi-row inserts like this:
> 
>    insert into ... values (..), (..), (..), (..);
> 
> Typically the latter will perform much better (especially if autocommit is
> enabled)

Please provide the EXPLAIN ANALYZE plans so we can take a look at what is 
causing these 'slow' inserts.
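
On the autocommit point above, a sketch with hypothetical tables:
wrapping a batch in one explicit transaction means a single commit (and
a single WAL flush) for the whole group, instead of one per statement:

    BEGIN;
    INSERT INTO t1 (a, b) VALUES (1, 'x'), (2, 'y');
    INSERT INTO t2 (a, b) VALUES (3, 'z'), (4, 'w');
    COMMIT;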




Re: Slow while inserting and retrieval (compared to SQL Server)

2021-02-19 Thread Peter J. Holzer
On 2021-02-17 12:54:18 +, sivapostg...@yahoo.com wrote:
Yes, that's what I feel.  With no records in any tables, working from the
same machine where PG has been installed, with only one user working,
inserting a few records (10 records in total, in all 6 tables) should not
take this much time.

How much time is "this much time"? Are we talking a few milliseconds
here? Less? More? Much more? 

It's hard to give advice if you don't tell us more than "slower than SQL
server". Please be specific. Use actual numbers.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Slow index creation

2021-02-19 Thread Peter J. Holzer
On 2021-02-16 19:30:23 +0100, Paul van der Linden wrote:
> I have 2 functions:
>
> CREATE FUNCTION foo(a text, b text, c text) RETURNS text AS
> $func$
> DECLARE
>     retVal text;
> BEGIN
>     SELECT
>       CASE
>         ... snip long list containing various tests on a,b and c
>       END INTO retval;
>     RETURN retVal;
> END
> $func$ LANGUAGE PLPGSQL IMMUTABLE;
> 
> CREATE FUNCTION bar(r text, geom geometry) RETURNS int AS
> $func$
> DECLARE
>     retVal int;
> BEGIN
>     SELECT
>       CASE
>         WHEN r='r1' AND st_area(geom)>100 THEN 1
>         ... snip long list containing various tests on r and st_area(geom)
>       END INTO retval;
>     RETURN retVal;
> END
> $func$ LANGUAGE PLPGSQL IMMUTABLE;
> 
> and a large table t (100M+ records) with columns a, b, c and geom running on 
> PG
> 11, on spinning disks with 64GB memory and 28 cores.
> 
> When I create a simple geom index with CREATE INDEX ON t USING gist(geom) it
> finishes in about an hour, but when I create a partial index using these 2
> functions
> CREATE INDEX ON t USING gist(geom) WHERE bar(foo(a,b,c),geom)<12 it takes over
> 20 hours...

It has to call these functions for each of those 100M+ lines. So that's
about 0.7 ms per line. Is that to be expected for what those functions
do? I don't know.

As depesz wrote, use a smaller test case, like 1000 rows. You can
test the speed of those functions in isolation. Compare

select a, b, c from t;
select foo(a, b, c) from t;
select bar(foo(a, b, c), geom) from t;

That tells you how much time is spent in foo and bar. If the sum is
close to those 0.7 ms, you know that you need to make those functions
faster.
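
For example, in psql (the sample size is illustrative):

    \timing on
    SELECT a, b, c                 FROM t LIMIT 1000;
    SELECT foo(a, b, c)            FROM t LIMIT 1000;
    SELECT bar(foo(a, b, c), geom) FROM t LIMIT 1000;

The difference between consecutive timings approximates the cost each
function layer adds per 1000 rows.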

I think SQL doesn't short-circuit, so in your function bar, that
st_area(geom) in the first WHEN clause will always be called, regardless
of the value of r. You can either call that once (as depesz suggested)
or you can avoid calling it by nesting the cases:

case
when r = 'r1' then
case
when st_area(geom) > 100 then 1
else 2
end
when r = 'r2' then

end

You can then go one step further:

case
when r like 'r_' then
-- handle r1 ... r9
when r like 'r1_' then
-- handle r10 ... r19
when r like 'r2_' then
-- handle r20 ... r29
end

That reduces the average number of comparisons from 50+ to 8.

Depesz measured an overhead of 0.016ms per log message. That's low
enough that you can probably afford a few messages, even if each
function invocation only takes a few milliseconds. So definitely try
that if you need to know where your functions spend their time.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: how does PostgreSQL determine how many parallel processes to start

2021-02-19 Thread Laurenz Albe
On Fri, 2021-02-19 at 11:21 +0100, Luca Ferrari wrote:
> > At execution time, PostgreSQL will use as many of the planned workers
> > as are currently available (max_parallel_workers).
> 
> Thanks, but just to make it clear: assuming I execute almost
> simultaneously two identical queries that can therefore be
> parallelized, does that mean that the first one will be executed with
> the max available parallel capacity and the second will "starve" on
> parallelism, being executed sequentially? Is this correct?
> As a consequence, this also could mean that a query over a
> small table could take more advantage (in a parallel sense) than a scan
> on a larger table that was issued just a moment after (assuming both
> tables can be scanned in parallel), right?

Precisely.  That is why you have "max_parallel_workers_per_gather"
to limit the number of parallel workers available to a single query.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: how does PostgreSQL determine how many parallel processes to start

2021-02-19 Thread Luca Ferrari
On Fri, Feb 19, 2021 at 10:43 AM Laurenz Albe  wrote:
> At execution time, PostgreSQL will use as many of the planned workers
> as are currently available (max_parallel_workers).

Thanks, but just to make it clear: assuming I execute almost
simultaneously two identical queries that can therefore be
parallelized, does that mean that the first one will be executed with
the max available parallel capacity and the second will "starve" on
parallelism, being executed sequentially? Is this correct?
As a consequence, this also could mean that a query over a
small table could take more advantage (in a parallel sense) than a scan
on a larger table that was issued just a moment after (assuming both
tables can be scanned in parallel), right?

Luca




Re: how does PostgreSQL determine how many parallel processes to start

2021-02-19 Thread Laurenz Albe
On Fri, 2021-02-19 at 10:38 +0100, Luca Ferrari wrote:
> I know that parallel processes can be limited by
> max_parallel_workers_per_gather and max_parallel_workers, as well as
> the threshold to consider a parallel plan being
> min_parallel_table_scan_size (and its index counterpart). But I would
> like to understand, once a table has been considered for a parallel
> plan, and there is room for other workers, how will PostgreSQL decide
> to start another process?

During planning, it will generate parallel and non-parallel plans
and take the one it estimates to be cheapest.

At execution time, PostgreSQL will use as many of the planned workers
as are currently available (max_parallel_workers).
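
A quick way to see this (settings and table name are illustrative):

    SET max_parallel_workers_per_gather = 4;
    EXPLAIN (ANALYZE) SELECT count(*) FROM big_table;

The plan may show "Workers Planned: 4" but, under concurrent load,
"Workers Launched: 2" if other queries already hold the rest of the
max_parallel_workers budget.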

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





how does PostgreSQL determine how many parallel processes to start

2021-02-19 Thread Luca Ferrari
Hi all,
I know that parallel processes can be limited by
max_parallel_workers_per_gather and max_parallel_workers, as well as
the threshold to consider a parallel plan being
min_parallel_table_scan_size (and its index counterpart). But I would
like to understand, once a table has been considered for a parallel
plan, and there is room for other workers, how will PostgreSQL decide
to start another process?

Thanks,
Luca