On 1/21/26 08:59, Adrian Klaver wrote:
On 1/21/26 08:12, Colin 't Hart wrote:

6. The 19 slowest queries in a 4 hour period are between 2 and 37 minutes, with an average of over 10 minutes; they are all `fetch 100 from c2`.

The slowness itself isn't my question here; it was caused by having too few cores in the new environment, while the application was still assuming the higher core count and generating too many concurrent processes.

My question is how to identify which connections / queries from postgres_fdw are generating the `fetch 100 from c2` queries, which, in turn, may quite possibly lead to a feature request for having these named uniquely.

My guess is not.

See:

https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/postgres_fdw.c

Starting at line ~5212

fetch_size = 100;

and ending at line ~5234

/* Construct command to fetch rows from remote. */
snprintf(fetch_sql, sizeof(fetch_sql), "FETCH %d FROM c%u",
         fetch_size, cursor_number);

So c2 is a cursor number.

If I am following this correctly, it is something postgres_fdw does to fetch the result in batches, so all the queries it sends will use these cursor-based FETCH commands.
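One way to tell the incoming postgres_fdw sessions apart on the remote side, assuming PG 14 or later (where postgres_fdw gained its own application_name support; the escape sequences need PG 15+), is roughly the following sketch. The server name remote_srv is hypothetical:

```sql
-- On the local (fdw-using) server, tag outgoing postgres_fdw connections.
-- Either per foreign server (use SET instead of ADD if already set):
ALTER SERVER remote_srv OPTIONS (ADD application_name 'fdw_from_app1');

-- Or globally via the GUC; PG 15+ supports escapes such as
-- %u (local user), %d (local database), %p (local backend pid):
SET postgres_fdw.application_name TO 'fdw %u@%d pid=%p';

-- On the remote side, the "FETCH 100 FROM c2" queries then show up
-- with that application_name in pg_stat_activity:
SELECT pid, application_name, query
FROM pg_stat_activity
WHERE query LIKE 'FETCH%';
```

That would let you map each `fetch 100 from c2` back to the local connection that issued it without touching the log settings.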

FYI, the fetch_size can be changed, see here:

https://www.postgresql.org/docs/17/postgres-fdw.html#POSTGRES-FDW-CONFIGURATION-PARAMETERS

F.36.1.4. Remote Execution Options
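For example, fetch_size can be raised either per foreign server or per foreign table; the names below are hypothetical (use SET instead of ADD if the option was already set):

```sql
-- Raise the batch size for a whole foreign server:
ALTER SERVER remote_srv OPTIONS (ADD fetch_size '1000');

-- Or override it for a single foreign table:
ALTER FOREIGN TABLE big_tbl OPTIONS (ADD fetch_size '10000');
```

A larger fetch_size means fewer round trips per query at the cost of more memory per batch.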


If you want connection/query information I would enable from here:

https://www.postgresql.org/docs/17/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT

log_connections

log_disconnections

And at least temporarily:

log_statement = 'all'

The above will generate a lot of log output, so you don't want to keep it set for too long.
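Concretely, the temporary settings might look like this in postgresql.conf on the remote server; the log_line_prefix addition is my suggestion, since %a puts the application_name into each log line:

```
# postgresql.conf on the remote server -- temporary, verbose settings
log_connections = on
log_disconnections = on
log_statement = 'all'                  # logs every statement, including the FETCHes
log_line_prefix = '%m [%p] %a %u@%d '  # %a = application_name of the client
```

Reload with SELECT pg_reload_conf(); and remember to revert log_statement once you have what you need.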




Thanks,

Colin






--
Adrian Klaver
[email protected]

