On 2025-02-03 21:47, Rahila Syed wrote:
Hi,


Just an idea: as another option, how about blocking new requests to the
target process (e.g., causing them to fail with an error or returning
NULL with a warning) if a previous request is still pending? Users can
simply retry the request if it fails. IMO failing quickly seems
preferable to getting stuck for a while in cases with concurrent
requests.

Thank you for the suggestion. I agree that it is better to fail early
and avoid waiting for a timeout in such cases. I will add a
"pending request" tracker for this in shared memory. This approach
will help prevent sending a concurrent request if a request for the
same backend is still being processed.


Please find attached a patch that adds a request_pending field in
shared memory. This allows us to detect concurrent requests early
and return a WARNING message immediately, avoiding unnecessary
waiting and potential timeouts. This is added in v12-0002* patch.

Thanks for updating the patch!
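
Before the detailed comments, just to confirm I'm reading the new
early-return path correctly, I imagine it works roughly like the
following (the field and message names here are only illustrative, and
the actual return path depends on the function's signature):

    LWLockAcquire(&memCtxState[procNumber].lw_lock, LW_EXCLUSIVE);
    if (memCtxState[procNumber].request_pending)
    {
        /* another request for this backend is still in progress: fail fast */
        LWLockRelease(&memCtxState[procNumber].lw_lock);
        ereport(WARNING,
                (errmsg("a memory context statistics request is already pending for PID %d",
                        pid)));
        PG_RETURN_NULL();
    }
    memCtxState[procNumber].request_pending = true;
    LWLockRelease(&memCtxState[procNumber].lw_lock);
    /* ... then signal the target process and wait for it to publish stats ... */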

The comments below may be a bit too detailed at this stage, but I'd like to share the points I noticed.


76 + arguments: PID and a boolean, get_summary. The function can send

Since get_summary is a parameter, should we enclose it in <parameter> tags, like <parameter>get_summary</parameter>?
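
For instance, the sentence could read (just to show the markup I mean):

   arguments: PID and a boolean, <parameter>get_summary</parameter>. The function can send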

387 + * The shared memory buffer has a limited size - it the process has too many
388 + * memory contexts,

Should 'it' be 'if'?

320  * By default, only superusers are allowed to signal to return the memory
321  * contexts because allowing any users to issue this request at an unbounded
322  * rate would cause lots of requests to be sent and which can lead to denial of
323  * service. Additional roles can be permitted with GRANT.

This comment seems to contradict the following code, which checks for pg_read_all_stats membership rather than relying on superuser checks and GRANT:

360      * Only superusers or users with pg_read_all_stats privileges can view the
361      * memory context statistics of another process
362      */
363     if (!has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS))
364         ereport(ERROR,
365                 (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
366                  errmsg("memory context statistics privilege error")));


485 +   if (memCtxState[procNumber].memstats_dsa_handle == DSA_HANDLE_INVALID)
486 +   {
487 +
488 +       LWLockRelease(&memCtxState[procNumber].lw_lock);

505 +   else
506 +   {
507 +       LWLockRelease(&memCtxState[procNumber].lw_lock);

The LWLockRelease() function appears in both the if and else branches. Can we move it outside the conditional block to avoid duplication?
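
For example, something like this (assuming neither branch needs the
lock after these points):

    if (memCtxState[procNumber].memstats_dsa_handle == DSA_HANDLE_INVALID)
    {
        /* ... handle the case where no statistics were published ... */
    }
    else
    {
        /* ... read the published statistics ... */
    }
    LWLockRelease(&memCtxState[procNumber].lw_lock);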

 > 486 +   {
 > 487 +
 > 488 +       LWLockRelease(&memCtxState[procNumber].lw_lock);

The blank line at 487 seems unnecessary. Should we remove it?

534         {
535             ereport(LOG,
536                     (errmsg("Wait for %d process to publish stats timed out, trying again",
537                             pid)));
538             if (num_retries > MAX_RETRIES)
539                 goto end;
540             num_retries = num_retries + 1;
541         }

If the target process remains unresponsive, the logs will repeatedly show:

  LOG:  Wait for xxxx process to publish stats timed out, trying again
  LOG:  Wait for xxxx process to publish stats timed out, trying again
  ...
  LOG:  Wait for xxxx process to publish stats timed out, trying again

However, the final log message is misleading because it does not actually try again. Should we adjust the last log message to reflect the correct behavior?
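
For example, something along these lines (only a sketch; the message
wording is mine):

    if (num_retries > MAX_RETRIES)
    {
        ereport(LOG,
                (errmsg("wait for process %d to publish stats timed out, giving up",
                        pid)));
        goto end;
    }
    ereport(LOG,
            (errmsg("wait for process %d to publish stats timed out, trying again",
                    pid)));
    num_retries = num_retries + 1;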

541         }
542
543     }

The blank line at 542 seems unnecessary. Should we remove it?

874 + context_id_lookup = hash_create("pg_get_remote_backend_memory_contexts",

Should 'pg_get_remote_backend_memory_contexts' be renamed to 'pg_get_process_memory_contexts' now?


899 + * Allocate memory in this process's dsa for storing statistics of the the

'the the' is a duplicate.


--
Regards,

--
Atsushi Torikoshi
Seconded from NTT DATA GROUP CORPORATION to SRA OSS K.K.

