Hi,

On Thu, Nov 14, 2024 at 5:18 PM Alvaro Herrera <alvhe...@alvh.no-ip.org>
wrote:

> On 2024-Nov-14, Michael Paquier wrote:
>
> > Already mentioned previously at [1] and echoing some surrounding
> > arguments, but I'd suggest keeping it simple and just removing
> > entirely the part of the patch where the stats information gets
> > spilled to disk.  With information for 6000-ish contexts available
> > under a hard limit, there should be plenty enough to know what's
> > going on anyway.
>
> Functionality-wise I don't necessarily agree with _removing_ the spill
> code, considering that production systems with thousands of tables would
> easily reach that number of contexts (each index gets its own index info
> context, each regexp gets its own memcxt); and I don't think silently
> omitting a fraction of people's memory situation (or erroring out if the
> case is hit) is going to make us any friends.
>
>
While I agree that removing the spill-to-file logic would simplify the code,
I also understand the rationale for retaining it for the sake of completeness.
One way to achieve completeness without writing to a file would be to display
the numbers for the remaining contexts as a single cumulative total at the
end of the output (a rough sketch of the accumulation logic follows the
example output below).

Something like the following:
```
postgres=# select * from pg_get_process_memory_contexts('237244', false);
             name             | ident |   type   | path  | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes |  pid
------------------------------+-------+----------+-------+-------------+---------------+------------+-------------+------------+--------
 TopMemoryContext             |       | AllocSet | {0}   |       97696 |             5 |      14288 |          11 |      83408 | 237244
 search_path processing cache |       | AllocSet | {0,1} |        8192 |             1 |       5328 |           7 |       2864 | 237244

Remaining contexts total: 23456 bytes (total_bytes), 12345 bytes (used_bytes), 11111 bytes (free_bytes)
```
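To make the idea concrete, here is a minimal, self-contained C sketch of the
accumulation step. This is not the actual patch code: `MemoryContextStat`,
`MAX_REPORTED_CONTEXTS`, and `report_context_stats` are hypothetical names
used only for illustration. Contexts up to a fixed cap are reported as
individual rows, and everything past the cap is folded into one summary row:

```c
#include <stdio.h>
#include <stddef.h>

#define MAX_REPORTED_CONTEXTS 2	/* hypothetical cap; the real one would be ~6000 */

/* Hypothetical per-context stats record */
typedef struct MemoryContextStat
{
	const char *name;
	size_t		total_bytes;
	size_t		used_bytes;
	size_t		free_bytes;
} MemoryContextStat;

static void
report_context_stats(const MemoryContextStat *stats, size_t nstats)
{
	MemoryContextStat remainder = {"Remaining contexts total", 0, 0, 0};

	for (size_t i = 0; i < nstats; i++)
	{
		if (i < MAX_REPORTED_CONTEXTS)
		{
			/* Report this context as its own row */
			printf("%-30s %10zu %10zu %10zu\n",
				   stats[i].name, stats[i].total_bytes,
				   stats[i].used_bytes, stats[i].free_bytes);
		}
		else
		{
			/* Fold everything past the cap into one cumulative row */
			remainder.total_bytes += stats[i].total_bytes;
			remainder.used_bytes += stats[i].used_bytes;
			remainder.free_bytes += stats[i].free_bytes;
		}
	}

	if (nstats > MAX_REPORTED_CONTEXTS)
		printf("%-30s %10zu %10zu %10zu\n",
			   remainder.name, remainder.total_bytes,
			   remainder.used_bytes, remainder.free_bytes);
}

int
main(void)
{
	MemoryContextStat stats[] = {
		{"TopMemoryContext", 97696, 83408, 14288},
		{"search_path processing cache", 8192, 2864, 5328},
		{"CacheMemoryContext", 16384, 12000, 4384},
		{"MessageContext", 8192, 3000, 5192},
	};

	report_context_stats(stats, sizeof(stats) / sizeof(stats[0]));
	return 0;
}
```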


> That said, it worries me that we choose a shared memory size so large
> that it becomes impractical to hit the spill-to-disk code in regression
> testing.  Maybe we can choose a much smaller limit size when
> USE_ASSERT_CHECKING is enabled, and use a test that hits that number?
> That way, we know the code is being hit and tested, without imposing a
> huge memory consumption on test machines.
>

Makes sense. I will look into writing such a test if we settle on the
spill-to-file approach.
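
For reference, the compile-time cap Alvaro describes could look roughly like
the following. This is only a sketch: `MEMCXT_STATS_MAX_CONTEXTS` is a
made-up name, not something from the patch, though `USE_ASSERT_CHECKING` is
the real PostgreSQL assert-build macro:

```c
/*
 * Sketch of a compile-time cap: assert-enabled builds get a much
 * smaller limit so that a regression test can exercise the
 * spill-to-disk path without imposing a huge memory footprint
 * on test machines.
 */
#ifdef USE_ASSERT_CHECKING
#define MEMCXT_STATS_MAX_CONTEXTS 16
#else
#define MEMCXT_STATS_MAX_CONTEXTS 6000
#endif
```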

Please find attached a rebased and updated patch with a basic test
and some fixes. Kindly let me know your thoughts.

Thank you,
Rahila Syed

Attachment: v3-0001-Function-to-report-memory-context-stats-of-any-backe.patch