Hi Evan,

On Fri, Sep 26, 2025 at 02:34:58PM +0800, Chao Li wrote:
> Hi Bertrand,
> 
> Thanks for the patch. The patch overall looks good to me. Just a few small 
> comments:

Thanks for looking at it!

> > On Sep 25, 2025, at 18:17, Bertrand Drouvot <[email protected]> 
> > wrote:
> > 
> > <v5-0001-Add-mem_exceeded_count-to-pg_stat_replication_slo.patch>
> 
> 
> 1.
> ```
> --- a/src/include/replication/reorderbuffer.h
> +++ b/src/include/replication/reorderbuffer.h
> @@ -690,6 +690,9 @@ struct ReorderBuffer
>       int64           streamCount;    /* streaming invocation counter */
>       int64           streamBytes;    /* amount of data decoded */
>  
> +     /* Number of times logical_decoding_work_mem has been reached */
> +     int64           memExceededCount;
> ```
> 
> For other metrics, they are commented with “Statistics about xxx” above, and 
> a line comment after every metric. Maybe use the same style, so that it would 
> be easy to add new metrics in the future.

I'm not sure: for the moment we have only this one stat related to
logical_decoding_work_mem memory usage. If we add other stats in this area
later, we could add a comment "section" as you suggest.

> 2.
> ```
> --- a/src/backend/utils/adt/pgstatfuncs.c
> +++ b/src/backend/utils/adt/pgstatfuncs.c
> @@ -2100,7 +2100,7 @@ pg_stat_get_archiver(PG_FUNCTION_ARGS)
>  Datum
>  pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
>  {
> -#define PG_STAT_GET_REPLICATION_SLOT_COLS 10
> +#define PG_STAT_GET_REPLICATION_SLOT_COLS 11
>       text       *slotname_text = PG_GETARG_TEXT_P(0);
>       NameData        slotname;
>       TupleDesc       tupdesc;
> @@ -2125,11 +2125,13 @@ pg_stat_get_replication_slot(PG_FUNCTION_ARGS)
>                                          INT8OID, -1, 0);
>       TupleDescInitEntry(tupdesc, (AttrNumber) 7, "stream_bytes",
>                                          INT8OID, -1, 0);
> -     TupleDescInitEntry(tupdesc, (AttrNumber) 8, "total_txns",
> +     TupleDescInitEntry(tupdesc, (AttrNumber) 8, "mem_exceeded_count",
>                                          INT8OID, -1, 0);
> -     TupleDescInitEntry(tupdesc, (AttrNumber) 9, "total_bytes",
> +     TupleDescInitEntry(tupdesc, (AttrNumber) 9, "total_txns",
>                                          INT8OID, -1, 0);
> -     TupleDescInitEntry(tupdesc, (AttrNumber) 10, "stats_reset",
> +     TupleDescInitEntry(tupdesc, (AttrNumber) 10, "total_bytes",
> +                                        INT8OID, -1, 0);
> +     TupleDescInitEntry(tupdesc, (AttrNumber) 11, "stats_reset",
>                                          TIMESTAMPTZOID, -1, 0);
>       BlessTupleDesc(tupdesc);
> ```
> 
> Is it better to add the new field in the last place?
> 
> Say if a client does “select * from pg_stat_get_replication_slot()”, it will 
> just get an extra column instead of mis-ordered columns.

I think it's good to have the function's field ordering match the view's field
ordering. FWIW, the ordering has been discussed in [1].

> 3.
> ```
> +       <para>
> +        Number of times the memory used by logical decoding has exceeded
> +        <literal>logical_decoding_work_mem</literal>.
> +       </para>
> ```
> 
> Feels like “has” is not needed.

It's already done that way in other parts of the documentation:

$ git grep "has exceeded" "*sgml"
doc/src/sgml/maintenance.sgml:    vacuum has exceeded the defined insert threshold, which is defined as:
doc/src/sgml/monitoring.sgml:        logical decoding to decode changes from WAL has exceeded
doc/src/sgml/monitoring.sgml:        from WAL for this slot has exceeded
doc/src/sgml/monitoring.sgml:        Number of times the memory used by logical decoding has exceeded
doc/src/sgml/ref/create_subscription.sgml:          retention duration has exceeded the

So that looks ok to me (I'm not a native English speaker though).

[1]: 
https://www.postgresql.org/message-id/CAJpy0uBskXMq65rvWm8a-KR7cSb_sZH9CPRCnWAQrTOF5fciGw%40mail.gmail.com

Regards,

-- 
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
