Hi Pang

The text column is exceptionally large, so the server ran out of memory while
handling that large text column update.

I suggest storing such large values in an S3 bucket instead of the database.
You could also consider increasing the memory-related configuration
parameters, such as work_mem and maintenance_work_mem, or even the server's
overall memory allocation if possible.

Or increase shared_buffers. For example:
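
A minimal sketch of what I mean (the values below are only placeholders; size
them to the RAM you actually have, and note that shared_buffers only takes
effect after a server restart):

    -- check the current settings
    SHOW work_mem;
    SHOW maintenance_work_mem;
    SHOW shared_buffers;

    -- raise them (example values only)
    ALTER SYSTEM SET work_mem = '256MB';
    ALTER SYSTEM SET maintenance_work_mem = '1GB';
    ALTER SYSTEM SET shared_buffers = '8GB';
    SELECT pg_reload_conf();  -- shared_buffers still needs a restart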

If none of that works, you could fall back to physical replication instead. 😄

Lastly, let me know the data type you are using for this column.



On Wed, Jul 31, 2024 at 7:18 AM James Pang <jamespang...@gmail.com> wrote:

> 2024-07-31 00:01:02.795 
> UTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[10-1]:pgreps_13801ERROR:
>  out of memory
> 2024-07-31 00:01:02.795 
> UTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[11-1]:pgreps_13801DETAIL:
>  Cannot enlarge string buffer containing 378355896 bytes by 756711422 more
> bytes.
> 2024-07-31 00:01:02.795 
> UTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[12-1]:pgreps_13801CONTEXT:
>  slot "pgreps_13801", output plugin "pgoutput", in the change callback,
> associated LSN 3D/318438E0
> 2024-07-31 00:01:02.795 
> UTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[13-1]:pgreps_13801STATEMENT:
>  START_REPLICATION SLOT "pgreps_13801" LOGICAL 3C/F24C74D0 (proto_version
> '1', publication_names 'pgreps_13801')
>
> We use built-in pgoutput and a client application did an HOT update to a
> column , that data type is "text" and real length is 756711422.  But this
> table is NOT on publication list, possible to make logical decoding ignore
> "WAL records belong to tables that's not in publication list" ?
>
> Thanks,
>
> James
>
