> We use the built-in pgoutput plugin, and a client application did a HOT
> update to a column whose data type is "text" and whose real data length
> is 756711422 bytes. Logical decoding then threw an out-of-memory error
> while decoding the WAL records belonging to that table, the string
> buffer's total size having exceeded 1GB. But this table is NOT on the
> publication list.
Hi Pang,
The text column is exceptionally large, and the server process evidently
ran out of memory while handling the update to it. I would suggest storing
values that large in external storage such as an S3 bucket, and also
consider increasing the memory-related configuration parameters, such as
work_mem and maintenance_work_mem.
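For reference, a rough sketch of raising those settings with ALTER SYSTEM
(the values are illustrative only); logical_decoding_work_mem, not mentioned
above, is the setting that specifically bounds how much memory logical
decoding may use before spilling changes to disk:

    -- Illustrative values only; size them to the memory actually available.
    ALTER SYSTEM SET work_mem = '256MB';
    ALTER SYSTEM SET maintenance_work_mem = '1GB';
    -- Limits memory used by logical decoding before it spills to disk.
    ALTER SYSTEM SET logical_decoding_work_mem = '512MB';
    SELECT pg_reload_conf();

Note that the 1GB string-buffer limit visible in the error below is a hard
limit in the server and is not raised by any of these settings.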
We use the built-in pgoutput plugin, and a client application did a HOT
update to a column whose data type is "text"; the real data length is
756711422 bytes. But this table is NOT on the publication list. Is it
possible to make logical decoding ignore WAL records that belong to tables
that are not on the publication list? Or we h
2024-07-31 00:01:02.795 UTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[10-1]:pgreps_13801ERROR:  out of memory
2024-07-31 00:01:02.795 UTC:10.240.6.139(33068):repl13801@pgpodb:[3603770]:[11-1]:pgreps_13801DETAIL:  Cannot enlarge string buffer containing 378355896 bytes by 756711422 more bytes.
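Since the question hinges on the table not being in the publication, here is
a quick way to double-check what a publication actually covers; the
publication and table names below are placeholders:

    -- List the tables a given publication includes.
    SELECT schemaname, tablename
    FROM pg_publication_tables
    WHERE pubname = 'mypub';

    -- A publication can also be declared for specific tables only,
    -- rather than FOR ALL TABLES.
    CREATE PUBLICATION mypub FOR TABLE public.orders, public.customers;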
On Tue, Jul 30, 2024 at 11:34 AM Andrei Lepikhov wrote:
>
> Thanks for the report. I see such cases frequently enough, and the key
> problem here is data skew, as you already mentioned. Extended statistics
> don't help here. Also, because we can't estimate specific values
> coming from the outer Nest
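For readers who have not used them, extended statistics (which the message
above says do not help in this case) are declared roughly as follows; the
table and column names are invented for illustration:

    -- Hypothetical table and columns: teaches the planner about
    -- cross-column correlation and combined distinct counts.
    CREATE STATISTICS memberships_stats (dependencies, ndistinct)
        ON group_id, user_id FROM memberships;
    ANALYZE memberships;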
On 29/7/2024 22:51, Jon Zeppieri wrote:
> Of course, I'd prefer not to have to materialize this relation
> explicitly. This particular query, for this particular user, benefits
> from it, but similar queries or queries for different users may not.
> I think the root of the problem is that the population size
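For context, "materializing this relation explicitly" can be done with a
MATERIALIZED common table expression, roughly as below; the tables and
columns are invented for illustration:

    -- MATERIALIZED forces the CTE to be computed once and acts as an
    -- optimization fence, instead of being inlined into the outer query.
    WITH member_ids AS MATERIALIZED (
        SELECT user_id
        FROM memberships
        WHERE group_id = 42
    )
    SELECT e.*
    FROM events e
    JOIN member_ids m USING (user_id);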