Hi,

On 2018-12-16 12:06:16 -0300, Alvaro Herrera wrote:
> Found this on Postgres 9.6, but I think it affects back to 9.4.
> 
> I've seen a case where reorderbuffer keeps very large amounts of memory
> in use, without spilling to disk, if the main transaction makes few or
> no changes while many subtransactions each make changes just below the
> threshold for spilling to disk.
> 
> The particular case we've seen is that the main transaction does one
> UPDATE, and then each of many subtransactions makes somewhere between
> 300 and 4000 changes.  Since each subxact stays below
> max_changes_in_memory, nothing gets spilled to disk.  (To make matters
> worse: even if some subxacts do exceed max_changes_in_memory, only
> those subxacts are spilled, not the whole transaction.)  This caused a
> machine with 16 GB of RAM to die, unable to process the long
> transaction; we had to add another 16 GB of physical RAM before it
> could get through.
> 
> I think there's a one-line fix, attached: just add the subxact's number
> of changes to the parent's nentries_mem when the subtransaction is
> assigned to the parent.
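
For context, the spill check is purely per-(sub)transaction; roughly
(paraphrasing 9.6's ReorderBufferCheckSerializeTXN(), so details may
differ across branches):

        static void
        ReorderBufferCheckSerializeTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
        {
                /* Only this TXN's own counter is consulted, so a parent
                 * with many just-under-threshold subxacts never trips it. */
                if (txn->nentries_mem >= max_changes_in_memory)
                {
                        ReorderBufferSerializeTXN(rb, txn);
                        Assert(txn->nentries_mem == 0);
                }
        }

The fix would then presumably amount to something like
txn->nentries_mem += subtxn->nentries_mem in ReorderBufferAssignChild()
(a guess at the patch's shape; sketch only, not the attachment itself).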

Isn't this going to cause significant breakage, because we rely on
nentries_mem being an accurate count of the changes currently in this
TXN's in-memory list?

        /* try to load changes from disk */
        if (entry->txn->nentries != entry->txn->nentries_mem)
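
That check is in ReorderBufferIterTXNNext(): a mismatch between the two
counters is taken to mean that part of this TXN's changes were spilled
and must be restored from disk before iteration can continue.  In
context, roughly (again paraphrasing 9.6, with the surrounding
bookkeeping elided):

        /* try to load changes from disk */
        if (entry->txn->nentries != entry->txn->nentries_mem)
        {
                /* some of this TXN's changes were spilled; pull the
                 * next batch back into memory from its spill file */
                ReorderBufferRestoreChanges(rb, entry->txn, &entry->fd,
                                            &state->entries[off].segno);
        }

If nentries_mem also counts changes sitting on the subxacts' own change
lists, that signal no longer matches what is actually in this TXN's
in-memory list, so the iterator could try to restore changes that were
never spilled.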


Greetings,

Andres Freund
