Thanks Tom,

> How many is "a large number"?

377k approx.

I thought we had logged all the SQL statements the last time we ran this, but we 
forgot to set log_statement. We'll do that next time so we can get a figure on 
exactly how many statements of each type are in the transaction.
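Roughly what we're planning for the next run (assuming we can do it as a superuser; the 'all' vs 'ddl' choice is still open):

```sql
-- Capture every statement in the server log for the next migration run.
-- 'ddl' would log only CREATE/ALTER/DROP; 'all' logs everything, which is
-- what we'd want for counting statements per type afterwards.
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
```

We'd then count statement types from the log after the run and reset the setting.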

> Can you adjust your migration script to break it into multiple transactions 
> that process fewer tables per transaction?

We are considering this. It is not ideal, because if the conversion fails for 
any reason we would like the database to roll back to the pre-migration state, 
but maybe that is not practical with a database of this size.
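If we do go that way, the rough shape would be one transaction per batch of tables, something like the sketch below (table names are placeholders, not our real schema):

```sql
-- One transaction per batch of tables. A failure now only rolls back the
-- current batch, so we would need our own bookkeeping to know which batches
-- completed, and a way to resume (or manually reverse) from that point.
BEGIN;
-- convert the first batch, e.g.:
-- ALTER TABLE meas_2020 ...;  -- placeholder conversion steps
COMMIT;

BEGIN;
-- convert the next batch of tables
COMMIT;
```

The downside is exactly the one above: a mid-migration failure leaves the database in a half-converted state instead of rolling back cleanly.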

I am surprised by the error message though; if we had hit some limit on the 
transaction I would have expected a more specific error than a raw allocation 
failure.
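Incidentally, the request size in the error is exactly 2^30 bytes (1 GiB), which does look more like an internal allocation ceiling than a random number; a quick sanity check in psql:

```sql
-- 1073741824 = 2^30 bytes = exactly 1 GiB, suspiciously round for an
-- "invalid memory alloc request size" value.
SELECT 1073741824 = 1024 * 1024 * 1024 AS is_exactly_1gib,
       pg_size_pretty(1073741824::bigint) AS pretty_size;
```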

Cheers
Mike

-----Original Message-----
From: Tom Lane <t...@sss.pgh.pa.us> 
Sent: Thursday, August 12, 2021 1:41 PM
To: Michael Harris <michael.har...@ericsson.com>
Cc: 'pgsql-general@lists.postgresql.org' <pgsql-general@lists.postgresql.org>
Subject: Re: ERROR: invalid memory alloc request size when committing 
transaction

Michael Harris <michael.har...@ericsson.com> writes:
>   - Next we run a migration script on each database in the cluster. The 
>     migration script converts a large number of tables from inheritance 
>     based partitioning to declarative partitioning. Unfortunately I am not 
>     at liberty to share the migration script.

How many is "a large number"?

> The migration is performed inside a transaction, and during the 
> process of committing the transaction the following error is thrown:
> [2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 ERROR:  invalid memory alloc request size 1073741824
> [2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 STATEMENT:  commit
> [2021-08-11 11:27:50 CEST] aue_75@218006 218015@[local] db_vrqv1 WARNING:  AbortTransaction while in COMMIT state

It's impossible to do more than speculate with this amount of information.
However, a plausible speculation is that you're processing so many objects in 
one transaction as to overrun the limit on how large a WAL commit record, or 
something related to that, can be.  Can you adjust your migration script to 
break it into multiple transactions that process fewer tables per transaction?

                        regards, tom lane
