We've been using pg_dump and pg_restore for many years now and they have
always worked well for us. However, we are currently undertaking a major
DB architecture change to partition our tenant data into separate
Postgres schemas instead of storing all data in the public schema. When
attempting to perform a [...]
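For context, the schema-per-tenant layout being described might look like the sketch below. The tenant and table names are hypothetical illustrations, not taken from Ben's actual database:

```sql
-- Hypothetical sketch: carving one tenant out of the public schema
CREATE SCHEMA tenant_acme;

-- either move an existing table into the tenant's schema ...
ALTER TABLE public.invoices SET SCHEMA tenant_acme;

-- ... or create a per-tenant copy of the table definition:
CREATE TABLE tenant_acme.invoices (LIKE public.invoices INCLUDING ALL);
```

With thousands of tenants, each carrying its own copy of every table, the total relation count multiplies quickly, which is what makes this dump workload unusual.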
* Ben (ben.fy...@champsoftware.com) wrote:
is killed off (6GB+ used by a single postmaster process). Here are the
[...]
Total number of relations across all schemas: 53,154
[...]
I should also mention that when performing these dumps there is
absolutely no other DB activity occurring. Do you [...]
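A relation count like the 53,154 figure quoted above can be reproduced with a catalog query along these lines (a sketch, to be run with psql against the database in question; note that pg_class counts tables, indexes, sequences, and views alike):

```sql
-- Count user relations across all non-system schemas
SELECT count(*)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND n.nspname NOT LIKE 'pg_toast%'
  AND n.nspname NOT LIKE 'pg_temp%';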
From: ...@postgresql.org On Behalf Of Stephen Frost
Sent: Monday, October 21, 2013 10:37 AM
To: Ben
Cc: 'PG-General Mailing List'
Subject: Re: [GENERAL] pg_dump resulting in excessive memory use by postmaster process
Ben,
* Ben (ben.fy...@champsoftware.com) wrote:
When you say self-contained test case, what is it exactly that you're
looking for? A script that builds out a DB with hundreds of
schemas/relations, a pg_basebackup or something else?
Ideally, an SQL script that builds the DB and then a pg_dump
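A generator for the kind of self-contained test case Stephen is asking for could be sketched as follows. It emits an SQL script that builds many schemas, each with several tables, approximating the schema-per-tenant layout; the counts (100 schemas x 10 tables) and file name are placeholders to be scaled up toward the ~53k relations reported above:

```shell
#!/bin/sh
# Sketch: generate an SQL script that builds a DB with many
# schemas/relations, then dump it while watching backend memory.
SCHEMAS=100   # placeholder; raise toward the real schema count
TABLES=10     # placeholder; tables per tenant schema
OUT=build_testcase.sql

: > "$OUT"
s=1
while [ "$s" -le "$SCHEMAS" ]; do
  echo "CREATE SCHEMA tenant_$s;" >> "$OUT"
  t=1
  while [ "$t" -le "$TABLES" ]; do
    echo "CREATE TABLE tenant_$s.tbl_$t (id serial PRIMARY KEY, payload text);" >> "$OUT"
    t=$((t + 1))
  done
  s=$((s + 1))
done
echo "Wrote $OUT with $((SCHEMAS * TABLES)) CREATE TABLE statements"

# Then, against a scratch database (not run here):
#   psql -d testdb -f build_testcase.sql
#   pg_dump -Fc -d testdb > /dev/null   # observe the backend's memory while this runs
```

Running the generated script against a scratch database and then invoking pg_dump should show whether the memory growth tracks the relation count alone, independent of Ben's actual data.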