On 09/19/2016 01:55 PM, kbran...@pwhome.com wrote:
On 09/19/2016 01:06 PM, kbrannen(at)pwhome(dot)com wrote:
--- adrian(dot)klaver(at)aklaver(dot)com wrote:

From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: kbrannen(at)pwhome(dot)com, pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] pg_restore question
Date: Mon, 19 Sep 2016 12:46:24 -0700

On 09/19/2016 11:46 AM, kbrannen(at)pwhome(dot)com wrote:

But if I dump using:
    pg_dump --format=d -j4 --file=/tmp/exp   # (an empty dir)
then the restore, with the schema still there and relying on --clean to drop it:
    pg_restore --dbname=nms --clean --create --schema=public .
will fail with:

    pg_restore: [archiver (db)] Error while PROCESSING TOC:
    pg_restore: [archiver (db)] Error from TOC entry 2398; 1247 147632 TYPE app_kinds nmsroot
    pg_restore: [archiver (db)] could not execute query: ERROR:  type "app_kinds" already exists
        Command was: CREATE TYPE app_kinds AS ENUM (

Second, did it actually fail or did it just throw the error and keep on going?

So I changed my process to:

    # create backup just in case
    echo "alter schema public rename to save; create schema public;" | psql
    pg_restore --dbname=nms --schema=public -j3 .

It still shows all the stuff below (from the original email) and a lot more, ending with:

    WARNING: errors ignored on restore: 18
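On Adrian's question of whether it actually failed or kept going: by default pg_restore continues past SQL errors and just prints that summary count at the end; --exit-on-error makes it stop at the first one instead. A sketch of the two invocations (database and directory names are the ones used above); they are assembled as strings rather than run, since they need the live database:

```shell
# Default: continue past errors and summarize at the end
# ("WARNING: errors ignored on restore: N").
TOLERANT="pg_restore --dbname=nms --schema=public -j3 /tmp/exp"

# --exit-on-error (-e): stop at the first SQL error instead.
STRICT="pg_restore --dbname=nms --schema=public --exit-on-error /tmp/exp"

echo "$TOLERANT"
echo "$STRICT"
```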

Meant to add to the previous post. If any of the errors are of the 'object
does not exist' variety, you can get rid of them using:


    --if-exists

    Use conditional commands (i.e. add an IF EXISTS clause) when
cleaning database objects. This option is not valid unless --clean is
also specified.
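Applied to the restore above, that looks like the following (again only assembled as a string, since it needs the live nms database):

```shell
# --clean emits DROP commands before recreating each object; --if-exists
# turns them into DROP ... IF EXISTS, so missing objects produce a notice
# instead of an error. --if-exists is only valid together with --clean.
CMD="pg_restore --dbname=nms --clean --if-exists --schema=public /tmp/exp"
echo "$CMD"
```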

By doing it one command at a time with lots of compares, I think I'm starting to
understand what's going on.

Documenting this for those who are curious and to help others in the future...

Back to the part where I have the data spread over 2 schemas... Note that I'm
trying to back up only one at a time. The reason for doing this is that the
important data is in the public schema, while the other schema is for logging
data and will be hundreds of times bigger (we do back this up, but not as often,
because losing a little is not fatal).
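That two-cadence backup can be sketched with pg_dump's --schema switch. A hypothetical example: the logging schema's real name isn't given in this thread, so "logging" and the target directories are assumptions, and the commands are only assembled as strings here:

```shell
# Frequent: dump only the small, important public schema.
PUBLIC_DUMP="pg_dump --format=d -j4 --schema=public --file=/tmp/exp nms"

# Infrequent: dump the large logging schema on its own schedule.
LOGGING_DUMP="pg_dump --format=d -j4 --schema=logging --file=/tmp/exp_logging nms"

echo "$PUBLIC_DUMP"
echo "$LOGGING_DUMP"
```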

Anyway, when I change the public schema to a new name, the tables in the
logging schema that use types from public automatically change, e.g. from
public.call_types to savepublic.call_types. That also means that if I were to do
"drop schema public cascade", then the logging table definitions get changed as
the column is dropped. Yikes! (Sadly, I should have realized this before, but ...)
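That dependency is easy to reproduce in miniature. A hypothetical setup (only the call_types name comes from this thread; the enum values and the logging table/column names are invented), held in a string rather than executed:

```shell
# A type in public used by a table in logging. Renaming public drags the
# column's type reference along (it becomes savepublic.call_types), and
# DROP SCHEMA public CASCADE would drop the dependent column too.
DEMO_SQL="CREATE TYPE public.call_types AS ENUM ('inbound', 'outbound');
CREATE SCHEMA logging;
CREATE TABLE logging.call_log (id bigint, kind public.call_types);
ALTER SCHEMA public RENAME TO savepublic;"

echo "$DEMO_SQL"
```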

But that's why I'm getting the errors: when I try to restore the public
schema, our custom types already exist.

To your suggestion of adding "--if-exists --clean": that does remove 4 errors,
leaving me with 14 that are true errors.

My takeaway from this is that if we have schemas that are dependent on each
other, we can't backup/restore just one. That means we must break the dependency
or else back them both up all the time (not desirable to do frequently
because of size). Sigh...

I'll go back and look at what it would take to break the dependencies, but any
other suggestions would be welcome.

One possible solution is to:

1) Do the pg_dump in custom format.

2) Do pg_restore with --schema-only to a file. Open file and change the search_path to include the logging schema.

3) Pass the schema only plain text file into psql to get the schema objects created.

4) pg_restore using:

    --data-only

    Restore only the data, not the schema (data definitions). Table data, large objects, and sequence values are restored, if present in the archive.

This option is similar to, but for historical reasons not identical to, specifying --section=data.
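Put together, those four steps might look like the sketch below. It is wrapped in a function and deliberately not invoked, since it needs the live server; the file paths are invented, and the edit in step 2 is shown only as a comment because the exact search_path line varies by pg_dump version:

```shell
# Sketch of the four-step restore; defined only, not run here.
restore_with_search_path_edit() {
    # 1) Dump in custom format.
    pg_dump --format=c --file=/tmp/nms.dump nms

    # 2) Extract the schema definitions to a plain-text file, then
    #    hand-edit its search_path line to include the logging schema
    #    (e.g. "SET search_path = public, logging;").
    pg_restore --schema-only -f /tmp/nms-schema.sql /tmp/nms.dump

    # 3) Create the schema objects from the edited file.
    psql --dbname=nms -f /tmp/nms-schema.sql

    # 4) Restore only the data.
    pg_restore --dbname=nms --data-only /tmp/nms.dump
}
```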

Thanks for the advice and pointers, Adrian!


Adrian Klaver
