A long time ago (8.1.11, IIRC) we got much better speed by not using the
compression flag with pg_dump and instead piping to gzip (or better yet
something like pbzip2 or pigz, but I haven't used them).
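For example, something along these lines (just a sketch, with "mydb" as a
placeholder; pigz only helps if it is actually installed):
pg_dump mydb | gzip > mydb.sql.gz
pg_dump mydb | pigz > mydb.sql.gz
and to restore:
gunzip -c mydb.sql.gz | psql mydb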
I think there was a thread about this that had a test case and numbers.
IIRC it's because you
Martin Povolny writes:
> I had 5 databases, 4 dumped ok, the 5th, the largest failed dumping: I was
> unable to make a dump in the default 'tar' format. I got this message:
> pg_dump: [tar archiver] archive member too large for tar format
This is expected: the tar format has a documented limit on the size of
individual archive members (roughly 8 GB), and this dump has a member that
exceeds it.
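One way around it (my suggestion, not part of the original reply) is to skip
tar and use the custom format, which does not have that per-member size limit:
pg_dump -Fc dbname > dbname.dump
pg_restore -d dbname dbname.dump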
Or even compress AND split it!
pg_dump -Fc dbname | split -b 1G - dump_dbname
and restore:
cat dump_dbname* | pg_restore -d dbname
or
cat dump_dbname* | pg_restore | psql dbname
On 26/10/2010 23:51, Samuel Stearns wrote:
> You can also try
You can also try piping the dump through gzip and then restoring using cat:
pg_dumpall | gzip > db.out-`date +\%Y\%m\%d\%H`.gz
cat db.out-`date +\%Y\%m\%d\%H`.gz | gunzip | psql template1
Sam
Quoting Martin Povolny:
Hello,
I have some quite grave problems with dumping and restoring large databases
(>4GB of dump).
I had 5 databases, 4 dumped ok, the 5th, the largest failed dumping: I was
unable to make a dump in the default 'tar' format. I got this message:
pg_dump: [tar archiver] archive member too large for tar format
>
> is this jdbc driver (postgres-8.4-701.jdbc3.jar ) compatible with postgres
> 8.4.5?
>
Yes, it is compatible.
> just curious if we need to rebundle a new jar file... we are upgrading
> postgres from 8.4.1 to 8.4.5
>
>
If you are using a JDBC driver that is compatible with 8.4.1, then there is no
need to rebundle.
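If you want to double-check which driver build is actually bundled, one option
(assuming unzip is available and the jar records its version in its manifest)
is to print the manifest:
unzip -p postgres-8.4-701.jdbc3.jar META-INF/MANIFEST.MF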
Hopefully a quick question -
is this jdbc driver (postgres-8.4-701.jdbc3.jar ) compatible with
postgres 8.4.5?
just curious if we need to rebundle a new jar file... we are upgrading
postgres from 8.4.1 to 8.4.5
thanks, Maria Wilson
NASA Langley Research Center
Hampton, Virginia 23681
Glen,
Did you drop the indexes prior to the restore? If not, try doing so and
recreating the indexes afterwards. That will also speed up the data load.
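A rough sketch of that approach (assuming the indexes live in the public
schema; indexes backing primary key or unique constraints have to be handled
through their constraints instead):
# save the index definitions before dropping them
psql -At -d dbname -c "select indexdef || ';' from pg_indexes where schemaname = 'public'" > indexes.sql
# drop the indexes, run the restore, then recreate them
psql -d dbname -f indexes.sql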
Bob Lunney
--- On Mon, 2/15/10, Glen Brown wrote:
From: Glen Brown
Subject: [ADMIN] pg_dump/restore problems
To: pgsql-admin@postgresql.org
Lukasz Brodziak, 26.10.2010 08:58:
The problem with a batch file is that you will need to provide the password
for the superuser in clear text - which is probably not something you will
want to do.
But if that isn't a problem (and you actually know the superuser password of
your client) then the
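As an aside (not part of the original reply): a password file keeps the
password out of the batch file itself. On Unix it is ~/.pgpass, on Windows
%APPDATA%\postgresql\pgpass.conf, with one entry per line in the form
hostname:port:database:username:password, for example ('secret' being a
placeholder):
echo 'localhost:5432:*:postgres:secret' >> ~/.pgpass
chmod 600 ~/.pgpass
pg_dumpall -U postgres -h localhost > db.out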