On Thu, 19 Feb 2004, Sally Sally wrote:
> I had a few questions concerning the backup/restore process for pg.
>
> 1) Is it possible to dump data onto an existing database that contains
> data (assuming the schema of both are the same)? Has anyone done this?
> I am thinking of this in order to expedite the data load process.
I do it all the time. Note that if you have constraints that the new
data would violate, it's likely that nothing will be imported.
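For example, a data-only dump loaded into an existing database looks
roughly like this (the database and file names are just placeholders):

    # data-only dump from the source, loaded into an already-populated target
    pg_dump --data-only sourcedb > data.sql
    psql targetdb < data.sql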
> 2) I read that when dumping and restoring data the insert option is
> safer but slower than copy? Does anyone know from experience how much
> slower (especially for a database containing millions of records).
Depends, but usually anywhere from about twice as slow up to ten times
slower. It isn't really any "safer", just more portable to other
databases.
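If you want to compare on your own data, the difference is just a
pg_dump switch (the database name is made up; depending on your version
the option is spelled -d or --inserts):

    # default plain-text dump uses COPY
    pg_dump mydb > mydb_copy.sql
    # --inserts (-d) emits one INSERT statement per row instead
    pg_dump --inserts mydb > mydb_inserts.sql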
> 3) Can pg_restore accept a file that is not archived, like a zipped
> file or plain text file (file.gz or file)?
Plain text is fine, but it goes through psql rather than pg_restore;
pg_restore itself only reads pg_dump's archive formats. To load a .gz
file you might have to do a gunzip first (or pipe it through gunzip).
I usually just stick to plain text.
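For instance (file and database names here are made up):

    # plain-text dump goes through psql
    psql mydb < dump.sql
    # a gzipped plain-text dump can be piped straight in
    gunzip -c dump.sql.gz | psql mydb
    # pg_restore is for the custom/tar archive formats (pg_dump -Fc / -Ft)
    pg_restore -d mydb dump.custom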
> 4) Is the general practice to have one whole dump of a database or
> several separate dumps (by table etc...)?
It's normal to see a single large dump. Where I work we run >80
databases (running on 7.2.x, so no schemas) with each database
belonging to a particular application. I wrote a custom wrapper for
pg_dump that acts something like pg_dumpall but dumps each database to
a separate file, something like the sketch below. That makes restoring
one table or the like for a single database much easier, since you
don't have to slog through gigabytes of unrelated data.
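The real script does more than this, but the core of such a wrapper is
just a loop over the databases (the output directory is made up):

    #!/bin/sh
    # dump every non-template database to its own plain-text file
    for db in `psql -t -A -c "SELECT datname FROM pg_database WHERE NOT datistemplate" template1`; do
        pg_dump "$db" > "/var/backups/pgsql/$db.sql"
    done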