Hi Enio,

Enio Schutt Junior wrote:

Hi
Here, where I work, the backups of the PostgreSQL databases are done the following way:
there is a daily copy of nearly the whole hard disk (excluding /tmp, /proc, /dev and so on) on which the
databases reside, and besides this there is also a script which runs pg_dump on each of the databases on the server.

Hmm, I don't really see what you are doing with a backup of /tmp, /proc and /dev.
I mean, /tmp might be OK, but /proc shouldn't be backed up in my opinion, as /proc is NOT on your hard disk,
but points directly into kernel memory.
I would not dare to restore such a backup!
And /dev as well, I mean, these are your device nodes, so it's completely hardware-bound.
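If you want to keep the whole-disk copy anyway, at least make the exclusions explicit. A rough sketch
with GNU tar (the paths are just examples, adjust them to your layout):

    # Back up the root filesystem, skipping pseudo-filesystems and scratch space.
    # /backup is an assumed destination; exclude it so the archive doesn't recurse.
    tar czf /backup/system-$(date +%Y%m%d).tar.gz \
        --exclude=/proc \
        --exclude=/dev \
        --exclude=/tmp \
        --exclude=/backup \
        /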


This daily copy of the hard disk is made while the postmaster is active (without stopping the daemon), so the
data under /usr/local/pgsql/data would not be 100% consistent, I guess.

You need to stop PostgreSQL, otherwise forget about your backup; the DB might not even come up again. Here at my site we have a nice little script which can be configured to run certain actions before backing up a given directory, and others after the backup.
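The core of it is nothing more than stop, copy, start. A minimal sketch, assuming the default data
directory and that it runs as the postgres user (paths are assumptions, adjust to your setup):

    #!/bin/sh
    # Stop the postmaster, take a consistent file-level copy, start it again.
    PGDATA=/usr/local/pgsql/data

    pg_ctl stop -D "$PGDATA" -m fast
    tar czf /backup/pgdata-$(date +%Y%m%d).tar.gz "$PGDATA"
    pg_ctl start -D "$PGDATA" -l "$PGDATA/serverlog"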

There are some questions I have about this backup routine:
If I recover data from that "inconsistent" backup of the hard disk, I know that the binary files (psql, pg_dump
and so on) will remain OK. The data may have some inconsistencies. Would these inconsistencies let the
postmaster start and work properly (that is, even with inconsistent data possibly present)? Would it start and
be able to work normally and keep the information about users and groups? I am asking about users and
groups because they are not dumped by pg_dump. I was thinking about using "pg_dumpall -g" to generate
this information.

I would really not go down this road.
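If what you are after is the users and groups, pg_dumpall -g dumps exactly those global objects; together
with per-database dumps that gives you a backup you can actually trust. A sketch (the backup path is
just an example):

    # Dump global objects (users, groups), then each database separately.
    pg_dumpall -g > /backup/globals.sql
    for db in $(psql -At -c \
        "SELECT datname FROM pg_database WHERE datname <> 'template0'" template1)
    do
        pg_dump "$db" > "/backup/$db.sql"
    done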


Regards,
Dani

