[EMAIL PROTECTED] wrote:

Hi Folks,

We have a requirement to deal with large databases, on the order of terabytes, when we go into production. What is the best database backup mechanism, and what are the possible issues?

pg_dump can back up the database, but the dump file is limited by the OS file-size limit. What about the option of compressing the dump file? How much time does it generally take for large databases? I have heard that it can take far too long (even a day or two), though I haven't tried it out myself.
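For illustration, what we have in mind is something along these lines (the database name 'mydb' is only a placeholder, and we have not measured timings at this scale):

  # compress the plain-text dump as it is written, so the uncompressed
  # file never has to exist on disk
  pg_dump mydb | gzip > mydb.sql.gz

  # alternatively, pg_dump's custom format compresses by default
  pg_dump -Fc mydb > mydb.dump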

What about taking a zipped backup of the database directory? We tried this out, but the checkpoint data in the pg_xlog directory is also backed up. Since these logs keep growing from day one of database creation, the backup size is increasing drastically. Can we back up only certain subdirectories without loss of information or consistency?

Any quick comments/suggestions in this regard would be very helpful.


Please ask in the correct forum, either pgsql-general or pgsql-admin. This list is strictly for discussion of Postgres development, not usage questions.

(If all you need is a pg_dump backup, maybe you could just pipe its output to something like 'split -a 5 -b 1000m - mybackup')
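Spelled out end to end, and with compression added (gzip here, purely as an example), that might look roughly like the following (the database name 'mydb' and the 'mybackup' prefix are only placeholders):

  # dump, compress, and split into 1000 MB pieces
  pg_dump mydb | gzip | split -a 5 -b 1000m - mybackup.gz.

  # to restore, reassemble the pieces and feed them back to psql
  cat mybackup.gz.* | gunzip | psql mydb

Plain-format dumps like this are restored through psql rather than pg_restore.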

cheers

andrew
