We have a requirement to deal with large databases, terabytes in size, when we go into production. What is the best database backup mechanism, and what are the possible issues?

It depends.

Make sure you read Chapter 23, Backup and Restore, in the user manual:


It discusses pg_dump and restore, as well as file-system-level backup. You'll probably want to set up continuous archiving, which allows you to take a file-system-level backup without shutting down the database.
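As a rough sketch of what setting up continuous archiving looks like (the destination path /mnt/backup/wal here is a placeholder, not a recommendation; see the manual chapter above for the details):

```ini
# postgresql.conf -- continuous archiving sketch
# archive_command is run once per completed WAL segment;
# %p is the path of the segment file, %f its file name.
archive_command = 'cp %p /mnt/backup/wal/%f'
```

With archiving in place, a base backup is taken by calling pg_start_backup('some label') in psql, copying the data directory with your favorite file-system tool (tar, rsync, etc.) while the server keeps running, and then calling pg_stop_backup(). The archived WAL segments are what make that fuzzy copy consistent at restore time.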

What about taking a zipped backup of the database directory? We tried this out, but the checkpoint data in the pg_xlog directory is also being backed up. Since these logs keep accumulating from day one of database creation, the backup size is increasing drastically.

The number of WAL files in the pg_xlog directory is controlled by the checkpoint_segments configuration parameter.
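As a back-of-the-envelope illustration of why pg_xlog should level off rather than grow forever: with the default 16 MB segment size, pg_xlog normally holds on the order of 2 * checkpoint_segments + 1 files (a rule of thumb, not exact accounting):

```python
# Rough estimate of steady-state pg_xlog disk usage.
# Assumptions: default 16 MB WAL segment size, and roughly
# 2 * checkpoint_segments + 1 segment files kept around.
SEGMENT_SIZE_MB = 16  # default WAL segment size

def pg_xlog_size_mb(checkpoint_segments):
    """Approximate upper bound on pg_xlog size in megabytes."""
    return (2 * checkpoint_segments + 1) * SEGMENT_SIZE_MB

# With the old default of checkpoint_segments = 3:
print(pg_xlog_size_mb(3))  # 7 files * 16 MB = 112 MB
```

If pg_xlog keeps growing without bound instead, that usually points at something else, e.g. a failing archive_command preventing old segments from being recycled.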

Can we back up certain subdirectories without loss of information or consistency?


  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
