On 08/29/2014 12:48 PM, Jesper Krogh wrote:
> On 20/08/2014, at 23.22, Dimitri Maziuk <dmaz...@bmrb.wisc.edu> wrote:
>
>> FWIW, I usually pg_dump the schema to a text file and run a script that
>> does '\copy ... to csv' for each table. Then commit them to git or rcs
>> repository right there. Then rsync the repository to a couple of other
>> servers. No bacula needed.
>
> That is not going to give you a backup guaranteed to be consistent.
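For context, the script in question is nothing fancy, roughly along these lines (a rough sketch, not the actual script; the database name, repo path and backup host are all placeholders):

#!/bin/sh
# Rough sketch of the dump-and-commit approach described above.
# Database name, repo path and backup host are placeholders.
DB=mydb
REPO=/srv/pg-csv-backup

# schema as plain SQL, data as one csv per table
pg_dump --schema-only "$DB" > "$REPO/schema.sql"
for t in $(psql -At -d "$DB" \
           -c "select tablename from pg_tables where schemaname = 'public'"); do
    psql -d "$DB" -c "\copy $t to '$REPO/$t.csv' csv header"
done

# version the dumps and push the repo elsewhere
cd "$REPO" &&
    git add -A &&
    git commit -m "csv dump $(date +%F)" &&
    rsync -a "$REPO/" backuphost:/srv/pg-csv-backup/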
It will give *me* a consistent backup, but in general you're right: it won't. (Wrapping it in a single transaction can be problematic.)

The other problem is that, like pg_dump, it only works up to a point. Once your .csv's grow to a couple of gigabytes you'll have the same problems as with one huge pg_dump file. Depending on the data and update frequency, the deltas can grow into gigabytes even faster and kill your i/o on both vcs commits and repo sync.

Again, *my* csv's aren't expected to get that big anytime soon, YMMV. So buyer beware, when it breaks you get to keep the pieces, and all that.
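By "wrapping it in a single transaction" above I mean something along these lines, so all the \copy's read from one snapshot (again just a sketch, same placeholder names as above; the catch is that the snapshot, and the transaction, stays open for the whole dump):

#!/bin/sh
# Sketch: run every \copy inside one repeatable read transaction so the
# CSVs all come from the same snapshot. Same placeholders as above.
DB=mydb
REPO=/srv/pg-csv-backup

# feed psql one transaction: begin, all the \copy's, commit
{
    printf '%s\n' "begin isolation level repeatable read;"
    for t in $(psql -At -d "$DB" \
               -c "select tablename from pg_tables where schemaname = 'public'"); do
        printf '%s\n' "\\copy $t to '$REPO/$t.csv' csv header"
    done
    printf '%s\n' "commit;"
} | psql -d "$DB"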
-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu