On Oct 11, 2006, at 7:57 AM, Markus Schaber wrote:
Mark Woodward wrote:
People are working on it; someone even got as far as dealing with most
catalog upgrades. The hard part is going to be making sure that even if
the power fails halfway through an upgrade, your data will survive.
Well, I think that any *real* DBA understands and accepts that
power failure and hardware failure create situations where inconsistent
conditions exist. :-) Stopping the database and copying the pg data
directory addresses this problem: upon failure, it is a simple mv of
bkdir back into place and you start again.
But when people have enough bandwidth and disk space to copy the pg
directory, they also have enough to create and store a bzip2-compressed
dump of the database.
Or did I miss something?
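For the compressed-dump route described above, the workflow is essentially pg_dump piped through bzip2. A minimal sketch follows; the database name "mydb" is hypothetical, and since no server is at hand here, a small text file stands in for the dump output so the pipeline itself can be run:

```shell
# With a live server this would be:  pg_dump mydb | bzip2 > mydb.sql.bz2
# Here a stand-in file simulates the dump so the compression round trip runs.
printf 'CREATE TABLE t (id int);\nINSERT INTO t VALUES (1);\n' > mydb.sql
bzip2 -k mydb.sql                        # keeps mydb.sql, writes mydb.sql.bz2
bunzip2 -c mydb.sql.bz2 | cmp - mydb.sql && echo roundtrip-ok
```

The -k flag keeps the original file so the round trip can be verified; restoring for real would be `bunzip2 -c mydb.sql.bz2 | psql mydb`.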
Not necessarily. "copying" a directory on most modern unix systems
can be accomplished by snapshotting the filesystem. In this case,
you only pay the space and performance cost for blocks that are
changed between the time of the snap and the time it is discarded.
An actual copy of the database is often too large to juggle (which is
why we write stuff straight to tape libraries).
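Concretely, a snapshot-based "copy" of this kind looks roughly like the following LVM sketch. The volume names (vg0/pgdata) and mount point are hypothetical, it requires root, and it is an ops sketch rather than a drop-in script; the snapshot is copy-on-write, so it only consumes space as blocks diverge:

```
# Hypothetical layout: PGDATA lives on the logical volume /dev/vg0/pgdata.
lvcreate --snapshot --size 2G --name pgsnap /dev/vg0/pgdata
mount -o ro /dev/vg0/pgsnap /mnt/pgsnap   # read-only view of the data directory
# ... stream /mnt/pgsnap to tape, or keep it around as the known-good copy ...
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap               # discard; only diverged blocks cost space
```

The same pattern applies on filesystems with native snapshots (e.g. ZFS), where creating and discarding the snapshot is a single command each way.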
The real problem with a "dump" of the database is that you want to be
able to quickly switch back to a known working copy in the event of a
failure. A dump is the furthest possible thing from a working copy
as one has to rebuild the database (indexes, etc.), and in doing so
you (1) spend the better part of a week running pg_restore and (2) the
ANALYZE stats change, so your system's behavior changes in hard-to-predict ways.
// Theo Schlossnagle
// CTO -- http://www.omniti.com/~jesus/
// OmniTI Computer Consulting, Inc. -- http://www.omniti.com/