Seems like that shouldn't happen — pg_dump only reads from the source database, so an interrupted dump shouldn't corrupt it.
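In the meantime, one defensive habit (this works around the interrupted-dump symptom, it doesn't explain the pg_clog error) is to dump to a scratch file and only rename it into place when pg_dump exits 0, so a truncated dump never looks like a complete one. A minimal sketch — `safe_dump` is a hypothetical wrapper name, and the paths are examples:

```shell
# Hypothetical wrapper: run a dump command, write its output to a
# temporary file, and keep the file only if the command exits cleanly.
safe_dump() {
    out=$1; shift
    tmp="$out.in-progress"
    if "$@" > "$tmp"; then
        # Dump finished; atomically move it into place.
        mv "$tmp" "$out"
    else
        # Dump was interrupted or failed; discard the partial file.
        rm -f "$tmp"
        return 1
    fi
}

# Real use would look something like:
#   safe_dump /backup/mydb.dump pg_dump -Fc mydb
# Demonstrated here with stand-in commands instead of pg_dump:
safe_dump /tmp/ok.out  echo "dump data"                  # succeeds, file kept
safe_dump /tmp/bad.out false || echo "partial discarded" # fails, file removed
```

Checking free space on the target filesystem (df) before starting is also cheap insurance against the disk-full case you describe.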

Stephen Robert Norris wrote:

I've encountered this a few times with 7.2 and 7.3.

If I run pg_dump on some large database (> 100MB — the bigger it is, the
more likely) and the dump gets interrupted for some reason (e.g. the
target disk fills up), the source database becomes corrupt. I start
getting errors like:

open of /var/lib/pgsql/data/pg_clog/0323 failed: No such file or
directory


and I have to drop/restore the table in question.

Is this a known problem? Is there some safe way to dump databases that
avoids it?

Stephen


---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
     joining column's datatypes do not match
