kaolin fire <[EMAIL PROTECTED]> writes:
> Where would I go to start tracking down recurring error messages of the
> sort:
> FATAL 2: open of /usr/local/pgsql/data/pg_clog/06F7 failed: No such
> file or directory
> FATAL 2: open of /usr/local/pgsql/data/pg_clog/0707 failed: No such
> file or directory

> 06F7 and 0707 do not exist. Currently just looks like it goes from
> 0000 (May 14 2002) to 004F (Feb 6 2004, and counting).

Given those facts, you have corrupt data --- specifically, a wildly
out-of-range transaction number in some tuple header, which causes the
tuple validity checker to try to fetch a nonexistent page of the CLOG.
The odds are good that the corruption extends further than just that one
field; it just happens to be the one that gets checked first.

There are discussions in the mailing list archives about how to locate
and clean up corrupted data. It's a pretty messy process, but you can
usually get back everything except the rows on the particular corrupted
page (I'm optimistically assuming there's only one). Searching for
threads that mention pg_filedump is probably the quickest way to find
that information.

			regards, tom lane
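To see why those file names point at a bogus transaction number, note that the pg_clog segment name is just the transaction ID divided by the number of transactions per segment, printed in hex. A minimal sketch of that mapping, assuming the defaults of that era (8 kB pages, 2 status bits per transaction, 32-page SLRU segments; check clog.c for your version, as these constants are assumptions, not quoted from this thread):

```python
# Sketch: which pg_clog segment file holds the status bits for a given
# transaction ID.  Constants assume the 7.2-era defaults:
BLCKSZ = 8192                    # assumed default page size
XACTS_PER_PAGE = BLCKSZ * 4      # 2 status bits per xact -> 4 per byte
PAGES_PER_SEGMENT = 32           # assumed SLRU segment size
XACTS_PER_SEGMENT = XACTS_PER_PAGE * PAGES_PER_SEGMENT  # 1,048,576

def clog_segment_name(xid: int) -> str:
    """Return the pg_clog file name that covers this transaction ID."""
    return f"{xid // XACTS_PER_SEGMENT:04X}"

# A cluster whose newest segment is 004F has used at most ~84 million
# transaction IDs:
print(clog_segment_name(83_000_000))     # falls in segment 004F
# ...whereas segment 06F7 corresponds to xids near 1.87 billion, far
# beyond anything this cluster could have assigned:
print(clog_segment_name(1_870_000_000))  # 06F7
```

Under those assumptions, a tuple header asking for segment 06F7 is claiming a transaction roughly 1.8 billion IDs in the future, which is why "wildly out-of-range" is the right diagnosis rather than ordinary CLOG truncation.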
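One low-tech way to locate the bad tuple, of the sort discussed in those archive threads, is to bisect with `SELECT * FROM yourtable LIMIT n`: the query errors out as soon as the scan reaches the corrupted tuple, so the largest LIMIT that still succeeds tells you how many rows precede it. A minimal sketch of that bisection logic; the `query_succeeds` callback is hypothetical and would in practice run the LIMIT query and report whether it raised an error:

```python
def rows_before_corruption(query_succeeds, table_rows: int) -> int:
    """Binary-search the largest LIMIT n for which the scan still
    succeeds.  query_succeeds(n) is a hypothetical callback that runs
    'SELECT * FROM yourtable LIMIT n' and returns True if no error."""
    lo, hi = 0, table_rows
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if query_succeeds(mid):
            lo = mid            # scan got through mid rows cleanly
        else:
            hi = mid - 1        # corruption lies within the first mid rows
    return lo                   # count of readable rows before the bad one

# Simulated table of 100,000 rows with the corrupted tuple at index
# 12,345: reading n rows fails once n exceeds that index.
bad_index = 12_345
simulated = lambda n: n <= bad_index
print(rows_before_corruption(simulated, 100_000))  # 12345
```

With the bad neighborhood identified, the archive threads describe inspecting those pages with pg_filedump and salvaging everything around them; the exact recovery steps depend on what the headers actually contain, so the tool's output should drive the next move.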