It must be possible to create a tool, based on the PostgreSQL sources, that
can read all the tuples in a database and dump them to a file stream. The
old tuples remain in the data files until their space is reclaimed by a
vacuum and overwritten, so it *should* be doable.

If the data in the table is worth anything, then it would be worth
extracting.

It would, of course, be a tool of last resort.
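As a rough illustration of what such a tool would have to do: each heap file
is a sequence of 8 kB pages, and each page starts with a header followed by
an array of "line pointers" locating the tuples. A minimal Python sketch of
the page-level parsing (assuming the page layout of recent PostgreSQL
releases with a 24-byte header; older releases use a shorter header, so
treat the offsets as assumptions, not gospel):

```python
import struct

PAGE_SIZE = 8192  # default BLCKSZ

def read_line_pointers(page: bytes):
    """Parse the line-pointer (ItemId) array of a PostgreSQL heap page.

    Assumed PageHeaderData layout: pd_lsn (8 bytes), pd_checksum (2),
    pd_flags (2), pd_lower (2), pd_upper (2), pd_special (2),
    pd_pagesize_version (2), pd_prune_xid (4) -- 24 bytes total.
    """
    # pd_lower marks the end of the line-pointer array.
    pd_lower, pd_upper = struct.unpack_from("<HH", page, 12)
    items = []
    off = 24  # line pointers start right after the page header
    while off + 4 <= pd_lower:
        (lp,) = struct.unpack_from("<I", page, off)
        lp_off = lp & 0x7FFF           # byte offset of the tuple on the page
        lp_flags = (lp >> 15) & 0x3    # LP_UNUSED / LP_NORMAL / redirect / dead
        lp_len = (lp >> 17) & 0x7FFF   # tuple length in bytes
        items.append((lp_off, lp_flags, lp_len))
        off += 4
    return items
```

A real last-resort dumper would then decode each tuple's header and column
data according to the table's pg_attribute entries; the sketch above only
shows that the raw pages are straightforwardly walkable.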



> "Kouber Saparev" <[EMAIL PROTECTED]> writes:
>> After asking the guys in the [EMAIL PROTECTED] channel, they told me
>> that the reason is the "Transaction ID wraparound", because I have
>> never run VACUUM on the whole database.
>
>> So they proposed to ask here for help. I have stopped the server, but
>> what could I do in order to save the data, if it's possible at all?
>
> I think you're pretty well screwed as far as getting it *all* back goes,
> but you could use pg_resetxlog to back up the NextXID counter enough to
> make your tables and databases reappear (and thereby lose the effects of
> however many recent transactions you back up over).
>
> Once you've found a NextXID setting you like, I'd suggest an immediate
> pg_dumpall/initdb/reload to make sure you have a consistent set of data.
> Don't VACUUM, or indeed modify the DB at all, until you have gotten a
> satisfactory dump.
>
> Then put in a cron job to do periodic vacuuming ;-)
>
>                       regards, tom lane
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: Have you searched our list archives?
>
>                http://archives.postgresql.org
>
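The sequence Tom describes might look roughly like this. This is a sketch,
not a tested script: the XID value is a placeholder you would have to find
by trial and error, and the paths and flags are assumptions about your
installation.

```shell
# Back up the raw data directory first -- pg_resetxlog is destructive.
cp -a "$PGDATA" "$PGDATA.backup"

# Rewind the NextXID counter. The value here is a placeholder;
# experiment until your tables and databases reappear.
pg_resetxlog -x 1000000 "$PGDATA"

# Start the server and take a full dump before touching anything else.
pg_ctl start -D "$PGDATA"
pg_dumpall > /tmp/rescue.sql
pg_ctl stop -D "$PGDATA"

# Rebuild a fresh cluster and reload the dump.
initdb -D "$PGDATA.new"
pg_ctl start -D "$PGDATA.new"
psql -f /tmp/rescue.sql template1

# And the cron job, so this never happens again, e.g.:
# 0 3 * * * vacuumdb --all --analyze
```

Remember the order of operations in the quoted advice: no VACUUM and no
modifications at all until the dump looks satisfactory.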
