I wrote:
> Looking back at your original discussion of the bug,
> http://archives.postgresql.org/pgsql-hackers/2007-06/msg00234.php
> I'm wondering why you chose option #3 rather than option #4?
> I still find the proposed patch a bit crufty.

In particular, it seems like a patch per #4 would be a one-liner:

*** src/backend/access/transam/xlog.c.orig      Wed Sep 26 18:36:30 2007
--- src/backend/access/transam/xlog.c   Thu Sep 27 12:20:56 2007
***************
*** 5092,5101 ****
         *
         * If we stopped short of the end of WAL during recovery, then we are
         * generating a new timeline and must assign it a unique new ID.
         * Otherwise, we can just extend the timeline we were in when we ran out
         * of WAL.
         */
!       if (needNewTimeLine)
        {
                ThisTimeLineID = findNewestTimeLine(recoveryTargetTLI) + 1;
                ereport(LOG,
--- 5092,5103 ----
         *
         * If we stopped short of the end of WAL during recovery, then we are
         * generating a new timeline and must assign it a unique new ID.
+        * We also force a new timeline when recovering from an archive, to avoid
+        * problems with trying to overwrite existing archived segments.
         * Otherwise, we can just extend the timeline we were in when we ran out
         * of WAL.
         */
!       if (needNewTimeLine || (InArchiveRecovery && XLogArchivingActive()))
        {
                ThisTimeLineID = findNewestTimeLine(recoveryTargetTLI) + 1;
                ereport(LOG,

though I admit I've not tested this.  The comments in
exitArchiveRecovery probably need adjustment too --- re-reading them,
it seems obvious that the current approach is broken by design, because
it *must* lead to an attempt to overwrite a previously archived version
of the last segment.

                        regards, tom lane
