Greetings,

I've recently discovered a peculiar problem on one of our large databases
(more than 1 TB). The database was upgraded from 9.2 to 9.3.4 (using
hardlinks to speed up the process) on April 7th around 22:00 local time.
When vacuuming any table, the system fails with the following error:

ERROR:  could not access status of transaction 2035987419
DETAIL:  Could not open file "pg_multixact/offsets/795A": No such file or
directory.

The error doesn't depend on the table being vacuumed, or even on the
database, e.g.:

postgres=# create database x;
CREATE DATABASE
postgres=# \c x
You are now connected to database "x" as user "postgres".
x=# create table test();
CREATE TABLE
x=# vacuum test;
ERROR:  could not access status of transaction 2035987419
DETAIL:  Could not open file "pg_multixact/offsets/795A": No such file or
directory.

The content of pg_multixact is:

pg_multixact$ ls -lR
./members:
-rw------- 1 postgres postgres 8192 Apr 16 18:20 0000
./offsets:
-rw------- 1 postgres postgres   8192 Apr  7 22:51 0000
-rw------- 1 postgres postgres 262144 Apr 16 18:20 79A6

Selecting datminmxid from pg_database gives me:

postgres=# select oid, datminmxid from pg_database;
  oid  | datminmxid
-------+------------
 12078 | 2040987419
 12073 | 2040987419
 16414 | 2035987419
 16415 | 2040987418
     1 | 2040987419
(5 rows)

and 2035987419 = 0x795AB3DB belongs to the 795A segment.
The 0000 file just contains a single page of all zeroes. Neither the 9.3.4
replica of this database nor the original 9.2 cluster data directory
contains this file.
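
For what it's worth, the segment arithmetic seems to check out (assuming
the default 8 kB block size, where each offsets page holds 2048 four-byte
entries and an SLRU segment spans 32 pages, i.e. 65536 multixacts per
segment file):

$ # offsets segment name = multixact id / 65536, printed in hex
$ printf '%04X\n' $((2035987419 / 65536))
795A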

My guess is that at the end of vacuum PostgreSQL checks the status of the
earliest available multixact (perhaps in order to truncate it), and fails
because the file has already been removed.

I also have a replica of this database (which, as I said, doesn't have the
0000 segment). When I shut down this replica, copied the data directory,
promoted the copied directory (effectively making a copy of the master) and
issued VACUUM, I observed no errors. However, when I manually created this
pg_multixact/offsets/0000 segment with dd (populating 8192 bytes with
zeroes), exactly the same problem reappeared as on the master.
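
For reference, the dd invocation was something like the following, run
from the copied data directory:

$ # write a single zeroed 8192-byte page as the 0000 offsets segment
$ dd if=/dev/zero of=pg_multixact/offsets/0000 bs=8192 count=1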

I'm tempted to just remove the 0000 file from the master and restart the
database, since it's effectively impossible to run vacuum now, but I'd like
to understand what's happening first. Below is the pg_controldata output
for the master:

data$ pg_controldata .
pg_control version number:            937
Catalog version number:               201306121
Database system identifier:           5999656352803688379
Database cluster state:               in production
pg_control last modified:             Wed 04 Jun 2014 04:49:45 PM CEST
Latest checkpoint location:           25D0/D38A8DF0
Prior checkpoint location:            25D0/BFA7B068
Latest checkpoint's REDO location:    25D0/C27A54A0
Latest checkpoint's REDO WAL file:    00000001000025D0000000C2
Latest checkpoint's TimeLineID:       1
Latest checkpoint's PrevTimeLineID:   1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID:          0/1655213806
Latest checkpoint's NextOID:          16795470
Latest checkpoint's NextMultiXactId:  2040987419
Latest checkpoint's NextMultiOffset:  3
Latest checkpoint's oldestXID:        1038291920
Latest checkpoint's oldestXID's DB:   16415
Latest checkpoint's oldestActiveXID:  1655189767
Latest checkpoint's oldestMultiXid:   2040987417
Latest checkpoint's oldestMulti's DB: 0
Time of latest checkpoint:            Wed 04 Jun 2014 04:45:45 PM CEST
Fake LSN counter for unlogged rels:   0/1
Minimum recovery ending location:     0/0
Min recovery ending loc's timeline:   0
Backup start location:                0/0
Backup end location:                  0/0
End-of-backup record required:        no
Current wal_level setting:            hot_standby
Current max_connections setting:      500
Current max_prepared_xacts setting:   0
Current max_locks_per_xact setting:   64
Maximum data alignment:               8
Database block size:                  8192
Blocks per segment of large relation: 131072
WAL block size:                       8192
Bytes per WAL segment:                16777216
Maximum length of identifiers:        64
Maximum columns in an index:          32
Maximum size of a TOAST chunk:        1996
Date/time type storage:               64-bit integers
Float4 argument passing:              by value
Float8 argument passing:              by value
Data page checksum version:           0
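
Using the same arithmetic as above, both the checkpoint's NextMultiXactId
(2040987419) and oldestMultiXid (2040987417) map into the 79A6 segment,
which is the one that does exist on disk:

$ printf '%04X\n' $((2040987419 / 65536)) $((2040987417 / 65536))
79A6
79A6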

There were no wraparound-related messages in the logs of either the old or
the new cluster, nor did we observe any errors other than this one (we
originally discovered it in the server logs, likely because autovacuum was
also failing). Any hints on what's going on (and whether data corruption
is a possibility)?


Cheers,
-- 
Alexey Klyukin
