On Tue, Jun 19, 2018 at 8:26 AM Matheus de Oliveira <
matioli.math...@gmail.com> wrote:

> Hello friends.
>
> On Tue, Jun 12, 2018 at 3:31 PM, Andres Freund <and...@anarazel.de> wrote:
>
>>
>> On 2018-06-11 17:39:14 -0700, Andres Freund wrote:
>> > I plan to go over the change again tomorrow, and then push. Unless
>> > somebody has comments before then, obviously.
>>
>> Done.
>>
>>
> Sorry to bother you about this, but is there any plan to move the minor
> release up because of this bug?
>
> There seem to be quite a few users affected by this. Worse, many users
> may not even have noticed they have the problem, which lets
> `age(datfrozenxid)` keep increasing until it reaches 2.1 billion and the
> system goes down.
>
> In my case, I have a server whose `age(datfrozenxid)` is already at 1.9
> billion, and I expect it to reach 2.1 billion in about 14 days.
> Fortunately, I have a monitoring system watching `age(datfrozenxid)`,
> which is how I found the issue on one of my servers.
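>
> For reference, the check behind that monitoring is roughly the following
> (a minimal sketch of the query, run here via psql; wire it into whatever
> alerting system you use):
>
>     # Report age(datfrozenxid) per database, oldest first.
>     psql -X -A -t -c "SELECT datname, age(datfrozenxid)
>                       FROM pg_database ORDER BY 2 DESC;"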
>
> I'm pondering what the best option is to avoid a forced shutdown of this
> server:
> - should I just wait for a release (if it comes soon, I would be fine)?
> - build PG myself from the git branch?
> - or is there a safer workaround? (It is not clear to me whether deleting
> the `global/pg_internal.init` file is really the way to go, nor the
> details: is it safe? Should I stop the server, delete the file, and start
> it again? See the sketch below.)
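>
> For concreteness, the sequence I have in mind is the following (just my
> reading of this thread, not something verified as safe, and assuming
> $PGDATA points at the data directory; please correct me if it is wrong):
>
>     # Stop the cluster, remove the cached relcache init files so they
>     # are rebuilt from the catalogs on startup, then start again.
>     pg_ctl -D "$PGDATA" stop -m fast
>     rm -f "$PGDATA"/global/pg_internal.init
>     rm -f "$PGDATA"/base/*/pg_internal.init
>     pg_ctl -D "$PGDATA" start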
>
> Best regards,
> --
> Matheus de Oliveira
>

Restarting the database has fixed the error on these pg_catalog tables,
allowing us to vacuum them and avoid wraparound.
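
For anyone else hitting this, a freeze run along these lines after the
restart (a sketch, not necessarily our exact invocation; connection
options depend on your setup) is what lets datfrozenxid advance again:

    # Aggressively freeze all databases so datfrozenxid can move forward.
    vacuumdb --all --freeze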

We first noticed that a restart fixed the issue because SAN snapshots did
not have the error; the only real difference between them being shared
memory, nothing at the disk level.

Jeremy
