ERROR: uncommitted xmin 347341220 from before xid cutoff 967029200 needs to be frozen

2019-12-09 Thread rajesh kumar
Hi All,

We recently started seeing the error “ERROR: uncommitted xmin 347341220
from before xid cutoff 967029200 needs to be frozen” on our user tables.
I’m unable to run VACUUM, VACUUM FREEZE, or VACUUM FULL on the
affected tables.
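
In case it is useful, this is roughly how the freeze ages can be checked
(a sketch only; run it in the affected database):

  -- how far each table's relfrozenxid lags behind the current XID
  SELECT relname, age(relfrozenxid) AS xid_age
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY xid_age DESC
  LIMIT 10;

  -- the per-database freeze horizon
  SELECT datname, age(datfrozenxid) AS datfrozenxid_age
  FROM pg_database
  ORDER BY datfrozenxid_age DESC;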

From what I read, this was a bug a couple of years ago on system tables and
it was fixed long ago.
However, we are now seeing these errors on two of our user tables.

After some Google searching, I found some fixes, but they seem to be temporary.

These are the solutions I found:
1. Truncate the table and restore it from a dump (see the sketch below)

2. Remove ‘pg_internal.init’ from the global directory
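
A rough sketch of what I mean by option 1 (database, schema, table and file
names below are only placeholders):

  # dump only the rows of the affected table
  pg_dump --data-only -t my_schema.my_table -f /tmp/my_table.sql mydb
  # empty the damaged table
  psql -d mydb -c 'TRUNCATE TABLE my_schema.my_table;'
  # reload the rows from the dump
  psql -d mydb -f /tmp/my_table.sql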


I’m not yet sure about removing the file ‘pg_internal.init’, so I would
go ahead with rebuilding the table for now.

Anyway, I would like to know whether there is a permanent solution for this
issue, as I have not found a proper one.

We are running PostgreSQL 9.6.10 on SUSE 12.4.
We are already in the process of upgrading to the latest minor version of 9.6;
our application supports only PostgreSQL 9.6.x.

Could you please take a look at this and let me know the fix?


Re: ERROR: uncommitted xmin 347341220 from before xid cutoff 967029200 needs to be frozen

2019-12-09 Thread rajesh kumar
Hi Robert,

Thanks for your reply.
So I have a question: I have seen a patch for a similar issue with shared
catalog tables, and it was fixed in PostgreSQL 9.6.10.
We are currently using 9.6.10.
Do you think we have hit another bug?
Is this because of some synchronization issue?

Or is there something I should do to avoid this issue in the future?
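
Just to confirm my understanding, the suggested rebuild would be roughly the
following (a sketch only, using pg_dumpall so globals are included; paths and
ports below are placeholders):

  # dump everything from the existing 9.6.10 cluster
  pg_dumpall -p 5432 -f /backup/full_dump.sql
  # create and start a brand-new cluster
  initdb -D /pgdata/new_cluster
  pg_ctl -D /pgdata/new_cluster -o "-p 5433" -l /pgdata/new_cluster.log start
  # restore the dump into the new cluster
  psql -p 5433 -d postgres -f /backup/full_dump.sql
  # once verified, retire the old cluster and remove its data directory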

On Mon, 9 Dec 2019, 20:05 Robert Haas,  wrote:

> On Mon, Dec 9, 2019 at 4:52 AM rajesh kumar 
> wrote:
> > We recently started seeing the error “ERROR: uncommitted xmin 347341220
> > from before xid cutoff 967029200 needs to be frozen” on our user tables.
> > I’m unable to run VACUUM, VACUUM FREEZE, or VACUUM FULL on the
> > affected tables.
> >
> > From what I read, this was a bug a couple of years ago on system tables
> > and it was fixed long ago.
> > However, we are now seeing these errors on two of our user tables.
> >
> > After some Google searching, I found some fixes, but they seem to be
> > temporary.
> >
> > These are the solutions I found:
> > 1. Truncate the table and restore it from a dump
> >
> > 2. Remove ‘pg_internal.init’ from the global directory
> >
> >
> > I’m not yet sure about removing the file ‘pg_internal.init’, so I would
> > go ahead with rebuilding the table for now.
> >
> > Anyway, I would like to know whether there is a permanent solution for
> > this issue, as I have not found a proper one.
>
> I think that the best thing to do would be to dump all of your data
> using pg_dump, create a whole new cluster using initdb, restore the
> data into the new cluster, and delete the old one.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Pg_stat_activity

2025-02-15 Thread Rajesh Kumar
If I see a sudden connection spike (say, once in two months), what are the
steps I need to follow as a DBA?

Check for idle connections and kill them? Check pg_stat_activity for active
queries? How long must a query run before it counts as long-running? Blocking
queries? How much blocking time is considered dangerous? Locks - should we
conclude anything from wait_event and wait_event_type?

Or will all of these not help, and I have to look at pgBadger instead?
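
For context, these are the kinds of pg_stat_activity checks I have in mind
(a rough sketch; the 5-minute threshold is only an example):

  -- active queries that have been running longer than some threshold
  SELECT pid, now() - query_start AS runtime, state,
         wait_event_type, wait_event, query
  FROM pg_stat_activity
  WHERE state = 'active'
    AND now() - query_start > interval '5 minutes'
  ORDER BY runtime DESC;

  -- sessions that are blocked, and which backends are blocking them
  SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
  FROM pg_stat_activity
  WHERE cardinality(pg_blocking_pids(pid)) > 0;

  -- terminate a specific runaway backend once it is confirmed safe to do so
  -- SELECT pg_terminate_backend(12345);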