Re: Logging

2019-12-04 Thread Stephen Eilert
Usually, this is done by logrotate or a similar mechanism on your system.
You’ll likely find that other logs on your system follow a similar pattern,
not just PostgreSQL.
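
For example, on many Linux systems the rotation is driven by an entry under
/etc/logrotate.d/, run from cron. A sketch of what such a stanza might look
like (the path and values here are assumptions; check your own
/etc/logrotate.conf and /etc/logrotate.d/ for the actual settings):

    /var/log/postgresql-11 {
        rotate 7        # keep seven old logs
        size 1k         # rotate once the file exceeds this size
        compress        # gzip rotated logs...
        delaycompress   # ...but leave the most recent rotation (.1) plain
        missingok
        notifempty
    }

Your listing below matches this shape: .1 is uncompressed while .2 through
.7 are gzipped, which is exactly what the compress + delaycompress pair
produces.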

— Stephen
On Dec 4, 2019, 3:21 PM -0800, Rich Shepard wrote:
> Running Slackware-14.2/x86_64 and postgresql-11.5.
>
> In /var/log/ are these files:
>
> -rw-r- 1 postgres wheel 0 Nov 23 04:40 postgresql-11
> -rw-r- 1 postgres wheel 723 Nov 23 04:40 postgresql-11.1
> -rw-r- 1 postgres wheel 324 Nov 20 04:40 postgresql-11.2.gz
> -rw-r- 1 postgres wheel 320 Nov 17 04:40 postgresql-11.3.gz
> -rw-r- 1 postgres wheel 322 Nov 14 04:40 postgresql-11.4.gz
> -rw-r- 1 postgres wheel 321 Nov 10 04:40 postgresql-11.5.gz
> -rw-r- 1 postgres wheel 325 Nov 6 04:40 postgresql-11.6.gz
> -rw-r- 1 postgres wheel 337 Oct 23 04:40 postgresql-11.7.gz
>
> I assume that they're an automatic backup that runs every 3-4 days. What's
> backed up and where is this controlled?
>
> I ask because I have a cron job that does a pg_dumpall each night at 11:30
> pm. (It's a small installation for my business use so the files are not
> excessive and I keep them for only short periods.)
>
> Regards,
>
> Rich


Re: Recommendation to run vacuum FULL in parallel

2019-04-03 Thread Stephen Eilert
> Ideally VACUUM FULL should not require a giant lock on the table.

It is a massively expensive operation, regardless. Not sure if it is something 
you want to run in production outside a maintenance window.

I would argue that frequent VACUUM FULL is an antipattern; it will become a
matter of superstition in your company.

If DB size growth is a problem, make autovacuum more aggressive, or run your
manual vacuum job (not FULL) more often than weekly. Daily, if you have to.
This will not reclaim disk space as reported by the OS, but it should make
the space available for new row versions, so the DB should mostly stop
growing from the OS point of view (mostly, because you may be adding new
data, right?). If it is still a problem, then there may be something else
going on.
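
A sketch of both options (the values are assumptions to tune for your
workload; ALTER SYSTEM needs 9.4 or later, otherwise set the same parameter
in postgresql.conf):

    -- trigger autovacuum when 5% of a table has changed, instead of
    -- the 20% default
    ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.05;
    SELECT pg_reload_conf();

A plain (non-FULL) nightly vacuum can be driven from cron with the stock
vacuumdb tool, e.g.:

    30 23 * * *  vacuumdb --all --analyze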

Which PG version is that?


— Stephen
On Apr 3, 2019, 10:02 AM -0700, Perumal Raj wrote:
> Hi All
>
> Thanks for all your valuable inputs,
>
> Here is some more data,
>
> Though we have 150 GB of free space spread across 500 tables, the DB is
> growing at a rate of 1 GB every other day.
> Also, we have a manual vacuum job scheduled to run weekly, so it seems the
> space is not being reused all the time?
>
> So to conclude the requirement here: the only way to get parallelism is
> multiple scripts, and there is no need to run REINDEX separately.
> Question: Do we need to consider table dependencies while preparing the
> scripts, in order to avoid table locks during VACUUM FULL?
>
> At present, maintenance_work_mem is set to 20 GB.
> Question: Do we need to tweak any other parameters?
>
> Note:
> We are planning this activity during application downtime only.
>
> Let me know if I missed anything.
>
> Regards,
> Raj
>
> > On Wed, Apr 3, 2019 at 8:42 AM rihad wrote:
> > > > And future updates can reuse it, too (an update is very similar to an
> > > > insert+delete).
> > >
> > >
> > > Hm, then it's strange that our DB takes 6 times as much space compared
> > > to a freshly restored one (only the public schema is considered).
> > >
> > > > Not if autovacuum has a chance to run between updates.
> > >
> > > Ours is run regularly, although we had to tweak it down so as not to
> > > interfere with normal database activity, so each run takes several
> > > hours on the table. We did that by setting
> > > autovacuum_vacuum_scale_factor = 0.05, down from the default of 0.2.


Re: cannot execute VACUUM during recovery

2019-02-27 Thread Stephen Eilert
Are you running VACUUM on the slave node? It has to run on the master.
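
If you are not sure which node a session is connected to, this standard
check will tell you (it returns true on a standby that is in recovery,
false on the master):

    SELECT pg_is_in_recovery();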

Thanks,

– Stephen
On Feb 27, 2019, 6:43 AM -0800, github kran wrote:
> Hello Team,
>
> We are using PostgreSQL 9.6 and seeing the below error while trying to run
> a VACUUM on one of our live tables running in production. We want to clean
> up some dead tuples on the table.
>
>
> Command: VACUUM (ANALYZE, VERBOSE) table_name.
>
> ERROR:  cannot execute VACUUM during recovery
>
> Thanks
> Kranthi


Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Stephen Eilert
160 million is a very low number; I manage production databases that easily
reach that value in a day. As others have said, 200 million is the default
threshold for the anti-wraparound vacuums. I wouldn't worry, especially for
template0.
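
If you want to keep an eye on it anyway, a standard catalog query shows the
age per database:

    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;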

That said, there is nothing preventing you from temporarily changing
DATALLOWCONN, running vacuum (which should be very quick) and then changing
it back. But you should not have to. I do that in our production database,
but only because it suffers from a bad schema design and we ended up with
thousands of tables, which is too much for the autovacuum workers to cope
with alone, so they need a manual "boost". I still don't disable autovacuum.
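
For reference, a sketch of that procedure (run as a superuser; the VACUUM
FREEZE step runs while connected to template0 itself):

    -- from any database, open template0 up temporarily:
    UPDATE pg_database SET datallowconn = true WHERE datname = 'template0';
    -- connect to template0 (e.g. \c template0 in psql), then:
    VACUUM FREEZE;
    -- reconnect elsewhere and close it again:
    UPDATE pg_database SET datallowconn = false WHERE datname = 'template0';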

I don't really understand the bit about autovacuum changing query
performance. In which scenario would it be preferable to have outdated
analyzer statistics? That would be like running a system with garbage
collection disabled because GC can increase the amount of free memory.
That's the whole point.

— Stephen