On 4/3/19 3:45 PM, Perumal Raj wrote:
Hi Stephen,
Thanks for the response.
Version: 9.2
We never ran VACUUM FULL in the past; all we run is a manual vacuum
(weekly).
Based on observation (a test run), we were able to reclaim 150 GB out
of 500 GB.
We are heading into a planned downtime soon, so we are thinking of
running FULL during that window.
Reasons to run FULL: 1. Reclaim unused space that Postgres never uses.
Did you purge a *lot* of records?
2. FULL may increase performance.
Maybe. But choose your tables wisely.
3. Daily backup size and
time will be reduced after reclaiming 150 GB.
How are you currently performing backups? (The size won't change if you're
using pg_dump, since a dump contains only live rows, and it won't change
much if you're using pgbackrest with the compression option -- though it
will probably run faster.)
Bottom line:
1. Choose your tables wisely.
2. Make sure you have enough disk space.
3. Either autovacuum more aggressively or explicitly vacuum certain tables
from a cron job (a sketch follows below).
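A minimal sketch of option 3, assuming a nightly cron entry that feeds a
hand-picked table list through psql (the schedule, database name, and
table names are all placeholders):

    -- nightly_vacuum.sql, invoked from cron with something like:
    --   15 2 * * *  psql -d mydb -f nightly_vacuum.sql
    -- Plain VACUUM does not block reads or writes, so it is safe live.
    VACUUM (VERBOSE, ANALYZE) hot_table_1;
    VACUUM (VERBOSE, ANALYZE) hot_table_2;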
Thanks,
Raj
On Wed, Apr 3, 2019 at 1:16 PM Stephen Eilert <cont...@stepheneilert.com> wrote:
> Ideally VACUUM FULL should not require a giant lock on the table.
It is a massively expensive operation, regardless. Not sure if it is
something you want to run in production outside a maintenance window.
I would argue that frequent vacuum full is an antipattern. This will
become a matter of superstition in your company.
If DB size growth is a problem, make autovacuum more aggressive. Or run
your manual vacuum job (not FULL) more often than weekly. Daily, if
you have to. This will not reclaim disk space as reported by the OS,
but it should make the space available for new row versions, so the DB
should mostly stop growing from the OS point of view (mostly, because
you may be adding new data, right?). If it is still a problem, then
there may be something else going on.
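If only a few bloated tables need vacuuming sooner, one way to do that
is per-table autovacuum storage parameters; a sketch, where the table
name and thresholds are placeholders to tune for your workload:

    -- Trigger autovacuum at 2% dead rows instead of the 20% default
    ALTER TABLE busy_table SET (autovacuum_vacuum_scale_factor = 0.02);
    -- Throttle it less (10 ms cost delay) so each run finishes sooner
    ALTER TABLE busy_table SET (autovacuum_vacuum_cost_delay = 10);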
Which PG version is that?
— Stephen
On Apr 3, 2019, 10:02 AM -0700, Perumal Raj <peruci...@gmail.com> wrote:
Hi All,
Thanks for all your valuable inputs.
Here is some more data:
Though we have 150 GB of free space spread across 500 tables, the DB is
still growing at about 1 GB every other day.
Also, we have a manual vacuum job scheduled to run on a weekly basis, so
it seems the space is not being reused all the time?
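One way to check whether vacuum is keeping up is a plain catalog query
against the statistics views (nothing version-specific here):

    -- Tables with the most dead rows, and when they were last vacuumed
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 20;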
So to conclude the requirement here: the only way to parallelize is
multiple scripts, and there is no need to run REINDEX separately.
Question: Do we need to consider table dependencies while preparing the
scripts, in order to avoid table locks during VACUUM FULL?
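For splitting the work, a sketch that buckets tables by size so each
parallel script gets a similar share (the bucket count of 4 and the
schema filter are assumptions):

    -- Generate VACUUM FULL statements, spread across 4 buckets by size
    SELECT ntile(4) OVER (ORDER BY pg_total_relation_size(c.oid) DESC) AS bucket,
           format('VACUUM FULL %I.%I;', n.nspname, c.relname) AS cmd
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname = 'public';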
At present maintenance_work_mem is set to 20 GB.
Question: Do we need to tweak any other parameters?
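For what it's worth, such parameters can also be raised only in the
session doing the maintenance rather than globally; a sketch (the value
and table name are placeholders):

    SET maintenance_work_mem = '2GB';    -- applies to this session only
    VACUUM FULL VERBOSE some_big_table;
    RESET maintenance_work_mem;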
Note:
We are planning this activity during application downtime only.
Let me know if I missed anything.
Regards,
Raj
On Wed, Apr 3, 2019 at 8:42 AM rihad <ri...@mail.ru> wrote:
> And future updates can reuse it, too (an update is very similar to
> an insert+delete).
Hm, then it's strange that our DB takes 6 times as much space compared
to a freshly restored one (only the public schema is considered).
> Not if autovacuum has a chance to run between updates.
Ours is run regularly, although we had to tweak it down so as not to
interfere with normal database activity, so it takes several hours each
run on the table. We did that by setting autovacuum_vacuum_scale_factor
= 0.05, down from the default 0.2.
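For anyone tuning along, the current autovacuum settings can all be
inspected with a simple catalog query:

    -- Show every autovacuum-related setting and its current value
    SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';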
--
Angular momentum makes the world go 'round.