I set toast.autovacuum_vacuum_scale_factor to 0 and
toast.autovacuum_vacuum_threshold to 1, so that should be enough to force
a vacuum after the nightly deletes. Now, I changed the cost limit and the
cost delay; my question is whether there is anything else I need to do. My
maintenance_work_mem is
If there is a high number of updates during normal daytime processing, then
yes, you need to ensure autovacuum is handling this table as needed. If the
nightly delete is the only major source of bloat on this table, then
perhaps running a manual vacuum after the big delete keeps things tidy.
Granted,
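As a sketch of that manual-vacuum approach (the table and column names here are placeholders, not from the thread), the nightly job could run something like:

```sql
-- Hypothetical nightly job: purge old sessions, then vacuum immediately
-- rather than waiting for autovacuum. Vacuuming the parent table also
-- processes its TOAST table by default.
DELETE FROM orig_table WHERE created_at < now() - interval '3 days';
VACUUM (VERBOSE, ANALYZE) orig_table;
```

VERBOSE is optional, but its output (pages removed, tuples remaining, elapsed time) is useful when tuning.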
No, I don't run vacuum manually afterwards, because autovacuum should
run. This process happens every night. Yes, bloat is an issue because
the table grows and takes a lot of space on disk. Regarding autovacuum,
I think that it sleeps too much (17h in total) during its work, don't you
think?
Thanks, that context is very enlightening. Do you manually vacuum after
doing the big purge of old session data? Is bloat causing issues for you?
Why is it a concern that autovacuum's behavior varies?
*Michael Lewis*
On Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <
Maybe explaining the table's purpose will make it clearer. The original
table contains rows for sessions in my app. Every session saves some raw
data for itself, which is stored in the TOAST table. We clean old sessions
(3+ days) every night. During the day sessions are created, so the size of
the
It is curious to me that the tuples remaining count varies so wildly. Is
this expected?
*Michael Lewis*
On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
> I checked in the logs when autovacuum vacuumed my big TOAST table
> during the week and I wanted
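Besides the logs, one generic way to check when autovacuum last processed a table and its TOAST relation (a sketch, not something prescribed in the thread; 'orig_table' is a placeholder name) is the statistics view:

```sql
-- Autovacuum history and dead-tuple counts for a table and its TOAST table.
SELECT s.relname, s.last_autovacuum, s.autovacuum_count, s.n_dead_tup
FROM pg_stat_all_tables s
WHERE s.relid = 'orig_table'::regclass
   OR s.relid = (SELECT reltoastrelid FROM pg_class
                 WHERE oid = 'orig_table'::regclass);
```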
On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
I have 3 questions:
> 1) To what value do you recommend increasing the vacuum cost_limit? Does 2000
> seem reasonable? Or maybe it's better to leave it at the default and assign a
> specific value for big tables?
>
Just to make sure that I understood:
- By increasing the cost_limit or decreasing the page cost we
can decrease the time it takes the autovacuum process to vacuum a specific
table.
- The vacuum threshold/scale are used to decide how often the table will be
vacuumed, not how long it
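For reference, the cost knobs discussed above can also be set per table as storage parameters, so a higher limit applies only to the big table rather than globally (the values here are illustrative, not a recommendation from the thread):

```sql
-- Let autovacuum do more work between sleeps on this table and its TOAST
-- table: higher cost_limit / lower cost_delay (ms) means a faster vacuum.
ALTER TABLE orig_table SET (autovacuum_vacuum_cost_limit = 2000,
                            autovacuum_vacuum_cost_delay = 5);
ALTER TABLE orig_table SET (toast.autovacuum_vacuum_cost_limit = 2000,
                            toast.autovacuum_vacuum_cost_delay = 5);
```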
On Thu, 7 Feb 2019 at 02:34, Mariel Cherkassky
wrote:
> As I said, I set the next settings for the toasted table :
>
> alter table orig_table set (toast.autovacuum_vacuum_scale_factor = 0);
>
> alter table orig_table set (toast.autovacuum_vacuum_threshold =1);
These settings don't
On Wed, Feb 6, 2019 at 9:42 AM Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
> Well, basically I'm trying to tune it because the table still keeps
> growing. I thought that setting the scale and the threshold would be
> enough, but it seems that it wasn't. I attached some of the logs
Well, basically I'm trying to tune it because the table still keeps growing.
I thought that setting the scale and the threshold would be enough, but it
seems that it wasn't. I attached some of the log output to hear what
you guys think about it.
On Wed, Feb 6, 2019 at 16:12,
On Wed, Feb 6, 2019 at 5:29 AM Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
> Now the question is how to handle or tune it? Is there any chance that I
> need to increase the cost_limit / cost_delay?
>
Sometimes vacuum has more work to do, so it takes more time to do it.
There is
Hi all,
In the myriad of articles written about autovacuum tuning, I really like
this article by Tomas Vondra of 2ndQuadrant:
https://blog.2ndquadrant.com/autovacuum-tuning-basics/
It is a concise article that touches on all the major aspects of
autovacuum tuning: thresholds, scale
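For context, those two parameters feed the trigger condition described in the PostgreSQL docs: a table is autovacuumed once n_dead_tup exceeds autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples. A quick worked example using the 20M-row TOAST table from this thread:

```sql
-- With the defaults (threshold = 50, scale_factor = 0.2), a 20M-row TOAST
-- table is not vacuumed until ~4 million dead tuples accumulate:
--   50 + 0.2 * 20,000,000 = 4,000,050
-- With threshold = 1 and scale_factor = 0, as set in this thread, a single
-- dead tuple is enough to trigger a vacuum:
--   1 + 0 * 20,000,000 = 1
```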
Which one do you mean? I changed the threshold and the scale for the specific
table...
On Wed, Feb 6, 2019 at 15:36, dangal <
danielito.ga...@gmail.com> wrote:
> Would it be a good idea to start changing those values, given how low they
> are set in the default postgresql.conf?
Would it be a good idea to start changing those values, given how low they
are set in the default postgresql.conf?
--
Sent from:
http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html
Hey,
As I said, I set the following settings for the toasted table:
alter table orig_table set (toast.autovacuum_vacuum_scale_factor = 0);
alter table orig_table set (toast.autovacuum_vacuum_threshold =1);
Can you explain a little bit more why you concluded that autovacuum spent
its time
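One generic way to see where autovacuum spends its time (a sketch, not something prescribed in the thread) is to log every autovacuum run and read the per-run report in the server log:

```sql
-- Log every autovacuum action; each log entry reports pages and tuples
-- removed, buffer usage, and elapsed time, which shows where time went.
ALTER SYSTEM SET log_autovacuum_min_duration = 0;
SELECT pg_reload_conf();
```

A threshold of 0 logs all runs; a positive value in milliseconds logs only runs that take at least that long.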
On Thu, 7 Feb 2019 at 00:17, Laurenz Albe wrote:
>
> On Wed, 2019-02-06 at 12:29 +0200, Mariel Cherkassky wrote:
> > Now the question is how to handle or tune it? Is there any chance that I
> > need to increase the cost_limit / cost_delay?
>
> Maybe configuring autovacuum to run faster will
On Wed, 2019-02-06 at 12:29 +0200, Mariel Cherkassky wrote:
> Hi,
> I have a table with a bytea column, and its size is huge; that's why
> Postgres created a TOAST table for that column.
> The original table contains about 1K-10K rows, but the TOAST table can contain up
> to 20M rows.
> I assigned
Hi,
I have a table with a bytea column, and its size is huge; that's why
Postgres created a TOAST table for that column. The original table
contains about 1K-10K rows, but the TOAST table can contain up to 20M rows. I
assigned the following two settings for the toasted table:
alter table orig_table set (toast.autovacuum_vacuum_scale_factor = 0);
alter table orig_table set (toast.autovacuum_vacuum_threshold = 1);