Re: ERROR: unrecognized parameter "autovacuum_analyze_scale_factor"

2019-02-13 Thread Mariel Cherkassky
I meant the analyze: if analyze will run very often on the original table, aren't there disadvantages to it? On Wed, Feb 13, 2019 at 18:54 Alvaro Herrera <alvhe...@2ndquadrant.com> wrote: > On 2019-Feb-13, Mariel Cherkassky wrote: > > > To be honest, it isn't my db, but I just have

Re: ERROR: unrecognized parameter "autovacuum_analyze_scale_factor"

2019-02-13 Thread Alvaro Herrera
On 2019-Feb-13, Mariel Cherkassky wrote: > To be honest, it isn't my db, but I just have access to it ... Well, I suggest you forget the password then :-) > Either way, so I need to change the vacuum_analyze_scale/threshold for the > original table ? But the value will be too high/low for the ori

Re: ERROR: unrecognized parameter "autovacuum_analyze_scale_factor"

2019-02-13 Thread Mariel Cherkassky
To be honest, it isn't my db, but I just have access to it ... Either way, so I need to change the vacuum_analyze_scale/threshold for the original table? But the value will be too high/low for the original table. For example, if my original table has 30,000 rows and my TOAST table has 100,000,000 rows.

Re: ERROR: unrecognized parameter "autovacuum_analyze_scale_factor"

2019-02-13 Thread Alvaro Herrera
On 2019-Feb-13, Mariel Cherkassky wrote: > Hey, > I have a very big TOAST table in my db (9.2.5). Six years of bugfixes missing there ... you need to think about an update. > Autovacuum doesn't gather > statistics on it because the analyze_scale/threshold are default and as a > result autoanalyz

ERROR: unrecognized parameter "autovacuum_analyze_scale_factor"

2019-02-13 Thread Mariel Cherkassky
Hey, I have a very big TOAST table in my db (9.2.5). Autovacuum doesn't gather statistics on it because the analyze_scale/threshold are at their defaults, and as a result autoanalyze never runs and the statistics are wrong: select * from pg_stat_all_tables where relname='pg_toast_13488395'; -[ RECORD 1 ]-
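The mismatch discussed in this thread can be made concrete with the trigger condition PostgreSQL documents for autoanalyze: it fires once the number of tuples changed since the last analyze exceeds autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples. A minimal sketch, assuming the default values (threshold 50, scale factor 0.1) and the row counts quoted in the thread; note the analyze parameters only apply to regular tables, which is why setting toast.autovacuum_analyze_scale_factor raises the error in the subject line:

```python
# Toy calculation of PostgreSQL's autoanalyze trigger condition
# (per the docs): autoanalyze fires when
#   n_changed > analyze_threshold + analyze_scale_factor * reltuples
# Defaults assumed: threshold = 50, scale_factor = 0.1.

def autoanalyze_trigger(reltuples, threshold=50, scale_factor=0.1):
    """Changed tuples required before autoanalyze fires."""
    return threshold + scale_factor * reltuples

# Row counts from the thread: 30,000-row parent, 100,000,000-row TOAST table.
parent_trigger = autoanalyze_trigger(30_000)       # 3,050 changed rows
toast_trigger = autoanalyze_trigger(100_000_000)   # 10,000,050 changed rows

print(parent_trigger, toast_trigger)
```

With one shared scale factor, a value small enough to analyze the 100M-row table promptly would make the 30,000-row table analyze after only a handful of changes, which is the trade-off Mariel is asking about.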

Re: understanding max_wal_size,wal_keep_segments and checkpoints

2019-02-13 Thread Laurenz Albe
Mariel Cherkassky wrote: > Yeah, so basically if we open a transaction and we do some insert queries, > until the transaction > is committed the changes (the WAL data, not the blocks that were changed) > are kept in the wal buffers ? > . When the user commits the transaction, the wal buffer(o

Re: understanding max_wal_size,wal_keep_segments and checkpoints

2019-02-13 Thread Mariel Cherkassky
> > I'm trying to understand the logic behind all of these so I would be > happy > > if you can confirm what I understood or correct me if I'm wrong : > > -The commit command writes all the data in the wal_buffers into > the wal files. > > All the transaction log for the transaction has

Re: understanding max_wal_size,wal_keep_segments and checkpoints

2019-02-13 Thread Laurenz Albe
Mariel Cherkassky wrote: > I'm trying to understand the logic behind all of these, so I would be happy > if you can confirm what I understood or correct me if I'm wrong: > -The commit command writes all the data in the wal_buffers into > the wal files. All the transaction log for the

understanding max_wal_size,wal_keep_segments and checkpoints

2019-02-13 Thread Mariel Cherkassky
Hey, I'm trying to understand the logic behind all of these, so I would be happy if you can confirm what I understood or correct me if I'm wrong: -The commit command writes all the data in the wal_buffers into the wal files. -Checkpoints write the data itself (blocks that were changed)
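The two write paths being untangled in this thread can be sketched as a toy model (this is an illustration of the concept, not PostgreSQL's internals): COMMIT flushes the WAL buffers to the WAL files, making the transaction durable, while the changed data blocks themselves are only written to the data files later, by a checkpoint:

```python
# Toy model of the two write paths discussed above.
wal_buffers, wal_files = [], []      # in-memory WAL vs. on-disk WAL
dirty_blocks, data_files = {}, {}    # changed pages vs. on-disk pages

def insert(block, value):
    wal_buffers.append((block, value))   # log the change first (WAL record)
    dirty_blocks[block] = value          # modify the page in shared buffers

def commit():
    wal_files.extend(wal_buffers)        # flush WAL to disk: now durable
    wal_buffers.clear()                  # note: data files still unchanged

def checkpoint():
    data_files.update(dirty_blocks)      # write the changed blocks themselves
    dirty_blocks.clear()

insert("page_1", "row A")
commit()
assert wal_files and not data_files      # committed, blocks not yet written
checkpoint()
assert data_files["page_1"] == "row A"   # checkpoint wrote the changed block
```

This is why a crash after COMMIT but before a checkpoint loses nothing: recovery replays the WAL records written at commit time to reconstruct the data blocks.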