As it happens, our larger tables operate as a business log and are also
insert-only.

- There is no partitioning at this time since we expect to have an
automated process to delete rows older than a certain date.
- Running ANALYZE during off-hours sounds like a good idea; if there is no
other way to determine the effect on the db we may end up doing that (a
rough sketch follows this list).
- We have an open schema and heavily depend on jsonb, so I'm not sure
whether increasing the statistics target would be helpful (see the second
sketch below).
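
For the off-hours idea, a minimal sketch of what that could look like,
assuming a hypothetical table name (event_log) and that the manual ANALYZE
is scheduled externally (cron or similar) during a quiet window:

  -- see when autoanalyze last ran and how many rows have changed since then
  SELECT relname, last_analyze, last_autoanalyze, n_mod_since_analyze
  FROM pg_stat_user_tables
  WHERE relname = 'event_log';

  -- run the analyze manually while load is low
  ANALYZE VERBOSE event_log;

A manual ANALYZE resets the changes-since-analyze counter, so a recent
manual run also pushes back the next automatic one.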

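On the statistics-target point, if we ever did want to experiment, a minimal
sketch: the target can also be set per column rather than globally (the
table and column names below are made up):

  -- per-column override; takes effect at the next ANALYZE of the table
  ALTER TABLE event_log ALTER COLUMN payload SET STATISTICS 500;
  ANALYZE event_log;

As Tomas's reply below implies, a larger target means a larger sample per
ANALYZE, so this would increase rather than reduce the CPU cost of each run.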

On Tue, Dec 19, 2017 at 2:03 PM, Tomas Vondra <> wrote:

> On 12/19/2017 05:47 PM, Habib Nahas wrote:
> > Hi,
> >
> > We operate an RDS postgres 9.5 instance and have periodic CPU spikes to
> > 100%. These spikes appear to be due to autoanalyze kicking in on our
> > larger tables.
> >
> > Our largest table has 75 million rows and the autoanalyze scale factor
> > is set to 0.05.
> >
> > The documentation I've read suggests that ANALYZE always operates on
> > the entire table and is not incremental. Given that supposition, are
> > there ways to control the cost (especially CPU) of the autoanalyze
> > operation? Would a more aggressive autoanalyze scale factor (0.01)
> > help? With the current scale factor we see an autoanalyze once a week;
> > query performance has been acceptable so far, which could imply that
> > the scale factor could be increased if necessary.
> >
> No, reducing the scale factor to 0.01 will not help at all; it will
> actually make the issue worse. The only thing autoanalyze does is run
> ANALYZE, which *always* collects a fixed-size sample. Making it more
> frequent will not reduce the amount of work done on each run.
>
> So the first question is whether you are really not using the default
> (0.1), i.e. have you actually reduced it to 0.05?
>
> The other question is why it's so CPU-intensive. Are you using the
> default statistics_target value (100), or have you increased that too?
>
> regards
> --
> Tomas Vondra        
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

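For reference, the two settings Tomas asks about can be checked directly
(on RDS they are normally managed through the parameter group, but SHOW
still reports the effective values), and any per-table override of the
scale factor would show up in reloptions:

  SHOW default_statistics_target;          -- 100 unless changed
  SHOW autovacuum_analyze_scale_factor;    -- 0.1 unless changed

  -- per-table autovacuum overrides, if any
  SELECT relname, reloptions
  FROM pg_class
  WHERE reloptions IS NOT NULL;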