Pallav Kalva wrote:

Hi Everybody.

I have a table in my production database that gets updated regularly, and the stats for this table in pg_class are totally wrong. I used to run vacuumdb on the whole database once daily, and when I posted about the wrong stats in pg_class, most people on this list (and the Postgres docs) suggested running "vacuum analyze" more frequently on this table.

I set up a cronjob a couple of weeks ago to run vacuum analyze on this table every 3 hours, and the stats are still totally wrong. This is badly affecting the performance of queries against this table.
How can I fix this problem? Or is this standard Postgres behaviour?
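For reference, a per-table cron entry of this kind might look like the following sketch (database and table names here are placeholders, not the actual ones):

```
# run VACUUM ANALYZE on the problem table every 3 hours
0 */3 * * * psql -d mydb -c 'VACUUM ANALYZE mytable;'
```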

Here are the stats for the problem table on my production database:

relpages |  reltuples
  168730 | 2.19598e+06

If I rebuild the same table on the dev db and check the stats, they are totally different. I expected some difference from the production stats, but not to this extent; as you can see below, the gap is huge.

relpages | reltuples
   25230 |    341155
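Figures like the ones above can be pulled from the catalog with a query along these lines (the table name is a placeholder):

```sql
-- planner's page and row-count estimates for one table
SELECT relpages, reltuples
FROM pg_class
WHERE relname = 'mytable';
```

Note that relpages and reltuples are only estimates, refreshed by VACUUM and ANALYZE, not live counts.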

Thanks! Pallav

What version of the database? As I recall, some versions suffer from table and index bloat when there is a large amount of turnover on a table. I believe VACUUM FULL ANALYZE helps with this, as does increasing max_fsm_pages (the last couple of lines of VACUUM FULL VERBOSE output can give you an indication of how large max_fsm_pages might need to be).

VACUUM FULL takes an exclusive lock on the table, which means you don't want to run it all the time, but if you can do it on the weekend, or maybe in the evenings, it might fix the problem.

I don't know if you can recover without a VACUUM FULL, but rebuilding the index with REINDEX, or dropping and re-creating it, might also help.
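As a rough sketch, the options above would be run like this (table and index names are placeholders, and the lock warning above applies):

```sql
-- compact the table, report free-space info, and refresh stats;
-- takes an exclusive lock for the duration
VACUUM FULL VERBOSE ANALYZE mytable;

-- rebuild a bloated index from scratch
REINDEX INDEX mytable_some_idx;
```

max_fsm_pages itself is set in postgresql.conf and, in these older versions, requires a server restart to take effect.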
