I've run into a case of bad performance that doesn't sound too hard
to solve. The question is: is it worth solving?
The situation is this: I have a table that can grow to a large number of
rows, then shrink to zero over a large number of quick, consecutive
transactions. The primary key index for the table is getting a lot of use
in the process.
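Schematically, the pattern is something like this ("queue" and "id"
are placeholder names, not my real schema):

    -- Illustrative sketch only.
    CREATE TABLE queue (id serial PRIMARY KEY, payload text);

    -- Phase 1: many transactions insert rows until the table is large.
    INSERT INTO queue (payload) VALUES ('work item');

    -- Phase 2: many quick, consecutive transactions drain it back to
    -- zero, each looking up rows through the primary key index.
    DELETE FROM queue WHERE id = 12345;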
But once roughly one-third of the rows have been deleted, the planner
stops using that index and resorts to sequential scans. I tried
suppressing that by setting enable_seqscan to off: it works as
advertised, but performance is still terrible until (as far as I can
make out) the next analyze run has completed!
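Concretely, the experiment was along these lines (same placeholder
names as above):

    -- Steer the planner away from sequential scans for this session.
    SET enable_seqscan = off;
    EXPLAIN SELECT * FROM queue WHERE id = 12345;
    -- The plan goes back to an index scan, but execution stays slow
    -- until the next analyze run finishes.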
So I suppose the planner has a good reason to ignore the index at
that point. I'm assuming it has something to do with the correlation
between the index and the column's statistics degrading in some way.
I also tried running my own analyze on just the primary key column
(see the sketch below). That completes very quickly, and performance
is restored for a while. But as far as I can tell, a regular
automatic analyze run will block my own, more limited one on the
same table. So performance is still bad, and
now it's irregular to boot.
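By a more limited run I mean something like this, restricted to the
one column (placeholder names again):

    -- Refresh statistics for just the primary key column.
    ANALYZE queue (id);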
This makes me wonder: when the planner finds that an index is no longer
worth using because its corresponding statistics are out of date, and it's
cheap to update those same stats, maybe it should do so? Even if there's
also going to be a full analyze on the table, it could be worthwhile to do
this quick limited run first. (Though not if one is already
underway, of course.)
All this is based largely on guesswork, so if I've got it all wrong,
please enlighten me!