On Thu, 3 Apr 2025 at 18:07, Tom Lane <t...@sss.pgh.pa.us> wrote:
> A simple-minded approach could be to just be pessimistic, and
> increase our estimate of how many rows would need to be scanned as a
> consequence of noticing that the columns have significant correlation.
> The shape of that penalty function would be mostly guesswork though,
> I fear.  (Even with a clear idea of what to do, making this happen
> seems a little complex --- just a SMOP, but I'm not very sure how to
> wire it up.)
The problem with being pessimistic is that if the distribution were even, or the col_a > 4996 rows were right at the start of an ordered scan of the idx_col_b_a index, it would have been a good plan. There might be just as many people getting good plans as there are getting bad ones, and adding pessimism here might just make one set of people happy and the other sad.

I don't have a clear idea, but from a few minutes' thinking about it: maybe when we build the statistics on a table, once we're about done with what happens today, we could take the set of sample rows and sort it according to the sort order of each amcanorder index. We'd then go through the MCV lists and histograms for each column and record a minimum and a maximum value for the percentage of the way through the sorted sample rows at which we find each MCV value and each value within each histogram bucket, and store those in some new statistics kind against the index (maybe indexes could opt into this).

This might sound like it'd need lots of storage, but even 1 byte each for the min and max would be better than today: 0 could mean near the start, 127 in the middle and 255 at the end. I imagine this could be done fairly efficiently by sorting the MCV list by value and bsearching that array as we walk through the index-ordered sample rows. The histograms are sorted already.

When planning the query in question, to figure out how many rows we'd expect to have to read, we'd look at the stats stored against the index to find the minimum position for values in the col_a histogram where col_a > 4996. Instead of assuming an even distribution, we'd use that minimum value to tell us what percentage of the index must be read before a match is found. The stored maximum position value would do the same job for backward index scans.

David
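P.S. In case the above is unclear, here's a rough Python sketch of the idea. All of the names and structures below are mine, purely for illustration; none of this matches PostgreSQL's actual analyze code. It records, for each MCV of a column, the min/max fractional position (quantized to one byte, 0 = start, 255 = end) at which the value appears when the sample rows are sorted in the index's order, then uses the stored minimum to estimate what fraction of a forward index scan must be read before the first match:

```python
from bisect import bisect_left

def build_position_stats(sample_rows, index_key, column, mcv_list):
    """Hypothetical sketch: walk the sample rows in index order and
    record, per MCV of `column`, the byte-quantized min/max position
    (0 = start of index, 255 = end) at which the value was seen."""
    ordered = sorted(sample_rows, key=index_key)
    n = len(ordered)
    mcvs = sorted(mcv_list)            # sort MCVs by value so we can bsearch
    minpos = {v: 255 for v in mcvs}
    maxpos = {v: 0 for v in mcvs}
    for i, row in enumerate(ordered):
        v = row[column]
        j = bisect_left(mcvs, v)       # the bsearch mentioned above
        if j < len(mcvs) and mcvs[j] == v:
            frac = (i * 255) // (n - 1) if n > 1 else 0
            minpos[v] = min(minpos[v], frac)
            maxpos[v] = max(maxpos[v], frac)
    return minpos, maxpos

def est_index_fraction(minpos, qual_values):
    """Estimated fraction of a forward index scan read before the
    first row matching any of qual_values is found.  A backward scan
    would use maxpos the same way."""
    positions = [minpos[v] for v in qual_values if v in minpos]
    if not positions:
        return 0.0                     # no info; fall back to today's logic
    return min(positions) / 255.0
```

With sample rows where col_a correlates perfectly with the index order (rows = [{'a': i, 'b': i} for i in range(100)], index ordered by b), a qual like a >= 90 yields an estimate of roughly 0.9 of the index scanned before the first match, rather than the even-distribution assumption.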