On 6/5/25 17:42, Mark Frost wrote:
> Is there any good explanation for this behaviour? Preferably we’d like
> some way for proper `most_common_elems` statistics to be collected in
> our production database, in the hope that this influences a good query
> plan to always be selected.
most_common_elems has a limited size, and if all the elements have the
same frequency, there's nothing we can do.
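To see what ANALYZE actually stored for the column, you can look at
pg_stats; a quick check, assuming the same table/column names as in the
ALTER TABLE below:

    -- element-level statistics gathered for test.tags, if any
    select most_common_elems, most_common_elem_freqs
    from pg_stats
    where tablename = 'test' and attname = 'tags';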
You could do:

    alter table test alter column tags set statistics X;

However, X is capped at 10000, and for array columns ANALYZE tracks about
10x the statistics target, so most_common_elems will hold fewer than 100k
entries; it would probably be stupid to go beyond that anyway.
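If you try that, keep in mind the new target only takes effect at the
next ANALYZE, so the full sequence would be something like:

    -- raise the per-column statistics target to its maximum
    -- and regather statistics for the table
    alter table test alter column tags set statistics 10000;
    analyze test;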
It seems that postgres lacks some kind of "n_distinct_elems" statistic
for cases like this, but let's wait and see what the statistics gurus think.