dir_current changes often, but it is analyzed after significant changes, so
in practice it gets analyzed roughly once an hour.
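A quick way to confirm how often it is actually analyzed is to ask the
statistics collector; this is just a sketch against the standard
pg_stat_user_tables view, with only the table name dir_current taken from
the report:

    SELECT relname, last_analyze, last_autoanalyze,
           analyze_count, autoanalyze_count
    FROM pg_stat_user_tables
    WHERE relname = 'dir_current';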
The approximate ratio of rows with volume_id=5 to the total number of rows
doesn't change (i.e. volume_id=5 appears in roughly 1.5M-2M rows out of a
total of around 750-800M rows).
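If the planner's idea of that ratio is in doubt, the per-column statistics
can be inspected directly; a sketch assuming the column is named volume_id
on dir_current, as above:

    SELECT attname, n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'dir_current' AND attname = 'volume_id';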
Marcin Gozdalik writes:
> Sometimes Postgres will choose a very inefficient plan, which involves
> looping many times over the same rows, producing hundreds of millions or
> billions of rows:
Yeah, this can happen if the outer side of the join has a lot of
duplicate rows: for each duplicate, the matching group on the inner side
has to be rescanned. The query planner is aware of this and tries to
account for that rescanning cost when it estimates a merge join.
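A minimal, self-contained sketch of the effect (hypothetical throwaway
tables, not the ones from the report; hash and nested-loop joins are
disabled only to force a merge join for the demonstration):

    CREATE TEMP TABLE outer_t AS
        SELECT (i % 10) AS k FROM generate_series(1, 100000) AS i;  -- 10 keys, ~10000 duplicates each
    CREATE TEMP TABLE inner_t AS
        SELECT (i % 10) AS k FROM generate_series(1, 1000) AS i;    -- same 10 keys, ~100 duplicates each
    ANALYZE outer_t;
    ANALYZE inner_t;
    SET enable_hashjoin = off;
    SET enable_nestloop = off;
    -- In the resulting plan the inner side (typically a Materialize node)
    -- reports far more rows than inner_t contains, because its matching
    -- group is rescanned for every duplicate outer row; the join itself
    -- emits about 10 million rows here.
    EXPLAIN (ANALYZE) SELECT * FROM outer_t o JOIN inner_t i USING (k);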
Hi
I am having a rare issue with an extremely inefficient merge join. The query
plan indicates that PG is doing some kind of nested loop, although an index
is present.
PostgreSQL 9.6.17 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
20150623 (Red Hat 4.8.5-39), 64-bit
Schema of dir_current (so