Re: Extremely inefficient merge-join

2021-03-17 Thread Marcin Gozdalik
dir_current changes often, but is analyzed only after significant changes, so in practice it is analyzed roughly once an hour. The ratio of rows with volume_id=5 to the total number of rows doesn't change much: volume_id=5 appears in roughly 1.5M-2M rows, out of a total of around 750M-800M rows.
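
For cross-checking, what the planner currently believes about volume_id can be read from the standard statistics views. A rough sketch (dir_current and volume_id are the names used in this thread; adjust as needed):

-- When was dir_current last analyzed or auto-analyzed?
SELECT last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'dir_current';

-- What fraction of the table does the planner attribute to volume_id = 5?
SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'dir_current'
  AND attname = 'volume_id';

If volume_id = 5 shows up in most_common_vals, its entry in most_common_freqs should sit near 1.5M-2M out of ~750M-800M, i.e. around 0.2%.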

Re: Extremely inefficient merge-join

2021-03-17 Thread Tom Lane
Marcin Gozdalik writes:
> Sometimes Postgres will choose a very inefficient plan, which involves
> looping many times over the same rows, producing hundreds of millions or
> billions of rows:

Yeah, this can happen if the outer side of the join has a lot of duplicate rows. The query planner is aware of
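
For anyone reading along, a minimal self-contained sketch of the effect described above (illustrative table names, not the poster's actual schema): when the outer side of a merge join carries many duplicate join keys, the matching run on the inner side is effectively re-read once per duplicate, so EXPLAIN ANALYZE row counts can balloon far past the size of either input.

-- Both sides share a single heavily duplicated key value.
CREATE TEMP TABLE outer_dups AS
SELECT 5 AS volume_id, g AS filler FROM generate_series(1, 10000) g;

CREATE TEMP TABLE inner_run AS
SELECT 5 AS volume_id, g AS payload FROM generate_series(1, 1000) g;

ANALYZE outer_dups;
ANALYZE inner_run;

-- Force the planner toward a merge join for the demonstration.
SET enable_hashjoin = off;
SET enable_nestloop = off;

-- Expect a Merge Join emitting 10000 * 1000 = 10M rows: the inner run
-- is rescanned for every duplicate outer row.
EXPLAIN (ANALYZE)
SELECT count(*)
FROM outer_dups o
JOIN inner_run i USING (volume_id);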