Extremely inefficient merge-join

2021-03-17 Thread Marcin Gozdalik
me_id = ANY ('{5}'::bigint[])) Note that 2153506389 / 1166 = 1,846,918, and similarly 831113101 / 450 = 1,846,918. I wonder how I can help the Postgres query planner choose a faster plan? -- Marcin Gozdalik
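The ratio arithmetic quoted above can be checked directly in SQL; the interpretation (that the merge join rescans each inner row roughly 1.8 million times) is the poster's, and the row counts are the ones given in the message:

```sql
-- Integer division reproduces the quoted figure in both cases:
SELECT 2153506389 / 1166;  -- 1846918
SELECT 831113101 / 450;    -- 1846918
```

Both divisions land on the same quotient, which is what suggests the two node counts come from the same repeated inner-side rescan.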

Re: Extremely inefficient merge-join

2021-03-17 Thread Marcin Gozdalik
hints would be appreciated.

On Wed, 17 Mar 2021 at 20:47, Tom Lane wrote:
> Marcin Gozdalik writes:
> > Sometimes Postgres will choose a very inefficient plan, which involves
> > looping many times over the same rows, producing hundreds of millions or
> > billions of rows:
> Yeah
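The thread snippet cuts off before any concrete advice, but a common workaround for a pathological merge join (not taken from the thread itself) is to disable that join method for the one session and let the planner fall back to a hash or nested-loop plan; `enable_mergejoin` is a standard planner GUC:

```sql
-- Sketch only: steer the planner away from merge joins for this session.
SET enable_mergejoin = off;
EXPLAIN ANALYZE
SELECT ...;            -- the original query (elided in the archive snippet)
RESET enable_mergejoin;
```

Because `SET` without `LOCAL` is session-scoped, the `RESET` keeps the change from leaking into later queries on the same connection.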

Very slow "bloat query"

2021-05-14 Thread Marcin Gozdalik
of temporary tables created, or make the analytics queries finish quicker. Apart from the above hack of filtering out live tuples to a separate table, is there anything I could do? Thank you -- Marcin Gozdalik
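As a cheap first pass before running a full "bloat query", the statistics collector's per-table counters give an approximate dead-tuple picture almost for free; this sketch (not from the thread) uses the standard `pg_stat_user_tables` view:

```sql
-- Approximate bloat triage: tables with the most dead tuples.
-- Counters are estimates maintained by the stats collector, not exact.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;
```

This does not measure on-disk bloat the way the full query does, but it is fast enough to run against a busy system with many temporary tables.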

Re: Very slow "bloat query"

2021-05-14 Thread Marcin Gozdalik
> the stale-statistics logic. The control parameter for that,
> vacuum_cleanup_index_scale_factor, will be removed entirely in v14. In v13,
> it remains present to avoid breaking existing configuration files, but it
> no longer does anything.
>
> best,
> Imre
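For context on the knob the quoted release note is describing: on versions where it still has an effect (it was introduced in v11 and, per the note, is a no-op from v13), it can be set as an ordinary configuration parameter. A minimal sketch, purely illustrative:

```sql
-- Only meaningful on v11/v12: controls how stale the index statistics may
-- get before a btree index cleanup scan is triggered during vacuum.
SET vacuum_cleanup_index_scale_factor = 0.5;
SHOW vacuum_cleanup_index_scale_factor;
```

On v13 the parameter is still accepted (so existing postgresql.conf files keep loading) but is ignored, which is exactly the behavior the release note warns about.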

Re: Very slow "bloat query"

2021-05-14 Thread Marcin Gozdalik
There is a long-running analytics query (which usually runs for 30-40 hours). I agree that's not the best position to be in, but right now I can't do anything about it.

On Fri, 14 May 2021 at 15:04, Tom Lane wrote:
> Marcin Gozdalik writes:
> > I have traced the pro
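A transaction that stays open for 30-40 hours is easy to spot from the standard `pg_stat_activity` view; this sketch (not from the thread) lists the oldest open transactions, which are the ones that can hold back vacuum and keep dead tuples around:

```sql
-- Oldest open transactions on the server; the 30-40 hour analytics query
-- would appear at the top of this list.
SELECT pid, now() - xact_start AS xact_age, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;
```

Knowing the PID also makes it possible to decide, case by case, whether the transaction is worth cancelling with `pg_cancel_backend(pid)`.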

Re: Very slow "bloat query"

2021-05-14 Thread Marcin Gozdalik
.6.22-1.pgdg110+1),
> compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
> (1 row)
>
> Imre
>
> On Fri, 14 May 2021 at 14:11, Marcin Gozdalik wrote:
>> Unfortunately it's still 9.6. An upgrade to the latest 13 is planned for this
>> year.