erial=false;" produces an efficient plan. good to know there
> are *some* knobs to turn when the optimizer comes up with a bad plan. would
> be awesome if you could lock that plan into place w/out altering the variable.
>
> thanks for the help Hannu!
>
> On Mon, Mar 22, 20
"set enable_material=false;" produces an efficient plan. good to know there
are *some* knobs to turn when the optimizer comes up with a bad plan. would
be awesome if you could lock that plan into place w/out altering the
variable.
thanks for the help Hannu!
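For what it's worth, the override does not have to be session-wide. A sketch of
narrower scopes (the function name below is made up):

BEGIN;
SET LOCAL enable_material = off;   -- reverts automatically at COMMIT/ROLLBACK
-- run the problem query here, e.g.:
SELECT count(*) FROM pg_class;
COMMIT;

-- or pin the setting to a single function so only that code path sees it:
-- ALTER FUNCTION reporting_query() SET enable_material = off;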
On Mon, Mar 22, 2021 at 4:3
You can play around with the various `enable_*` flags to see if disabling any
of them will *maybe* yield the plan you were expecting, and then
check the costs in EXPLAIN to see if the optimiser also thinks this
plan is cheaper.
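A rough sketch of that workflow (the final SELECT is only a stand-in for the
real query from this thread):

-- list all the planner method switches that can be toggled per session
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'enable\_%';

-- flip one switch at a time and compare the plans and their estimated costs
SET enable_material = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM pg_class;   -- stand-in for the problem query
RESET enable_material;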
On Mon, Mar 22, 2021 at 6:29 PM Chris Stephens wrote:
We are, but I was hoping to get a better understanding of where the
optimizer is going wrong and what I can do about it.
Chris
On Mon, Mar 22, 2021 at 9:54 AM Laurenz Albe wrote:
On Mon, 2021-03-22 at 08:10 -0500, Chris Stephens wrote:
> The following SQL takes ~25 seconds to run. I'm relatively new to postgres
> but the execution plan (https://explain.depesz.com/s/N4oR) looks like it's
> materializing the entire EXISTS subquery for each row returned by the rest
> of the query before probing for plate_384_id existence.
AWS RDS v12
The following SQL takes ~25 seconds to run. I'm relatively new to postgres
but the execution plan (https://explain.depesz.com/s/N4oR) looks like it's
materializing the entire EXISTS subquery for each row returned by the rest
of the query before probing for plate_384_id existence. postg
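The query itself is cut off in this excerpt; a rough sketch of the shape being
described, with made-up table and column names apart from plate_384_id:

SELECT r.*
FROM run_results AS r              -- hypothetical outer table
WHERE EXISTS (
    SELECT 1
    FROM plates AS p               -- hypothetical subquery table
    WHERE p.plate_384_id = r.plate_384_id
      AND p.status = 'active'      -- hypothetical extra filter
);
-- the complaint: the plan materializes the whole subquery once per outer row
-- instead of using a hashed or index-driven semi-join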
> if you run a manual ANALYZE 100 times, you'll sometimes get the bad
> plan. Maybe depending on the data visible at the time analyze is invoked.
>
I've been thinking of trying to capture the statistics info in the bad case;
I wonder if I could reproduce the situation that way.
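One way to do that capture (the table name is assumed for illustration):
snapshot what the planner sees in pg_stats each time the plan is checked, so
good and bad runs can be compared later.

-- one-time setup; array columns are cast to text because pg_stats exposes
-- them as anyarray, which cannot be stored in a table directly
CREATE TABLE IF NOT EXISTS stats_snapshot AS
SELECT now() AS captured_at, tablename, attname,
       null_frac, n_distinct, correlation,
       most_common_vals::text  AS most_common_vals,
       most_common_freqs::text AS most_common_freqs
FROM pg_stats
WHERE false;

-- capture the current statistics for the table in question (name assumed)
INSERT INTO stats_snapshot
SELECT now(), tablename, attname,
       null_frac, n_distinct, correlation,
       most_common_vals::text, most_common_freqs::text
FROM pg_stats
WHERE tablename = 'sleeping_intents';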
On Tue, Sep 15, 2020 at 1:11, Robert Treat wrote:
> Howdy folks,
>
> Recently I've run into a problem where autoanalyze is causing a query
> plan to flip over to using an index which is about 10x slower, and the
> problem is fixed by running an analyze manually. Some relevant info:
>
> UPDATE
I think it's because 1) the costs and scan rowcounts are similar; and 2) the
stats are probably near some threshold which causes the plan to change. I'm
guessing if you run a manual ANALYZE 100 times, you'll sometimes get the bad
plan. Maybe depending on the data visible at the time analyze is invoked.
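If the flip really is driven by crossing the autoanalyze threshold, two things
may help (the table name is taken from the UPDATE below; the numbers are only
examples): see how much has changed since the last analyze, and make
autoanalyze run more often for just this table.

SELECT relname, n_live_tup, n_mod_since_analyze, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'sleeping_intents';

-- per-table override: analyze after ~2% churn instead of the default 10%
ALTER TABLE sleeping_intents
    SET (autovacuum_analyze_scale_factor = 0.02,
         autovacuum_analyze_threshold    = 500);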
Howdy folks,
Recently I've run into a problem where autoanalyze is causing a query
plan to flip over to using an index which is about 10x slower, and the
problem is fixed by running an analyze manually. Some relevant info:
UPDATE sleeping_intents SET
raptor_after='2020-09-14T19:21:03.581106'::time
On 02/02/2018 10:02 AM, Johan Fredriksson wrote:
> On Thu, 2018-02-01 at 20:34, Johan Fredriksson wrote:
>>> Johan Fredriksson writes:
>>>> Bad plan: https://explain.depesz.com/s/avtZ
>>>> Good plan: https://explain.depesz.com/s/SJSt
>>>>
On Thu, 2018-02-01 at 20:34, Johan Fredriksson wrote:
> > Johan Fredriksson writes:
> > > Bad plan: https://explain.depesz.com/s/avtZ
> > > Good plan: https://explain.depesz.com/s/SJSt
> > > Any suggestions on how to make the planner make better decisions for
> > > this query?
> Johan Fredriksson writes:
> > Bad plan: https://explain.depesz.com/s/avtZ
> > Good plan: https://explain.depesz.com/s/SJSt
> > Any suggestions on how to make the planner make better decisions for
> > this query?
>
> Core of the problem looks to be the misestimation here:
Johan Fredriksson writes:
> Bad plan: https://explain.depesz.com/s/avtZ
> Good plan: https://explain.depesz.com/s/SJSt
> Any suggestions on how to make the planner make better decisions for
> this query?
Core of the problem looks to be the misestimation here:
Index Onl
0".
Last week I upgraded PostgreSQL for this application (Request Tracker)
to version 10.1 and, just for fun, I decided to test removing the patch
to see if the problem still persisted. For two cases it did not. The
planner handled them just fine. For one case, however, the same problem
still persisted.
In my opinion this is the Achilles heel of the postgres optimizer. Row
estimates should never return 1, unless the estimate is provably <=1. This
is particularly a problem with join estimates. A dumb fix for this is to
change clamp_join_row_est() to never return a value <2. This fixes most of
m
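A self-contained illustration of how an estimate ends up clamped to 1 row
(this is not the poster's schema): two perfectly correlated columns, treated
as independent, multiply down to a fraction of a row.

CREATE TEMP TABLE t AS
SELECT g % 1000 AS a, g % 1000 AS b      -- a and b are always equal
FROM generate_series(1, 100000) AS g;
ANALYZE t;

EXPLAIN SELECT * FROM t WHERE a = 3 AND b = 3;
-- planner: ~0.1 rows, clamped to rows=1; actual: 100 rows

-- on PostgreSQL 10+, extended statistics describe the dependency
CREATE STATISTICS t_a_b (dependencies) ON a, b FROM t;
ANALYZE t;
EXPLAIN SELECT * FROM t WHERE a = 3 AND b = 3;
-- the estimate now lands near the true 100 rows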
I've had a look at the plan with pgAdmin, and I think the problem is
rather here:
-> Sort (cost=4997.11..4997.11 rows=1 width=69) (actual
time=27.427..28.896 rows=7359 loops=1)
Sort Key: amendment.id
Sort Method: quicksort Memory: 1227kB
-> Nested Loop (cost=183.44..4997.1
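When the estimate under the sort is off by three orders of magnitude like
that, one common (though not guaranteed) remedy is to collect more detailed
statistics on the column driving the bad estimate; the column below is only a
guess.

ALTER TABLE amendment ALTER COLUMN id SET STATISTICS 1000;  -- default is 100
ANALYZE amendment;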
2018-01-23 16:18 GMT+01:00 Justin Pryzby:
> On Tue, Jan 23, 2018 at 01:03:49PM +0100, Laurent Martelli wrote:
>
>> Here is the default plan :
>
> Can you resend without line breaks or paste a link to explain.depesz?
I hope it's better like that. I've attached it too, just in case.
>
> The proble
On Tue, Jan 23, 2018 at 01:03:49PM +0100, Laurent Martelli wrote:
> Hello all,
>
> So I have a view, for which I can select all rows in about 3s (returns ~80k
> rows), but if I add a where clause on a column, it takes +300s to return
> the ~8k lines.
>
> From the plan, I see that it expects to return only 1 row and so chooses to
> perform some nested loops.
Hello all,
So I have a view, for which I can select all rows in about 3s (returns ~80k
rows), but if I add a where clause on a column, it takes +300s to return
the ~8k lines.
From the plan, I see that it expects to return only 1 row and so chooses to
perform some nested loops. Of course, I did ru
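A workaround that often gets used for exactly this pattern (a sketch; the view
and column names are not from the thread): fence the view behind OFFSET 0 or a
CTE so the WHERE clause is applied after the view has produced its rows,
instead of being pushed down and steering the planner into nested loops.

-- OFFSET 0 keeps the subquery from being flattened and keeps the outer
-- WHERE from being pushed down into it
SELECT v.*
FROM (SELECT * FROM my_view OFFSET 0) AS v
WHERE v.some_column = 'some_value';

-- before PostgreSQL 12, a plain CTE acts as the same kind of fence
WITH v AS (SELECT * FROM my_view)
SELECT * FROM v WHERE v.some_column = 'some_value';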
Hi Tom,
Thanks for your help.
> On Dec 1, 2017, at 22:33, Tom Lane wrote:
>
>
> The seqscan formulation of the query results in evaluating
> this function afresh at most of the rows
The function is defined as STABLE. I thought that means that there is no need
to reevaluate it on every row as
Roman Konoval writes:
> I have a problem on 9.3.14 with a query that accesses table:
I think the root of the problem is your intermediate function:
> CREATE OR REPLACE FUNCTION public.get_doc_path(document_id character varying)
> RETURNS ltree
> LANGUAGE plpgsql
> STABLE
> AS $function$
> DEC
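The function body is cut off above. Setting that aside, the usual way to avoid
per-row calls, whatever the volatility label, is to compute the value once and
compare against it; a sketch with assumed table and column names (only
get_doc_path comes from this thread):

WITH target AS (
    SELECT public.get_doc_path('some-document-id') AS path   -- evaluated once
)
SELECT d.*
FROM documents AS d, target        -- "documents" and "d.path" are assumed
WHERE d.path <@ target.path;       -- ltree "is a descendant of" test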
Hi,
I have a problem on 9.3.14 with a query that accesses table:
Size: (retrieved by query
https://gist.github.com/romank0/74f9d1d807bd3f41c0729d0fc6126749)
 schemaname | relname | size | toast | associated_idx_size | total
------------+---------+------+-------+---------------------+-------
One more thing. Given this:
> The difference here is that, from the perspective of the outer query,
> the WHERE condition is a restriction clause on the "cim" relation,
> not a join clause. So it will get pushed down into the subquery
> without creating any join order constraints on the outer qu
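Put concretely (the constant value is invented; the relation names are the
ones used later in this thread): the same column can appear in a restriction
clause or a join clause, and only the first kind gets pushed into the
subquery.

-- restriction clause: "cim" compared to a constant, pushable into the view
SELECT cim.yield
FROM contribution_investment_metrics_view AS cim
WHERE cim.investment_id = 'abc-123';

-- join clause: "cim" compared to another relation's column; this constrains
-- join order instead and is not pushed into the subquery
SELECT i.id, cim.yield
FROM investments AS i
JOIN contribution_investment_metrics_view AS cim
  ON cim.investment_id = i.id;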
Alex Reece writes:
> I managed to reduce my test case: the following query does not take
> advantage of the index on contribution metrics.
Yeah. What you're wishing is that the planner would push a join
condition down into a subquery, but it won't do that at present.
Doing so would require gener
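One workaround people reach for here (a sketch, not something from this
thread, and whether it helps depends on what the view does internally): push
the correlation in by hand with LATERAL so the metrics are computed per
investment.

SELECT i.id, m.yield
FROM investments AS i
CROSS JOIN LATERAL (
    SELECT cim.yield
    FROM contribution_investment_metrics_view AS cim
    WHERE cim.investment_id = i.id   -- correlation pushed in by hand
) AS m;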
Planning time: 0.396 ms
Execution time: 0.165 ms
schema here:
https://gist.github.com/awreece/aeacbc818277c7c6d99477645e7fcd03
Best,
~Alex
On Tue, Nov 28, 2017 at 2:13 AM Alex Reece wrote:
> I'm on PostgreSQL 9.6.5 and getting an awkwardly bad plan chosen for my
> query.
I'm on PostgreSQL 9.6.5 and getting an awkwardly bad plan chosen for my
query.
I want to do:
select investments.id, cim.yield
FROM contributions
JOIN investments ON contributions.investment_id = investments.id
JOIN contribution_investment_metrics_view cim ON cim.investment_id =
investments.id