Re: [PERFORM] Bad plan choices & statistic targets with a GIN index

2013-11-14 Thread Tom Lane
Dieter Komendera writes: > Because I didn't find any documentation or references on setting statistic > targets on indices, I just gave it a shot: > ALTER TABLE index_games_participants ALTER COLUMN "array" SET STATISTICS 1000; This works, and will help if the planner can make use of statistics
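
A minimal sketch of the sequence under discussion (the index and column names are from the thread; the underlying table name is an assumption, and the new target only takes effect after a fresh ANALYZE):

    ALTER TABLE index_games_participants ALTER COLUMN "array" SET STATISTICS 1000;
    ANALYZE games_participants;  -- assumption: the table the GIN index is built on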

Re: [PERFORM] bad plan

2012-04-06 Thread Julien Cigar
On 04/05/2012 21:47, Ants Aasma wrote: On Thu, Apr 5, 2012 at 2:47 PM, Julien Cigar wrote: - http://www.pastie.org/3731956 : with default config - http://www.pastie.org/3731960 : this is with enable_seq_scan = off It looks like the join selectivity of (context_to_context_links, ancestors) is b

Re: [PERFORM] bad plan

2012-04-05 Thread Ants Aasma
On Thu, Apr 5, 2012 at 2:47 PM, Julien Cigar wrote: > - http://www.pastie.org/3731956 : with default config > - http://www.pastie.org/3731960 : this is with enable_seq_scan = off It looks like the join selectivity of (context_to_context_links, ancestors) is being overestimated by almost two order

Re: [PERFORM] bad plan

2012-04-05 Thread Kevin Grittner
Julien Cigar wrote: > I tried to play with the various cost settings but it doesn't > change anything, except setting random_page_cost to 1 (which will > lead to bad plans for other queries, so not a solution) Yeah, you clearly don't have the active portion of your database fully cached, so yo
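
For anyone experimenting along these lines, the cost settings can be changed per session, so a bad value never leaks into other queries; the numbers below are illustrative assumptions, not recommendations:

    SET random_page_cost = 2.0;        -- below the 4.0 default, for a partially cached database
    SET effective_cache_size = '4GB';  -- assumption: size to shared_buffers plus OS cache
    -- then re-run EXPLAIN ANALYZE on the problem query in the same session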

Re: [PERFORM] Bad plan by Planner (Already resolved?)

2011-10-29 Thread Robins Tharakan
Thanks Tom! Apologies for the delay in replying, but two of the three guesses were spot-on and resolved the question. 8.4.9 does take care of this case very well. On 10/27/2011 01:27 AM, Tom Lane wrote: I suspect that you're just fooling yourself here, and the "optimized" query is no such thing. :) I act

Re: [PERFORM] Bad plan by Planner (Already resolved?)

2011-10-26 Thread Tom Lane
Robins Tharakan writes: > ORIGINAL QUERY (on PostgreSQL 8.4.9): > http://explain.depesz.com/s/bTm > EXPLAIN ANALYZE SELECT field_a FROM large_table_a JOIN large_table_b > USING (field_a) WHERE field_b IN (SELECT large_table_b.field_b FROM > large_table_b WHERE field_a = 2673056) ; > --

Re: [PERFORM] Bad plan by Planner (Already resolved?)

2011-10-25 Thread Robins Tharakan
Thanks Kevin, That's a pretty neat way of managing (at least) minor upgrades. Like I said, this place is new, and so although I'm quite positive about upgrading to the latest, I should probably take things one-at-a-time and bring in this idea of implementing regular updates sometime in the fut

Re: [PERFORM] Bad plan by Planner (Already resolved?)

2011-10-18 Thread Kevin Grittner
Robins Tharakan wrote: > I'll try to answer in-line. Thanks; that's the preferred style on PostgreSQL lists. > On 10/17/2011 09:32 PM, Kevin Grittner wrote: >> First off, did you use pg_upgrade from an earlier major release? >> If so, be sure you've dealt with this issue: > Although I joined

Re: [PERFORM] Bad plan by Planner (Already resolved?)

2011-10-17 Thread Mark Kirkwood
On 17/10/11 19:28, Robins Tharakan wrote: Hi, I stumbled upon a situation where the planner comes up with a bad query plan, but I wanted to mention upfront that I'm using a dated PG version and I already see an update which mentions improvements to planner performance. I just wanted to check if

Re: [PERFORM] Bad plan by Planner (Already resolved?)

2011-10-17 Thread Robins Tharakan
Hi, I'll try to answer in-line. On 10/17/2011 09:32 PM, Kevin Grittner wrote: First off, did you use pg_upgrade from an earlier major release? If so, be sure you've dealt with this issue: Although I joined recently, I doubt whether pg_upgrade was used here. And this doesn't look like the issu

Re: [PERFORM] Bad plan by Planner (Already resolved?)

2011-10-17 Thread Kevin Grittner
Robins Tharakan wrote: > I stumbled upon a situation where the planner comes up with a bad > query plan, but I wanted to mention upfront that I'm using a dated > PG version and I already see an update which mentions improvements > to planner performance. I just wanted to check if this > issue is al

Re: [PERFORM] bad plan: 8.4.8, hashagg, work_mem=1MB.

2011-07-19 Thread Robert Haas
On Mon, Jun 20, 2011 at 3:31 PM, Jon Nelson wrote: > On Mon, Jun 20, 2011 at 11:08 AM, Tom Lane wrote: >> Jon Nelson writes: >>> I ran a query recently where the result was very large. The outer-most >>> part of the query looked like this: >> >>>  HashAggregate  (cost=56886512.96..56886514.96 ro

Re: [PERFORM] bad plan: 8.4.8, hashagg, work_mem=1MB.

2011-06-20 Thread Jon Nelson
On Mon, Jun 20, 2011 at 11:08 AM, Tom Lane wrote: > Jon Nelson writes: >> I ran a query recently where the result was very large. The outer-most >> part of the query looked like this: > >>  HashAggregate  (cost=56886512.96..56886514.96 rows=200 width=30) >>    ->  Result  (cost=0.00..50842760.97

Re: [PERFORM] bad plan: 8.4.8, hashagg, work_mem=1MB.

2011-06-20 Thread Tom Lane
Jon Nelson writes: > I ran a query recently where the result was very large. The outer-most > part of the query looked like this: > HashAggregate (cost=56886512.96..56886514.96 rows=200 width=30) >-> Result (cost=0.00..50842760.97 rows=2417500797 width=30) > The row count for 'Result' is
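
The subject line names the knob in play; a session-local sketch (the value is an assumption), with a caveat based on how 8.4 behaves:

    SET work_mem = '64MB';  -- illustrative; the thread's server runs with 1MB
    -- Caveat: in 8.4 a HashAggregate cannot spill to disk, so when the row
    -- estimate is badly low (here, the default 200 for an unknown grouping),
    -- raising work_mem only raises the ceiling for the planner's mistake.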

Re: [PERFORM] Bad plan when join on function

2011-01-18 Thread Shaun Thomas
On 01/17/2011 02:03 AM, Zotov wrote: select c.id from OneRow c join abstract a on a.id=AsInteger(c.id) OneRow contains only one row; abstract contains 22 953 500 rows. AsInteger is a simple function in Delphi; it just returns the input value. Ok... there has to be some kind of misunderstanding, here.

Re: [PERFORM] Bad plan when join on function

2011-01-17 Thread Pavel Stehule
2011/1/17 Tom Lane : > Pavel Stehule writes: >> it should work without a functional index - but I'm not sure about the effectiveness > > As long as the function is VOLATILE, the planner can't use any > intelligent query plan.  Merge or hash join both require at least > stable join keys. sure, my first adv

Re: [PERFORM] Bad plan when join on function

2011-01-17 Thread Tom Lane
Pavel Stehule writes: > it should work without a functional index - but I'm not sure about the effectiveness As long as the function is VOLATILE, the planner can't use any intelligent query plan. Merge or hash join both require at least stable join keys. regards, tom lane

Re: [PERFORM] Bad plan when join on function

2011-01-17 Thread Pavel Stehule
2011/1/17 Kevin Grittner : > Zotov  wrote: > >> select c.id from OneRow c join abstract a on a.id=AsInteger(c.id) > >> Why SeqScan??? > > Because you don't have an index on AsInteger(c.id). > > If your function is IMMUTABLE (each possible combination of input > values always yields the same result)

Re: [PERFORM] Bad plan when join on function

2011-01-17 Thread Kevin Grittner
Zotov wrote: > select c.id from OneRow c join abstract a on a.id=AsInteger(c.id) > Why SeqScan??? Because you don't have an index on AsInteger(c.id). If your function is IMMUTABLE (each possible combination of input values always yields the same result), and you declare it such, then you c
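
A sketch of that suggestion; the function body is an assumption based on "it just returns the input value", with names taken from the thread:

    CREATE OR REPLACE FUNCTION asinteger(integer) RETURNS integer
      AS $$ SELECT $1; $$ LANGUAGE sql IMMUTABLE;
    -- with the function declared IMMUTABLE, the expression index becomes possible:
    CREATE INDEX onerow_asinteger_idx ON onerow (asinteger(id));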

Re: [PERFORM] Bad plan when join on function

2011-01-17 Thread Pavel Stehule
2011/1/17 Zotov : > It's just a sample. > > select c.id from OneRow c join abstract a on a.id=AsInteger(c.id) > > "Nested Loop  (cost=0.00..786642.96 rows=1 width=4) (actual > time=91021.167..119601.344 rows=1 loops=1)" > "  Join Filter: ((a.id)::integer = asinteger((c.id)::integer))" > "  ->  Seq

Re: [PERFORM] Bad plan choice nestloop vs. hashjoin

2010-01-18 Thread Kenneth Marshall
On Mon, Jan 18, 2010 at 12:13:24PM -0500, Tom Lane wrote: > Kenneth Marshall writes: > > We have just upgraded our monitoring server software and > > now the following query for graphing the data performs > > abysmally with the default settings. Here are the results > > of the EXPLAIN ANALYZE run w

Re: [PERFORM] Bad plan choice nestloop vs. hashjoin

2010-01-18 Thread Tom Lane
Kenneth Marshall writes: > We have just upgraded our monitoring server software and > now the following query for graphing the data performs > abysmally with the default settings. Here are the results > of the EXPLAIN ANALYZE run with nestloops enabled: I poked at this a bit more and now think I s

Re: [PERFORM] Bad plan choice nestloop vs. hashjoin

2010-01-15 Thread Tom Lane
Kenneth Marshall writes: > We have just upgraded our monitoring server software and > now the following query for graphing the data performs > abysmally with the default settings. Here are the results > of the EXPLAIN ANALYZE run with nestloops enabled: That plan seems a bit wacko --- I don't see
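
The plan comparison being run in this thread can be reproduced without touching the global config; SET LOCAL confines the experiment to one transaction:

    BEGIN;
    SET LOCAL enable_nestloop = off;
    -- EXPLAIN ANALYZE <the graphing query>;  (the query itself is elided in the archive)
    ROLLBACK;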

Re: [PERFORM] Bad plan choice nestloop vs. hashjoin

2010-01-15 Thread Kenneth Marshall
On Fri, Jan 15, 2010 at 04:58:57PM -0600, Kevin Grittner wrote: > Kenneth Marshall wrote: > > > with the default settings > > Do you mean you haven't changed any settings in your postgresql.conf > file from their defaults? > > -Kevin > Sorry, here are the differences from the default: max_

Re: [PERFORM] Bad plan choice nestloop vs. hashjoin

2010-01-15 Thread Kevin Grittner
Kenneth Marshall wrote: > with the default settings Do you mean you haven't changed any settings in your postgresql.conf file from their defaults? -Kevin
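
Kevin's question can be answered straight from the catalog; this standard query lists every setting whose value did not come from the built-in default:

    SELECT name, setting, source
    FROM pg_settings
    WHERE source <> 'default';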

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-06-05 Thread Robert Haas
On Fri, Jun 5, 2009 at 8:29 PM, David Blewett wrote: > On Fri, Jun 5, 2009 at 7:32 PM, Josh Berkus wrote: >> My first thought would be to increase statistics dramatically on the >> filtered columns in hopes of making PG realize there are a lot of rows there; >> it's off by 8x. Correlation stats ar

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-06-05 Thread David Blewett
On Fri, Jun 5, 2009 at 7:32 PM, Josh Berkus wrote: > My first thought would be to increase statistics dramatically on the > filtered columns in hopes of making PG realize there are a lot of rows there; > it's off by 8x. Correlation stats are an ongoing issue in PostgreSQL. I started at a stats_ta

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-06-05 Thread Josh Berkus
David, My first thought would be to increase statistics dramatically on the filtered columns in hopes of making PG realize there are a lot of rows there; it's off by 8x. Correlation stats are an ongoing issue in PostgreSQL. -- Josh Berkus PostgreSQL Experts Inc. www.pgexperts.com
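
A sketch of that suggestion; the table name is taken from the row counts posted elsewhere in the thread, while the column name is hypothetical:

    ALTER TABLE canvas_foreignkeyresponse
      ALTER COLUMN question_id SET STATISTICS 1000;  -- hypothetical column name
    ANALYZE canvas_foreignkeyresponse;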

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-06-05 Thread David Blewett
On Mon, May 25, 2009 at 11:22 AM, David Blewett wrote: > On Sun, May 24, 2009 at 2:42 PM, Tom Lane wrote: >> >> It still feels like this schema design is obscuring correlations that >> the planner needs to know about in order to make decent estimates. > > I'm not sure how to make the planner awar

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-25 Thread David Blewett
On Sun, May 24, 2009 at 2:42 PM, Tom Lane wrote: > > It still feels like this schema design is obscuring correlations that > the planner needs to know about in order to make decent estimates. I'm not sure how to make the planner aware of these correlations. Is there something inherently flawed w

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-24 Thread Tom Lane
David Blewett writes: > I took the time to load this data into an 8.4beta2 install, and the same > query runs in a much more reasonable timeframe (~3s as opposed to ~50s). I > set the statistics target to 500, and got this explain [1]. > 1. http://explain.depesz.com/s/pw Hmm... the join size esti

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-22 Thread David Blewett
On Sat, May 9, 2009 at 11:52 AM, Tom Lane wrote: > David Blewett writes: > > On Fri, May 8, 2009 at 10:00 PM, Tom Lane wrote: > >> Thanks. Could I trouble you for one other data point --- about how many > >> rows are in each of these tables? > > > Not a problem: > > As best I can tell, the sel

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-10 Thread David Blewett
On Sat, May 9, 2009 at 11:52 AM, Tom Lane wrote: > As best I can tell, the selectivity numbers are about what they should > be --- for instance, using these stats I get a selectivity of 0.074 > for the join clause fkr.submission_id = tr.submission_id. Over the > entire relations (646484 and

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-09 Thread Tom Lane
David Blewett writes: > On Fri, May 8, 2009 at 10:00 PM, Tom Lane wrote: >> Thanks. Could I trouble you for one other data point --- about how many >> rows are in each of these tables? > Not a problem: As best I can tell, the selectivity numbers are about what they should be --- for instance,

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-09 Thread David Blewett
On Fri, May 8, 2009 at 10:00 PM, Tom Lane wrote: > Thanks. Could I trouble you for one other data point --- about how many > rows are in each of these tables? Not a problem: canvas_dateresponse 263819 canvas_foreignkeyresponse 646484 canvas_integerresponse 875375 canvas_submission

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-08 Thread Tom Lane
David Blewett writes: > Apparently there was a typo in the query that I didn't notice that > excluded that table's columns. Here is the new output including it: > http://pastesite.com/7017 Thanks. Could I trouble you for one other data point --- about how many rows are in each of these tables?

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-08 Thread Tom Lane
David Blewett writes: > On Thu, May 7, 2009 at 6:44 PM, Tom Lane wrote: >> Look into pg_stats for the rows concerning the columns used in the >> query's WHERE and JOIN/ON clauses. > Okay, here you go: > http://rafb.net/p/20y8Oh72.html I got some time to poke into this, but didn't get very far -

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-07 Thread David Blewett
On Thu, May 7, 2009 at 6:44 PM, Tom Lane wrote: > Look into pg_stats for the rows concerning the columns used in the > query's WHERE and JOIN/ON clauses. Okay, here you go: http://rafb.net/p/20y8Oh72.html David
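
For reference, the lookup Tom is asking for looks like this; the table and column names below are placeholders for the ones in the query's WHERE and JOIN/ON clauses:

    SELECT attname, null_frac, n_distinct, most_common_vals, histogram_bounds
    FROM pg_stats
    WHERE tablename = 'canvas_foreignkeyresponse'         -- placeholder
      AND attname IN ('submission_id', 'question_id');    -- placeholders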

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-07 Thread Tom Lane
David Blewett writes: > On Thu, May 7, 2009 at 4:31 PM, Tom Lane wrote: >> as few as ten rows out, it'd likely switch to a different plan. So the >> question to ask is why the rowcount estimates are so abysmally bad. >> You mentioned having tried to increase the stats targets, but without

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-07 Thread David Blewett
On Thu, May 7, 2009 at 4:31 PM, Tom Lane wrote: > as few as ten rows out, it'd likely switch to a different plan. So the > question to ask is why the rowcount estimates are so abysmally bad. > You mentioned having tried to increase the stats targets, but without seeing the actual stats d

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-07 Thread Tom Lane
David Blewett writes: > On Thu, May 7, 2009 at 12:53 PM, David Blewett wrote: >> 1. http://dpaste.com/hold/41842/ >> 2. http://explain.depesz.com/s/Wg >> 3. http://explain.depesz.com/s/1s >> 4. http://dpaste.com/hold/41846/ > Forgot to mention that I'm using Postgres 8.3.6 on linux 2.6.24. Well

Re: [PERFORM] Bad Plan for Questionnaire-Type Query

2009-05-07 Thread David Blewett
On Thu, May 7, 2009 at 12:53 PM, David Blewett wrote: > 1. http://dpaste.com/hold/41842/ > 2. http://explain.depesz.com/s/Wg > 3. http://explain.depesz.com/s/1s > 4. http://dpaste.com/hold/41846/ Forgot to mention that I'm using Postgres 8.3.6 on linux 2.6.24. Shared buffers are set to 1GB, effec

Re: [PERFORM] bad plan and LIMIT

2009-05-01 Thread James Nelson
I looked into the distribution of the filenames; in particular I ran a query to see how far into the table the 1st filename would be found. photoshelter=# select count(*) from ps_image where lower(file_name) < 'a-400-001.jpg'; count: 8915832 As you can see the first row is al

Re: [PERFORM] bad plan and LIMIT

2009-05-01 Thread James Nelson
The 'in' form and 'join' form produce identical plans for both limit and non-limit versions of the query, which I actually think reflects well on the query planner. I also tried a form of the query with the subselect in the from clause to try to force the order in which the tables were evaluated

Re: [PERFORM] bad plan and LIMIT

2009-05-01 Thread James Nelson
I had tried using exists but both the forms of the query (with limit and without) performed much worse. James On May 1, 2009, at 4:22 AM, Adam Ruth wrote: You could try changing the IN to an EXISTS, that may alter how the optimizer weighs the limit. SELECT ID FROM ps_image WHERE EXI

Re: [PERFORM] bad plan and LIMIT

2009-05-01 Thread Tom Lane
James Nelson writes: > Hi, I'm hoping you guys can help with improving this query I'm having > a problem with. The main problem is that the query plan changes > depending on the value of the LIMIT clause, with small values using a > poor plan and running very slowly. The two times are roughl

Re: [PERFORM] bad plan and LIMIT

2009-05-01 Thread Grzegorz Jaśkiewicz
EXISTS won't help much either; PostgreSQL is not too fast when it comes to that sort of approach. A join is always going to be fast; it is about time you learn joins and use them ;)

Re: [PERFORM] bad plan and LIMIT

2009-05-01 Thread Grzegorz Jaśkiewicz
Use a join instead of WHERE IN().
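
A sketch of that join form, using the table and column names visible in Adam's EXISTS variant below:

    SELECT i.id
    FROM ps_image i
    JOIN ps_gallery_image gi ON gi.image_id = i.id
    WHERE gi.gallery_id = 'G7ejKGoWS_cY'
    ORDER BY LOWER(i.file_name) ASC;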

Re: [PERFORM] bad plan and LIMIT

2009-05-01 Thread Adam Ruth
You could try changing the IN to an EXISTS, that may alter how the optimizer weighs the limit. SELECT ID FROM ps_image WHERE EXISTS (SELECT null FROM ps_gallery_image WHERE gallery_id ='G7ejKGoWS_cY' and image_id = ps_image.id) ORDER BY LOWER(FILE_NAME) ASC On 30/04/2009, at 3:51 AM,

Re: [PERFORM] Bad plan for nested loop + limit

2009-03-30 Thread Alexander Staubo
On Sun, Mar 1, 2009 at 4:32 AM, Robert Haas wrote: > What do you have default_statistics_target set to?  If it's less than > 100, you should probably raise it to 100 and re-analyze (the default > value for 8.4 will be 100, but for 8.3 and prior it is 10). Changing it to 100 fixed the problem. Tha
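
The change being credited here, as a session-level sketch; ANALYZE picks up the session's target for columns without an explicit per-column setting, and the value would be persisted in postgresql.conf once validated:

    SET default_statistics_target = 100;  -- the 8.4 default; 8.3 and prior ship with 10
    ANALYZE;                              -- rebuild statistics at the new target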

Re: [PERFORM] Bad plan for nested loop + limit

2009-02-28 Thread Robert Haas
On Sat, Feb 28, 2009 at 11:20 AM, Alexander Staubo wrote: > On Fri, Feb 27, 2009 at 11:54 PM, Robert Haas wrote: >> The problem here is that the planner estimates the cost of a Limit >> plan node by adding up (1) the startup cost of the underlying plan >> node, in this case 0 for the nestjoin, an

Re: [PERFORM] Bad plan for nested loop + limit

2009-02-28 Thread Alexander Staubo
On Fri, Feb 27, 2009 at 11:54 PM, Robert Haas wrote: > The problem here is that the planner estimates the cost of a Limit > plan node by adding up (1) the startup cost of the underlying plan > node, in this case 0 for the nestjoin, and (2) a percentage of the run > cost, based on the ratio of the
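
Robert's description can be checked against the plan quoted further down this thread; the Limit node's cost works out as roughly

    startup_cost + (rows_wanted / rows_estimated) * run_cost
      = 0 + (4 / 288) * 70101.65
      ~ 973.63

which matches the "Limit (cost=0.00..973.63 rows=4 ...)" node sitting on top of the "Nested Loop (cost=0.00..70101.65 rows=288 ...)".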

Re: [PERFORM] Bad plan for nested loop + limit

2009-02-27 Thread Robert Haas
On Fri, Feb 27, 2009 at 3:18 PM, Alexander Staubo wrote: > On Sun, Feb 15, 2009 at 5:45 PM, Alexander Staubo wrote: >> On Sun, Feb 15, 2009 at 5:29 AM, David Wilson >> wrote: >>> On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo wrote: Output from "explain analyze":  Limit

Re: [PERFORM] Bad plan for nested loop + limit

2009-02-27 Thread Alexander Staubo
On Sun, Feb 15, 2009 at 5:45 PM, Alexander Staubo wrote: > On Sun, Feb 15, 2009 at 5:29 AM, David Wilson > wrote: >> On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo wrote: >>> >>> Output from "explain analyze": >>> >>>  Limit  (cost=0.00..973.63 rows=4 width=48) (actual >>> time=61.554..4039.

Re: [PERFORM] Bad plan for nested loop + limit

2009-02-15 Thread Alexander Staubo
On Sun, Feb 15, 2009 at 5:29 AM, David Wilson wrote: > On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo wrote: >> >> Output from "explain analyze": >> >> Limit (cost=0.00..973.63 rows=4 width=48) (actual >> time=61.554..4039.704 rows=1 loops=1) >> -> Nested Loop (cost=0.00..70101.65 rows=2

Re: [PERFORM] Bad plan for nested loop + limit

2009-02-14 Thread David Wilson
On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo wrote: > > Output from "explain analyze": > > Limit (cost=0.00..973.63 rows=4 width=48) (actual > time=61.554..4039.704 rows=1 loops=1) > -> Nested Loop (cost=0.00..70101.65 rows=288 width=48) (actual > time=61.552..4039.700 rows=1 loops=1) >

Re: [PERFORM] Bad plan for join to aggregate of join.

2006-09-12 Thread Mischa Sandberg
Tom Lane wrote: Mischa Sandberg <[EMAIL PROTECTED]> writes: can PG see that a join on a grouped-by field can be pushed down into the query as an indexable filter? No. The GROUP BY serves as a partial optimization fence. If you're concerned about the speed of this query, I recommend making a

Re: [PERFORM] Bad plan for join to aggregate of join.

2006-09-12 Thread Tom Lane
Mischa Sandberg <[EMAIL PROTECTED]> writes: > can PG see that a join on a grouped-by field > can be pushed down into the query as an indexable filter? No. The GROUP BY serves as a partial optimization fence. If you're concerned about the speed of this query, I recommend making a different view
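
A sketch of what "a different view" can amount to, with hypothetical names: since the filter is not pushed through the GROUP BY, apply it before grouping instead of joining to the grouped result and filtering outside:

    -- hypothetical schema: master(grp_id, ...), detail(grp_id, val)
    SELECT m.grp_id, d.total
    FROM master m
    JOIN (SELECT grp_id, sum(val) AS total
          FROM detail
          WHERE grp_id = 42        -- filtered before the GROUP BY, not after
          GROUP BY grp_id) d USING (grp_id)
    WHERE m.grp_id = 42;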

Re: [PERFORM] Bad plan on a view

2006-03-01 Thread Jim C. Nasby
On Wed, Mar 01, 2006 at 04:43:53PM +0100, PFC wrote: > > >Aren't you going to a lot of work to reinvent something that TOAST > >already does for you? (At least, in the cases where the text fields > >are wide enough that it really matters.) > > I know. But I have several text fields in the

Re: [PERFORM] Bad plan on a view

2006-03-01 Thread PFC
While TOAST has a similar goal, I don't think it has enough AI to completely replace this manual process. It suffers in a number of use cases: 1) When you have a large number of moderate-sized text fields instead of a single very large text field. This is probably the case here.

Re: [PERFORM] Bad plan on a view

2006-03-01 Thread Greg Stark
Tom Lane <[EMAIL PROTECTED]> writes: > PFC <[EMAIL PROTECTED]> writes: > > So, in order to speed up requests which need a full table scan, I wanted > > to put the text fields in another table, and use a view to make it look > > like nothing happened. Also, the small table used for searching is

Re: [PERFORM] Bad plan on a view

2006-03-01 Thread PFC
Aren't you going to a lot of work to reinvent something that TOAST already does for you? (At least, in the cases where the text fields are wide enough that it really matters.) I know. But I have several text fields in the 20 to 200 character range, which is too small for TOAST, but large enough

Re: [PERFORM] Bad plan on a view

2006-03-01 Thread Tom Lane
PFC <[EMAIL PROTECTED]> writes: > So, in order to speed up requests which need a full table scan, I wanted > to put the text fields in another table, and use a view to make it look > like nothing happened. Also, the small table used for searching is a lot > more likely to fit in RAM than the
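
A minimal sketch of the layout PFC describes, with hypothetical names:

    CREATE TABLE item_core (
        id    integer PRIMARY KEY,
        price numeric               -- the narrow columns full scans actually need
    );
    CREATE TABLE item_text (
        id    integer PRIMARY KEY REFERENCES item_core,
        descr text                  -- the wide fields, fetched only for display
    );
    CREATE VIEW item AS             -- "make it look like nothing happened"
        SELECT c.id, c.price, t.descr
        FROM item_core c
        LEFT JOIN item_text t USING (id);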

Re: [PERFORM] Bad plan after vacuum analyze

2005-05-13 Thread Markus Bertheau
On Wed, 11/05/2005 at 22:59 +0200, Guillaume Smet wrote: > Anyway, I tried to work on the statistics as you told me and here are > the results: > ccm_perf=# ALTER TABLE acs_objects ALTER COLUMN object_id SET STATISTICS 30; > ALTER TABLE > ccm_perf=# ANALYZE acs_objects; > ANALYZE > > ccm_perf=# \i

Re: [PERFORM] Bad plan after vacuum analyze

2005-05-11 Thread Mischa Sandberg
Quoting Guillaume Smet <[EMAIL PROTECTED]>: > Hi, > > We have a performance problem with a particular query. ... I have to say it, this was the best laid-out set of details behind a problem I've ever seen on this list; I'm going to try to live up to it the next time I have a problem of my own.

Re: [PERFORM] Bad plan after vacuum analyze

2005-05-11 Thread Guillaume Smet
Josh, Tom, Thanks for your explanations. In the meantime it seems like the quickest answer for Guillaume might be to try to avoid keeping any NULLs in parent_application_id. I can't do that as the majority of the applications don't have any parent one. Moreover, we use a third party application an

Re: [PERFORM] Bad plan after vacuum analyze

2005-05-11 Thread Tom Lane
Ah-ha, I can replicate the problem. This example uses tenk1 from the regression database, which has a column unique2 containing just the integers 0... regression=# create table t1(f1 int); CREATE TABLE regression=# insert into t1 values(5); INSERT 154632 1 regression=# insert into t1 values(7

Re: [PERFORM] Bad plan after vacuum analyze

2005-05-11 Thread Guillaume Smet
> Well, those stats certainly appear to justify the planner's belief that > the indexscan needn't run very far: the one value of > parent_application_id is 1031 and this is below the smallest value of > object_id seen by analyze. Yes, it seems rather logical but why does it cost so much if it shoul

Re: [PERFORM] Bad plan after vacuum analyze

2005-05-11 Thread Tom Lane
Guillaume Smet <[EMAIL PROTECTED]> writes: >> If so, can we see the pg_stats rows for the object_id and >> parent_application_id columns? > See attached file. Well, those stats certainly appear to justify the planner's belief that the indexscan needn't run very far: the one value of parent_applic

Re: [PERFORM] Bad plan after vacuum analyze

2005-05-11 Thread Guillaume Smet
Tom, So, the usual questions: have these two tables been ANALYZEd lately? Yes, of course. As I wrote in my previous mail, here is how I reproduce the problem: - we load the dump in a new database (to be sure, there is no problem on an index or something like that) - query: it's fast (< 1ms) - *VAC

Re: [PERFORM] Bad plan after vacuum analyze

2005-05-11 Thread Tom Lane
Josh Berkus writes: > -> Merge Right Join (cost=8.92..9.26 rows=1 width=529) (actual > time=129.100..129.103 rows=1 loops=1) >Merge Cond: ("outer".object_id = "inner".parent_application_id) >-> Index Scan using acs_objects_object_id_p_hhkb1 on > acs_ob

Re: [PERFORM] Bad plan after vacuum analyze

2005-05-11 Thread Josh Berkus
Guillaume, > We reproduced the problem on a 7.4.5 and on a 7.4.7 server. > * we load the dump in a new database > * query: it's fast (< 1ms) > * VACUUM FULL ANALYZE; > * query: it's really slow (130ms) and it's another plan > * set enable_seqscan=off; > * query: it's fast (< 1ms) : it uses the bes

Re: [PERFORM] bad plan

2005-03-08 Thread Gaetano Mendola
Tom Lane wrote: > Gaetano Mendola <[EMAIL PROTECTED]> writes: > >>>Since your query is so simple, I'm guessing v_sc_user_request is a view. >>>Can you provide the definition? > > >>Of course: > > > I don't think you've told us the whole truth abou

Re: [PERFORM] bad plan

2005-03-08 Thread Tom Lane
Richard Huxton writes: > There are also a lot of views involved here for very few output columns. > Tom - is the planner smart enough to optimise-out unneeded columns from > a SELECT * view if it's part of a join/subquery and you only use one or > two columns? If the view gets flattened, yes,

Re: [PERFORM] bad plan

2005-03-08 Thread Richard Huxton
Tom Lane wrote: Putting ORDER BYs in views that you intend to use as components of other views is a bad practice from a performance perspective... There are also a lot of views involved here for very few output columns. Tom - is the planner smart enough to optimise-out unneeded columns from a SEL
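
A sketch of the practice being recommended, with hypothetical names: keep component views unordered and sort only at the point of use:

    CREATE TABLE requests (id int, login text, request_time_stamp timestamptz);
    CREATE VIEW v_requests AS
        SELECT id, login, request_time_stamp
        FROM requests;                       -- no ORDER BY inside the component view
    SELECT * FROM v_requests
    WHERE login = 'someone'
    ORDER BY request_time_stamp;             -- order only in the outermost query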

Re: [PERFORM] bad plan

2005-03-08 Thread Tom Lane
Gaetano Mendola <[EMAIL PROTECTED]> writes: >> Since your query is so simple, I'm guessing v_sc_user_request is a view. >> Can you provide the definition? > Of course: I don't think you've told us the whole truth about the v_sc_packages view. The definition as given doesn't work at all (it'll ha

Re: [PERFORM] bad plan

2005-03-08 Thread Richard Huxton
Gaetano Mendola wrote: Richard Huxton wrote: OK, so looking at the original EXPLAIN the order of processing seems to be: 1. v_sat_request is evaluated and filtered on login='...' (lines 7..15) This gives us 31 rows 2. The left-join from v_sat_request to v_sc_packages is processed (lines 5..6) This

Re: [PERFORM] bad plan

2005-03-08 Thread Gaetano Mendola
Richard Huxton wrote: > OK, so looking at the original EXPLAIN the order of processing seems to be: > 1. v_sat_request is evaluated and filtered on login='...' (lines 7..15) > This gives us 31 rows > 2. The left-join from v_sat_request to v_sc_packages is processed (lines > 5..6) > This involves t

Re: [PERFORM] bad plan

2005-03-08 Thread Richard Huxton
Gaetano Mendola wrote: Richard Huxton wrote: Gaetano Mendola wrote: running a 7.4.5 engine, I'm facing this bad plan: empdb=# explain analyze SELECT name,url,descr,request_status,url_status,size_mb,estimated_start,request_time_stamp empdb-#

Re: [PERFORM] bad plan

2005-03-08 Thread Gaetano Mendola
Richard Huxton wrote: > Gaetano Mendola wrote: > >> running a 7.4.5 engine, I'm facing this bad plan: >> >> empdb=# explain analyze SELECT >> name,url,descr,request_status,url_status,size_mb,estimated_start,request_time_stamp >> >> empdb-#

Re: [PERFORM] bad plan

2005-03-08 Thread Richard Huxton
Gaetano Mendola wrote: running a 7.4.5 engine, I'm facing this bad plan: empdb=# explain analyze SELECT name,url,descr,request_status,url_status,size_mb,estimated_start,request_time_stamp empdb-#FROM v_sc_user_request empdb-#WHERE empdb-#