Re: [PERFORM] 8.1 - 8.4 regression

2010-02-15 Thread Dennis Björklund
Can you force 8.4 to generate the same plan as 8.1? For example by running

  SET enable_hashjoin = off;

before you run EXPLAIN on the query? If so, then we can compare the
numbers from the forced plan with the old plan and maybe figure out why
8.4 didn't choose the same plan as 8.1.

Note that the solution is not to force the plan, but it can give us more
information.
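A session like the following would do it (the query itself is elided here,
since it wasn't posted inline; substitute your actual query):

```sql
-- Disable hash joins for this session only, so the planner has to
-- pick a different join strategy, then capture the resulting plan.
SET enable_hashjoin = off;
EXPLAIN ANALYZE SELECT ... ;  -- the slow query
-- Restore the planner default afterwards.
RESET enable_hashjoin;
```

SET only affects the current session, so other connections keep the
default planner behaviour.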

/Dennis

 There is at least one query which has degraded in performance quite a bit.
 Here is the plan on 8.4.2:
 http://wood.silentmedia.com/bench/842

 Here is the very much less compact plan for the same query on 8.1.19:
 http://wood.silentmedia.com/bench/8119



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Slow query problem

2004-01-09 Thread Dennis Björklund
On Thu, 8 Jan 2004, Bradley Tate wrote:

 We've set up a little test box (1GHz Athlon, 40G IDE drive, 256M RAM, 
 Redhat 9) to do some basic comparisons between postgresql and firebird 
 1.0.3 and 1.5rc8. Mostly the results are comparable, with one 
 significant exception.
 
 QUERY
 select invheadref, invprodref, sum(units)
 from invtran
 group by invheadref, invprodref
 
 RESULTS
 pg 7.3.4  -  5.5 min
 pg 7.4.0  -  10 min
 fb 1.0.3   -  64 sec
 fb 1.5 -   44 sec
 
 * The invtran table has about 2.5 million records, invheadref and 
 invprodref are both char(10) and indexed.

For the above query, shouldn't you have a single composite index on both
columns (invheadref, invprodref)? Then it should not need to sort at all
to do the grouping, and it should all be fast.
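That is, something like this (the index name is just an example; the table
and column names are taken from the query above):

```sql
-- One composite index covering both GROUP BY columns, so the rows can
-- be read in grouping order without an explicit sort step.
CREATE INDEX invtran_head_prod_idx ON invtran (invheadref, invprodref);
ANALYZE invtran;  -- refresh statistics so the planner considers it
```

The two existing single-column indexes don't help here, because the sort
for the grouping needs both columns together, in that order.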

-- 
/Dennis Björklund


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Poor delete performance AFTER vacuum analyze

2003-07-20 Thread Dennis Björklund
On Sat, 19 Jul 2003, Jeremy M. Guthrie wrote:

 100megs of new data each day.  However, the instant the system finishes only 
 a 'vacuum analyze', the whole thing slows WAY down to where each run can take 
 10-15 minutes.

Have you run EXPLAIN ANALYZE on the delete query before and after the 
vacuum? Does it explain why it goes slower?
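Since EXPLAIN ANALYZE actually executes the statement, you can wrap the
delete in a transaction and roll it back, so the comparison doesn't
modify any data (the WHERE clause here is a placeholder for yours):

```sql
-- Plan and timing before the vacuum:
BEGIN;
EXPLAIN ANALYZE DELETE FROM your_table WHERE ... ;
ROLLBACK;  -- undo the delete, we only wanted the plan

VACUUM ANALYZE;

-- Plan and timing after the vacuum, for comparison:
BEGIN;
EXPLAIN ANALYZE DELETE FROM your_table WHERE ... ;
ROLLBACK;
```

If the two plans differ, the updated statistics from the vacuum are what
changed the planner's mind.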

-- 
/Dennis


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [PERFORM] index / sequential scan problem

2003-07-18 Thread Dennis Björklund
On Fri, 18 Jul 2003, Tom Lane wrote:

  Adjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.
 
  Doesn't sound very good and it will most likely make other queries slower.
 
 Seems like a reasonable approach to me --- certainly better than setting
 random_page_cost to physically nonsensical values.

Hehe, just before this letter there was talk about changing
random_page_cost. I kind of responded that 0.042 is not a good random page
cost. But now of course I can see that it says cpu_tuple_cost :-)

Sorry for adding confusion.

-- 
/Dennis




Re: [PERFORM] index / sequential scan problem

2003-07-17 Thread Dennis Björklund
On Fri, 18 Jul 2003, Fabian Kreitner wrote:

 Adjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.

That doesn't sound very good, and it will most likely make other queries
slower. You could always turn off sequential scans before that query and
turn them back on afterwards.
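For example (the query is elided; substitute the one that should use the
index):

```sql
-- Tell the planner to avoid sequential scans for this query only.
SET enable_seqscan = off;
SELECT ... ;  -- the query that should use the index
-- Turn them back on for the rest of the session.
SET enable_seqscan = on;
```

This only affects the current session, so it's safe to do around a single
problem query without touching the server-wide configuration.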

 Anything I need to consider when raising it to such high values?

You could fill the table with more data; it will probably reach a point
where the planner stops using the seq. scan on its own.

You could of course also change pg itself so that it calculates a better
estimate.

-- 
/Dennis


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster