Re: [PERFORM] Query Optimizer Failure / Possible Bug

2005-04-03 Thread PFC
	Noticed this problem, too.
	You can always make the calculation you want done once inside a
set-returning function so that it behaves like a table, but that's ugly.
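
A minimal sketch of that workaround, with invented function and type
names (the tables are the ones from the EXPLAIN quoted below):

    CREATE TYPE q_result AS (replaced_serials bigint);

    -- The function body runs once per call, so the expensive aggregate
    -- is computed once and its result can be used like a table.
    CREATE FUNCTION get_q() RETURNS SETOF q_result AS '
        SELECT count(*)
          FROM rma_ticket_serials rts
          JOIN serials s ON s.serial_id = rts.serial_id
         WHERE rts.replace = false;
    ' LANGUAGE sql STABLE;

    SELECT q.replaced_serials * 2 AS doubled,
           q.replaced_serials + 1 AS incremented
      FROM get_q() AS q;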

On Mon, 28 Mar 2005 16:14:44 +0200, Hannes Dorbath  
[EMAIL PROTECTED] wrote:

Hm, a few days and not a single reply :|
Any more information needed? Test data? A simplified test case? Anything?
Thanks
Hannes Dorbath wrote:
The query and the corresponding EXPLAIN are at

  http://hannes.imos.net/query.txt

I'd like to use the column q.replaced_serials for multiple calculations
in the SELECT clause, but every time it is referenced there in some way,
the whole subquery in the FROM clause that returns q is executed again.
This doesn't make sense to me at all and eats performance.

If this wasn't clear enough: for every

  q.replaced_serials insert_random_calculation AS some_column

in the SELECT clause there is a new block of
---
->  Aggregate  (cost=884.23..884.23 rows=1 width=0)
      ->  Nested Loop  (cost=0.00..884.23 rows=1 width=0)
            ->  Index Scan using ix_rma_ticket_serials_replace on
                rma_ticket_serials rts  (cost=0.00..122.35 rows=190 width=4)
                  Index Cond: (replace = false)
            ->  Index Scan using pk_serials on serials s
                (cost=0.00..3.51 rows=1 width=4)
                  Index Cond: (s.serial_id = "outer".serial_id)
                  Filter: ((article_no = $0) AND (delivery_id = $1))
---
 in the EXPLAIN result.
For those who wonder why I use this FROM (SELECT ...) construction: I was
searching for a way to use the result of a subselect for multiple
calculations in the SELECT clause and to return those calculation results
as individual columns.
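
Schematically, the pattern looks like this (the driving table d and its
columns are invented here for illustration; only the two serials tables
appear in the real query):

    SELECT q.replaced_serials     AS raw_count,
           q.replaced_serials * 2 AS doubled  -- re-runs the subselect
      FROM (SELECT d.delivery_id,
                   (SELECT count(*)
                      FROM rma_ticket_serials rts
                      JOIN serials s ON s.serial_id = rts.serial_id
                     WHERE rts.replace = false
                       AND s.article_no  = d.article_no
                       AND s.delivery_id = d.delivery_id) AS replaced_serials
              FROM deliveries d) AS q;
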
I tested a bit further and found that PG behaves the same way when q is a
view. This makes me wonder how efficiently the optimizer can work with
views - or, even worse, nested views.

Tested and reproduced on PG 7.4.1 (Linux) and 8.0.0 (Win32).
  Thanks in advance,
Hannes Dorbath



Re: [PERFORM] Delete query takes exorbitant amount of time

2005-04-03 Thread Bruno Wolff III
On Tue, Mar 29, 2005 at 01:48:48 -0700,
  Karim A Nassar [EMAIL PROTECTED] wrote:
 
 For this FK check, there need be only one referring id to invalidate the
 delete. ISTM that for any delete with a FK reference, the index could
 always be used to search for a single value in the referring table
 (excepting very small tables). Why then must a sequential scan be
 performed in this case, and/or in general?

First, the index needs to exist; it isn't created automatically because
not everyone wants such an index. Second, you need to have analyzed the
referencing table so that the planner knows it is big enough for an
indexed search to be worthwhile. The planner is getting better about
dealing with size changes without reanalyzing, but it seems there are
still some gotchas in 8.0.
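
Concretely, the two steps look like this (table and column names are
placeholders):

    -- The index on the referencing column is not created automatically
    -- along with the foreign key constraint:
    CREATE INDEX ix_referring_fk ON referring_table (referenced_id);

    -- Refresh statistics so the planner sees the table's true size:
    ANALYZE referring_table;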


Re: [PERFORM] coalesce alternative

2005-04-03 Thread Bruno Wolff III
On Tue, Mar 29, 2005 at 14:21:13 +0300,
  ALİ ÇELİK [EMAIL PROTECTED] wrote:
 I have used the coalesce function for null fields, but coalesce is too
 slow. I need a fast alternative to coalesce.

It is unlikely that coalesce itself is your problem. People might be able
to help if you post the actual query along with its EXPLAIN ANALYZE
output.
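
That is, the output of something like this (placeholder names):

    EXPLAIN ANALYZE
    SELECT coalesce(some_nullable_column, 0) AS value
      FROM some_table;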


[PERFORM] Correcting Hash Join Estimates

2005-04-03 Thread mark . lubratt
Hello!

I posted a similar question about a month ago but, for some reason, it
never seemed to be broadcast even though it ended up in the archives.
Since I'm still struggling with this, I thought I'd repost...

I'm trying to optimize a query, and the EXPLAIN ANALYZE (see link below)
shows that some hash join row estimates are wrong by a factor of 2-3,
and upwards of 7-8. There is a corresponding misestimation of the amount
of time taken for those steps. The database is vacuum analyzed nightly
by a cron job. How would I go about tightening up these errors? I
suspect that I need to SET STATISTICS on some columns, but how can I
tell which columns?
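
(By SET STATISTICS I mean something like the following, with placeholder
names; the default statistics target in 8.0 is 10:)

    ALTER TABLE some_table ALTER COLUMN join_col SET STATISTICS 100;
    ANALYZE some_table;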

Any help would be appreciated.

WinXP (dual Xeon, 1.2 GB RAM), PgSQL 8.0.1
Explain Analyze:  http://www.indeq.com/EA.txt
View Definition: http://www.indeq.com/VGAUA.txt

The largest table contains about 10,000 rows.  All tables have indexes
on their foreign keys.

Thanks!
Mark




Re: [PERFORM] Correcting Hash Join Estimates

2005-04-03 Thread Tom Lane
[EMAIL PROTECTED] writes:
 I'm trying to optimize a query and the EXPLAIN ANALYZE (see link below)
 shows that some hash join row estimates are wrong by a factor of 2-3,
 and upwards of 7-8.

I doubt that improving those estimates would lead to markedly better
results.  You need to think about improving the view design instead.
What context is this view used in --- do you just do select * from
view_get_all_user_award2, or are there conditions added to it, or
perhaps it gets joined with other things?  Do you really need the
DISTINCT constraint?  Do you really need the ORDER BY?  Can you
simplify the WHERE clause at all?

Half a second sounds pretty decent to me for a ten-way join with a WHERE
clause as unstructured as that.  If you really need it to execute in way
less time, you're probably going to have to rethink your data
representation to make the query simpler.
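
For instance (sketching with invented column names, since I haven't seen
your schema), define the view without the DISTINCT and ORDER BY:

    CREATE VIEW view_get_all_user_award2 AS
        SELECT u.user_id, a.award_name
          FROM users u
          JOIN awards a ON a.user_id = u.user_id;

and add them only in the queries that actually need them:

    SELECT DISTINCT user_id, award_name
      FROM view_get_all_user_award2
     ORDER BY award_name;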

regards, tom lane
