Peter Eisentraut wrote:
> But you haven't offered any analysis about the cause of this problem, so any
> speculation about normalization, usual cases, caching effects and so on are
> unfounded and premature.

OK. My previous message was a bit pompous and unfounded. Sorry.
Below I'll try to explain.
Tom Lane wrote:
> I'm having a hard time getting excited about improving this query when
> it's so badly coded in the first place. What's an ORDER BY doing in
> an EXISTS subquery? The LIMIT is unnecessary too. And the inner WHERE
> says nothing so much as "I don't know how to design a database".
Simon Riggs wrote:
> This type of work is 90% analysis, 10% coding. You'll need to do a lot
> of investigation, lots of discussion and listening.

I absolutely agree with you, and I am not about to rush into coding
right now. First of all I'm going to dig a lot into the PG sources,
READMEs and so on.
Peter Eisentraut wrote:
> If you know of a more effective way to do that beyond the effective_cache_size
> parameter that we have now, let us know.

I don't know of a better way, and that is not my goal at all. I am thinking
about more accurate cost estimation for nested-loop joins and subqueries.
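To make the point concrete, the textbook model behind this kind of estimate can be sketched in a few lines. This is a simplified illustration of the classic nested-loop cost formula (one outer scan plus one inner re-scan per outer row), not PostgreSQL's actual costing code; all numbers here are hypothetical.

```python
# Simplified textbook cost model for a nested-loop join:
#   cost = outer_cost + outer_rows * inner_rescan_cost
# Illustration only -- not PostgreSQL's cost_nestloop().

def nested_loop_cost(outer_cost, outer_rows, inner_rescan_cost):
    """Scan the outer relation once, re-scan the inner once per outer row."""
    return outer_cost + outer_rows * inner_rescan_cost

# A small inner ("master") table that stays cached makes each re-scan
# cheap, so the total estimate should drop sharply -- the caching effect
# this thread is discussing.
cold = nested_loop_cost(outer_cost=100.0, outer_rows=1000, inner_rescan_cost=4.0)
warm = nested_loop_cost(outer_cost=100.0, outer_rows=1000, inner_rescan_cost=0.01)
print(cold, warm)  # 4100.0 110.0
```

The sketch shows why the inner re-scan cost dominates: with 1000 outer rows, almost all of the estimate comes from the repeated inner scans, so mis-modeling caching there distorts the whole plan choice.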
Usual case
Say42 wrote:
> select count(*) from conn.conn20060803 c
> where
> exists (select code from belg_mobile tc
> ...

Correction: replace 'trunk_codes' with 'belg_mobile'.
Tom Lane wrote:
> Optimizing on the basis of only one example is seldom a good idea...
> and I think you are falling into that trap by supposing that there
> is a usual case.

Perhaps I am wrong, but I assume that normalization is the usual case, and
small master (parent) tables are not very rare either.
Yes, my
I intend to play with some optimizer aspects, just for fun. I'm a
novice in DBMS development, so I cannot promise any usable results,
but I will try, even if it turns out to be yet another failed attempt.
That's what I want to do:
1. Replace not very useful indexCorrelation with