On Wed, 2005-08-10 at 18:55, Tom Lane wrote:
> Ian Westmacott <[EMAIL PROTECTED]> writes:
> > In a nutshell, I have a LIMIT query where the planner
> > seems to favor a merge join over a nested loop.
>
> The planner is already estimating only one row out of the join, and so
> the LIMIT doesn't affect its cost estimates at all.
Ian Westmacott <[EMAIL PROTECTED]> writes:
> In a nutshell, I have a LIMIT query where the planner
> seems to favor a merge join over a nested loop.
The planner is already estimating only one row out of the join, and so
the LIMIT doesn't affect its cost estimates at all.
It appears to me that the
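The mechanics behind Tom's statement can be sketched outside the planner: a LIMIT node pays the child plan's startup cost in full, plus only the fraction of run cost needed to produce the requested rows. When the join is already estimated at one row, that fraction is 1 and the LIMIT buys nothing. A rough Python sketch of the idea (not the actual planner code):

```python
def limited_cost(startup, total, est_rows, limit_rows):
    """Approximate how a LIMIT node scales a child plan's cost.

    Mirrors the general idea: pay the startup cost in full, then only
    the fraction of run cost needed to yield limit_rows rows.
    (A sketch, not PostgreSQL's actual cost model.)
    """
    fraction = min(1.0, limit_rows / max(est_rows, 1))
    return startup + (total - startup) * fraction

# If the join is estimated to yield a single row, LIMIT 1 changes nothing:
print(limited_cost(100.0, 5000.0, est_rows=1, limit_rows=1))     # 5000.0

# With a large row estimate, the same LIMIT slashes the charged cost:
print(limited_cost(100.0, 5000.0, est_rows=1000, limit_rows=1))  # 104.9
```

This is why a bad row estimate (one row instead of millions) can make the planner indifferent to the LIMIT and pick the plan with cheaper total cost rather than cheaper startup cost.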
I have a case that I thought was an example of this issue,
and that this patch would correct. I applied this patch
to an 8.0.3 source distribution, but it didn't seem to
solve my problem.
In a nutshell, I have a LIMIT query where the planner
seems to favor a merge join over a nested loop. I've
si
Tom Lane wrote:
>Could be. I went back to look at Sam Mason's report about three weeks
>ago, and it definitely seems to explain his issue.
I've just built a patched version as well and it appears to be doing
what I think is the right thing now. I.e. actually picking the
plan with the lower cost.
On 7/22/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> > This is quite strange. The nestloop plan definitely should be preferred
> > in the context of the LIMIT, considering that it has far lower estimated
> > cost. And it is preferred in simple tests for me.
>
> After a suitable period of contemplation
Simon Riggs <[EMAIL PROTECTED]> writes:
> Looks good. I think it explains a few other weird perf reports also.
Could be. I went back to look at Sam Mason's report about three weeks
ago, and it definitely seems to explain his issue. The "fuzzy cost
comparison" logic is new in 8.0 so it hasn't had
On Fri, 2005-07-22 at 12:20 -0400, Tom Lane wrote:
> I think that this refutes the original scheme of using the same fuzz
> factor for both startup and total cost comparisons, and therefore
> propose the attached patch.
>
> Comments?
Looks good. I think it explains a few other weird perf reports also.
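The shape of the change under discussion, fuzzy comparison with separate fuzz treatment for startup and total cost, can be illustrated like this. This is a hypothetical sketch of the idea, not the actual PostgreSQL code or the patch:

```python
def fuzzy_path_compare(p1, p2, total_fuzz=0.01, startup_fuzz=0.01):
    """Compare two (startup_cost, total_cost) pairs with fuzz.

    Hypothetical sketch: total costs within total_fuzz of each other
    count as a tie, and startup costs (with their own fuzz factor)
    break the tie.  Returns 1 if p1 is more expensive, -1 if p2 is,
    0 if the paths are indistinguishable.
    """
    s1, t1 = p1
    s2, t2 = p2
    # Total costs first: a difference inside the fuzz band is a tie.
    if t1 > t2 * (1.0 + total_fuzz):
        return 1
    if t2 > t1 * (1.0 + total_fuzz):
        return -1
    # Totals are fuzzily equal; fall back to startup cost.
    if s1 > s2 * (1.0 + startup_fuzz):
        return 1
    if s2 > s1 * (1.0 + startup_fuzz):
        return -1
    return 0

# Totals differ by only 0.4%: a fuzzy tie, so the cheaper startup wins.
print(fuzzy_path_compare((10.0, 1000.0), (500.0, 1004.0)))  # -1
```

Keeping the cheap-startup path in such ties is exactly what a LIMIT query needs, since it pays mostly startup cost.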
I wrote:
> Dawid Kuroczko <[EMAIL PROTECTED]> writes:
>> qnex=# EXPLAIN SELECT * FROM log NATURAL JOIN useragents LIMIT 1;
>> Limit (cost=15912.20..15912.31 rows=1 width=272)
>> -> Hash Join (cost=15912.20..5328368.96 rows=47044336 width=272)
>> If I set enable_hashjoin=false:
>> qnex=# EXPLA
On 7/22/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Dawid Kuroczko <[EMAIL PROTECTED]> writes:
> > qnex=# EXPLAIN SELECT * FROM log NATURAL JOIN useragents LIMIT 1;
>
> > Limit (cost=15912.20..15912.31 rows=1 width=272)
> >   ->  Hash Join (cost=15912.20..5328368.96 rows=47044336 width=272)
>
>
Dawid Kuroczko wrote:
>work_mem = 102400
>...I tried tweaking cpu_*, work_mem, effective_cache and so on, but without
>any luck.
I'm hoping you didn't tweak it enough! I posted something similar to
this a while ago, but haven't since got around to figuring out
a useful test case to send to the list.
Dawid Kuroczko <[EMAIL PROTECTED]> writes:
> qnex=# EXPLAIN SELECT * FROM log NATURAL JOIN useragents LIMIT 1;
> Limit (cost=15912.20..15912.31 rows=1 width=272)
>   ->  Hash Join (cost=15912.20..5328368.96 rows=47044336 width=272)
> If I set enable_hashjoin=false:
> qnex=# EXPLAIN ANALYZE S
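The Limit node's cost in the plan above can be reproduced by hand: it is the hash join's cost interpolated down to one row out of the 47-million-row estimate. A back-of-envelope check (not planner code):

```python
# Reproduce the Limit node's cost from the Hash Join numbers above.
startup, total = 15912.20, 5328368.96   # hash join startup..total cost
est_rows = 47044336                     # planner's row estimate
limit_rows = 1

# The LIMIT pays the full startup cost plus a 1/47M sliver of run cost.
limit_total = startup + (total - startup) * limit_rows / est_rows
print(round(limit_total, 2))  # 15912.31, matching the EXPLAIN output
```

Note that the entire 15912.20 startup cost, building the hash table, is charged even for LIMIT 1; that is precisely why a nested loop with near-zero startup cost should win here.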
Which row do you want? Do you want 'a row' at random?
I presume you want the N latest rows?
In that case you should use an ORDER BY on an indexed field; the serial
primary key will do nicely (ORDER BY id DESC); it's indexed so it will
use the index and it will fly.
Any
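Sketched concretely, the advice above looks like this (the table name `log` is from the thread; the `id` column and LIMIT count are assumptions):

```sql
-- Assuming log has a serial primary key "id" with its implicit index:
-- fetch the N latest rows via the index, with no full scan or sort.
SELECT *
FROM log
ORDER BY id DESC
LIMIT 10;
```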
Hello, I have PostgreSQL 8.0.3 running on a "workstation" with 768 MB
of RAM, under FreeBSD. And I have a 47-million row table:
qnex=# explain select * from log;
QUERY PLAN