Dennis Bjorklund wrote:
> On Wed, 22 Sep 2004, Gaetano Mendola wrote:
>>  Limit  (cost=10.21..10.21 rows=1 width=24) (actual time=0.885..0.885 rows=0 
>>  Limit  (cost=10.28..10.28 rows=1 width=24) (actual time=0.429..0.429 rows=0 
> These estimated costs are almost the same, but the runtime differs a bit.
> This means that maybe you need to alter settings like random_page_cost,
> effective_cache and maybe some others to make the cost reflect the runtime
> better.
> Since the costs are so close to each other very small changes can make it
> choose the other plan. It's also very hard to make an estimate that is
> correct in all situations. That's why it's called an estimate after all.

That is not feasible.

Those values were obtained with random_page_cost = 2, effective_cache_size = 20000,
cpu_tuple_cost = 0.01.
Increasing or decreasing random_page_cost raises or lowers both costs together:

random_page_cost = 1.5 hashjoin on => 8.47 hashjoin off => 8.53

random_page_cost = 3 hashjoin on => 13.70 hashjoin off => 13.76

so the hash join method is chosen in both cases.
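For reference, the pairs of numbers above come from comparing the planner's top-level cost for the same query with the hash join path enabled and disabled. A session along these lines reproduces that kind of comparison (the query itself is elided here, and the cost setting is just one of the values tried):

```sql
-- Sketch of the experiment: compare top-level planner costs
-- with and without the hash join path (PostgreSQL).
SET random_page_cost = 1.5;

SET enable_hashjoin = on;
EXPLAIN ANALYZE SELECT ...;   -- note the top-level cost (8.47 above)

SET enable_hashjoin = off;
EXPLAIN ANALYZE SELECT ...;   -- compare (8.53 above)
```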

On the other hand, effective_cache_size doesn't affect these costs.

Decreasing cpu_tuple_cost has the same effect:

cpu_tuple_cost = 0.005
        hashjoin on  =>  10.11
        hashjoin off =>  10.17

cpu_tuple_cost = 0.001
        hashjoin on  =>  10.03
        hashjoin off =>  10.03

cpu_tuple_cost = 0.0005
        hashjoin on  =>  10.01
        hashjoin off =>  10.01
And when the two costs are equal, the hash join path is chosen.

I think a cpu_tuple_cost lower than 0.001 is not a good idea.

I think the only way is to set enable_hashjoin = off.  Any other suggestions?
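If disabling hash joins really is the only workaround, one way to limit the damage is to turn it off only around the affected query rather than globally. A sketch (the query itself is not shown; SET LOCAL reverts automatically at transaction end):

```sql
-- Disable hash joins only for this transaction, leaving the
-- global default untouched.
BEGIN;
SET LOCAL enable_hashjoin = off;
-- run the problematic query here
COMMIT;   -- enable_hashjoin reverts to its previous value
```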

Gaetano Mendola

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend
