In my desperation I did the following (sketched below):

  - set the statistics target to 1000 for all join and filter columns
  - clustered the tables involved in the query
  - reindexed those tables
  - analyzed those tables

  and it helped. I'm now at ~50ms, which satisfies me completely.
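
  Roughly, it boiled down to commands like these (the table and column
  names are the ones from the plan quoted below; the index name in
  CLUSTER is just a placeholder for whichever index the table should
  be ordered by):

    -- raise the per-column statistics target (the default was 100)
    ALTER TABLE messages
      ALTER COLUMN messageid SET STATISTICS 1000;
    ALTER TABLE message_address_link
      ALTER COLUMN message_id SET STATISTICS 1000;

    -- physically reorder the table along a chosen index
    CLUSTER message_address_link USING some_index_on_message_id;

    -- rebuild the indexes and refresh planner statistics
    REINDEX TABLE messages;
    REINDEX TABLE message_address_link;
    ANALYZE messages;
    ANALYZE message_address_link;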

  If there are to be no planner hints, then some debugging output for
  EXPLAIN would be great, so that lame developers like me can track
  down what's wrong ;)
  The problem is solved, but I can't say I understand why it was wrong
  before - or why it's OK now :(
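  For now the closest thing I know of is comparing the planner's
  "rows=" estimate against the "actual ... rows=" count on each node
  of the EXPLAIN ANALYZE output, with something like:

    -- a big mismatch between estimated and actual rows
    -- (95249 vs. 152 in the plan below) points at bad statistics
    EXPLAIN ANALYZE
    SELECT m.messageid
    FROM messages m
    JOIN message_address_link mal ON mal.message_id = m.messageid;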
  
  

> wstrzalka <wstrza...@gmail.com> writes:
>> Prior to the playing with statistics target (it was 100 by default) I
>> was able to go with the time to 30ms by adding to the query such a
>> condition:

> So what sort of "playing" did you do?  It looks to me like the core of
> the problem is the sucky join size estimate here:

>>    ->  Hash Join  (cost=101.53..15650.39 rows=95249 width=8) (actual
>> time=1102.977..1342.675 rows=152 loops=1)
>>          Hash Cond: (mal.message_id = m.messageid)

> If it were correctly estimating that only a few message_address_link
> rows would join to each messages row, it'd probably do the right thing.
> But it seems to think there will be thousands of joins for each one...

>                         regards, tom lane



-- 
Regards,
 Wojciech Strzałka

