2017-04-21 8:49 GMT+02:00 Marco Renzi <renzi....@gmail.com>:

> This could look strange, but it is fast as hell!
> The main problem is:
> Is it always OK to run queries like this, with ORDER BY and LIMIT? Is it
> OK to use an upper limit of 1,000,000,000 records?
>

I think LIMIT 10000 should be OK. Too big a number can confuse the
optimizer, just like too small a number.

The planner is driven by statistics, and the statistics are not perfect:
usually they are right about 80% of the time, like weather forecasting.

Usually this works, but sometimes it does not.
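
When the plan looks wrong, the first step is to compare the planner's
estimate with the real row count. A minimal sketch with EXPLAIN ANALYZE,
reusing the table and column names from the query quoted below (the schema
itself is assumed):

```sql
-- "rows=" in the plan output is the planner's estimate;
-- "actual ... rows=" is what the executor really found.
-- A large gap between the two is the sign of stale or missing statistics.
EXPLAIN ANALYZE
SELECT fase.id
FROM tipofase
JOIN fase ON (fase.tipofase = tipofase.id)
WHERE agendafrontoffice = true
ORDER BY fase.id DESC
LIMIT 10;

-- If the estimate is far off, refreshing statistics may help:
ANALYZE tipofase;
ANALYZE fase;
```

Values that occur rarely (or not at all) will still be estimated poorly
even after ANALYZE, which is exactly the trap discussed in this thread.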

Regards

Pavel


>
> SELECT * FROM (
>     SELECT fase.id
>     FROM tipofase
>     JOIN fase ON (fase.tipofase = tipofase.id)
>     WHERE agendafrontoffice = true
>     ORDER BY fase.id DESC LIMIT 1000000000 OFFSET 0
> ) A
> ORDER BY A.id DESC LIMIT 10 OFFSET 0
>
> 2017-04-20 18:05 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
>
>>
>>
>>
>> I am afraid this issue cannot be solved with a single query. In this
>>> case the planner expects an early stop after finding a few values. But
>>> because there are no matching values at all, the LIMIT clause brings no
>>> benefit at executor time, and the planner is misled. Maybe try
>>> increasing the LIMIT to some higher value .. 1000, 10000 .. so the
>>> planner doesn't fall into this trap. PostgreSQL statistics track the
>>> most common values, but values with no occurrences at all are not well
>>> captured by statistics.
>>>
>>> Regards
>>>
>>
>> It can look strange, but it can work:
>>
>> SELECT *
>>    FROM (your query ORDER BY .. OFFSET 0 LIMIT 10000) s
>>   ORDER BY ...
>>   LIMIT 10;
>>
>> Regards
>>
>> Pavel
>>
>
