2017-04-05 8:57 GMT+02:00 Daniel Westermann <daniel.westerm...@dbi-services.com>:

> >> I have set work_mem to a very low value intentionally for demonstration
> >> purposes:
> >>
> >> postgres=# show work_mem;
> >>  work_mem
> >> ----------
> >>  16MB
> >> (1 row)
> >>
> >> postgres=# show shared_buffers ;
> >>  shared_buffers
> >> ----------------
> >>  128MB
> >> (1 row)
> >>
> >>
> >> When I run the following query (I know that "not in" is not a good
> >> choice here):
> >>
> >> postgres=# select count(user_id) from users where user_id not in
> >> (select id from ids);
>
> >"NOT IN" where the predicate is a table column can lead to very poor
> >query plans, especially where the haystack is not provably known (at
> >plan time) to contain only non-null values.  By reducing work_mem, the
> >server has decided it has to repeatedly scan the table to check for
> >the presence of null values.  Try converting the query to NOT EXISTS.
>
> Thank you, Merlin. As I said, I know that "not in" is not a good choice in
> this case, but I still do not get what is going on here. Why does the server
> repeatedly search for NULL values when I decrease work_mem, and why not when
> I increase it?
>
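
As an aside, a minimal sketch of the NOT EXISTS rewrite Merlin suggests,
assuming the users(user_id) and ids(id) columns from the query above:

postgres=# select count(user_id)
           from users u
           where not exists (select 1 from ids i where i.id = u.user_id);

NOT EXISTS simply never matches when the subquery finds no row, regardless
of NULLs, so the planner can turn it into an anti join. NOT IN, by contrast,
yields NULL as soon as the subquery produces a NULL; for example, this
returns zero rows:

postgres=# select 1 where 2 not in (values (1), (NULL));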

What is the result of the EXPLAIN statement for the slow and the fast case?
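
To make that concrete, a sketch of how both plans could be captured (the
512MB value below is just an illustrative larger setting, not taken from
the thread):

postgres=# set work_mem = '16MB';
postgres=# explain (analyze, buffers)
           select count(user_id) from users
           where user_id not in (select id from ids);
postgres=# set work_mem = '512MB';
postgres=# -- then run the same EXPLAIN again and compare

With ample work_mem the filter line typically shows "NOT (hashed SubPlan 1)":
the subquery result is built into an in-memory hash table once. When the
planner estimates that this hash table would not fit in work_mem, it falls
back to a plain "NOT (SubPlan 1)", which rescans ids for every row of users;
that is the repeated searching Merlin describes.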

regards

Pavel


>
>
> Regards
> Daniel
>
