2015-08-19 21:29 GMT+09:00 Simon Riggs <si...@2ndquadrant.com>:
> On 19 August 2015 at 12:55, Kohei KaiGai <kai...@kaigai.gr.jp> wrote:
>>
>> 2015-08-19 20:12 GMT+09:00 Simon Riggs <si...@2ndquadrant.com>:
>> > On 12 June 2015 at 00:29, Tomas Vondra <tomas.von...@2ndquadrant.com>
>> > wrote:
>> >
>> >>
>> >> I see two ways to fix this:
>> >>
>> >> (1) enforce the 1GB limit (probably better for back-patching, if that's
>> >>     necessary)
>> >>
>> >> (2) make it work with hash tables over 1GB
>> >>
>> >> I'm in favor of (2) if there's a good way to do that. It seems a bit
>> >> stupid not to be able to use fast hash table because there's some
>> >> artificial
>> >> limit. Are there any fundamental reasons not to use the
>> >> MemoryContextAllocHuge fix, proposed by KaiGai-san?
>> >
>> >
>> > If there are no objections, I will apply the patch for 2) to HEAD and
>> > backpatch to 9.5.
>> >
>> Please don't rush. :-)
>
>
> Please explain what rush you see?
>
Unless we have a fail-safe mechanism for the case where the planner
estimates far more tuples than are actually needed, a bad estimate will
consume a massive amount of RAM. That's a bad side effect.
My previous patch didn't pay attention to that scenario, so it needs to
be revised.
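
To illustrate the kind of fail-safe I have in mind, here is an untested
sketch (not a patch; the function name and its exact placement are
hypothetical, not the actual nodeHash.c code). It clamps the planner's
estimate against work_mem before deciding whether a huge (>1GB) bucket
array is really needed:

/*
 * Untested sketch only; alloc_bucket_array() and its placement are
 * hypothetical.  The idea: clamp the planner's row estimate against
 * work_mem, and only fall back to the huge allocator when the clamped
 * request still exceeds the 1GB palloc() limit.
 */
#include "postgres.h"

#include "executor/hashjoin.h"
#include "miscadmin.h"              /* work_mem */
#include "utils/memutils.h"         /* MaxAllocSize */

static HashJoinTuple *
alloc_bucket_array(MemoryContext hashcxt, double planner_ntuples,
                   int *nbuckets)
{
    long        max_pointers;
    long        nb;
    Size        nbytes;
    HashJoinTuple *buckets;

    /*
     * Fail-safe: no matter what the planner says, never ask for more
     * bucket headers than work_mem could accommodate.
     */
    max_pointers = (work_mem * 1024L) / sizeof(HashJoinTuple);
    if (planner_ntuples > (double) max_pointers)
        nb = max_pointers;
    else
        nb = (long) planner_ntuples;
    nb = Max(nb, 1024);             /* sane lower bound */

    nbytes = (Size) nb * sizeof(HashJoinTuple);
    if (nbytes > MaxAllocSize)
    {
        /* over the 1GB palloc() limit, so use the huge allocator */
        buckets = (HashJoinTuple *) MemoryContextAllocHuge(hashcxt, nbytes);
        MemSet(buckets, 0, nbytes);
    }
    else
        buckets = (HashJoinTuple *) MemoryContextAllocZero(hashcxt, nbytes);

    *nbuckets = (int) nb;
    return buckets;
}

With a clamp like that, even a crazy row estimate wastes at most roughly
work_mem worth of bucket headers, and MemoryContextAllocHuge() is only
used when the clamped request still exceeds MaxAllocSize.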

Thanks,

>> It is not difficult to replace palloc() with palloc_huge(); however, it
>> may lead to another problem once the planner gives us a crazy estimation.
>> Below is my comment from the other thread.
>
>
>  Yes, I can read both threads and would apply patches for each problem.
>
> --
> Simon Riggs                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



-- 
KaiGai Kohei <kai...@kaigai.gr.jp>

