On 19 August 2015 at 08:54, Kevin Grittner <kgri...@ymail.com> wrote:
> Kouhei Kaigai <kai...@ak.jp.nec.com> wrote:
> >
> >     long lbuckets;
> >     lbuckets = 1 << my_log2(hash_table_bytes / bucket_size);
> >     Assert(nbuckets > 0);
> >
> > In my case, hash_table_bytes was 101017630802, and bucket_size was 48.
> > So, my_log2(hash_table_bytes / bucket_size) = 31, and lbuckets ends up
> > negative because both "1" and my_log2() are int32.
> > Min(lbuckets, max_pointers) therefore chooses 0x80000000, which is set
> > as nbuckets and triggers the Assert().
> >
> > Attached patch fixes the problem.
>
> So you changed the literal of 1 to 1U, but doesn't that just double
> the threshold for failure? Wouldn't 1L (to match the definition of
> lbuckets) be better?

I agree, but I can only imagine this is happening because the maximum
setting of work_mem has been modified in the code you're running.
hash_table_bytes is set based on work_mem:

    hash_table_bytes = work_mem * 1024L;

The size of your hash table is 101017630802 bytes, which is:

david=# select pg_size_pretty(101017630802);
 pg_size_pretty
----------------
 94 GB
(1 row)

david=# set work_mem = '94GB';
ERROR:  98566144 is outside the valid range for parameter "work_mem" (64 .. 2097151)

So I think the only way the following could cause an error is if
bucket_size was 1, which it can't be.

    lbuckets = 1 << my_log2(hash_table_bytes / bucket_size);

I think one day soon we'll need to allow larger work_mem sizes, but I
think there's lots more to do than just this change.

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services