I just noticed a little bug in the way the levelStep parameter is calculated when we switch to buffering build:

set client_min_messages='debug2';

This works:

postgres=# set maintenance_work_mem='1GB';
SET
postgres=# create index i_gisttest on gisttest using gist (t collate "C") WITH (fillfactor=10);
DEBUG:  building index "i_gisttest" on table "gisttest"
DEBUG:  switched to buffered GiST build; level step = 2, pagesPerBuffer = 232

But with higher maintenance_work_mem, it fails to switch to buffering mode:

postgres=# set maintenance_work_mem='2GB';
SET
postgres=# create index i_gisttest on gisttest using gist (t collate "C") WITH (fillfactor=10);
DEBUG:  building index "i_gisttest" on table "gisttest"
DEBUG:  failed to switch to buffered GiST build

This is because of this rather complex calculation here:

        while (
        /* subtree must fit in cache (with safety factor of 4) */
               (1 - pow(avgIndexTuplesPerPage, (double) (levelStep + 1))) /
               (1 - avgIndexTuplesPerPage) < effective_cache_size / 4
               &&
        /* each node in the lowest level of a subtree has one page in memory */
               (pow(maxIndexTuplesPerPage, (double) levelStep) <
                (maintenance_work_mem * 1024) / BLCKSZ)
            )

What happens here is that the calculation of (maintenance_work_mem * 1024) / BLCKSZ overflows the 32-bit signed integer type used there if maintenance_work_mem >= 2GB: maintenance_work_mem is measured in kilobytes, so at 2GB the product is 2097152 * 1024 = 2^31, one past INT_MAX, and the result wraps to a negative page budget that makes the loop condition fail immediately. I think we should be doing all these calculations in double.
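
To illustrate, here's a standalone sketch of the effect (not the actual gistbuild.c code; the tuples-per-page value is made up for illustration). It just shows the same page-budget calculation done in 32-bit integer arithmetic versus in double:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define BLCKSZ 8192

int
main(void)
{
	int		maintenance_work_mem = 2 * 1024 * 1024;	/* 2GB, in kilobytes */
	double	maxIndexTuplesPerPage = 232.0;			/* illustrative value */
	int		levelStep = 1;

	/*
	 * 2097152 * 1024 = 2^31, one past INT32_MAX, so in 32-bit int
	 * arithmetic the page budget wraps to a negative value.
	 */
	int32_t	pagesInt = (int32_t) ((int64_t) maintenance_work_mem * 1024) / BLCKSZ;

	/* The same calculation done in double avoids the wraparound. */
	double	pagesDouble = (double) maintenance_work_mem * 1024.0 / BLCKSZ;

	printf("int32:  %d\n", pagesInt);		/* -262144 */
	printf("double: %.0f\n", pagesDouble);	/* 262144 */

	/* With the wrapped value the loop condition is false from the start. */
	printf("condition with int32:  %d\n",
		   pow(maxIndexTuplesPerPage, (double) levelStep) < pagesInt);
	printf("condition with double: %d\n",
		   pow(maxIndexTuplesPerPage, (double) levelStep) < pagesDouble);

	return 0;
}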

I'll go fix that..

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
