> On Wed, Sep 14, 2016 at 12:13 AM, Kouhei Kaigai <kai...@ak.jp.nec.com> wrote:
> > It looks to me like pg_buffercache tries to allocate more than 1GB using
> > palloc() when shared_buffers is more than 256GB.
> >
> > # show shared_buffers ;
> >  shared_buffers
> > ----------------
> >  280GB
> > (1 row)
> >
> > # SELECT buffers, d.datname, coalesce(c.relname, '???')
> >     FROM (SELECT count(*) buffers, reldatabase, relfilenode
> >             FROM pg_buffercache group by reldatabase, relfilenode) b
> >        LEFT JOIN pg_database d ON d.oid = b.reldatabase
> >        LEFT JOIN pg_class c ON d.oid = (SELECT oid FROM pg_database
> >                                          WHERE datname = current_database())
> >                            AND b.relfilenode = pg_relation_filenode(c.oid)
> >            ORDER BY buffers desc;
> > ERROR:  invalid memory alloc request size 1174405120
> >
> > It is a situation where MemoryContextAllocHuge() should be used instead
> > of palloc(). Also, it may need back-patching?
> 
> I guess so.  Although it's not very desirable for it to use that much
> memory, I suppose if you have a terabyte of shared_buffers you
> probably have 4GB of memory on top of that to show what they contain.
>
Exactly. I found this problem when someone asked me why shared_buffers=280GB
was slower than shared_buffers=128MB for scanning a 350GB table.
As I expected, most of the shared buffers were not in use, and the large
setting also reduced the amount of free memory usable for the page cache.
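
For reference, the numbers add up exactly: assuming the default 8kB block
size and the 32-byte BufferCachePagesRec of contrib/pg_buffercache, 280GB
of shared_buffers is 36,700,160 buffers, and 36,700,160 * 32 =
1,174,405,120 bytes, which is exactly the alloc request in the error above
and just over palloc()'s MaxAllocSize cap (1GB - 1). A minimal sketch of
the one-line change I have in mind for pg_buffercache_pages.c (untested;
the surrounding code is quoted from memory and may differ from HEAD):

    /*
     * Allocate the per-call state for all NBuffers entries up front.
     * palloc() is limited to MaxAllocSize (1GB - 1), which large
     * shared_buffers settings exceed, so use the "huge" variant.
     */
    fctx->record = (BufferCachePagesRec *)
        MemoryContextAllocHuge(CurrentMemoryContext,
                               sizeof(BufferCachePagesRec) * (Size) NBuffers);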

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei <kai...@ak.jp.nec.com>
