Hello Brad,

Thank you for this information.

We have database tables that are around 50-100 GB each. While processing 
such tables, it seems crucial that the table fits into memory (especially 
if the table is not on an SSD drive).
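
For reference, these are the full on-disk sizes as reported by the standard 
size functions, measured roughly like this (pg_total_relation_size also 
counts indexes and TOAST data):

SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;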

Until now we have thought the "shared_buffers" parameter should be larger 
than the size of the biggest table that requires this kind of batch processing.

Do you think it does not matter what size we set the "shared_buffers" 
parameter to, as long as the server has enough memory, even when a single 
table is in the 50-100 GB range?

Why are large shared buffers not recommended?

Br,

Tapsa


--

Tapio Pitkäranta
RELEX Oy
Valimotie 27, 00380 Helsinki
phone: 050-5408550
email: [email protected]
internet: http://www.relex.fi





-----Original Message-----
From: Nicholson, Brad (Toronto, ON, CA) [mailto:[email protected]] 
Sent: 18 March 2011 16:17
To: Tapio Pitkäranta; Devrim GÜNDÜZ
Cc: [email protected]
Subject: RE: [ADMIN] Memory limits and PostgreSQL 8.4 - Over 63 GB of memory 
for DB?

> -----Original Message-----
> From: [email protected] [mailto:pgsql-admin- 
> [email protected]] On Behalf Of Tapio Pitkäranta
> Sent: Friday, March 18, 2011 4:10 AM
> To: Devrim GÜNDÜZ
> Cc: [email protected]
> Subject: Re: [ADMIN] Memory limits and PostgreSQL 8.4 - Over 63 GB of 
> memory for DB?
> 
> Hello,
> 
> Thank you for the reply. It seems you might be right:
> 
> /etc/sysctl.conf
> 
> # Controls the maximum shared segment size, in bytes
> kernel.shmmax = 68719476736
> 
> # Controls the total amount of shared memory allowed, in pages
> kernel.shmall = 4294967296
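> 
> That shmmax is exactly 64 GiB. The shared memory segment holds 
> shared_buffers plus some overhead, so shared_buffers much above 63 GB 
> would no longer fit, which would explain what we are seeing. Our shmall 
> (4294967296 pages x 4 kB = 16 TiB) already seems ample, so presumably 
> only shmmax would need raising - e.g. for a 100 GiB segment (value 
> illustrative):
> 
> # 100 GiB, in bytes
> kernel.shmmax = 107374182400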
> 
> We have tried to set shared_buffers over 63 GB.
> 
> Do you have any advice on memory settings for servers with large 
> amounts of memory (100-200 GB)? It seems there is not much 
> documentation on this on the net.

This is unlikely to work out as you expect.  Values for shared_buffers over 
the 8-10 GB range aren't recommended, and it may need to be much lower 
depending on your workload.

As for recommendations: try to gauge the size of your working data set and 
size shared_buffers for that.  From there, test with your workload and 
watch out for checkpoint spikes.
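
If you want to see what that working set actually looks like inside shared 
buffers, the pg_buffercache contrib module is one option.  A rough sketch, 
assuming the default 8 kB block size and that the module's contrib SQL 
script is installed in the database you query:

-- top 10 relations by buffers currently held in shared_buffers
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
JOIN pg_database d ON d.oid = b.reldatabase
WHERE d.datname = current_database()
GROUP BY c.relname
ORDER BY count(*) DESC
LIMIT 10;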

Memory not given to shared_buffers will still be used by the operating 
system's filesystem cache.

Brad.


-- 
Sent via pgsql-admin mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
