> -----Original Message-----
> From: Tapio Pitkäranta [mailto:[email protected]]
> Sent: Monday, March 21, 2011 7:53 AM
> To: Nicholson, Brad (Toronto, ON, CA)
> Cc: [email protected]; Devrim GÜNDÜZ
> Subject: RE: [ADMIN] Memory limits and PostgreSQL 8.4 - Over 63 GB of
> memory for DB?
> 
> Hello Brad,
> 
> Thank you for this information.
> 
> We have database tables that are around 50-100 GB each. While
> processing such tables, it seems crucial that the table fits into
> memory (especially if the table is not on an SSD drive).
> 
> Until now we have thought the "shared_buffers" parameter should be
> larger than the size of the biggest table (one that requires this kind
> of batch processing).
> 
> Do you think it does not matter what size we set the "shared_buffers"
> parameter, as long as the server has enough memory? (Even if the single
> table is this size: 50-100 GB)
> 
> Why are large shared buffers not recommended?

There is the potential to dirty a lot of buffers and have huge pauses at 
checkpoint time when those get flushed to disk.  Remember, though, that unused 
memory is going to go to your filesystem cache, and there is a very good chance 
that a lot of the tables you are accessing are still going to be in memory.
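If you want to verify rather than guess, the pg_buffercache contrib module lets you see which relations actually occupy shared_buffers.  A rough sketch (assumes the extension is installed and an 8 kB block size; "mytable" is a placeholder name):

```sql
-- CREATE EXTENSION pg_buffercache;  -- once, as superuser
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```

That only covers shared_buffers, not the OS cache, but it tells you whether a big table is really staying resident after your batch runs.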

By all means, test it out.  It may be fine with your workload, 
particularly if it is read-only.  But be prepared for it to not work out as 
expected.

Brad.

-- 
Sent via pgsql-admin mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
