What Scott said ... seconded, all of it.
I'm running one 500GB database on a 64-bit, 8GB VMware virtual machine, with
2 vcores, PG 8.3.9 with shared_buffers set to 2GB, and it works great.
However, it's a modest workload, most of the database is archival for data
mining, and the working set for day-to-day queries fits comfortably in
that 8GB.
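(If you want to check whether a working set really does fit in memory,
one rough indicator is the buffer cache hit ratio. A minimal sketch
against the standard pg_stat_database view; note that a "miss" here may
still be served from the OS page cache, so treat the figure as a lower
bound:)

    -- Fraction of block requests served from shared_buffers.
    SELECT sum(blks_hit)::float
           / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio
    FROM pg_stat_database;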
On Wed, 24 Mar 2010, Campbell, Lance wrote:
I have 24 Gig of memory on my server...
Our server manager seems to think that I have way too much memory. He
thinks that we only need 5 Gig.
Your organisation probably spent more money getting your server manager to
investigate how much RAM you need than the extra RAM itself would cost.
PostgreSQL 8.4.3
Red Hat Linux 5.0
Question: How much memory do I really need?
From my understanding there are two primary strategies for setting up
PostgreSQL in relationship to memory:
1) Rely on Linux to cache the files. In this approach you set the
shared_buffers to a relatively small value and let the OS page cache
hold most of the data.
2) Rely on PostgreSQL itself, setting shared_buffers to a large fraction
of RAM so the data is cached inside the database (both strategies are
sketched below).
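A minimal sketch of what each strategy might look like in
postgresql.conf, assuming a dedicated server with 24GB of RAM; the
numbers are illustrative starting points, not recommendations, and you
would pick one strategy, not both:

    # Strategy 1: small shared_buffers, let the OS page cache do the work.
    shared_buffers = 2GB
    effective_cache_size = 20GB   # planner hint: data likely cached by the OS

    # Strategy 2: large shared_buffers, cache mostly inside PostgreSQL.
    shared_buffers = 12GB
    effective_cache_size = 16GB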
On Wed, Mar 24, 2010 at 6:49 PM, Campbell, Lance la...@illinois.edu wrote:
PostgreSQL 8.4.3
Red Hat Linux 5.0
Question: How much memory do I really need?
The answer is as much as needed to hold your entire database in
memory and a few gig left over for sorts and backends to play in.
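(For what it's worth, you can check how big "your entire database"
actually is straight from psql; pg_size_pretty and pg_database_size are
standard built-ins:)

    -- Total on-disk size of the current database, human-readable.
    SELECT pg_size_pretty(pg_database_size(current_database()));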
Arjen van der Meijden wrote:
I've heard that too, but it doesn't seem to make much sense to me. If
you get to the point where your machine is _needing_ 2GB of swap then
something has gone horribly wrong (or you just need more RAM in the
machine) and it will just crawl until the kernel kills off whatever
process the OOM killer picks.
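(A quick way to see whether a box is actually dipping into swap, using
stock Linux tools; on a healthy database server the si/so columns should
sit at or near zero:)

    # One-shot summary of RAM and swap usage, in megabytes.
    free -m

    # Rolling view every 5 seconds; si/so = pages swapped in/out.
    vmstat 5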
Patrick,
Sorry for posting an obvious Linux question, but have any of you
encountered this, and how have you fixed it?
I have a 6GB RAM box. I've set my shmmax to 307200. The database
starts up fine without any issues. As soon as a query is run
or an FTP process to the server is done,
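(Worth noting: kernel.shmmax is specified in bytes, so 307200 is only
300kB, which is almost certainly smaller than intended; PostgreSQL's
shared memory segment has to fit under it. A sketch of checking and
raising it on Linux, where the 4GB value is purely illustrative:)

    # Show the current limit, in bytes.
    sysctl kernel.shmmax

    # Raise it for the running kernel (example: 4GB).
    sysctl -w kernel.shmmax=4294967296

    # Persist the change across reboots.
    echo "kernel.shmmax = 4294967296" >> /etc/sysctl.conf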