Re: [PERFORM] memory question

2010-03-25 Thread Dave Crooke
What Scott said ... seconded, all of it.

I'm running one 500GB database on a 64-bit, 8GB VMware virtual machine, with
2 vcores, PG 8.3.9 with shared_buffers set to 2GB, and it works great.
However, it's a modest workload, most of the database is archival for data
mining, and the working set for routine OLTP is pretty modest and easily
fits in the 2GB, and it's back-ended on to a pretty decent EMC Clariion
FibreChannel array. Not the typical case.

For physical x86 servers, brand-name (e.g. Kingston) ECC memory is down to
$25 per GB in 4GB DIMMs, and $36 per GB in 8GB DIMMs ... dollars to
doughnuts you have a server somewhere with 2GB or 4GB parts that can be
pulled and replaced with double the density, et voila, an extra 16GB of RAM
for about $500.

Lots and lots of RAM is absolutely, positively a no-brainer when trying to
make a DB go fast. If for no other reason than people get all starry-eyed at
GHz numbers, almost all computers tend to be CPU-heavy and RAM-light in
their factory configs. I build a new little server for the house every 3-5
years, using desktop parts, and give it a mid-life upgrade with bigger
drives and a doubling of the RAM density.

Big banks running huge Oracle OLTP setups use the strategy of essentially
keeping the whole thing in RAM ... HP shifts a lot of Superdomes maxed out
with 2TB of RAM into this market, and that RAM costs a lot more than $25 a
gig ;-)

Cheers
Dave


Re: [PERFORM] memory question

2010-03-25 Thread Matthew Wakeling

On Wed, 24 Mar 2010, Campbell, Lance wrote:

I have 24 Gig of memory on my server...

Our server manager seems to think that I have way too much memory.  He
thinks that we only need 5 Gig.


Your organisation probably spent more money getting your server manager to 
investigate how much RAM you need and scaring you about wasting resources 
than it would cost to just slap 24GB in the machine.


24GB is the least amount of RAM I would consider putting in a new server 
nowadays. It's so cheap.


Matthew

--
Lord grant me patience, and I want it NOW!



[PERFORM] memory question

2010-03-24 Thread Campbell, Lance
PostgreSQL 8.4.3

Linux Red Hat 5.0

 

Question: How much memory do I really need?

 

From my understanding there are two primary strategies for setting up
PostgreSQL in relationship to memory:

 

1)  Rely on Linux to cache the files.  In this approach you set the
shared_buffers to a relatively low number.  

2)  You can set shared_buffers to a very large percentage of your
memory so that PostgreSQL reserves the memory for the database (a rough
sketch of both approaches follows below).
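
As a rough illustration of the two approaches, a minimal Python sketch;
the percentages are common rules of thumb, not figures from this thread,
so treat them as placeholders:

    def suggest_shared_buffers(total_ram_gb, strategy):
        """Ballpark shared_buffers, in GB, for each strategy."""
        if strategy == 1:
            # Small shared_buffers; the Linux page cache does the heavy lifting.
            return max(1, total_ram_gb // 8)
        # Strategy 2: reserve a large slice of RAM for PostgreSQL itself.
        return total_ram_gb // 2

    for s in (1, 2):
        print("strategy %d: shared_buffers ~%dGB of 24GB"
              % (s, suggest_shared_buffers(24, s)))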

 

I am currently using option #1.  I have 24 Gig of memory on my server
and the database takes up 17 Gig of disk space.  When I do the Linux
command top I notice that 19 Gig is allocated for cache.  Is there a
way for me to tell how much of that cache is associated with the caching
of database files?

 

I am basically asking how much memory do I really need?  Maybe I have
complete overkill.  Maybe I am getting to the point where I might need
more memory.

 

My thought was I could use option #2 and then set the number to a lower
amount.  If the performance is bad then slowly work the number up.

 

Our server manager seems to think that I have way too much memory.  He
thinks that we only need 5 Gig.  I don't really believe that, but I want
to cover myself.  With money tight, I don't want to be the person who is
wasting resources.  We need to replace our database servers, so I want to
do the right thing.

 

Thanks,

 

Lance Campbell

Software Architect/DBA/Project Manager

Web Services at Public Affairs

217-333-0382

 



Re: [PERFORM] memory question

2010-03-24 Thread Scott Marlowe
On Wed, Mar 24, 2010 at 6:49 PM, Campbell, Lance la...@illinois.edu wrote:
 PostgreSQL 8.4.3

 Linux Redhat 5.0

 Question: How much memory do I really need?

The answer is as much as needed to hold your entire database in
memory and a few gig left over for sorts and backends to play in.

 From my understanding there are two primary strategies for setting up
 PostgreSQL in relationship to memory:



 1)  Rely on Linux to cache the files.  In this approach you set the
 shared_buffers to a relatively low number.

 2)  You can set shared_buffers to a very large percentage of your memory
 so that PostgreSQL reserves the memory for the database.

The kernel is better at caching large amounts of memory.  Pg is better
at handling somewhat smaller amounts, and at not letting sequential
scans flush out your random-access data.

 I am currently using option #1.  I have 24 Gig of memory on my server and
 the database takes up 17 Gig of disk space.  When I do the Linux command
 "top" I notice that 19 Gig is allocated for cache.  Is there a way for me to
 tell how much of that cache is associated with the caching of database
 files?

Probably nearly all of that 19G for cache is allocated for pg files.
Not sure how to tell off the top of my head though.
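
One way, in the spirit of the classic fincore tool, is to mmap each file
under the data directory and ask the kernel which pages are resident via
mincore(2).  A Python sketch (the PGDATA path is an assumption; adjust it
to your install):

    import ctypes, os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    libc.mmap.restype = ctypes.c_void_p
    libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                          ctypes.c_int, ctypes.c_int, ctypes.c_long]
    PAGE = os.sysconf("SC_PAGE_SIZE")
    PROT_READ, MAP_SHARED = 0x1, 0x1
    PGDATA = "/var/lib/pgsql/data"  # hypothetical path

    def resident_bytes(path):
        """Bytes of this file currently sitting in the Linux page cache."""
        size = os.path.getsize(path)
        if size == 0:
            return 0
        fd = os.open(path, os.O_RDONLY)
        try:
            addr = libc.mmap(None, size, PROT_READ, MAP_SHARED, fd, 0)
            if not addr or addr == ctypes.c_void_p(-1).value:
                return 0
            npages = (size + PAGE - 1) // PAGE
            vec = (ctypes.c_ubyte * npages)()
            libc.mincore(ctypes.c_void_p(addr), ctypes.c_size_t(size), vec)
            libc.munmap(ctypes.c_void_p(addr), ctypes.c_size_t(size))
            # The low bit of each vector entry means "page is resident".
            return sum(v & 1 for v in vec) * PAGE
        finally:
            os.close(fd)

    total = 0
    for root, _, files in os.walk(PGDATA):
        for name in files:
            try:
                total += resident_bytes(os.path.join(root, name))
            except OSError:
                pass  # skip files we cannot open
    print("~%.1f GB of page cache holds files under %s"
          % (total / 1024.0 ** 3, PGDATA))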

 I am basically asking how much memory do I really need?  Maybe I have
 complete overkill.  Maybe I am getting to the point where I might need more
 memory.

Actually, there are three levels of caching that are possible.  1:
Entire db, tables and indexes, can fit in RAM.  This is the fastest
method.  Worth the extra $ for RAM if you can afford it / db isn't too
huge.  2: Indexes can fit in RAM, some of the tables can.  Still pretty
fast.  Definitely worth paying a little extra for.  3: Neither indexes
nor tables can wholly fit in RAM.  At this point the speed of your
large disk array becomes important, and you want a fast caching RAID
controller.  Both of these items (disk array and RAID controller) are
considerably more costly than 16 or 32 Gigs of RAM.
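
A quick way to see which tier a given box falls into is to compare the
table and index footprints against RAM.  A sketch, assuming psycopg2 and
a connection string you would fill in yourself:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
    cur = conn.cursor()
    cur.execute("""
        SELECT relkind, sum(pg_relation_size(oid))
        FROM pg_class
        WHERE relkind IN ('r', 'i')
        GROUP BY relkind
    """)
    sizes = dict(cur.fetchall())
    tables_gb = float(sizes.get('r', 0)) / 1024.0 ** 3
    indexes_gb = float(sizes.get('i', 0)) / 1024.0 ** 3

    ram_gb = 24  # plug in the machine's actual RAM
    if tables_gb + indexes_gb < ram_gb:
        tier = "1: whole database fits in RAM"
    elif indexes_gb < ram_gb:
        tier = "2: indexes fit, tables partially"
    else:
        tier = "3: disk array speed dominates"
    print("tables %.1fGB, indexes %.1fGB, RAM %dGB -> tier %s"
          % (tables_gb, indexes_gb, ram_gb, tier))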

 My thought was I could use option #2 and then set the number to a lower
 amount.  If the performance is bad then slowly work the number up.

I'm not sure what you mean.  Install less RAM and let PG do all the
caching?  Usually a bad idea. Usually.  I'm sure there are use cases
where it might be a good idea.  But keep in mind, a large amount of
shared_buffers doesn't JUST buffer your reads, it also results in a
much larger memory space to keep track of in terms of things that need
to get written out etc.  I'm actually about to reduce the
shared_buffers from 8G on one reporting server down to 1 or 2G because
that's plenty, and it's having a hard time keeping up with the huge
checkpoints it's having to do.
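
Before shrinking shared_buffers it is worth confirming that checkpoints
really are the pain point; the pg_stat_bgwriter view (there since 8.3)
shows it.  A sketch, again with a hypothetical DSN:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
    cur = conn.cursor()
    cur.execute("""
        SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint
        FROM pg_stat_bgwriter
    """)
    timed, requested, buffers = cur.fetchone()
    # Lots of *requested* checkpoints relative to timed ones means the
    # write volume is forcing them early (the overloaded situation above).
    print("timed=%d requested=%d, ~%.1fGB written at checkpoints"
          % (timed, requested, buffers * 8192 / 1024.0 ** 3))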

 Our server manager seems to think that I have way too much memory.  He thinks
 that we only need 5 Gig.

How much do you absolutely need to boot up, run postgresql, and not
run out of memory?  That's what you need, and it's probably around
1 Gig.  It's just no less arbitrary than 5G.  Did he show you how he
arrived at this number?  If your DB is 17 Gig on disk, it's foolish to
be cheap on memory.

 I don’t really believe that.  But I want to cover
 myself.  With money tight I don’t want to be the person who is wasting
 resources.  We need to replace our database servers so I want to do the
 right thing.

You can waste your own time (valuable, but a sunk cost), other people's
time (more valuable, also a sunk cost), or a few dollars on memory.
24 Gig isn't that expensive compared to, say, 10 seconds per
transaction for 100 users, 1000 times a day: that's 1,000,000 seconds,
or about 11.5 user-days, lost every single day.  Tens of seconds start
to add up.



Re: [PERFORM] Memory question

2003-06-29 Thread Thomas Swan
Arjen van der Meijden wrote:

I've heard that too, but it doesn't seem to make much sense to me.  If
you get to the point where your machine is _needing_ 2GB of swap then
something has gone horribly wrong (or you just need more RAM in the
machine) and it will just crawl until the kernel kills off whatever
process causes the swap space to be exceeded.  Seems to me that you
should only have that much swap if you can't afford more RAM or you've
tapped out your machine's capacity, and your application needs that
much memory.
  -M@

I've heard the same; the reasoning behind it was that there needs to be
a one-to-one copy of the memory in swap to be able to swap everything
out, so to actually gain total memory you'd need twice as much swap as
you have RAM.
But afaik this behaviour has been adjusted since the 2.4.5 kernel and
isn't a real issue anymore.
It may be different in vendor-released kernels, as the default overcommit 
behavior of the Linux kernel may vary.  More detailed discussions can be 
found on the LKML, or you can find some useful summaries by searching 
through the last couple of Kernel Traffic issues (http://kt.zork.net).
I had some unexpected problems on one system, an older Red Hat 
distribution, until I actually set the swap to double the 2GB of RAM on 
the system: 4GB.
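
Checking both knobs on a given box is easy; a small Python sketch that
reads the standard procfs locations:

    def read(path):
        with open(path) as f:
            return f.read().strip()

    # 0 = heuristic overcommit (the usual default), 1 = always, 2 = never.
    print("vm.overcommit_memory =", read("/proc/sys/vm/overcommit_memory"))

    meminfo = {}
    for line in read("/proc/meminfo").splitlines():
        key, value = line.split(":", 1)
        meminfo[key] = int(value.split()[0])  # values are in kB
    swap_used_mb = (meminfo["SwapTotal"] - meminfo["SwapFree"]) / 1024.0
    print("swap in use: %.1f MB of %.1f MB"
          % (swap_used_mb, meminfo["SwapTotal"] / 1024.0))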

Please keep in mind that I'm no expert at all on Linux, so if you want
to be sure, you'd better mail the kernel mailing list or so :)
Anyway, I manage a few machines with 1GB++ memory and none of them has
more than 1G of swap, and none of them uses that swap for more than a few
MB unless something is terribly wrong, so the actual 'risk' probably
doesn't have a high chance of occurring.
Arjen





Re: [PERFORM] Memory question

2003-06-27 Thread Josh Berkus
Patrick,

 Sorry for posting an obvious Linux question, but have any of you
 encountered this, and how have you fixed it?
 I have a 6 gig RAM box.  I've set my shmmax to 307200.  The database
 starts up fine without any issues.  As soon as a query is run
 or an FTP process to the server is done, the used memory shoots up and
 appears to never be released.

What's your shared_buffers set to after our talk?  Do you actually need 3GB of 
shmmax?  
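
For reference, a quick way to see what the kernel currently allows
(standard procfs path); shared_buffers dominates PostgreSQL's shared
memory segment, so that's the number to compare against:

    # Read the kernel's shared memory ceiling and report it in GB.
    with open("/proc/sys/kernel/shmmax") as f:
        shmmax = int(f.read())
    print("kernel.shmmax = %.2f GB" % (shmmax / 1024.0 ** 3))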

 My fear is that this may cause problems for my database if this number
 continues to grow.  Below is my TOP after running a query and shutting
 down PgAdmin.  While not low now, the amount of free memory has dropped to
 around 11 megs.  I'll admit I'm not that Linux savvy, but am I reading this
 correctly?

No.   

 Mem:  6711564K av, 6517776K used,  193788K free,   0K shrd,   25168K

The "used" figure in top doesn't really tell you anything, since it includes 
the kernel's buffer cache, which tries to take up all available memory.  If you 
actually look at the list of processes, I think you'll find that you're only 
using 1-2% of memory for applications.

I'm not sure what app would show your real free memory.
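
One conventional answer is the "-/+ buffers/cache" line of the free
command, which counts buffers and page cache as reclaimable.  A Python
sketch that computes the same figure straight from /proc/meminfo:

    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            meminfo[key] = int(value.split()[0])  # values are in kB

    # Free for applications = truly free + buffers + page cache,
    # since the kernel hands cache memory back on demand.
    free_kb = meminfo["MemFree"] + meminfo["Buffers"] + meminfo["Cached"]
    print("effectively free: %.1f GB of %.1f GB"
          % (free_kb / 1024.0 ** 2, meminfo["MemTotal"] / 1024.0 ** 2))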

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco




Re: [PERFORM] Memory question

2003-06-27 Thread Arjen van der Meijden
 I've heard that too, but it doesn't seem to make much sense to me.  If
 you get to the point where your machine is _needing_ 2GB of swap then
 something has gone horribly wrong (or you just need more RAM in the
 machine) and it will just crawl until the kernel kills off whatever
 process causes the swap space to be exceeded.  Seems to me that you
 should only have that much swap if you can't afford more RAM or you've
 tapped out your machine's capacity, and your application needs that
 much memory.
-M@
I've heard the same; the reasoning behind it was that there needs to be
a one-to-one copy of the memory in swap to be able to swap everything
out, so to actually gain total memory you'd need twice as much swap as
you have RAM.

But afaik this behaviour has been adjusted since the 2.4.5 kernel and
isn't a real issue anymore.

Please keep in mind that I'm no expert at all on Linux, so if you want
to be sure, you'd better mail the kernel mailing list or so :)

Anyway, I manage a few machines with 1GB++ memory and none of them has
more than 1G of swap, and none of them uses that swap for more than a few
MB unless something is terribly wrong, so the actual 'risk' probably
doesn't have a high chance of occurring.

Arjen



