On Mon, 2004-06-28 at 14:40, Josh Berkus wrote:
> As one of the writers of that article, let me point out:
>
> " -- Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096)
> -- Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768) "
>
> While this is probably a little conservative, it's still way bigger than 40.
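(For context: shared_buffers is counted in buffer pages, which are 8 KB each with the default block size, so the parenthesized figures work out as 2048 pages = 16 MB, 4096 = 32 MB, 8192 = 64 MB, 32768 = 256 MB -- and the 40 under discussion is a mere 40 x 8 KB = 320 KB.)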
I agree that 40 is a bit weak :) Chris' system has only 512 MB of RAM, though. I thought the quick response "...for any kind of production server, try 5000-10000..." -- given without considering how much memory he has -- was a bit... uhm... eager.

Besides, if the shared memory is used to queue client requests, shouldn't it be sized according to the workload (i.e. number of clients, transactions per second, etc.) rather than simply as a percentage of total memory? If there are only a few connections, why waste shared memory on them when that memory could serve as file system cache and keep PG from going to disk so often?

I understand that tuning PG is almost an art form, but shouldn't it be based on actual usage patterns rather than system dimensions alone? Don't you agree?

Regards,
Frank
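P.S. A back-of-the-envelope sketch of what I mean, in Python. The numbers are rough guesses for Chris' 512 MB box (the 128 MB for kernel plus backends is purely an assumption, not a measurement), so treat it as an illustration of the trade-off, not a tuning recipe:

    # Purely illustrative: how much of a 512 MB machine is left over for the
    # OS file system cache at various shared_buffers settings.
    total_ram_mb = 512          # Chris' machine
    os_and_backends_mb = 128    # rough guess: kernel, daemons, a few dozen PG backends

    for shared_buffers_mb in (16, 32, 64, 128, 256):
        fs_cache_mb = total_ram_mb - os_and_backends_mb - shared_buffers_mb
        print("shared_buffers = %3d MB -> roughly %3d MB left for the OS cache"
              % (shared_buffers_mb, fs_cache_mb))

Every megabyte handed to shared_buffers beyond what the hot pages actually need is a megabyte the kernel can no longer use for caching -- and PG leans on that kernel cache anyway.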