Yes, I agree it's unnecessary -- but you'll never have to worry about the postmaster failing to start for lack of allocatable memory. When I was testing setups, I got so sick of rebooting every time I had to make a change to /etc/system that I threw up my hands and said, "let it take all it wants". :)
"single user read only" -- the key is how many connections -- what's your application? Is this being driven by a front end application?
In my case, we run a website with an Apache front end, a Tomcat server as middleware, and 4 applications. We may, at times, have only 1 user on, but the Java code could be doing a half dozen queries in different threads for that one user.
Run /usr/ucb/ps -auxww | grep <postgres user name> (we use the "postgres" user, so "grep post" works for us) while under load and see how many backends are running. If it's more than 4 or 5, then you are using the CPUs.
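To put a number on it, a quick pipeline along these lines counts the running backends (this assumes the server runs as a user named "postgres"; the [p] bracket trick keeps grep from matching its own command line):

```shell
# Count running PostgreSQL backends under load.
# On Solaris, substitute the BSD-style ps: /usr/ucb/ps -auxww
# The [p] bracket keeps grep from counting the grep process itself.
ps aux | grep '[p]ostgres' | wc -l
```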
On the topic of shared memory, watch out for the output of top or prstat -a -- these programs count the shared memory block towards each process and therefore lie about the amount of memory used. Looking at vmstat etc. shows that the percentage of utilization reported by top or prstat is way off, and if you care to examine the memory map for each process, you'll see that the shared memory block address is, well, shared by each process (by definition, eh?) but it can be reported as if it were a different block for each process.
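One way to see the real picture is to list the SysV shared memory segments directly -- the Postgres segment appears once system-wide, no matter how many backends have it mapped and how many times top or prstat charge its full size:

```shell
# List SysV shared memory segments: key, size, and attach count are reported
# once per segment, not once per attached process.
ipcs -m
```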
Not sure the E3500 is the best box for a data warehouse application.
Paul Johnson wrote:
Hi Tom, I've made changes to postgresql.conf as recommended on Josh's site and this seems to be working well so far.
Given your comments on shared memory, it would appear that the following entry in /etc/system is unnecessary:
Ironically, we both have this identical setting!
Given that most of our queries are single-user read-only, how do we take advantage of the 6 CPUs? I'm guessing we can't!?!?!
Also, does this type of workload benefit from moving the txlog?
I'll check our settings against yours given the Solaris 9/E3500 setup that we both run.
Josh helped my company with this issue -- PG doesn't use shared memory the way Oracle does; it depends more on the OS buffer cache. Making shared memory too large a fraction of RAM is disastrous and seriously impacts performance (though I find myself having to justify this to Oracle-trained DBAs). :)
What I found was that the biggest performance improvement on the write side was to turn off file system journaling, and on the read side to feed Postgres as many CPUs as you can. What we found for a high-use db (for example, backending a web site) is that 8 400MHz CPUs outperform 2 or 4 fast CPUs. The fast CPUs spend all of their time context switching as more connections are made.
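On Solaris, UFS journaling is controlled by the logging/nologging mount option. A hedged sketch of an /etc/vfstab entry for the data filesystem (the device names are hypothetical, and turning logging off trades away fast crash recovery -- weigh that before doing it on a production box):

```
/dev/dsk/c1t0d0s0  /dev/rdsk/c1t0d0s0  /data1  ufs  2  yes  nologging
```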
Also make sure your txlog is on another spindle -- it might even be worth taking one out of the stripe to do this.
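A hedged sketch of what "txlog on another spindle" looks like in practice -- PGDATA and the dedicated-disk mount point are made-up paths, and the postmaster must be stopped before the move and restarted after:

```shell
# Relocate the WAL directory (pg_xlog) onto a dedicated disk via a symlink.
# PGDATA and XLOGDISK are hypothetical paths; stop the postmaster first,
# e.g. pg_ctl stop -D "$PGDATA", and start it again afterwards.
PGDATA=${PGDATA:-/data1/pgdata}    # made-up cluster location
XLOGDISK=${XLOGDISK:-/txlog}       # made-up mount point on its own spindle
mv "$PGDATA/pg_xlog" "$XLOGDISK/pg_xlog"
ln -s "$XLOGDISK/pg_xlog" "$PGDATA/pg_xlog"
```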
I am running Solaris 9 on an E3500 also (though my disk setup is different).
Here's what I have things set to -- it's probably a pretty good starting point for you:
# - Memory -
shared_buffers = 65536          # min 16, at least max_connections*2, 8KB each
sort_mem = 12000                # min 64, size in KB
vacuum_mem = 64000              # min 1024, size in KB
# - Free Space Map -
max_fsm_pages = 100000          # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 10000      # min 100, ~50 bytes each
# - Kernel Resource Usage -
#max_files_per_process = 1000   # min 25
#preload_libraries = ''
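As a sanity check on the memory section, note the units differ: shared_buffers is counted in 8KB pages, while sort_mem and vacuum_mem are in KB. Working the settings above out in bytes:

```shell
# shared_buffers = 65536 eight-KB pages -> 512MB of shared memory
echo $((65536 * 8 * 1024))      # 536870912 bytes
# sort_mem = 12000 KB -> ~12MB allowed per sort
echo $((12000 * 1024))          # 12288000 bytes
```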
and the tail end of /etc/system:
* shared memory config for postgres
set shmsys:shminfo_shmmax=0xFFFFFFFF
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmsl=1000
set semsys:seminfo_semmns=512
* end of shared memory setting
* Set the hme card to force 100 full duplex and not to autonegotiate
* since hme does not play well with cisco
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100fdx_cap=1
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_10fdx_cap=0
set hme:hme_adv_10hdx_cap=0
set hme:hme_adv_100T4_cap=0
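The shmmax value of 0xFFFFFFFF is simply the 32-bit ceiling -- the "let it take all it wants" setting mentioned earlier, which caps the maximum shared memory segment at just under 4GB:

```shell
# 0xFFFFFFFF is the largest 32-bit value: the ~4GB segment-size ceiling.
echo $((0xFFFFFFFF))    # 4294967295 bytes
```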
Paul Johnson wrote:
Hi Josh, there are 8 internal disks - all are [EMAIL PROTECTED],000 RPM, fibre connected.
The O/S is on 2 mirrored disks, the Postgres cluster is on the /data1 filesystem that is striped across the other 6 disks.
The shared_buffers value is a semi-educated guess based on having made 4GB shared memory available via /etc/system, and having read all we could find on various web sites.
Should I knock it down to 400MB as you suggest?
I'll check out that URL.
I would like to know what /etc/system and postgresql.conf values are recommended to deliver as much system resource as possible to Postgres. We use this Sun box solely for single-user Postgres data warehousing.
What's your disk system?
shared_buffers = 500000
This is highly unlikely to be optimal. That's nearly 4GB. On test Linux systems with up to 8GB of RAM, we've not seen useful values of shared_buffers anywhere above 400MB. How did you arrive at that figure?
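The arithmetic behind that warning: shared_buffers is counted in 8KB pages, so 500000 pages works out to roughly the whole 4GB that was made available:

```shell
# shared_buffers = 500000 pages of 8KB each:
echo $((500000 * 8 * 1024))     # 4096000000 bytes, i.e. ~3.8GiB
```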
sort_mem = 2097152
vacuum_mem = 1000000
These could be fine on a single-user system. sort_mem is per *sort*, not per query, so you'd need to watch out for complex queries spilling into swap; perhaps set it at 0.5GB or 1GB? Otherwise, start with the config guide at
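For scale: sort_mem is in KB and applies to each sort independently, so the value above lets every concurrent sort claim 2GB. A quick worked example of the worst case (the concurrency figure is hypothetical):

```shell
SORT_MEM_KB=2097152                           # the setting under discussion
SORTS=4                                       # hypothetical concurrent sorts in one complex query
echo $((SORT_MEM_KB / 1024 / 1024))           # 2  -> GB claimed per sort
echo $((SORT_MEM_KB * SORTS / 1024 / 1024))   # 8  -> GB if 4 sorts run at once
```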
-- Josh Berkus Aglio Database Solutions San Francisco
---------------------------(end of broadcast)--------------------------- TIP 6: Have you searched our list archives?