Re: [PERFORM] Solaris 9 tuning

2005-02-09 Thread Tom Arthurs
Hi, Paul
Josh helped my company with this issue -- PG doesn't use shared memory like 
Oracle; it depends more on the OS buffers.  Making shared mem too large a 
fraction is disastrous and seriously impacts performance (though I find myself 
having to justify this to Oracle-trained DBAs). :)
What I found was that the biggest performance improvement on the write side was 
to turn off file system journaling, and on the read side it was to feed 
postgres as many CPUs as you can.  What we found for a high-use db (for example, 
backending a web site) is that 8 400MHz CPUs outperform 2 or 4 fast CPUs.  The 
fast CPUs spend all of their time context switching as more connections are made.
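(You can watch the context switching happen with something like vmstat 5 while 
under load -- the cs column climbs as connections pile up.  Column names may 
vary a bit between Solaris releases, so treat this as a rough pointer.)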
Also make sure your txlog is on another spindle -- it might even be worth 
taking one out of the stripe to do this.
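(Rough sketch of the txlog move, with made-up paths -- adjust for your own data 
directory and the spare spindle's mount point, and stop the postmaster first:
    pg_ctl -D /data1/pgdata stop
    mv /data1/pgdata/pg_xlog /data2/pg_xlog      # /data2 = the separate spindle
    ln -s /data2/pg_xlog /data1/pgdata/pg_xlog
    pg_ctl -D /data1/pgdata start
The symlink trick is the usual way to do it on 7.x; just make sure the new 
location really is its own disk.)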
I am running solaris 9 on an e3500 also (though my disc setup is different)
Here's what I have things set to -- it's probably a pretty good starting point 
for you:
# - Memory -
shared_buffers = 65536  # min 16, at least max_connections*2, 8KB each
sort_mem = 12000# min 64, size in KB
vacuum_mem = 64000  # min 1024, size in KB
# - Free Space Map -
max_fsm_pages = 10  # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1  # min 100, ~50 bytes each
# - Kernel Resource Usage -
#max_files_per_process = 1000   # min 25
#preload_libraries = ''
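(For scale: 65536 buffers x 8KB is about 512MB of shared buffers, sort_mem 
12000KB is roughly 12MB per sort, and vacuum_mem 64000KB about 64MB -- i.e. the 
shared memory segment stays a small fraction of RAM, per the above.)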
---
and the tail end of /etc/system:
* shared memory config for postgres
set shmsys:shminfo_shmmax=0x
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmsl=1000
set semsys:seminfo_semmns=512
* end of shared memory setting
* Set the hme card to force 100 full duplex and not to autonegotiate
* since hme does not play well with cisco
*
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100fdx_cap=1
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_10fdx_cap=0
set hme:hme_adv_10hdx_cap=0
set hme:hme_adv_100T4_cap=0
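(After the reboot you can sanity-check that the IPC settings took, with 
something like:
    sysdef | grep -i shm     # should echo the shmmax/shmmni values back
    ipcs -ma                 # shows the segment postgres grabbed once it's up
That works on our Solaris 9 boxes; exact flags and output may differ on yours.)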
Paul Johnson wrote:
Hi Josh, there are 8 internal disks - all are [EMAIL PROTECTED],000 RPM, fibre
connected.
The O/S is on 2 mirrored disks, the Postgres cluster is on the /data1
filesystem that is striped across the other 6 disks.
The shared_buffers value is a semi-educated guess based on having made 4GB
shared memory available via /etc/system, and having read all we could find
on various web sites.
Should I knock it down to 400MB as you suggest?
I'll check out that URL.
Cheers,
Paul.

Paul,
I would like to know what /etc/system and postgresql_conf values are
recommended to deliver as much system resource as possible to Postgres.
We
use this Sun box solely for single user Postgres data warehousing
workloads.
What's your disk system?
shared_buffers = 50
This is highly unlikely to be optimal.   That's 3GB.   On test Linux
systems
up to 8GB, we've not seen useful values of shared buffers anywhere above
400MB.   How did you arrive at that figure?
sort_mem = 2097152
vacuum_mem = 100
These could be fine on a single-user system.   sort_mem is per *sort*
though,
not per query, so you'd need to watch out for complex queries spilling
into
swap; perhaps set it at 0.5GB or 1GB?
Otherwise, start with the config guide at
www.powerpostgresql.com/PerfList
--
Josh Berkus
Aglio Database Solutions
San Francisco





Re: [PERFORM] Solaris 9 tuning

2005-02-09 Thread Tom Arthurs
Yes, I agree it's unnecessary -- but you'll never have to worry about the 
postmaster not starting due to lack of allocatable memory.  When I was testing 
setups, I got sick of rebooting every time I had to make a change to 
/etc/system, so I threw up my hands and said, "let it take all it wants".  :)
"single user read only" -- the key is how many connections -- what's your 
application?  Is this being driven by a front end application?
In my case, we run a website with an Apache front end, a Tomcat server as 
middleware, and 4 applications.  We may, at times, have only 1 user on, but the 
Java code could be doing a half dozen queries in different threads for that one 
user.
Run /usr/ucb/ps -auxww | grep <your postgres user> (we use postgres, so "grep 
post" works for us) while under load and see how many backends are running.  If 
it's more than 4 or 5, then you are using the CPUs.
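(A quick count is something like
    /usr/ucb/ps -auxww | grep postgres | grep -v grep | wc -l
-- substitute whatever user your backends actually run as.)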
On the topic of shared memory, watch out for the output of top or prstat -a -- 
these programs count the shared memory block towards each process and therefore 
lie about the amount of memory used.  Looking at vmstat, etc. shows that the 
percentage of utilization reported by top or prstat is way off, and if you care 
to examine the memory for each process, you'll see that the shared memory block 
address is, well, shared by each process (by definition, eh?) but it can be 
reported as if it were a different block for each process.
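(You can see this for yourself with something like
    pmap -x <backend pid> | grep -i shm
which shows the same shared segment mapped into every backend on Solaris -- 
exact output format varies by release, so treat it as a rough pointer.)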
Not sure the e3500 is the best box for a data warehouse application.
Paul Johnson wrote:
Hi Tom, I've made changes to postgresql.conf as recommended on Josh's site
and this seems to be working well so far.
Given your comments on shared memory, it would appear that the following
entry in /etc/system is unnecessary:
set shmsys:shminfo_shmmax=0x
Ironically, we both have this identical setting!
Given that most of our queries are single-user read-only, how do we take
advantage of the 6 CPUs? I'm guessing we can't!?!?!
Also, does this type of workload benefit from moving the txlog?
I'll check our settings against yours given the Solaris 9/E3500 setup that
we both run.
Many thanks,
Paul.

Hi, Paul
Josh helped my company with this issue -- PG doesn't use shared memory
like Oracle; it depends more on the OS buffers.  Making shared mem
too large a fraction is disastrous and seriously impacts performance.
(though I find myself having to justify this to Oracle-trained
DBAs) :)
What I found was that the biggest performance improvement on the write side was
to turn off file system journaling, and on the read side it was
to feed postgres as many CPUs as you can.  What we found for a high-use
db (for example backending a web site) is that 8 400MHz CPUs
outperform 2 or 4 fast CPUs.  The fast CPUs spend all of their time
context switching as more connections are made.
Also make sure your txlog is on another spindle -- it might even be worth
taking one out of the stripe to do this.
I am running solaris 9 on an e3500 also (though my disc setup is
different)
Here's what I have things set to -- it's probably a pretty good starting
point for you:
# - Memory -
shared_buffers = 65536  # min 16, at least max_connections*2, 8KB
each
sort_mem = 12000# min 64, size in KB
vacuum_mem = 64000  # min 1024, size in KB
# - Free Space Map -
max_fsm_pages = 10  # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1  # min 100, ~50 bytes each
# - Kernel Resource Usage -
#max_files_per_process = 1000   # min 25
#preload_libraries = ''
---
and the tail end of /etc/system:
* shared memory config for postgres
set shmsys:shminfo_shmmax=0x
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmsl=1000
set semsys:seminfo_semmns=512
* end of shared memory setting
* Set the hme card to force 100 full duplex and not to autonegotiate
* since hme does not play well with cisco
*
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100fdx_cap=1
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_10fdx_cap=0
set hme:hme_adv_10hdx_cap=0
set hme:hme_adv_100T4_cap=0
Paul Johnson wrote:

Hi Josh, there are 8 internal disks - all are [EMAIL PROTECTED],000 RPM, fibre
connected.
The O/S is on 2 mirrored disks, the Postgres cluster is on the /data1
filesystem that is striped across the other 6 disks.
The shared_buffers value is a semi-educated guess based on having made
4GB
shared memory available via /etc/system, and having read all we could
find
on various web sites.
Should I knock it down to 400MB as you suggest?
I'll check out that URL.
Cheers,
Paul.

Paul,

I would like to know what /etc/system and postgresql_conf values are
recommended to deliver as much system resource as possible to Postgres.
We

use this Sun box solely for single user Postgres data warehousing
workloads.

What's your disk system?

shared_buffers = 50
This is highly unlikely to be optimal.

Re: [PERFORM] Solaris 9 tuning

2005-02-09 Thread Paul Johnson
Hi Tom, I've made changes to postgresql.conf as recommended on Josh's site
and this seems to be working well so far.

Given your comments on shared memory, it would appear that the following
entry in /etc/system is unnecessary:

set shmsys:shminfo_shmmax=0x

Ironically, we both have this identical setting!

Given that most of our queries are single-user read-only, how do we take
advantage of the 6 CPUs? I'm guessing we can't!?!?!

Also, does this type of workload benefit from moving the txlog?

I'll check our settings against yours given the Solaris 9/E3500 setup that
we both run.

Many thanks,

Paul.

> Hi, Paul
>
> Josh helped my company with this issue -- PG doesn't use shared memory
> like Oracle; it depends more on the OS buffers.  Making shared mem
> too large a fraction is disastrous and seriously impacts performance.
> (though I find myself having to justify this to Oracle-trained
> DBAs) :)
>
> What I found was that the biggest performance improvement on the write side
> was to turn off file system journaling, and on the read side it was
> to feed postgres as many CPUs as you can.  What we found for a high-use
> db (for example backending a web site) is that 8 400MHz CPUs
> outperform 2 or 4 fast CPUs.  The fast CPUs spend all of their time
> context switching as more connections are made.
>
> Also make sure your txlog is on another spindle -- it might even be worth
> taking one out of the stripe to do this.
>
> I am running solaris 9 on an e3500 also (though my disc setup is
> different)
>
> Here's what I have things set to -- it's probably a pretty good starting
> point for you:
>
> # - Memory -
>
> shared_buffers = 65536  # min 16, at least max_connections*2, 8KB
> each
> sort_mem = 12000# min 64, size in KB
> vacuum_mem = 64000  # min 1024, size in KB
>
> # - Free Space Map -
>
> max_fsm_pages = 10  # min max_fsm_relations*16, 6 bytes each
> #max_fsm_relations = 1  # min 100, ~50 bytes each
>
> # - Kernel Resource Usage -
>
> #max_files_per_process = 1000   # min 25
> #preload_libraries = ''
>
> ---
>
> and the tail end of /etc/system:
>
> * shared memory config for postgres
> set shmsys:shminfo_shmmax=0x
> set shmsys:shminfo_shmmin=1
> set shmsys:shminfo_shmmni=256
> set shmsys:shminfo_shmseg=256
> set semsys:seminfo_semmap=256
> set semsys:seminfo_semmni=512
> set semsys:seminfo_semmsl=1000
> set semsys:seminfo_semmns=512
> * end of shared memory setting
> * Set the hme card to force 100 full duplex and not to autonegotiate
> * since hme does not play well with cisco
> *
> set hme:hme_adv_autoneg_cap=0
> set hme:hme_adv_100fdx_cap=1
> set hme:hme_adv_100hdx_cap=0
> set hme:hme_adv_10fdx_cap=0
> set hme:hme_adv_10hdx_cap=0
> set hme:hme_adv_100T4_cap=0
>
>
> Paul Johnson wrote:
>
>> Hi Josh, there are 8 internal disks - all are [EMAIL PROTECTED],000 RPM, 
>> fibre
>> connected.
>>
>> The O/S is on 2 mirrored disks, the Postgres cluster is on the /data1
>> filesystem that is striped across the other 6 disks.
>>
>> The shared_buffers value is a semi-educated guess based on having made
>> 4GB
>> shared memory available via /etc/system, and having read all we could
>> find
>> on various web sites.
>>
>> Should I knock it down to 400MB as you suggest?
>>
>> I'll check out that URL.
>>
>> Cheers,
>>
>> Paul.
>>
>>
>>>Paul,
>>>
>>>>I would like to know what /etc/system and postgresql_conf values are
>>>>recommended to deliver as much system resource as possible to Postgres. We
>>>>use this Sun box solely for single user Postgres data warehousing
>>>>workloads.
>>>
>>>What's your disk system?
>>>
>>>>shared_buffers = 50
>>>
>>>This is highly unlikely to be optimal.   That's 3GB.   On test Linux systems
>>>up to 8GB, we've not seen useful values of shared buffers anywhere above
>>>400MB.   How did you arrive at that figure?
>>>
>>>>sort_mem = 2097152
>>>>vacuum_mem = 100
>>>
>>>These could be fine on a single-user system.   sort_mem is per *sort* though,
>>>not per query, so you'd need to watch out for complex queries spilling into
>>>swap; perhaps set it at 0.5GB or 1GB?
>>>Otherwise, start with the config guide at www.powerpostgresql.com/PerfList
>>>--
>>>Josh Berkus
>>>Aglio Database Solutions
>>>San Francisco




Re: [PERFORM] Solaris 9 tuning

2005-02-09 Thread Tom Arthurs
... Trying again with the right email address -- the list server rejected the 
previous one. :)
Hi, Paul
Josh helped my company with this issue -- PG doesn't use shared memory like 
Oracle; it depends more on the OS buffers.  Making shared mem too large a 
fraction is disastrous and seriously impacts performance (though I find myself 
having to justify this to Oracle-trained DBAs). :)
What I found was that the biggest performance improvement on the write side was 
to turn off file system journaling, and on the read side it was to feed 
postgres as many CPUs as you can.  What we found for a high-use db (for example, 
backending a web site) is that 8 400MHz CPUs outperform 2 or 4 fast CPUs.  The 
fast CPUs spend all of their time context switching as more connections are made.
Also make sure your txlog is on another spindle -- it might even be worth 
taking one out of the stripe to do this.
I am running solaris 9 on an e3500 also (though my disc setup is different)
Here's what I have things set to -- it's probably a pretty good starting point 
for you:
# - Memory -
shared_buffers = 65536  # min 16, at least max_connections*2, 8KB each
sort_mem = 12000# min 64, size in KB
vacuum_mem = 64000  # min 1024, size in KB
# - Free Space Map -
max_fsm_pages = 10  # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1  # min 100, ~50 bytes each
# - Kernel Resource Usage -
#max_files_per_process = 1000   # min 25
#preload_libraries = ''
---
and the tail end of /etc/system:
* shared memory config for postgres
set shmsys:shminfo_shmmax=0x
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmsl=1000
set semsys:seminfo_semmns=512
* end of shared memory setting
* Set the hme card to force 100 full duplex and not to autonegotiate
* since hme does not play well with cisco
*
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100fdx_cap=1
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_10fdx_cap=0
set hme:hme_adv_10hdx_cap=0
set hme:hme_adv_100T4_cap=0
Paul Johnson wrote:
Hi Josh, there are 8 internal disks - all are [EMAIL PROTECTED],000 RPM, fibre
connected.
The O/S is on 2 mirrored disks, the Postgres cluster is on the /data1
filesystem that is striped across the other 6 disks.
The shared_buffers value is a semi-educated guess based on having made 4GB
shared memory available via /etc/system, and having read all we could find
on various web sites.
Should I knock it down to 400MB as you suggest?
I'll check out that URL.
Cheers,
Paul.

Paul,
I would like to know what /etc/system and postgresql_conf values are
recommended to deliver as much system resource as possible to Postgres.
We
use this Sun box solely for single user Postgres data warehousing
workloads.
What's your disk system?
shared_buffers = 50
This is highly unlikely to be optimal.   That's 3GB.   On test Linux
systems
up to 8GB, we've not seen useful values of shared buffers anywhere above
400MB.   How did you arrive at that figure?
sort_mem = 2097152
vacuum_mem = 100
These could be fine on a single-user system.   sort_mem is per *sort*
though,
not per query, so you'd need to watch out for complex queries spilling
into
swap; perhaps set it at 0.5GB or 1GB?
Otherwise, start with the config guide at
www.powerpostgresql.com/PerfList
--
Josh Berkus
Aglio Database Solutions
San Francisco





Re: [PERFORM] Solaris 9 tuning

2005-02-09 Thread Paul Johnson
Hi Josh, there are 8 internal disks - all are [EMAIL PROTECTED],000 RPM, fibre
connected.

The O/S is on 2 mirrored disks, the Postgres cluster is on the /data1
filesystem that is striped across the other 6 disks.

The shared_buffers value is a semi-educated guess based on having made 4GB
shared memory available via /etc/system, and having read all we could find
on various web sites.

Should I knock it down to 400MB as you suggest?

I'll check out that URL.

Cheers,

Paul.

> Paul,
>> I would like to know what /etc/system and postgresql_conf values are
recommended to deliver as much system resource as possible to Postgres.
We
>> use this Sun box solely for single user Postgres data warehousing
workloads.
> What's your disk system?
>> shared_buffers = 50
> This is highly unlikely to be optimal.   That's 3GB.   On test Linux
systems
> up to 8GB, we've not seen useful values of shared buffers anywhere above
400MB.   How did you arrive at that figure?
>> sort_mem = 2097152
>> vacuum_mem = 100
> These could be fine on a single-user system.   sort_mem is per *sort*
though,
> not per query, so you'd need to watch out for complex queries spilling
into
swap; perhaps set it at 0.5GB or 1GB?
> Otherwise, start with the config guide at
www.powerpostgresql.com/PerfList
> --
> Josh Berkus
> Aglio Database Solutions
> San Francisco







Re: [PERFORM] Solaris 9 tuning

2005-02-07 Thread Josh Berkus
Paul,

> I would like to know what /etc/system and postgresql_conf values are
> recommended to deliver as much system resource as possible to Postgres. We
> use this Sun box solely for single user Postgres data warehousing
> workloads.

What's your disk system?

> shared_buffers = 50

This is highly unlikely to be optimal.   That's 3GB.   On test Linux systems 
up to 8GB, we've not seen useful values of shared buffers anywhere above 
400MB.   How did you arrive at that figure?

> sort_mem = 2097152
> vacuum_mem = 100

These could be fine on a single-user system.   sort_mem is per *sort* though, 
not per query, so you'd need to watch out for complex queries spilling into 
swap; perhaps set it at 0.5GB or 1GB?
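(sort_mem is in KB, so 1GB would look something like
    sort_mem = 1048576    # illustrative -- about 1GB per sort operation
in postgresql.conf; pick the actual number to leave headroom for the OS cache.)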

Otherwise, start with the config guide at www.powerpostgresql.com/PerfList

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



[PERFORM] Solaris 9 tuning

2005-02-07 Thread Paul Johnson
Hi all, we have an Sun E3500 running Solaris 9. It's got 6x336MHz CPU and
10GB RAM.

I would like to know what /etc/system and postgresql_conf values are
recommended to deliver as much system resource as possible to Postgres. We
use this Sun box solely for single user Postgres data warehousing
workloads.

Changes made to /etc/system values are:

set shmsys:shminfo_shmmax=0x
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set shmsys:shminfo_shmmin=1
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32

Changes made to postgresql.conf are:

shared_buffers = 50
sort_mem = 2097152
vacuum_mem = 100

Thanks.



[PERFORM] Solaris 9 Tuning Tips requested

2004-11-10 Thread Rod Taylor
I'm looking for suggestions on tuning Solaris 9 for a SunFire 890 (Ultra
IV chips) connected to an Hitachi 9500V running PostgreSQL 7.4.

Database is approx 160GB in size with a churn of around 4GB per day (2
GB updated,  2GB inserted, very little removed). It's a mixture of OLTP
and reporting.

5% is reports which do trickle writes 95% is short (30 second or less)
transactions with about 10 selects, 10 writes (inserts, updates, deletes
all mixed in) affecting 150 tuples.

Thanks for any tips -- particularly Solaris kernel tuning or oddities in
Disk IO or configuration settings as they related to Solaris (as they
differ from an Intel).



