Re: [PERFORM] Very Bad Performance.

2005-01-04 Thread Christopher Browne
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Pallav Kalva) 
wrote:
>> Then you have to look at individual slow queries to determine why
>> they are slow. Fortunately, you are running 7.4, so you can set
>> log_min_duration_statement to some number like 1000 (milliseconds)
>> and then try to analyze why those queries are slow.
>
> I already had that set on my database, and when I look at the log,
> most of the slow queries involve one particular table. When I look
> at the stats on that table, they are really wrong, and I'm not sure
> how to fix them. I run vacuumdb and analyze daily.

Well, it's at least good news to be able to focus attention on one
table rather than across the whole database.

If the problem is that stats on one table are bad, then the next
question is "Why is that?"

A sensible answer might be that the table is fairly large but has some
fields (relevant to indexing) with a small number of distinct values,
some of which are really common while others aren't.

For instance, you might have a customer/supplier ID where there are
maybe a few hundred unique values, but where the table is dominated by
a handful of them.
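
One way to check for that kind of skew, sketched here with made-up
table and column names, is to look at what ANALYZE has stored in the
pg_stats view:

  -- hypothetical names; substitute your own table and column
  select attname, n_distinct, most_common_vals, most_common_freqs
    from pg_stats
   where tablename = 'my_table'
     and attname = 'something_id';

If one or two entries in most_common_freqs account for most of the
table, you've found your skew.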

The default in PostgreSQL is to collect a histogram of statistics
based on having 10 "bins," filled from a sample of 300 rows per bin
(3,000 rows in all).  If you have a pretty skewed distribution on some
of the fields, that won't be good enough.

I would suggest looking for columns where the distribution is likely
to be "skewed" (customer/supplier IDs are really good candidates for
this), and bumping their statistics targets up to collect more stats.

Thus, something like:

  alter table my_table alter column something_id set statistics 100;

Then ANALYZE the table, which will collect 100 bins' worth of stats
for the 'offending' column based on roughly 30,000 sampled records
(300 per bin), and see if that helps.
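
A quick way to tell whether the better stats made a difference, again
with hypothetical names and a made-up lookup value, is to compare the
planner's row estimates against reality:

  -- estimated rows vs. actual rows should now be much closer
  explain analyze
  select * from my_table where something_id = 42;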

>> Also hyperthreading may not be helping you...
>
> Does it do any harm to the system if it is hyperthreaded?

Yes.  If you have multiple "hyperthreads" running on one CPU, that'll
wind up causing extra memory contention of one sort or another.
-- 
let name="cbbrowne" and tld="linuxfinances.info" in name ^ "@" ^ tld;;
http://www.ntlug.org/~cbbrowne/sgml.html
"People who don't use computers are more sociable, reasonable, and ...
less twisted" -- Arthur Norman



Re: [PERFORM] Very Bad Performance.

2005-01-04 Thread Pallav Kalva
Dave Cramer wrote:
Well, it's not quite that simple.
The rule of thumb is to allocate 6-10% of the memory available before
postgres loads to shared_buffers.
Then effective_cache_size is set to the SUM of shared_buffers + kernel
buffers.

Then you have to look at individual slow queries to determine why they
are slow. Fortunately, you are running 7.4, so you can set
log_min_duration_statement to some number like 1000 (milliseconds) and
then try to analyze why those queries are slow.
   I already had that set on my database, and when I look at the log,
most of the slow queries involve one particular table. When I look at
the stats on that table, they are really wrong, and I'm not sure how
to fix them. I run vacuumdb and analyze daily.
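
A quick way to gauge how far off the stats are, sketched with a
hypothetical table name, is to compare the planner's bookkeeping in
pg_class against an actual count:

  -- reltuples/relpages are the planner's estimates; compare to reality
  select relname, reltuples, relpages
    from pg_class
   where relname = 'my_table';
  select count(*) from my_table;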


Also hyperthreading may not be helping you...
   Does it do any harm to the system if it is hyperthreaded?

Dave
Pallav Kalva wrote:
Hi,
I am experiencing very bad performance on my production database
lately; all my queries are slowing down. Our application is a
web-based system with lots of selects and updates. I am running
"vacuumdb" daily on all the databases. Are the postgres configuration
parameters below set properly? Can anyone take a look? Let me know if
you need any more information.

Postgres Version: 7.4
Operating System: Red Hat Linux 9
CPUs: 2, hyperthreaded
RAM: 4 GB
Postgres Settings:
  max_fsm_pages        | 2
  max_fsm_relations    | 1000
  shared_buffers       | 65536
  sort_mem             | 16384
  vacuum_mem           | 32768
  wal_buffers          | 64
  effective_cache_size | 393216
Thanks!
Pallav





Re: [PERFORM] Very Bad Performance.

2005-01-03 Thread Dave Cramer
Well, it's not quite that simple.
The rule of thumb is to allocate 6-10% of the memory available before
postgres loads to shared_buffers.
Then effective_cache_size is set to the SUM of shared_buffers + kernel
buffers.
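
For the 4 GB machine described below, that works out roughly as
follows (assuming 8 KB pages; the kernel-cache figure is hypothetical,
so read the real one off `free`):

  # 6-10% of 4 GB is about 245-410 MB, i.e. roughly 31000-52000 pages
  shared_buffers = 40000        # ~312 MB
  # if the OS reports ~2.5 GB cached (327680 8 KB pages), then:
  effective_cache_size = 367680 # shared_buffers + kernel buffer pages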

Then you have to look at individual slow queries to determine why they
are slow. Fortunately, you are running 7.4, so you can set
log_min_duration_statement to some number like 1000 (milliseconds) and
then try to analyze why those queries are slow.
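
In postgresql.conf that would look something like:

  # log any statement that takes longer than one second to run
  log_min_duration_statement = 1000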

Also hyperthreading may not be helping you...
Dave
Pallav Kalva wrote:
Hi,
I am experiencing very bad performance on my production database
lately; all my queries are slowing down. Our application is a
web-based system with lots of selects and updates. I am running
"vacuumdb" daily on all the databases. Are the postgres configuration
parameters below set properly? Can anyone take a look? Let me know if
you need any more information.

Postgres Version: 7.4
Operating System: Red Hat Linux 9
CPUs: 2, hyperthreaded
RAM: 4 GB
Postgres Settings:
  max_fsm_pages        | 2
  max_fsm_relations    | 1000
  shared_buffers       | 65536
  sort_mem             | 16384
  vacuum_mem           | 32768
  wal_buffers          | 64
  effective_cache_size | 393216
Thanks!
Pallav

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561