Re: [PERFORM] Low Performance for big hospital server ..

2005-01-04 Thread amrit
Today is the first official day of this week and the system runs better in
several respects, but there are still some points that need to be corrected.
Some queries and some tables are very slow. I think the queries inside the
program need to be rewritten.
Now I have set sort_mem a little higher:
sort_mem = 16384 -- increasing sort_mem has had no effect on the slow points,
even though there are few connections.
shared_buffers = 27853
effective cache = 12

I will add more RAM, but someone said RH 9.0 has poor support for RAM
above 4 GB?
Should I disable hyperthreading? Would it make any difference whether
hyperthreading is on or off?
Thanks for any comments
Amrit
Thailand



Re: [PERFORM] Low Performance for big hospital server ..

2005-01-04 Thread Gavin Sherry
On Tue, 4 Jan 2005 [EMAIL PROTECTED] wrote:

 Today is the first official day of this week and the system runs better in
 several respects, but there are still some points that need to be corrected.
 Some queries and some tables are very slow. I think the queries inside the
 program need to be rewritten.
 Now I have set sort_mem a little higher:
 sort_mem = 16384 -- increasing sort_mem has had no effect on the slow points,
 even though there are few connections.
 shared_buffers = 27853
 effective cache = 12

Even though others have said otherwise, I've had good results from setting
sort_mem higher -- even if that is per query.


 I will add more RAM, but someone said RH 9.0 has poor support for RAM
 above 4 GB?

I think they were referring to 32-bit architectures, not distributions as
such.

 Should I disable hyperthreading? Would it make any difference whether
 hyperthreading is on or off?
 Thanks for any comments

In my experience, the largest performance increases come from intensive
analysis and optimisation of queries. Look at the output of EXPLAIN
ANALYZE for the queries your application is generating and see if they can
be tuned in any way. More often than not, they can.
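For example (the table and column names here are made up, not from your
schema), the kind of output worth studying looks like:

  EXPLAIN ANALYZE
  SELECT p.hn, p.name, v.visit_date
  FROM patient p JOIN visit v ON v.patient_id = p.patient_id
  WHERE v.visit_date >= '2005-01-01';

If the plan shows a sequential scan over a big table where you expected an
index scan, or the estimated row counts are wildly different from the actual
ones, that query (or its indexes and statistics) is a good tuning candidate.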

Feel free to ask for assistance on IRC at irc.freenode.net, #postgresql.
People there help optimise queries all day ;-).

 Amrit
 Thailand

Gavin



Re: [PERFORM] Very Bad Performance.

2005-01-04 Thread Pallav Kalva
Dave Cramer wrote:
Well, it's not quite that simple.
The rule of thumb is that 6-10% of available memory (before postgres loads)
is allocated to shared_buffers.
Then effective_cache_size is set to the SUM of shared_buffers + kernel buffers.
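As a rough worked example for the 4 GB machine in this thread: 6-10% of 4 GB
is roughly 250-400 MB, i.e. on the order of 30000-50000 shared_buffers (8 kB
pages); and if free shows around 3 GB sitting in the kernel buffer/page cache,
effective_cache_size would be about 3 GB / 8 kB = 393216 pages, which happens
to match the value quoted later in this message. These are ballpark figures,
not tuned recommendations.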

Then you have to look at the individual slow queries to determine why they
are slow. Fortunately you are running 7.4, so you can set
log_min_duration_statement to some number like 1000 ms and then
try to analyze why those queries are slow.
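For reference, the parameter takes milliseconds, so a postgresql.conf entry
along these lines (the threshold is only a suggestion) does it:

  log_min_duration_statement = 1000   # log any statement that runs over 1s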
   I had that already set on my database, and when I look at the log
for all the problem queries, most of the slow ones involve one particular
table. When I look at the stats on that table they are really wrong, and I am
not sure how to fix them. I run vacuumdb and analyze daily.


Also hyperthreading may not be helping you.. 
   Does it do any harm to the system if it is hyperthreaded?

Dave
Pallav Kalva wrote:
Hi ,
I am experiencing very bad performance on my production
database lately; all my queries are slowing down. Our application is
a web-based system with lots of selects and updates. I am running
vacuumdb daily on all the databases. Are the postgres configuration
parameters below set properly? Can anyone take a look?
Let me know if you need any more information.

Postgres Version: 7.4
Operating System: Linux Red Hat 9
Cpus: 2 Hyperthreaded
RAM: 4 gb
Postgres Settings:
max_fsm_pages | 2
max_fsm_relations | 1000
shared_buffers   | 65536
sort_mem   | 16384
vacuum_mem| 32768
wal_buffers| 64
effective_cache_size  | 393216
Thanks!
Pallav


Re: [PERFORM] query rewrite using materialized views

2005-01-04 Thread Josh Berkus
Yann,

 are there any plans for rewriting queries to preexisting materialized
 views?  I mean, rewrite a query (within the optimizer) to use a
 materialized view instead of the originating table?

Automatically, and by default, no.   Using the RULES system?  Yes, you can 
already do this and the folks on the MattView project on pgFoundry are 
working to make it easier.
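For anyone who wants something today, a minimal hand-rolled sketch (the table
and column names are invented, and this is not the MattView implementation)
looks roughly like:

  -- build the materialized table once
  CREATE TABLE order_totals_mv AS
      SELECT customer_id, sum(amount) AS total_amount
      FROM orders
      GROUP BY customer_id;
  CREATE INDEX order_totals_mv_customer_idx ON order_totals_mv (customer_id);

  -- periodic refresh, e.g. from cron
  BEGIN;
  DELETE FROM order_totals_mv;
  INSERT INTO order_totals_mv
      SELECT customer_id, sum(amount) FROM orders GROUP BY customer_id;
  COMMIT;
  ANALYZE order_totals_mv;

Queries have to name order_totals_mv explicitly; getting the planner to
substitute it automatically is exactly the part that doesn't exist yet.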

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] query rewrite using materialized views

2005-01-04 Thread Wager, Ryan D [NTK]
All,
   I am currently working on a project for my company that entails
databasing upwards of 300 million specific parameters. In the current
DB design, these parameters are mapped against two lookup tables (2
million and 1.5 million rows respectively) and I am having extreme trouble
getting PG to scale to a working level. Here are my issues:
  1) The 250 million records are currently whipped and reinserted as a
daily snapshot, and the fastest way I have found to do this from a file,
COPY, is nowhere near fast enough. SQL*Loader from Oracle
does some things that I need, i.e. direct path access to the db files
(skipping the RDBMS), inherently ignoring indexing rules and saving a
ton of time (dropping the indexes, COPYing 250 million records, then
recreating the indexes just takes way too long).
  2) Finding a way to keep this many records in a fashion that can be
easily queried. I even tried breaking it up into almost 2800 separate
tables, basically views of the data pre-broken down; if this were a
working method it could be done this way, but when I tried it, VACUUM and
the COPYs all seemed to slow down extremely.
   If there is anyone who can give me some tweak parameters or design
help on this, it would be ridiculously appreciated. I have already
created this in Oracle and it works, but we don't want to have to pay
the monster if something as wonderful as Postgres can handle it.


Ryan Wager

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Josh Berkus
Sent: Tuesday, January 04, 2005 12:06 PM
To: pgsql-performance@postgresql.org
Cc: Yann Michel
Subject: Re: [PERFORM] query rewrite using materialized views

Yann,

 are there any plans for rewriting queries to preexisting materialized
 views?  I mean, rewrite a query (within the optimizer) to use a
 materialized view instead of the originating table?

Automatically, and by default, no.   Using the RULES system?  Yes, you can
already do this and the folks on the MattView project on pgFoundry are
working to make it easier.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] query rewrite using materialized views

2005-01-04 Thread Rod Taylor
   1) The 250 million records are currently whipped and reinserted as a
 daily snapshot, and the fastest way I have found to do this from a file,
 COPY, is nowhere near fast enough. SQL*Loader from Oracle
 does some things that I need, i.e. direct path access to the db files
 (skipping the RDBMS), inherently ignoring indexing rules and saving a
 ton of time (dropping the indexes, COPYing 250 million records, then
 recreating the indexes just takes way too long).

If you have the hardware for it, instead of doing 1 copy, do 1 copy
command per CPU (until your IO is maxed out anyway) and divide the work
amongst them. I can push through 100MB/sec using methods like this --
which makes loading 100GB of data much faster.

Ditto for indexes. Don't create a single index on one CPU and wait --
send off one index creation command per CPU.
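As a sketch of that approach (the file, table, and column names below are
invented), each of these would be issued from its own connection, one per
CPU:

  -- session 1
  COPY parameters FROM '/data/snapshot.part1' WITH DELIMITER '|';
  -- session 2
  COPY parameters FROM '/data/snapshot.part2' WITH DELIMITER '|';
  -- ...and so on, until the disks are saturated

  -- index builds likewise: one CREATE INDEX per session rather than serially
  CREATE INDEX parameters_lookup1_idx ON parameters (lookup1_id);
  CREATE INDEX parameters_lookup2_idx ON parameters (lookup2_id);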

   2) Finding a way to keep this many records in a fashion that can be
 easily queried. I even tried breaking it up into almost 2800 separate
 tables, basically views of the data pre-broken down; if this were a
 working method it could be done this way, but when I tried it, VACUUM and
 the COPYs all seemed to slow down extremely.

Can you send us EXPLAIN ANALYSE output for the slow selects and a little
insight into what you're doing? A basic table structure and the indexes
involved would be handy. You may change column and table names if you
like.

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Josh Berkus
 Sent: Tuesday, January 04, 2005 12:06 PM
 To: pgsql-performance@postgresql.org
 Cc: Yann Michel
 Subject: Re: [PERFORM] query rewrite using materialized views
 
 Yann,
 
  are there any plans for rewriting queries to preexisting materialized
  views?  I mean, rewrite a query (within the optimizer) to use a
  materialized view instead of the originating table?
 
 Automatically, and by default, no.   Using the RULES system?  Yes, you can
 already do this and the folks on the MattView project on pgFoundry are
 working to make it easier.
 
-- 




Re: [PERFORM] query rewrite using materialized views

2005-01-04 Thread Josh Berkus
Wager,

If there is anyone who can give me some tweak parameters or design
 help on this, it would be ridiculously appreciated.  I have already
 created this in Oracle and it works, but we don't want to have to pay
 the monster if something as wonderful as Postgres can handle it.

In addition to Rod's advice, please increase your checkpoint_segments and
checkpoint_timeout parameters and make sure that the pg_xlog is on a separate
disk resource from the database.
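Something along these lines in postgresql.conf is a reasonable starting point
for bulk loads (the exact values are only illustrative and depend on your
disks and how long a crash recovery you can tolerate):

  checkpoint_segments = 64    # the default of 3 is far too low for big loads
  checkpoint_timeout = 900    # seconds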

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] query rewrite using materialized views

2005-01-04 Thread Wager, Ryan D [NTK]
Rod,
  I do this; PG gets forked many times. It is tough to find the maximum
number of times I can do this, but I have a Proc::Queue Manager Perl
driver that handles all of the COPY calls. I have a quad-CPU machine.
Each COPY only hits one CPU for about 2.1%, but anything over about 5
kicks the load average up.

  I'll get some EXPLAIN ANALYZE output and table structures out there pronto.

-Original Message-
From: Rod Taylor [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 04, 2005 1:02 PM
To: Wager, Ryan D [NTK]
Cc: Postgresql Performance
Subject: Re: [PERFORM] query rewrite using materialized views

   1) The 250 million records are currently whipped and reinserted as a
 daily snapshot, and the fastest way I have found to do this from a file,
 COPY, is nowhere near fast enough. SQL*Loader from Oracle
 does some things that I need, i.e. direct path access to the db files
 (skipping the RDBMS), inherently ignoring indexing rules and saving a
 ton of time (dropping the indexes, COPYing 250 million records, then
 recreating the indexes just takes way too long).

If you have the hardware for it, instead of doing 1 copy, do 1 copy
command per CPU (until your IO is maxed out anyway) and divide the work
amongst them. I can push through 100MB/sec using methods like this --
which makes loading 100GB of data much faster.

Ditto for indexes. Don't create a single index on one CPU and wait --
send off one index creation command per CPU.

   2) Finding a way to keep this many records in a fashion that can be
 easily queried. I even tried breaking it up into almost 2800 separate
 tables, basically views of the data pre-broken down; if this were a
 working method it could be done this way, but when I tried it, VACUUM and
 the COPYs all seemed to slow down extremely.

Can you send us EXPLAIN ANALYSE output for the slow selects and a little
insight into what you're doing? A basic table structure and the indexes
involved would be handy. You may change column and table names if you
like.

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Josh
Berkus
 Sent: Tuesday, January 04, 2005 12:06 PM
 To: pgsql-performance@postgresql.org
 Cc: Yann Michel
 Subject: Re: [PERFORM] query rewrite using materialized views
 
 Yann,
 
  are there any plans for rewriting queries to preexisting
materialized
  views?  I mean, rewrite a query (within the optimizer) to use a
  materialized view instead of the originating table?
 
 Automatically, and by default, no.   Using the RULES system?  Yes, you can
 already do this and the folks on the MattView project on pgFoundry are
 working to make it easier.
 
-- 




Re: [PERFORM] Very Bad Performance.

2005-01-04 Thread Christopher Browne
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Pallav Kalva) 
wrote:
 Then you have to look at the individual slow queries to determine why
 they are slow. Fortunately you are running 7.4, so you can set
 log_min_duration_statement to some number like 1000 ms and then
 try to analyze why those queries are slow.

 I had that already set on my database, and when I look at the log
 for all the problem queries, most of the slow ones involve one particular
 table. When I look at the stats on that table they are really wrong, and
 I am not sure how to fix them. I run vacuumdb and analyze daily.

Well, it's at least good news to be able to focus attention on one
table, rather than being unfocused.

If the problem is that the stats on one table are bad, then the next
question is: why is that?

A sensible answer might be that the table is fairly large but has
some fields (relevant to indexing) with a small number of distinct
values, where some are really common and others aren't.

For instance, you might have a customer/supplier ID where there are
maybe a few hundred unique values, but where the table is dominated by
a handful of them.

The default in PostgreSQL is to collect a histogram of statistics
based on having 10 bins, filling them using 300 samples.  If you
have a pretty skewed distribution on some of the fields, that won't be
good enough.

I would suggest looking for columns where things are likely to be
skewed (customer/supplier IDs are really good candidates for this),
and bump them up to collect more stats.

Thus, something like:

  alter table my_table alter column something_id set statistics 100;

Then run ANALYZE my_table, which will collect 100 bins' worth of stats for
the 'offending' column, based on 3000 sampled records, and see if that
helps.
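To see whether the new statistics look sane (my_table and something_id are
just the placeholder names from above), you can compare what the planner has
against what you know about the data:

  SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
  WHERE tablename = 'my_table'
    AND attname = 'something_id';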

 Also hyperthreading may not be helping you..

 Does it do any harm to the system if it is hyperthreaded?

Yes.  If you have multiple hyperthreads running on one CPU, that'll
wind up causing extra memory contention of one sort or another.
-- 
let name=cbbrowne and tld=linuxfinances.info in name ^ @ ^ tld;;
http://www.ntlug.org/~cbbrowne/sgml.html
People who don't use computers are more sociable, reasonable, and ...
less twisted -- Arthur Norman



Re: [PERFORM] query rewrite using materialized views

2005-01-04 Thread Josh Berkus
Ryan,

 I do this; PG gets forked many times. It is tough to find the maximum
 number of times I can do this, but I have a Proc::Queue Manager Perl
 driver that handles all of the COPY calls. I have a quad-CPU machine.
 Each COPY only hits one CPU for about 2.1%, but anything over about 5
 kicks the load average up.

That's consistent with Xeon problems we've seen elsewhere.   Keep the # of 
processes at or below the # of processors.

Moving pg_xlog is accomplished through:
1) in 8.0, changes to postgresql.conf
   (in 8.0 you'd also want to explore using multiple arrays with tablespaces
   to make things even faster)
2) in other versions:
   a) mount a separate disk on PGDATA/pg_xlog, or
   b) symlink PGDATA/pg_xlog to another location
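For option (b), the usual sequence is roughly the following (the /disk2 path
is only illustrative, and the postmaster must be stopped first):

  mv $PGDATA/pg_xlog /disk2/pg_xlog
  ln -s /disk2/pg_xlog $PGDATA/pg_xlog
  # then restart the postmaster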

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] query rewrite using materialized views

2005-01-04 Thread Simon Riggs
On Tue, 2005-01-04 at 14:02 -0500, Rod Taylor wrote:
 1) The 250 million records are currently whipped and reinserted as a
  daily snapshot, and the fastest way I have found to do this from a file,
  COPY, is nowhere near fast enough. SQL*Loader from Oracle
  does some things that I need, i.e. direct path access to the db files
  (skipping the RDBMS), inherently ignoring indexing rules and saving a
  ton of time (dropping the indexes, COPYing 250 million records, then
  recreating the indexes just takes way too long).
 
 If you have the hardware for it, instead of doing 1 copy, do 1 copy
 command per CPU (until your IO is maxed out anyway) and divide the work
 amongst them. I can push through 100MB/sec using methods like this --
 which makes loading 100GB of data much faster.
 
 Ditto for indexes. Don't create a single index on one CPU and wait --
 send off one index creation command per CPU.

Not sure what you mean by "whipped". If you mean select and re-insert,
then perhaps using a pipe would produce better performance, since no
disk access for an intermediate data file would be involved.
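One concrete way to do that (the database and table names are placeholders)
is to stream COPY output straight into COPY input, with no file in between:

  psql -d source_db -c "COPY parameters TO STDOUT" | \
      psql -d target_db -c "COPY parameters FROM STDIN"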

In 8.0, COPY and CREATE INDEX are optimised to not use WAL at all if
archive_command is not set. 8.0 is great...

 2) Finding a way to keep this many records in a fashion that can be
  easily queried. I even tried breaking it up into almost 2800 separate
  tables, basically views of the data pre-broken down; if this were a
  working method it could be done this way, but when I tried it, VACUUM and
  the COPYs all seemed to slow down extremely.
 
 Can you send us EXPLAIN ANALYSE output for the slow selects and a little
 insight into what you're doing? A basic table structure and the indexes
 involved would be handy. You may change column and table names if you
 like.

There's a known issue using UNION ALL views in 8.0 that makes them
slightly more inefficient than using a single table. Perhaps that would
explain your results.

There shouldn't be any need to do the 2800 table approach in this
instance.

-- 
Best Regards, Simon Riggs




Re: [PERFORM] Low Performance for big hospital server ..

2005-01-04 Thread amrit
  I will add more RAM, but someone said RH 9.0 has poor support for RAM
  above 4 GB?

 I think they were referring to 32-bit architectures, not distributions as
 such.

Sorry, I had the reason wrong -- then should I still add more than 4 GB of
RAM on a 32-bit architecture?

  Should I disable hyperthreading? Would it make any difference whether
  hyperthreading is on or off?
  Thanks for any comments

 In my experience, the largest performance increases come from intensive
 analysis and optimisation of queries. Look at the output of EXPLAIN
 ANALYZE for the queries your application is generating and see if they can
 be tuned in any way. More often than not, they can.

So what you mean is that the result is the same whether hyperthreading is
on or off?
Will it do any harm if I leave it on?
The main point should be adjusting the queries, right?

 Feel free to ask for assistance on IRC at irc.freenode.net, #postgresql.
 People there help optimise queries all day ;-).

How can I contact those people -- which URL? ;-)
Thanks again.
Amrit
Thailand
