Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread amrit
> shared_buffers = 12000 will use 12000*8192 bytes (i.e. about 96Mb). It is
> shared, so no matter how many connections you have it will only use 96M.

Now I use the figure of 27853

> >
> > Will increasing the effective cache size to around 20 make a little bit
> > of improvement? Do you think so?
> >
Decreasing sort_mem too much [8196] made the performance much slower, so I use
sort_mem = 16384
and leave the effective cache at the same value. The result is quite a bit
better, but I should wait for tomorrow morning [official hours] to see the end
result.

> >
> I would leave it at the figure you proposed (128897), and monitor your
> performance.
> (you can always increase it later and see what the effect is).
Yes, I use this figure.

If the result is still poor, will putting in more RAM "6-8Gb" [and putting in
more money too] solve the problem?
Thanks ,
Amrit
Thailand




Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread William Yu
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].
1536 is probably too low. I've tested a bunch of different settings on my
8GB Opteron server and 10K seems to be the best setting.


also effective cache is the sum of kernel buffers + shared_buffers so it
should be bigger than shared buffers.
also make the effective cache to 2097152 [2 Gb].
I will give you the result, because tomorrow [4/12/05] will be the official day
of my hospital [which has more than 1700 OPD patients/day].
To figure out your effective cache size, run top and add free+cached.
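
For example - a rough sketch, with hypothetical numbers (a dedicated 4GB box
where top reports ~2700000 KB in free+cached; effective_cache_size is counted
in 8 KB pages):

    # free+cached from top: ~2700000 KB the OS can use for caching
    # 2700000 KB / 8 KB per page ~= 337500 pages
    effective_cache_size = 337500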

Also turning hyperthreading off may help, it is unlikely it is doing any
good unless you are running a relatively new (2.6.x) kernel.
Why? Could you give me the reason?
Pre 2.6, the kernel does not know the difference between logical and 
physical CPUs. Hence, in a dual processor system with hyperthreading, it 
actually sees 4 CPUs. And when assigning processes to CPUs, it may 
assign to 2 logical CPUs in the same physical CPU.



I presume you are vacuuming on a regular basis?
Yes , vacuumdb daily.
Do you vacuum table by table or the entire DB? I find over time, the 
system tables can get very bloated and cause a lot of slowdowns just due 
to schema queries/updates. You might want to try a VACUUM FULL ANALYZE 
just on the system tables.
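
Something along these lines, for example (a sketch; the catalog list here is
illustrative, not exhaustive, and it should be run by a superuser during a
quiet period):

    VACUUM FULL ANALYZE pg_class;
    VACUUM FULL ANALYZE pg_attribute;
    VACUUM FULL ANALYZE pg_index;
    VACUUM FULL ANALYZE pg_depend;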



Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread Mike Mascari
William Yu wrote:
[EMAIL PROTECTED] wrote:
Yes , vacuumdb daily.
Do you vacuum table by table or the entire DB? I find over time, the 
system tables can get very bloated and cause a lot of slowdowns just due 
to schema queries/updates. You might want to try a VACUUM FULL ANALYZE 
just on the system tables.
A REINDEX of the system tables in stand-alone mode might also be in 
order, even for a 7.4.x database:

http://www.postgresql.org/docs/7.4/interactive/sql-reindex.html
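
Roughly like this, following the standalone-backend procedure in those docs
(stop the postmaster first; the data directory path and database name below
are placeholders):

    -- shell: postgres -D /usr/local/pgsql/data -O -P mydb
    -- then, at the standalone backend prompt:
    REINDEX TABLE pg_class;
    REINDEX TABLE pg_attribute;
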
If a dump-reload-analyze cycle yields significant performance 
improvements then we know it's due to dead-tuple bloat - either heap 
tuples or index tuples.

Mike Mascari


Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread Gregory S. Williamson
Amrit --

>-Original Message-
>From:  [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
>Sent:  Mon 1/3/2005 12:18 AM
>To:Mark Kirkwood
>Cc:PGsql-performance
>Subject:   Re: [PERFORM] Low Performance for big hospital server ..
>> shared_buffers = 12000 will use 12000*8192 bytes (i.e. about 96Mb). It is
>> shared, so no matter how many connections you have it will only use 96M.
>
>Now I use the figure of 27853
>
>> >
>> > Will increasing the effective cache size to around 20 make a little bit
>> > of improvement? Do you think so?
>> >
>Decreasing sort_mem too much [8196] made the performance much slower, so I use
>sort_mem = 16384
>and leave the effective cache at the same value. The result is quite a bit
>better, but I should wait for tomorrow morning [official hours] to see the end
>result.
>
>> >
>> I would leave it at the figure you proposed (128897), and monitor your
>> performance.
>> (you can always increase it later and see what the effect is).
>Yes, I use this figure.
>
>If the result is still poor, will putting in more RAM "6-8Gb" [and putting in
>more money too] solve the problem?

Adding RAM will almost always help, at least for a while. Our small runtime 
servers have 2 gigs of RAM; the larger ones have 4 gigs; I do anticipate the 
need to add RAM as we add users.

If you have evaluated the queries that are running and verified that they are 
using indexes properly, etc., and tuned the other parameters for your system 
and its disks, adding memory helps because it increases the chance that data is 
already in memory, thus saving the time to fetch it from disk. Studying 
performance under load with top, vmstat, etc. and detailed analysis of queries 
can often trade some human time for the money that extra hardware would cost. 
It is sometimes easier to do than getting downtime for a critical server, as well.
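
A quick way to check whether a given query uses an index is EXPLAIN ANALYZE;
for instance (the table and column names here are invented):

    EXPLAIN ANALYZE
    SELECT * FROM opd_visit WHERE patient_id = 12345;
    -- an "Index Scan using ..." node is what you want to see; a
    -- "Seq Scan on opd_visit" for a selective lookup suggests a missing
    -- index or a type mismatch on the literal.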

If you don't have a reliable way of reproducing real loads on a test system, it 
is best to change things cautiously, and observe the system under load; if you 
change too many things (ideally only 1 at a time, but often that is not 
possible) you may actually defeat a good change with a bad one; at the least, 
you may not know which change was the most important one if you make several at 
once.

Best of luck,

Greg Williamson
DBA
GlobeXplorer LLC
>Thanks ,
>Amrit
>Thailand








Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread Pierre-Frédéric Caillaud

Decreasing sort_mem too much [8196] made the performance much slower, so I use
sort_mem = 16384
and leave the effective cache at the same value. The result is quite a bit
better, but I should wait for tomorrow morning [official hours] to see the end
result.
	You could also profile your queries to see where those big sorts come  
from, and maybe add some indexes to try to replace sorts with  
index-scans-in-order, which use no temporary memory. Can you give an  
example of your queries which make use of big sorts like this?
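
As a sketch of the idea (all names here are hypothetical):

    -- today: the plan shows a Sort node feeding the Limit
    EXPLAIN ANALYZE
    SELECT * FROM opd_visit ORDER BY visit_date DESC LIMIT 50;

    -- an index in the same order lets the planner read rows pre-sorted
    -- (forward or backward) and skip the in-memory sort entirely:
    CREATE INDEX opd_visit_date_idx ON opd_visit (visit_date);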



Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread Dave Cramer

William Yu wrote:
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].

1536 is probably too low. I've tested a bunch of different settings on 
my 8GB Opteron server and 10K seems to be the best setting.
Be careful here: he is not using Opterons, which can access physical 
memory above 4G efficiently. Also, he only has 4G; the 6-10% rule still 
applies.


also effective cache is the sum of kernel buffers + shared_buffers so it
should be bigger than shared buffers.

also make the effective cache to 2097152 [2 Gb].
I will give you the result, because tomorrow [4/12/05] will be the official day
of my hospital [which has more than 1700 OPD patients/day].

To figure out your effective cache size, run top and add free+cached.
My understanding is that effective cache is the sum of shared buffers 
plus kernel buffers; I'm not sure what free + cached gives you.


Also turning hyperthreading off may help, it is unlikely it is doing 
any
good unless you are running a relatively new (2.6.x) kernel.

Why , could you give me the reason?

Pre 2.6, the kernel does not know the difference between logical and 
physical CPUs. Hence, in a dual processor system with hyperthreading, 
it actually sees 4 CPUs. And when assigning processes to CPUs, it may 
assign to 2 logical CPUs in the same physical CPU.
Right, the pre-2.6 kernels don't really know how to handle hyperthreaded 
CPUs.



I presume you are vacuuming on a regular basis?

Yes , vacuumdb daily.

Do you vacuum table by table or the entire DB? I find over time, the 
system tables can get very bloated and cause a lot of slowdowns just 
due to schema queries/updates. You might want to try a VACUUM FULL 
ANALYZE just on the system tables.
You may want to try this, but regular vacuum analyze should work fine as 
long as you have the free space map settings correct. Also be aware that 
pre-7.4.x the free space map is not populated on startup, so you should 
do a vacuum analyze right after startup.


--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561


Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread Dave Cramer




Amrit,

I realize you may be stuck with 7.3.x but you should be aware that 7.4
is considerably faster, and 8.0 appears to be even faster yet.

I would seriously consider upgrading, if at all possible.

A few more hints. 

Random page cost is quite conservative if you have reasonably fast
disks.
Speaking of fast disks, not all disks are created equal; some RAID
drives are quite slow (Bonnie++ is your friend here).

Sort memory can be set on a per-query basis; I'd consider lowering it
quite low and only increasing it when necessary.
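
For example (a minimal sketch; the table is made up, and sort_mem is in KB):

    SET sort_mem = 32768;        -- raise it for just this session/query
    SELECT ward, count(*) FROM opd_visit GROUP BY ward ORDER BY ward;
    SET sort_mem TO DEFAULT;     -- drop back to the postgresql.conf value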

Which brings us to how to find out when it is necessary.
Turn logging on and turn on log_pid and log_duration; then you will
need to sort through the logs to find the slow queries.

There are some special cases where postgresql can be quite slow, and
minor adjustments to the query can improve it significantly.

For instance, pre-8.0, select * from foo where id = 1; where id is an
int8 will never use an index even if one exists, because the bare
literal is typed int4. Quoting or casting the literal works around it.
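
A small sketch of that case (foo and its int8 id column are illustrative):

    SELECT * FROM foo WHERE id = 1;        -- 7.x: seq scan; literal is int4
    SELECT * FROM foo WHERE id = '1';      -- index scan; literal resolves to int8
    SELECT * FROM foo WHERE id = 1::int8;  -- index scan; explicit cast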


Regards,

Dave


[EMAIL PROTECTED] wrote:

> > The common wisdom of shared buffers is around 6-10% of available memory.
> > Your proposal below is about 50% of memory.
> >
> > I'm not sure what the original numbers actually meant, they are quite large.
>
> I will try to reduce shared buffer to 1536 [1.87 Mb].
>
> > also effective cache is the sum of kernel buffers + shared_buffers so it
> > should be bigger than shared buffers.
>
> also make the effective cache to 2097152 [2 Gb].
> I will give you the result, because tomorrow [4/12/05] will be the official
> day of my hospital [which has more than 1700 OPD patients/day].
>
> > Also turning hyperthreading off may help, it is unlikely it is doing any
> > good unless you are running a relatively new (2.6.x) kernel.
>
> Why? Could you give me the reason?
>
> > I presume you are vacuuming on a regular basis?
>
> Yes, vacuumdb daily.


-- 
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561





Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread Merlin Moncure
amrit wrote:
> I have tried to adjust my server for a couple of weeks with some success,
> but it is still slow when the server is under stress in the morning from
> many connections. I use postgresql 7.3.2-1 with RH 9 on a machine with
> 2 Xeon 3.0 Ghz and 4 Gb of RAM.
> Since 1 1/2 yr. ago, when I started to use the database server, after
> optimizing the postgresql.conf everything went fine until a couple of
> weeks ago, when my database grew to 3.5 Gb and there were more than 160
> concurrent connections.
> The server seems to be slower in the rush hour period than before. There
> is some swap activity too. My top and meminfo are shown here below:

well, you've hit the 'wall'...your system seems to be more or less at
the limit of what 32-bit technology can deliver.  If upgrading to Opteron
and 64 bit is out of the question, here are a couple of new tactics you
can try.  Optimizing postgresql.conf can help, but only so much.

Optimize queries:
One big, often overlooked performance gainer is to use functional indexes to
access data from a table.  This can save space by making the index
smaller and more efficient.  This wins on cache and speed at the price
of some flexibility.
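
For instance (a hypothetical case; note that 7.3 functional indexes only
accept plain column arguments, so the expression sticks to that):

    -- searches are always case-insensitive on the name column:
    CREATE INDEX patient_name_lower_idx ON patient (lower(name));
    -- the query must use the same expression for the index to be chosen:
    SELECT * FROM patient WHERE lower(name) = 'somchai';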

Optimize datums:  replace numeric(4) with int2, numeric(6) with int4,
etc.  This will save a little space on the tuple which will ease up on
the cache a bit.  Use constraints where necessary to preserve data
integrity.
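
A hedged illustration (the table and bounds are invented):

    CREATE TABLE visit (
        visit_id  int4 PRIMARY KEY,
        ward_code int2 NOT NULL
                  CHECK (ward_code BETWEEN 0 AND 9999),  -- was numeric(4)
        amount    int4 CHECK (amount >= 0)               -- was numeric(6)
    );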

Materialized views:  These can provide an enormous win if you can
incorporate them into your application.  With normal views, multiple
backends can share a query plan.  With mat-views, backends can share
both the plan and its execution.
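
Since 7.x has no built-in materialized views, one common hand-rolled approach
is a summary table refreshed on a schedule - a sketch, with invented names:

    CREATE TABLE daily_census AS
        SELECT ward, count(*) AS patients
        FROM opd_visit GROUP BY ward;

    -- refresh (e.g. from cron), rebuilding the contents in one transaction:
    BEGIN;
    DELETE FROM daily_census;
    INSERT INTO daily_census
        SELECT ward, count(*) FROM opd_visit GROUP BY ward;
    COMMIT;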

Merlin




Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread amrit
> I realize you may be stuck with 7.3.x but you should be aware that 7.4
> is considerably faster, and 8.0 appears to be even faster yet.

There is a little bit of incompatibility between 7.3 and 8, so it is rather
difficult to change.

> I would seriously consider upgrading, if at all possible.
>
> A few more hints.
>
> Random page cost is quite conservative if you have reasonably fast disks.
> Speaking of fast disks, not all disks are created equal, some RAID
> drives are quite slow (Bonnie++ is your friend here)
>
> Sort memory can be set on a per query basis, I'd consider lowering it
> quite low and only increasing it when necessary.
>
> Which brings us to how to find out when it is necessary.
> Turn logging on and turn on log_pid and log_duration; then you will
> need to sort through the logs to find the slow queries.

In standard RH 9.0, if I enable both of the logs [pid, duration], where could
I look for the result of the log, and would it make the system slower?


Amrit
Thailand




[PERFORM] query rewrite using materialized views

2005-01-03 Thread Yann Michel
Hi,

are there any plans for rewriting queries to preexisting materialized
views?  I mean, rewrite a query (within the optimizer) to use a
materialized view instead of the originating table?

Regards,
Yann



Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread Robert Treat
On Monday 03 January 2005 10:40, [EMAIL PROTECTED] wrote:
> > I realize you may be stuck with 7.3.x but you should be aware that 7.4
> > is considerably faster, and 8.0 appears to be even faster yet.
>
> There are a little bit incompatibility between 7.3 -8 , so rather difficult
> to change.
>

Sure, but even moving to 7.4 would be a bonus, especially if you use a lot of 
select * from tab where id in (select ... ) type queries, and the 
incompatibility is less as well. 

> > I would seriously consider upgrading, if at all possible.
> >
> > A few more hints.
> >

One thing I didn't see mentioned that should have been was to watch for index 
bloat, which was a real problem on 7.3 machines.  You can determine which 
indexes are bloated by studying vacuum output or by comparing index size on 
disk to table size on disk.  

Another thing I didn't see mentioned was your free space map settings.  
Make sure these are large enough to hold your data... max_fsm_relations 
should be larger than the total # of tables you have in your system (check 
the archives for the exact query needed) and max_fsm_pages needs to be big 
enough to hold all of the pages you use in a day... this is hard to calculate 
in 7.3, but if you look at your vacuum output and add the number of pages 
cleaned up for all tables, this could give you a good number to work with. It 
would certainly tell you if your setting is too small. 
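
For the relation count, something like this gets you in the neighborhood (a
sketch; whether indexes need FSM slots varies by version, so be generous):

    SELECT count(*) FROM pg_class WHERE relkind IN ('r', 't', 'i');
    -- max_fsm_relations should comfortably exceed this count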

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL



Re: [PERFORM] Hardware purchase question

2005-01-03 Thread Mitch Pirtle
On Mon, 13 Dec 2004 09:23:13 -0800, Joshua D. Drake
<[EMAIL PROTECTED]> wrote:
> 
> RAID 10 will typically always outperform RAID 5 with the same HD config.

Isn't RAID10 just RAID5 mirrored?  How does that speed up performance?
 Or am I missing something?

-- Mitch



Re: [PERFORM] Hardware purchase question

2005-01-03 Thread Madison Kelly
Mitch Pirtle wrote:
On Mon, 13 Dec 2004 09:23:13 -0800, Joshua D. Drake
<[EMAIL PROTECTED]> wrote:
RAID 10 will typically always outperform RAID 5 with the same HD config.

Isn't RAID10 just RAID5 mirrored?  How does that speed up performance?
 Or am I missing something?
-- Mitch
Hi Mitch,
  Nope, Raid 10 (one zero) is a mirror is stripes, no parity. With r10 
you get the benefit of a full mirror, which means your system does not 
need to calculate the XOR parity, but you only get 50% disk usage. The 
mirror causes a slight write hit as the data needs to be split between 
two disks (or in this case, two striped pairs), but reads can be up to 
twice as fast (theoretically). By adding the stripe you negate the write 
hit and actually gain write performance because half the data goes to 
mirror A, half to mirror B (same with reads, roughly).

  Raid 10 is a popular choice for software raid because of the reduced 
overhead. Raid 5 on the other hand does require that a parity bit is 
calculated for every N-1 disks. With r5 you get N-1 disk usage (you get 
the combined capacity of 3 disks in a 4 disk r5 array) and still get the 
benefit of striping across the disks so long as you have a dedicated 
RAID ASIC that can do the XOR calculations. Without it, especially in a 
failure state, the performance can collapse as the CPU performs all that 
extra math.

hth
Madison


Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread William Yu
Dave Cramer wrote:

William Yu wrote:
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].

1536 is probably too low. I've tested a bunch of different settings on 
my 8GB Opteron server and 10K seems to be the best setting.

Be careful here: he is not using Opterons, which can access physical 
memory above 4G efficiently. Also, he only has 4G; the 6-10% rule still 
applies.
10% of 4GB is 400MB. 10K buffers is 80MB. Easily less than the 6-10% rule.

To figure out your effective cache size, run top and add free+cached.

My understanding is that effective cache is the sum of shared buffers 
plus kernel buffers; I'm not sure what free + cached gives you.
Not true. Effective cache size is the free memory available that the OS 
can use for caching for Postgres. In a system that runs nothing but 
Postgres, it's free + cached.



Re: [PERFORM] Hardware purchase question

2005-01-03 Thread Madison Kelly
Madison Kelly wrote:
  Nope, Raid 10 (one zero) is a mirror is stripes, no parity. with r10 
Woops, that should be "mirror of stripes".
By the way, what you are thinking of is possible, it would be 51 (five 
one; a raid 5 built on mirrors) or 15 (a mirror of raid 5 arrays). 
Always be careful, 10 and 01 are also not the same. You want to think 
carefully about what you want out of your array before building it.

Madison


Re: [PERFORM] Hardware purchase question

2005-01-03 Thread Greg Stark

Madison Kelly <[EMAIL PROTECTED]> writes:

> Without it, especially in a failure state, the performance can collapse as
> the CPU performs all that extra math.

It's really not the math that makes raid 5 hurt. It's that in order to
calculate the checksum block the raid controller needs to read in the existing
checksum block and write out the new version. So every write causes not just
one drive seeking and writing, but a second drive seeking and performing a
read and a write.

The usual strategy for dealing with that is stuffing a huge nonvolatile cache
in the controller so those reads are mostly cached and the extra writes don't
saturate the i/o throughput. But those kinds of controllers are expensive and
not an option for software raid.

-- 
greg




Re: [PERFORM] Hardware purchase question

2005-01-03 Thread Mitch Pirtle
You are right, I now remember that setup was originally called "RAID
10 plus 1", and I believe it was an incorrect statement from an
overzealous salesman ;-)

Thanks for the clarification!

- Mitch

On Mon, 03 Jan 2005 15:19:04 -0500, Madison Kelly <[EMAIL PROTECTED]> wrote:
> Madison Kelly wrote:
> >   Nope, Raid 10 (one zero) is a mirror is stripes, no parity. with r10
> 
> Woops, that should be "mirror of stripes".
> 
> By the way, what you are thinking of is possible, it would be 51 (five
> one; a raid 5 built on mirrors) or 15 (a mirror of raid 5 arrays).
> Always be careful, 10 and 01 are also not the same. You want to think
> carefully about what you want out of your array before building it.



Re: [PERFORM] sudden drop in statement turnaround latency -- yay!.

2005-01-03 Thread Merlin Moncure
Tom Lane wrote:
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > I took advantage of the holidays to update a production server (dual
> > Opteron on win2k) from an 11/16 build (about beta5 or so) to the latest
> > release candidate.  No configuration changes were made, just a binary
> > swap and a server stop/start.
> 
> > I was shocked to see that statement latency dropped by a fairly large
> > margin.
> 
> Hmm ... I trawled through the CVS logs since 11/16, and did not see very
> many changes that looked like they might improve performance (list
> attached) --- and even of those, hardly any looked like the change would
> be significant.  Do you know whether the query plans changed?  Are you
> running few enough queries per connection that backend startup overhead
> might be an issue?

No, everything is run over persistent connections and prepared
statements.  All queries boil down to an index scan of some sort, so the
planner is not really a factor.  It's all strictly execution times, and
data is almost always read right off of the cache.  The performance of
the ISAM driver is driven by 3 factors (in order).
1. network latency (including o/s overhead context switches, etc.)
2. i/o factors (data read from cache, disk, etc).
3. overhead for pg to execute trivial transaction.

#1 & #2 are well understood problems.  It's #3 that improved
substantially and without warning.  See my comments below:

>   regards, tom lane
> 
> 
> 2004-12-15 14:16  tgl
> 
>   * src/backend/access/nbtree/nbtutils.c: Calculation of
>   keys_are_unique flag was wrong for cases involving redundant
>   cross-datatype comparisons.  Per example from Merlin Moncure.

Not likely to have a performance benefit.
 
> 2004-12-02 10:32  momjian
> 
>   * configure, configure.in, doc/src/sgml/libpq.sgml,
>   doc/src/sgml/ref/copy.sgml, src/interfaces/libpq/fe-connect.c,
>   src/interfaces/libpq/fe-print.c, src/interfaces/libpq/fe-secure.c,
>   src/interfaces/libpq/libpq-fe.h, src/interfaces/libpq/libpq-int.h:
>   Rework libpq threaded SIGPIPE handling to avoid interference with
>   calling applications.  This is done by blocking sigpipe in the
>   libpq thread and using sigpending/sigwait to possibly discard any
>   sigpipe we generated.

Doubtful.
 
> 2004-12-01 20:34  tgl
> 
>   * src/: backend/optimizer/path/costsize.c,
>   backend/optimizer/util/plancat.c,
>   test/regress/expected/geometry.out,
>   test/regress/expected/geometry_1.out,
>   test/regress/expected/geometry_2.out,
>   test/regress/expected/inherit.out, test/regress/expected/join.out,
>   test/regress/sql/inherit.sql, test/regress/sql/join.sql: Make some
>   adjustments to reduce platform dependencies in plan selection.  In
>   particular, there was a mathematical tie between the two possible
>   nestloop-with-materialized-inner-scan plans for a join (ie, we
>   computed the same cost with either input on the inside), resulting
>   in a roundoff error driven choice, if the relations were both small
>   enough to fit in sort_mem.  Add a small cost factor to ensure we
>   prefer materializing the smaller input.  This changes several
>   regression test plans, but with any luck we will now have more
>   stability across platforms.

No.  The planner is not a factor.

No.  The planner is not a factor.
 
> 2004-12-01 14:00  tgl
> 
>   * doc/src/sgml/catalogs.sgml, doc/src/sgml/diskusage.sgml,
>   doc/src/sgml/perform.sgml, doc/src/sgml/release.sgml,
>   src/backend/access/nbtree/nbtree.c, src/backend/catalog/heap.c,
>   src/backend/catalog/index.c, src/backend/commands/vacuum.c,
>   src/backend/commands/vacuumlazy.c,
>   src/backend/optimizer/util/plancat.c,
>   src/backend/optimizer/util/relnode.c, src/include/access/genam.h,
>   src/include/nodes/relation.h, src/test/regress/expected/case.out,
>   src/test/regress/expected/inherit.out,
>   src/test/regress/expected/join.out,
>   src/test/regress/expected/join_1.out,
>   src/test/regress/expected/polymorphism.out: Change planner to use
>   the current true disk file size as its estimate of a relation's
>   number of blocks, rather than the possibly-obsolete value in
>   pg_class.relpages.  Scale the value in pg_class.reltuples
>   correspondingly to arrive at a hopefully more accurate number of
>   rows.  When pg_class contains 0/0, estimate a tuple width from the
>   column datatypes and divide that into current file size to estimate
>   number of rows.  This improved methodology allows us to jettison
>   the ancient hacks that put bogus default values into pg_class when
>   a table is first created.  Also, per a suggestion from Simon, make
>   VACUUM (but not VACUUM FULL or ANALYZE) adjust the value it puts
>   into pg_class.reltuples to try to represent the mean tuple density
>   instead of the minimal density that actually prevails just after
[PERFORM] Bad Index Choices with user defined data type

2005-01-03 Thread Adam Palmblad



I've got a table using a data type that I have created as the type for its
primary key.  I (hope) I have the type set up properly - it seems okay, and
does not have any problem creating a b-tree index for the type.  The problem
I am having is that the index seems to never be chosen for use.  I can force
the use of the index by setting enable_seqscan to off.  The table has about
1.2 million rows.  I have also analyzed the table - and immediately
afterwards there is no effect on the index's behaviour.

Any thoughts?

-Adam


[PERFORM] Very Bad Performance.

2005-01-03 Thread Pallav Kalva
Hi,
I am experiencing very bad performance on my production database 
lately; all my queries are slowing down. Our application is a web-based 
system with a lot of selects and updates. I am running "vacuumdb" daily on 
all the databases. Are the postgres configuration parameters below set 
properly? Can anyone take a look? Let me know if you need any more 
information.

Postgres Version: 7.4
Operating System: Linux Red Hat 9
Cpus: 2 Hyperthreaded
RAM: 4 gb
Postgres Settings:
max_fsm_pages        | 2
max_fsm_relations    | 1000
shared_buffers       | 65536
sort_mem             | 16384
vacuum_mem           | 32768
wal_buffers          | 64
effective_cache_size | 393216
Thanks!
Pallav


Re: [PERFORM] Very Bad Performance.

2005-01-03 Thread Dave Cramer
Well, it's not quite that simple.
The rule of thumb is that 6-10% of available memory before postgres loads is 
allocated to shared_buffers.
Then effective cache is set to the SUM of shared_buffers + kernel buffers.

Then you have to look at individual slow queries to determine why they 
are slow. Fortunately you are running 7.4, so you can set 
log_min_duration_statement to some number like 1000ms and then
try to analyze why those queries are slow.
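
As a worked sketch against the settings above (back-of-the-envelope numbers,
not a recommendation):

    # 6-10% of 4 GB is roughly 240-400 MB; shared_buffers is in 8 KB pages,
    # so 256 MB is about 32768 pages (the current 65536 is ~512 MB):
    shared_buffers = 32768
    # 7.4: log any statement that runs longer than 1000 ms
    log_min_duration_statement = 1000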

Also, hyperthreading may not be helping you.
Dave
Pallav Kalva wrote:
Hi,
I am experiencing very bad performance on my production database 
lately; all my queries are slowing down. Our application is a 
web-based system with a lot of selects and updates. I am running 
"vacuumdb" daily on all the databases. Are the postgres 
configuration parameters below set properly? Can anyone take a look?  
Let me know if you need any more information.

Postgres Version: 7.4
Operating System: Linux Red Hat 9
Cpus: 2 Hyperthreaded
RAM: 4 gb
Postgres Settings:
max_fsm_pages        | 2
max_fsm_relations    | 1000
shared_buffers       | 65536
sort_mem             | 16384
vacuum_mem           | 32768
wal_buffers          | 64
effective_cache_size | 393216
Thanks!
Pallav

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561


[PERFORM] PostgreSQL's Statspack?

2005-01-03 Thread Stan Y
Besides the tables pg_stat_xxx, are there any stronger tools for
PostgreSQL as the counterpart of Oracle's Statspack?  Is it possible at
all to trace and log the cpu and io cost for each committed
transaction?
Thanks a lot!  -Stan



Re: [PERFORM] sudden drop in statement turnaround latency -- yay!.

2005-01-03 Thread Tom Lane
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Add a small cost factor to ensure we
>> prefer materializing the smaller input.  This changes several
>> regression test plans, but with any luck we will now have more
>> stability across platforms.

> No.  The planner is not a factor.

You are missing the point: the possible change in a generated plan could
be a factor.

>> Change planner to use
>> the current true disk file size as its estimate of a relation's
>> number of blocks, rather than the possibly-obsolete value in
>> pg_class.relpages.

> doesn't seem like this would apply.

Same point.  Unless you have done EXPLAINs to verify that the same plans
were used before and after, you can't dismiss this.

>> * src/backend/utils/cache/relcache.c: Avoid scanning the relcache
>> during AtEOSubXact_RelationCache when there is nothing to do, which
>> is most of the time.  This is another simple improvement to cut
>> subtransaction entry/exit overhead.

> Not clear from the comments: does this apply to every transaction, or
> only ones with savepoints?  If all transactions, it's a contender.

It only applies to subtransactions, ie something involving savepoints.

regards, tom lane



Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread Dave Cramer

[EMAIL PROTECTED] wrote:

> > I realize you may be stuck with 7.3.x but you should be aware that 7.4
> > is considerably faster, and 8.0 appears to be even faster yet.
>
> There is a little bit of incompatibility between 7.3 and 8, so it is rather
> difficult to change.
>
> > I would seriously consider upgrading, if at all possible.
> > A few more hints.
> > Random page cost is quite conservative if you have reasonably fast disks.
> > Speaking of fast disks, not all disks are created equal; some RAID
> > drives are quite slow (Bonnie++ is your friend here)
> > Sort memory can be set on a per-query basis, I'd consider lowering it
> > quite low and only increasing it when necessary.
> > Which brings us to how to find out when it is necessary.
> > Turn logging on and turn on log_pid and log_duration, then you will
> > need to sort through the logs to find the slow queries.
>
> In standard RH 9.0, if I enable both of the logs [pid, duration], where could
> I look for the result of the log, and would it make the system slower?


On a Red Hat system, logging is more or less disabled if you used the RPM.
You can set syslog = 2 in postgresql.conf and then you will get the 
logs in messages.log.
Yes, it will make things slower, but you have to find out which queries are 
slow.
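
A sketch of the relevant settings, per the 7.3-era documentation (check your
own postgresql.conf; adjust to taste):

    syslog = 2            # send server output only to syslog
    log_pid = true        # tag each log line with the backend PID
    log_duration = true   # log how long each statement took
    log_statement = true  # log the statement text, so durations can be
                          # matched to queries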

Dave

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561


Re: [PERFORM] Low Performance for big hospital server ..

2005-01-03 Thread Dave Cramer

William Yu wrote:
Dave Cramer wrote:

William Yu wrote:
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].


1536 is probably too low. I've tested a bunch of different settings 
on my 8GB Opteron server and 10K seems to be the best setting.

Be careful here: he is not using Opterons, which can access physical 
memory above 4G efficiently. Also, he only has 4G; the 6-10% rule still 
applies.

10% of 4GB is 400MB. 10K buffers is 80MB. Easily less than the 6-10% 
rule.

Correct; I didn't actually do the math. I refrain from giving actual 
numbers, as every system is different.


To figure out your effective cache size, run top and add free+cached.

My understanding is that effective cache is the sum of shared 
buffers plus kernel buffers; I'm not sure what free + cached gives you.

Not true. Effective cache size is the free memory available that the 
OS can use for caching for Postgres. In a system that runs nothing but 
Postgres, it's free + cached.
You still need to add in the shared buffers, as they are part of the 
"effective cache".

Dave

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561


Re: [PERFORM] Hardware purchase question

2005-01-03 Thread Grega Bremec
...and on Mon, Jan 03, 2005 at 03:44:44PM -0500, Mitch Pirtle used the keyboard:
>
> You are right, I now remember that setup was originally called "RAID
> 10 plus 1", and I believe is was an incorrect statement from an
> overzealous salesman ;-)
>

Just an afterthought - that could well be the unfortunate consequence of
salesmen specializing in sales as an act rather than in the goods they were
selling. It might be that he/she was referring to the specifics of the
concrete configuration they were selling you (or trying to sell you),
which should, in the case you were mentioning, probably be called "a
RAID10 array with a hot-spare drive" - that is, it would be preconfigured
to, upon the failure of one of the array members, detect the failed drive and
automatically replace it with one that has been sitting there all the time,
doing nothing but waiting for one of its active companions to fail.

But this already falls into the category that has, so far, probably
caused the vast majority of misunderstandings, failed investments and
grey hair in RAID, namely data safety, and I don't feel particularly
qualified for getting into specifics of this at this moment, as it
happens to be 2AM, I had a couple of beers (my friend's birthday's due)
and I'm dying to get some sleep. :)

HTH, cheers,
-- 
Grega Bremec
gregab at p0f dot net




Re: [PERFORM] Bad Index Choices with user defined data type

2005-01-03 Thread Michael Fuhr
On Mon, Jan 03, 2005 at 01:44:27PM -0800, Adam Palmblad wrote:

> I've got a table using a data type that I have created as the type for
> its primary key.  I (hope) I have the type set up properly - it seems
> okay, and does not have any problem creating a b-tree index for the
> type.  The problem I am having is that the index seems to never be
> chosen for use.  I can force the use of the index by setting
> enable_seqscan to off.  The table has about 1.2 million rows.  I have
> also analyzed the table - and immediately afterwards there is no effect
> on the index's behaviour.

Please post the query and the EXPLAIN ANALYZE output for both cases:
one query with enable_seqscan on and one with it off.  It might
also be useful to see the column's statistics from pg_stats, and
perhaps the SQL statements that create the table, the type, the
type's operators, etc.
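
I.e., something along these lines (the table and column names are invented):

    EXPLAIN ANALYZE SELECT * FROM big_table WHERE pk_col = 'some-key';
    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM big_table WHERE pk_col = 'some-key';
    SET enable_seqscan = on;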

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
