[PERFORM] PostgreSQL struggling during high load

2005-05-13 Thread Mindaugas Riauba

  Hello,

  We are having problems with one PostgreSQL database that has a high
data-change rate. In fact, we are already under pressure to switch
from PostgreSQL to Oracle.

  I cannot post the schema and queries to the list, but can do so
privately.

  The tables are not big (2-15 rows each) but have a very high
turnover rate - 100+ updates/inserts/deletes/selects per second.
So the contents of the database change very fast. The problem is that
when pg_autovacuum runs a vacuum, those changes slow down too much.
We keep autovacuum quite aggressive (-v 1000 -V 0.5 -a 1000
-A 0.1 -s 10) to keep the database from bloating and to avoid a
bigger impact. ANALYZE does not seem to hurt performance much.

  The tables have 2-3 indexes each, and one table has a foreign key
constraint. PostgreSQL is 8.0.1. vmstat shows that IO and CPU are
not saturated. The DB is on RAID 1+0 behind a controller with a
battery-backed write cache.

  What can we tune to improve performance in our case? Please help
us defend PostgreSQL against Oracle here :).

  Thanks,

  Mindaugas


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [PERFORM] PostgreSQL struggling during high load

2005-05-13 Thread Steinar H. Gunderson
On Fri, May 13, 2005 at 03:52:38PM +0300, Mindaugas Riauba wrote:
>   Tables are not big (2-15 rows each) but have very high
> turnover rate - 100+ updates/inserts/deletes/selects per second.
> So contents of database changes very fast. Problem is that when
> pg_autovacuum does vacuum those changes slows down too much.
> And we keep autovacuum quite aggressive (-v 1000 -V 0.5 -a 1000
> -A 0.1 -s 10) to not bloat database and to avoid bigger impact.
> analyze seems not to impact performance too much.

Are you using the bgwriter?
See http://developer.postgresql.org/~wieck/vacuum_cost/ for details.

/* Steinar */
-- 
Homepage: http://www.sesse.net/



Re: [PERFORM] PostgreSQL struggling during high load

2005-05-13 Thread Tom Lane
Mindaugas Riauba [EMAIL PROTECTED] writes:
> ... So contents of database changes very fast. Problem is that when
> pg_autovacuum does vacuum those changes slows down too much.

The vacuum cost parameters can be adjusted to make vacuums fired
by pg_autovacuum less of a burden.  I haven't got any specific numbers
to suggest, but perhaps someone else does.
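For concreteness, the knobs in question are the cost-based vacuum delay
settings added in 8.0. A minimal postgresql.conf sketch - the values shown
are illustrative starting points only, not tested recommendations:

```
# Cost-based vacuum delay (PostgreSQL 8.0+); values are illustrative.
vacuum_cost_delay = 10       # ms to sleep once the cost limit is hit (0 disables)
vacuum_cost_limit = 200      # accumulated cost that triggers the sleep
vacuum_cost_page_hit = 1     # cost for a page found in shared buffers
vacuum_cost_page_miss = 10   # cost for a page read from disk
vacuum_cost_page_dirty = 20  # cost for dirtying a previously clean page
```

Raising vacuum_cost_delay (or lowering vacuum_cost_limit) makes vacuum
gentler but slower; the right trade-off has to be found experimentally.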

regards, tom lane



Re: [PERFORM] PostgreSQL struggling during high load

2005-05-13 Thread Mindaugas Riauba

>>  ... So contents of database changes very fast. Problem is that when
>>  pg_autovacuum does vacuum those changes slows down too much.
>
> The vacuum cost parameters can be adjusted to make vacuums fired
> by pg_autovacuum less of a burden.  I haven't got any specific numbers
> to suggest, but perhaps someone else does.

  It looks like vacuum is not the only cause of our problems. vacuum_cost
seems to lower the vacuum impact, but we are still noticing storms of slow
queries. We are logging queries that take more than 2000ms to process.
  There are quiet periods, and then suddenly 30+ slow queries appear in
the log within the same second. What else could cause such behaviour?
A WAL log switch? One WAL file seems to last about 1 minute.

  Also, the slow-query log shows only the function call. Is it possible
to get the exact query that slows everything down into the serverlog?
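  One hedged workaround, assuming the slow statements are inside PL/pgSQL
functions (log_min_duration_statement only sees the outer function call):
time the suspect statement inside the function and RAISE a notice. All
object names below are invented for illustration:

```sql
-- Hypothetical sketch: report slow statements from inside a function,
-- since the server log shows only "SELECT process_item(...)".
CREATE OR REPLACE FUNCTION process_item(p_key integer) RETURNS void AS $$
DECLARE
    -- now() is frozen per transaction; timeofday() gives wall-clock time in 8.0
    t0 timestamptz := timeofday()::timestamptz;
BEGIN
    UPDATE items SET hits = hits + 1 WHERE key = p_key;
    IF timeofday()::timestamptz - t0 > interval '2 seconds' THEN
        RAISE NOTICE 'slow update for key %: %',
            p_key, timeofday()::timestamptz - t0;
    END IF;
END;
$$ LANGUAGE plpgsql;
```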

  Thanks,

  Mindaugas




Re: [PERFORM] PostgreSQL struggling during high load

2005-05-13 Thread Tom Lane
Mindaugas Riauba [EMAIL PROTECTED] writes:
>   It looks like that not only vacuum causes our problems. vacuum_cost
> seems to lower vacuum impact but we are still noticing slow queries storm.
> We are logging queries that takes 2000ms to process.
>   And there is quiet periods and then suddenly 30+ slow queries appears in
> log within the same second. What else could cause such behaviour?

Checkpoints?  You should ensure that the checkpoint settings are such
that checkpoints don't happen too often (certainly not oftener than
every five minutes or so), and make sure the bgwriter is configured
to dribble out dirty pages at a reasonable rate, so that the next
checkpoint doesn't have a whole load of stuff to write.
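As a concrete (and hedged) illustration of this advice, the relevant
8.0-era postgresql.conf parameters look like this; the numbers are
starting points to experiment with, not recommendations:

```
checkpoint_segments = 16   # more WAL segments => less frequent checkpoints
checkpoint_timeout = 300   # seconds between forced checkpoints
checkpoint_warning = 30    # log a warning if checkpoints come faster than this
bgwriter_delay = 200       # ms between bgwriter rounds
bgwriter_percent = 2       # share of dirty buffers examined per round
bgwriter_maxpages = 200    # cap on pages written per round
```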

regards, tom lane



Re: [PERFORM] PostgreSQL struggling during high load

2005-05-13 Thread Mindaugas Riauba

>>   It looks like that not only vacuum causes our problems. vacuum_cost
>> seems to lower vacuum impact but we are still noticing slow queries storm.
>>   We are logging queries that takes 2000ms to process.
>>   And there is quiet periods and then suddenly 30+ slow queries appears in
>> log within the same second. What else could cause such behaviour?
>
> Checkpoints?  You should ensure that the checkpoint settings are such
> that checkpoints don't happen too often (certainly not oftener than
> every five minutes or so), and make sure the bgwriter is configured
> to dribble out dirty pages at a reasonable rate, so that the next
> checkpoint doesn't have a whole load of stuff to write.

  The bgwriter settings are at their defaults: bgwriter_delay=200,
bgwriter_maxpages=100, bgwriter_percent=1. checkpoint_segments=8,
checkpoint_timeout=300, checkpoint_warning=30.

  But there are no checkpoint warnings in the serverlog. And btw, we are
running with fsync=off (yes, I know the consequences).

  From the disk's point of view the database is practically write-only,
since the amount of data is smaller than the available memory. I also
added some 'vmstat 1' output.

  How can we get a more even load? As you can see, neither disk nor CPU
looks too busy.

  Thanks,

  Mindaugas

   procs                      memory    swap        io     system        cpu
 r  b  w   swpd   free   buff   cache  si  so   bi   bo   in   cs  us  sy  id
 1  0  0 194724  12140  10220 1045356   0   13324   6020  13   3  83
 2  0  0 194724  11988  10228 1045464   0   012 0 1147  6107  13   4  82
 0  2  0 194724  12172  10284 1046076   0   0   244 20692 2067  3117   8   8  84
 1  0  0 194724  12164  10280 1045912   0   0 0 4  876  8831  15  11  74
 3  0  0 194724  11704  10328 1045952   0   024  2116  928  5122  13  12  75
 1  0  0 194724  11444  10236 1046264   0   0   340 0 1048  6538  19  10  71
 1  0  0 194724  11924  10236 1045816   0   0 0 0  885  7616  14  20  66
 0  0  0 194724  11408  10252 1044824   0   028  5488  959  4749  11  14  75
 1  0  0 194724  11736  10296 1042992   0   0   460  2868 1001  4116  12  12  75
 0  0  0 194724  12024  10296 1043064   0   036 0  903  5081  13  12  76
 1  0  0 194724  12404  10240 1043440   0   0   280 0  899  4246  12  12  75
 1  0  0 194724  13128  10236 1043472   0   0 0 0 1016  5394  12  10  78
 0  4  0 194724  13064  10244 1043652   0   0 0 14736 1882  9290  10  15  74
 0  4  0 194724  13056  10252 1043660   0   0 0  6012 1355  2378   2   3  95
12 21  0 194724  13140  10220 1043640   0   0 8 4  723  2984   5   3  92
 1  0  0 194724  13712  10228 1043956   0   0   200 0 1144 10310  30  21  50
 0  0  0 194724  13100  10220 1043992   0   0 4 0  840  4676  15  14  71
 0  0  0 194724  13048  10296 1041212   0   0 4  6132  918  4074  10  10  80
 0  0  0 194724  12688  10324 1041508   0   0   240  1864  849  3873  12  11  77
 2  0  0 194724  12544  10240 1041944   0   032 0 1171  4844  14   7  78
 1  0  0 194724  12384  10232 1041756   0   0 4 0  973  6063  16   9  75
 1  0  0 194724  12904  10244 1042116   0   0   264  6052 1049  4762  15  14  71
 0  0  0 194724  12616  10236 1042164   0   0 8 0  883  4748  13   8  79
 2  0  0 194724  12576  10288 1042460   0   0   252  3136  857  3929  13  15  73
 2  0  0 194724  12156  10284 1042504   0   0 0 0  858  8832  13   6  81
 2  0  0 194724  12024  10284 1042556   0   0 0 0  834  4229  16  10  74
 3  1  0 194724  12024  10364 1043096   0   0   316 10328 1024  5686  14   7  80
 0  5  0 194724  12024  10352 1043116   0   0 4  7996 2156  2816   4   5  90
 0  4  0 194724  12024  10360 1043124   0   0 4  8560 1369  2700   6   5  90
 3  0  0 194724  12024  10264 1043124   0   0 0 4 1037  5132  14  15  71
 1  1  0 194724  11876  10264 1043176   0   0 4 0  932  7761  20  20   6




Re: [PERFORM] PostgreSQL struggling during high load

2005-05-13 Thread Cosimo Streppone
Mindaugas Riauba wrote:
>> The vacuum cost parameters can be adjusted to make vacuums fired
>> by pg_autovacuum less of a burden.  I haven't got any specific numbers
>> to suggest, but perhaps someone else does.
>
>   It looks like that not only vacuum causes our problems. vacuum_cost
> seems to lower vacuum impact but we are still noticing slow queries storm.
> We are logging queries that takes 2000ms to process.
>   And there is quiet periods and then suddenly 30+ slow queries appears in
> log within the same second. What else could cause such behaviour?

I've seen that happen when you're placing (explicitly or
*implicitly*) locks on the records you're trying to update/delete.
If you're willing to investigate, `pg_locks' system view holds
information about db locks.
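A hedged example of the kind of query that helps here - ungranted locks are
the interesting ones, and joining to pg_stat_activity (which requires
stats_command_string = true) shows what the waiting backend is running:

```sql
-- Show backends waiting on locks, with the query they are stuck in
-- (8.0 column names: procpid, current_query).
SELECT l.pid, l.relation, l.transaction, l.mode, a.current_query
FROM pg_locks l
LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted
ORDER BY l.pid;
```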
--
Cosimo


Re: [PERFORM] PostgreSQL struggling during high load

2005-05-13 Thread Mindaugas Riauba

>>> The vacuum cost parameters can be adjusted to make vacuums fired
>>> by pg_autovacuum less of a burden.  I haven't got any specific numbers
>>> to suggest, but perhaps someone else does.
>>
>>   It looks like that not only vacuum causes our problems. vacuum_cost
>> seems to lower vacuum impact but we are still noticing slow queries storm.
>>   We are logging queries that takes 2000ms to process.
>>   And there is quiet periods and then suddenly 30+ slow queries appears in
>> log within the same second. What else could cause such behaviour?
>
> I've seen that happen when you're placing (explicitly or
> *implicitly*) locks on the records you're trying to update/delete.
>
> If you're willing to investigate, `pg_locks' system view holds
> information about db locks.

  Hm. Yes. The number of locks varies quite a lot (10-600). Now what should
we investigate further? We do not use explicit locks in our functions; we
use quite simple update/delete where key=something.
  A sample (select * from pg_locks order by pid) is below.

  Thanks,

  Mindaugas

 relation | database | transaction |  pid  |       mode       | granted
  |  |   584302172 | 11836 | ExclusiveLock| t
17236 |17230 | | 11836 | AccessShareLock  | t
17236 |17230 | | 11836 | RowExclusiveLock | t
   127103 |17230 | | 11836 | RowExclusiveLock | t
   127106 |17230 | | 11836 | RowExclusiveLock | t
   127109 |17230 | | 11836 | AccessShareLock  | t
   127109 |17230 | | 11836 | RowExclusiveLock | t
   127109 |17230 | | 11837 | AccessShareLock  | t
   127109 |17230 | | 11837 | RowExclusiveLock | t
17236 |17230 | | 11837 | AccessShareLock  | t
17236 |17230 | | 11837 | RowExclusiveLock | t
   127106 |17230 | | 11837 | RowExclusiveLock | t
   127103 |17230 | | 11837 | RowExclusiveLock | t
  |  |   584302173 | 11837 | ExclusiveLock| t
   127103 |17230 | | 11838 | RowExclusiveLock | t
17236 |17230 | | 11838 | RowExclusiveLock | t
   127109 |17230 | | 11838 | RowExclusiveLock | t
  |  |   584302174 | 11838 | ExclusiveLock| t
17285 |17230 | | 11838 | AccessShareLock  | t
17251 |17230 | | 11838 | AccessShareLock  | t
   130516 |17230 | | 11838 | AccessShareLock  | t
   127106 |17230 | | 11838 | RowExclusiveLock | t
17278 |17230 | | 11838 | AccessShareLock  | t




Re: [PERFORM] PostgreSQL struggling during high load

2005-05-13 Thread Steinar H. Gunderson
On Fri, May 13, 2005 at 05:45:45PM +0300, Mindaugas Riauba wrote:
>   But there's no checkpoint warnings in serverlog. And btw we are running
> with fsync=off (yes I know the consequences).

Just a note here: since you have a battery-backed hardware cache, you
probably won't notice much of a slowdown with fsync=on. However, you are
already pushed, so... :-)

/* Steinar */
-- 
Homepage: http://www.sesse.net/



Re: [PERFORM] Recommendations for set statistics

2005-05-13 Thread Josh Berkus
Chris,

> It is widely believed that a somewhat larger default than 10 would be
> a good thing, as it seems to be fairly common for 10 to be too small
> to allow statistics to be stable.  But nobody has done any formal
> evaluation as to whether it would make sense to jump from 10 to:
>
>  - 15?
>  - 20?
>  - 50?
>  - 100?
>  - More than that?

My anecdotal experience is that if more than 10 is required, you generally 
need to jump to at least 100, and more often 250.   On the other end, I've 
generally not found any difference between 400 and 1000 when it comes to 
bad queries.

I have an unfinished patch in the works which goes through and increases the 
stats_target for all *indexed* columns to 100 or so.   However, I've needed 
to work up a test case to prove the utility of it.
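For reference, the per-column knob under discussion is used like this
(table and column names invented for illustration):

```sql
-- Raise the statistics target for one indexed column, then re-analyze
-- so the planner actually sees the larger sample.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
-- The current setting is stored in pg_attribute.attstattarget
-- (-1 there means "use the default_statistics_target").
```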

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] PostgreSQL struggling during high load

2005-05-13 Thread Tom Lane
Mindaugas Riauba [EMAIL PROTECTED] writes:
>   Hm. Yes. Number of locks varies quite alot (10-600). Now what to
> investigate further? We do not use explicit locks in our functions.
> We use quite simple update/delete where key=something;
>   Some sample (select * from pg_locks order by pid) is below.

The sample doesn't show any lock issues (there are no processes waiting
for ungranted locks).  The thing that typically burns people is foreign
key conflicts.  In current releases, if you have a foreign key reference
then an insert in the referencing table takes an exclusive row lock on
the referenced (master) row --- which means that two inserts using the
same foreign key value block each other.

You can alleviate the issue by making all your foreign key checks
deferred, but that just shortens the period of time the lock is held.
There will be a real solution in PG 8.1, which has sharable row locks.
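A sketch of the deferral mentioned above, with invented table and
constraint names; note it only postpones the referenced-row lock until
commit, it does not eliminate it:

```sql
-- Recreate the FK as deferrable, then defer its check per transaction.
ALTER TABLE child ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent (id)
    DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS child_parent_fk DEFERRED;
INSERT INTO child (parent_id, payload) VALUES (42, 'x');
-- ... more work; the FK check (and its row lock on parent) runs at COMMIT.
COMMIT;
```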

regards, tom lane



Re: [PERFORM] PostgreSQL struggling during high load

2005-05-13 Thread Mischa Sandberg
Quoting Tom Lane [EMAIL PROTECTED]:

> Mindaugas Riauba [EMAIL PROTECTED] writes:
>> ... So contents of database changes very fast. Problem is that when
>> pg_autovacuum does vacuum those changes slows down too much.
>
> The vacuum cost parameters can be adjusted to make vacuums fired
> by pg_autovacuum less of a burden.  I haven't got any specific numbers
> to suggest, but perhaps someone else does.

I solved one problem by cranking the sleep scaling up to -S 20.
That made pg_autovacuum back off longer during extended periods of heavy
disk-intensive query activity. Our update activity is a near-constant
insert rate, then once or twice a day, massive deletes.
-- 
Dreams come true, not free.




Re: [PERFORM] Optimize complex join to use where condition before

2005-05-13 Thread John A Meinel
Greg Stark wrote:
> Sebastian Hennebrueder [EMAIL PROTECTED] writes:
>
>> User-Agent: Mozilla Thunderbird 1.0 (Windows/20041206)
>> ...
>> Nested Loop  (cost=1349.13..1435.29 rows=1 width=2541) (actual
>> time=1640.000..3687.000 rows=62 loops=1)
>>   Join Filter: ("inner".fid = "outer".faufgaben_id)
>>   ->  Index Scan using idx_tauf_mit_mitid on taufgaben_mitarbeiter am
>> (cost=0.00..80.65 rows=35 width=4) (actual time=0.000..0.000 rows=765
>> loops=1)
>
> Is it really Mozilla Thunderbird that's causing this new craptastic mangling
> of plans in people's mails? I was assuming it was some new idea of how to mess
> up people's mail coming out of Exchange or Lotus or some other such corporate
> messaging software that only handled SMTP mail as an afterthought. This is,
> uh, disappointing.
Are you talking about the quotes, or just the fact that it is wrapped?
I don't know where the quotes came from, but in Thunderbird, if you are
writing in text mode (not html), it defaults to wrapping the text at
something like 78 characters. That includes copy/pasted text.
If you want it to *not* wrap, it turns out that "Paste as quotation"
will not wrap, but then you have to remove the "> " from the beginning
of every line.
In html mode, it also defaults to wrapping, but if you switch to
PREFORMAT text first, it doesn't wrap.
At least, those are the tricks that I've found. The safest bet is to just
use an attachment, though.
John
=:-




Re: [PERFORM] Bad plan after vacuum analyze

2005-05-13 Thread Markus Bertheau
On 11/05/2005 22:59 +0200, Guillaume Smet wrote:

> Anyway, I tried to work on the statistics as you told me and here are
> the results:
>
> ccm_perf=# ALTER TABLE acs_objects ALTER COLUMN object_id SET STATISTICS 30;
> ALTER TABLE
> ccm_perf=# ANALYZE acs_objects;
> ANALYZE
>
> ccm_perf=# \i query_section.sql
> ... correct plan ...
>   Total runtime: 0.555 ms

Given Tom's analysis, how can increasing the stats target change which
plan is chosen?

-- 
Markus Bertheau [EMAIL PROTECTED]




[PERFORM] ok you all win what is best opteron (I don't want a hosed system again)

2005-05-13 Thread Joel Fradkin

We are up and somewhat happy.

I have been following the threads (in case you don't know, I bought a
4-proc Dell recently) and the Opteron seems the way to go.

I just called HP for a quote, but don't want to make any mistakes.

Is the battery-backed cache good or bad for Postgres?

They are telling me I can only get a dual-channel card if I want
hardware RAID 10 on the 14 drives.

I can get two cards, but then it has to be 7 and 7 (software raid?),
which does not sound like it fixes my single point of failure (one of
the listers mentioned my current system has 3 such single points).

Can any of you hardware gurus spell out the optimal machine? (I am
hoping to be around 15K, might be able to go more if it's a huge
difference; I spent 30k on the Dell.)

I do not have to go HP, and after seeing the failure ratio from Monarch
from one lister I am a bit scared of shopping there.

Was there a conclusion on where is best to get one? (I really want
two - one for development too.)

Joel Fradkin

Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel. 941-753-7111 ext 305
[EMAIL PROTECTED]
www.wazagua.com
Powered by Wazagua
Providing you with the latest Web-based technology & advanced tools.
(c) 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc.
This email message is for the use of the intended recipient(s) and may
contain confidential and privileged information. Any unauthorized review,
use, disclosure or distribution is prohibited. If you are not the
intended recipient, please contact the sender by reply email and delete and
destroy all copies of the original message, including attachments.


Re: [PERFORM] ok you all win what is best opteron (I don't want a hosed system again)

2005-05-13 Thread Merlin Moncure
Joel wrote:
> I have been following threads (in case you don't know I bought a 4 proc
> Dell recently) and the Opteron seems the way to go.
> I just called HP for a quote, but don't want to make any mistakes.
[snip]

At your level of play it's the DL585.
Have you checked out http://www.swt.com? 

Merlin



Re: [PERFORM] Whence the Opterons?

2005-05-13 Thread Andrew Sullivan
On Sat, May 07, 2005 at 02:00:34PM -0700, Josh Berkus wrote:
 
> Question, though: is HP still using their proprietary RAID card?  And, if so,
> have they fixed its performance problems?

According to my folks here, we're using the CCISS controllers, so I
guess they are.  The systems are nevertheless performing very well --
we did a load test that was pretty impressive.  Also, Chris Browne
pointed me to this for the drivers:

http://sourceforge.net/projects/cciss/

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
A certain description of men are for getting out of debt, yet are
against all taxes for raising money to pay it off.
--Alexander Hamilton



Re: [PERFORM] Whence the Opterons?

2005-05-13 Thread Magnus Hagander

>> Question, though: is HP still using their proprietary RAID card?  And, if so,
>> have they fixed its performance problems?
>
> According to my folks here, we're using the CCISS controllers, so I
> guess they are.  The systems are nevertheless performing very well --
> we did a load test that was pretty impressive.  Also, Chris Browne
> pointed me to this for the drivers:
>
> http://sourceforge.net/projects/cciss/

That driver is for all the remotely modern HP cards. I think the one
with performance problems was the builtin one they used to have
(SmartArray 5i). AFAIK, the new builtins (6i) are a lot better. And I've
never had any performance problems with the high-end add-on cards -
on Linux and Windows both.

//Magnus



Re: [PERFORM] Optimize complex join to use where condition before

2005-05-13 Thread Sebastian Hennebrueder
I found a solution that improves my query. I do not know why, but the
statistics target for all columns had been 0.
I changed this to 10 for index columns, to 20 for all other columns,
and to 100 for foreign key columns.
I also set random_page_cost to 2,
and now the query runs as expected.

Many thanks to all of the posts in my and in other threads which helped 
a lot.

Sebastian
Merge Join  (cost=1325.06..1329.96 rows=6 width=2558) (actual time=344.000..344.000 rows=6 loops=1)
  Merge Cond: ("outer".fid = "inner".faufgaben_id)
  ->  Sort  (cost=1269.57..1271.91 rows=934 width=2541) (actual time=344.000..344.000 rows=773 loops=1)
        Sort Key: taufgaben.fid
        ->  Merge Join  (cost=1205.09..1223.49 rows=934 width=2541) (actual time=219.000..313.000 rows=936 loops=1)
              Merge Cond: ("outer".fid = "inner".fprojekt_id)
              ->  Sort  (cost=302.08..304.27 rows=876 width=1494) (actual time=156.000..156.000 rows=876 loops=1)
                    Sort Key: tprojekte.fid
                    ->  Merge Join  (cost=237.42..259.27 rows=876 width=1494) (actual time=109.000..141.000 rows=876 loops=1)
                          Merge Cond: ("outer".fid = "inner".fprojektleiter_id)
                          ->  Index Scan using pk_tuser on tuser  (cost=0.00..9.13 rows=109 width=883) (actual time=0.000..0.000 rows=101 loops=1)
                          ->  Sort  (cost=237.42..239.61 rows=876 width=619) (actual time=109.000..109.000 rows=876 loops=1)
                                Sort Key: tprojekte.fprojektleiter_id
                                ->  Merge Join  (cost=181.17..194.60 rows=876 width=619) (actual time=63.000..94.000 rows=876 loops=1)
                                      Merge Cond: ("outer".fid = "inner".fkunden_kst_id)
                                      ->  Sort  (cost=9.51..9.66 rows=58 width=119) (actual time=0.000..0.000 rows=58 loops=1)
                                            Sort Key: tkunden_kst.fid
                                            ->  Merge Join  (cost=6.74..7.81 rows=58 width=119) (actual time=0.000..0.000 rows=58 loops=1)
                                                  Merge Cond: ("outer".fid = "inner".fkunden_id)
                                                  ->  Sort  (cost=3.46..3.56 rows=40 width=51) (actual time=0.000..0.000 rows=40 loops=1)
                                                        Sort Key: tkunden.fid
                                                        ->  Seq Scan on tkunden  (cost=0.00..2.40 rows=40 width=51) (actual time=0.000..0.000 rows=40 loops=1)
                                                  ->  Sort  (cost=3.28..3.42 rows=58 width=80) (actual time=0.000..0.000 rows=58 loops=1)
                                                        Sort Key: tkunden_kst.fkunden_id
                                                        ->  Seq Scan on tkunden_kst  (cost=0.00..1.58 rows=58 width=80) (actual time=0.000..0.000 rows=58 loops=1)
                                      ->  Sort  (cost=171.66..173.85 rows=876 width=508) (actual time=63.000..63.000 rows=876 loops=1)
                                            Sort Key: tprojekte.fkunden_kst_id
                                            ->  Merge Join  (cost=114.91..128.85 rows=876 width=508) (actual time=31.000..47.000 rows=876 loops=1)
                                                  Merge Cond: ("outer".fid = "inner".fkostentraeger_id)
                                                  ->  Sort  (cost=19.20..19.60 rows=158 width=162) (actual time=0.000..0.000 rows=158 loops=1)
                                                        Sort Key: tkostentraeger.fid
                                                        ->  Merge Join  (cost=3.49..13.43 rows=158 width=162) (actual time=0.000..0.000 rows=158 loops=1)
                                                              Merge Cond: ("outer".fkostenstellen_id = "inner".fid)
                                                              ->  Index Scan using idx_kostenstellen_id on tkostentraeger  (cost=0.00..7.18 rows=158 width=55) (actual time=0.000..0.000 rows=158 loops=1)
                                                              ->  Sort  (cost=3.49..3.53 rows=19 width=119) (actual time=0.000..0.000 rows=158 loops=1)
                                                                    Sort Key: tkostenstellen.fid
                                                                    ->  Merge Join  (cost=2.76..3.08 rows=19 width=119) (actual time=0.000..0.000 rows=19 loops=1)
                                                                          Merge Cond: ("outer".fid = "inner".fabteilungen_id)
                                                                          ->  Sort  (cost=1.17..1.19 rows=7 width=76) (actual time=0.000..0.000 rows=7 loops=1)
                                                                                Sort Key: tabteilungen.fid
                                                                                ->  Seq Scan on tabteilungen  (cost=0.00..1.07 rows=7 width=76) (actual time=0.000..0.000 rows=7 loops=1)
                                                                          ->  Sort  (cost=1.59..1.64 rows=19 width=55) (actual time=0.000..0.000 rows=19 loops=1)
                                                                                Sort Key: tkostenstellen.fabteilungen_id
                                                                                ->  Seq Scan on tkostenstellen  (cost=0.00..1.19 rows=19 width=55) (actual time=0.000..0.000 rows=19 loops=1)
                                                  ->  Sort  (cost=95.71..97.90 rows=878 width=354) (actual time=31.000..31.000 rows=877 loops=1)
                                                        Sort Key: tprojekte.fkostentraeger_id
                                                        ->  Seq Scan on tprojekte  (cost=0.00..52.78 rows=878 width=354) (actual time=0.000..31.000 rows=878 loops=1)
              ->  Sort  (cost=903.01..905.35 rows=936 width=1047) (actual time=63.000..63.000 rows=936 loops=1)
                    Sort Key: taufgaben.fprojekt_id
                    ->  Nested Loop Left Join  (cost=0.28..856.82 rows=936 width=1047) (actual time=0.000..63.000 rows=936 loops=1)
                          Join Filter: ("outer".fid = "inner".faufgaben_id)
                          ->  Index Scan using idx_taufgaben_bstatus on taufgaben  (cost=0.00..835.47 rows=936 width=1043) (actual time=0.000..0.000 rows=936 loops=1)
                                Index Cond: (fbearbeitungsstatus  2)
                          ->  Materialize  (cost=0.28..0.29 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=936)
                                ->  Subquery Scan patchdaten  (cost=0.00..0.28 rows=1

Re: [PERFORM] Optimize complex join to use where condition before

2005-05-13 Thread John Arbash Meinel
Sebastian Hennebrueder wrote:

> I found a solution to improve my query. I do not know why but the
> statistics for all column has been 0.
> I changed this to 10 for index columns and to 20 for all foreign key
> columns.
> and to 100 for foreign key columns.
> I set the random page cost to 2
> and now the query runs as expected.
>
> Many thanks to all of the posts in my and in other threads which
> helped a lot.
>
> Sebastian


I think 0 = use default. But still, changing to 20 and 100 probably
fixes your problems.

John
=:-





Re: [PERFORM] ok you all win what is best opteron (I don't want a

2005-05-13 Thread David Brown
Joel Fradkin wrote:
> Is the battery backed cache good or bad for Postgres?

Battery-backed cache avoids corruption if you have an unexpected power
loss. It's considered mandatory with large-cache write-back controllers
if you can't afford to lose any data.

> They are telling me I can only get a dual channel card if I want
> hardware raid 10 on the 14 drives.
>
> I can get two cards but it has to be 7 and 7 (software raid?) which
> does not sound like it fixes my single point of failure (one of the
> listers mentioned my current system has 3 such single points).

Sounds like you need to try another vendor. Are you aiming for two RAID 
10 arrays or one RAID 10 and one RAID 5?

> Any of you hardware gurus spell out the optimal machine (I am hoping
> to be around 15K, might be able to go more if it's a huge difference,
> I spent 30k on the Dell).
>
> I do not have to go HP, and after seeing the fail ratio from Monarch
> from one lister I am a bit scared shopping there.

There's unlikely to be many common components between their workstation 
and server offerings. You would expect case, power, graphics, 
motherboard, storage controller and drives to all be different. But I'd 
challenge that 50% failure stat anyway. Which components exactly? Hard 
drives? Power supplies?

> Was there a conclusion on where is best to get one (I really want two,
> one for development too).

Almost anyone can build a workstation or tower server, and almost anyone 
else can service it for you. It gets trickier when you're talking 2U and 
especially 1U. But really, these too can be maintained by anyone 
competent. So I wonder about some people's obsession with 
vendor-provided service.

Realistically, most Opteron solutions will use a Tyan motherboard (no 
idea if this includes HP). For 4-way systems, there's currently only the 
S4882, which includes an LSI dual channel SCSI controller. Different 
vendors get to use different cases and cooling solutions and pick a 
different brand/model of hard drive, but that's about it.

Tyan now also sells complete servers - hardly a stretch seeing they 
already make the most important bit (after the CPU). Given the level of 
interest in this forum, here's their list of US resellers:

http://www.tyan.com/products/html/us_alwa.html

If it's a tower server, build it yourself or pay someone to do it. It 
really isn't challenging for anyone knowledgeable about hardware.



Re: [PERFORM] Postgresql Performance via the LSI MegaRAID 2x Card

2005-05-13 Thread Josh Berkus
Steve,

> Past recommendations for a good RAID card (for SCSI) have been the LSI
> MegaRAID 2x. This unit comes with 128MB of RAM on-board. Has anyone
> found by increasing the on-board RAM, did Postgresql performed better?

My informal tests showed no difference between 64MB and 256MB.

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco
