Re: [PERFORM] Follow-Up: How to improve db performance with $7K?

2005-04-02 Thread Will LaShell
Vivek Khera wrote:
On Mar 31, 2005, at 9:01 PM, Steve Poe wrote:
Now we need to purchase a good U320 RAID card. Any suggestions 
for ones which run well under Linux?

Not sure if it works with Linux, but under FreeBSD 5, the LSI MegaRAID 
cards are well supported.  You should be able to pick up a 320-2X with 
128MB of battery-backed cache for about $1k.  Wicked fast... I'm surprised 
you didn't go for the 15k RPM drives for a small extra cost.

Wow, okay, so I'm not sure where everyone's email went, but I got 
over a week's worth of list emails at once.

Several of you have sent me requests asking where we purchased our 
systems. Compsource was the vendor, www.c-source.com or 
www.compsource.com. The sales rep we have is Steve Taylor, or you 
can talk to the sales manager, Tom. I've bought hardware from them 
for the last 2 years and I've been very pleased. I'm sorry I wasn't able 
to respond sooner.

Steve, the LSI MegaRAID cards are where it's at. I've had -great- luck 
with them over the years. There were a few weird problems with a series 
a while back where the Linux driver needed to be tweaked by the developers 
along with a new BIOS update. The 320 series is just as Vivek said: 
wicked fast. Very strong cards. Be sure, though, when you order it to 
specify the battery backup either with it, or make sure you buy the 
right one for it. There are a couple of options for battery-backed cache 
on the cards that can trip you up.

Good luck with your systems!   Now that I've got my email problems 
resolved, I'm definitely more than happy to give any information you all 
need.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [PERFORM] Delete query takes exorbitant amount of time

2005-04-02 Thread Karim A Nassar
> Well, based upon the evidence so far, the Optimizer got it right:

Agreed. So this means that the answer to my original question is that the
delete is just going to take a long time?

It seems that there is still something wrong. From what I can tell from
everyone's questions, the FK constraint on measurement is causing multiple
seq scans for each value deleted from int_sensor_meas_type. However, when
deleting a single value, the FK check should use the index, so my ~190
deletes *should* be fast, no?
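
As I understand it, the check fired for each deleted value is roughly the
following query (a sketch; the exact text the RI trigger uses may differ),
and it should be able to use the index on measurement.id_int_sensor_meas_type:

  -- sketch of the per-value FK check
  SELECT 1 FROM measurement
  WHERE id_int_sensor_meas_type = $1 FOR UPDATE;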

>  IndexScan, value=1   elapsed=29ms   cost=883881

190 * 29ms is much less than 40 minutes. What am I missing here?
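(For scale: 190 * 29ms is about 5.5 seconds, nowhere near 40 minutes.)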


> Karim,
> Please do:
>
>   select id_int_sensor_meas_type, count(*)
>   from measurement
>   group by id_int_sensor_meas_type
>   order by count(*) desc;

 id_int_sensor_meas_type | count
-------------------------+--------
                      31 | 509478
                      30 | 509478
                     206 | 509478
                     205 | 509478
                     204 | 509478
                      40 | 509478
                      39 | 509478
                     197 | 509478
                      35 | 509478
                      34 | 509478
                      33 | 509478
                      32 | 509478
                      41 | 509477

This sample dataset has 13 measurement types from a weather station over 3
years, hence the even distribution.


Continued thanks,
--
Karim Nassar
Department of Computer Science
Box 15600, College of Engineering and Natural Sciences
Northern Arizona University,  Flagstaff, Arizona 86011
Office: (928) 523-5868 -=- Mobile: (928) 699-9221





---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [PERFORM] Delete query takes exorbitant amount of time

2005-04-02 Thread Karim A Nassar
On Mon, 28 Mar 2005, Stephan Szabo wrote:
> On Mon, 28 Mar 2005, Simon Riggs wrote:
>> run the EXPLAIN after doing
>>   SET enable_seqscan = off

...

> I think you have to prepare with enable_seqscan=off, because it
> affects how the query is planned and prepared.

orfs=# SET enable_seqscan = off;
SET
orfs=# PREPARE test2(int) AS SELECT 1 from measurement where
orfs-# id_int_sensor_meas_type = $1 FOR UPDATE;
PREPARE
orfs=#  EXPLAIN ANALYZE EXECUTE TEST2(1); -- non-existent

                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using measurement__id_int_sensor_meas_type_idx on measurement
   (cost=0.00..883881.49 rows=509478 width=6)
   (actual time=29.207..29.207 rows=0 loops=1)
   Index Cond: (id_int_sensor_meas_type = $1)
 Total runtime: 29.277 ms
(3 rows)

orfs=#  EXPLAIN ANALYZE EXECUTE TEST2(197); -- existing value

                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using measurement__id_int_sensor_meas_type_idx on measurement
   (cost=0.00..883881.49 rows=509478 width=6)
   (actual time=12.903..37478.167 rows=509478 loops=1)
   Index Cond: (id_int_sensor_meas_type = $1)
 Total runtime: 38113.338 ms
(3 rows)

--
Karim Nassar
Department of Computer Science
Box 15600, College of Engineering and Natural Sciences
Northern Arizona University,  Flagstaff, Arizona 86011
Office: (928) 523-5868 -=- Mobile: (928) 699-9221

 



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [PERFORM] Query Optimizer Failure / Possible Bug

2005-04-02 Thread Hannes Dorbath
Hm, a few days and not a single reply :|
Any more information needed? Test data? A simplified test case? Anything?
Thanks
Hannes Dorbath wrote:
The query and the corresponding EXPLAIN are at
http://hannes.imos.net/query.txt
I'd like to use the column q.replaced_serials for multiple calculations
in the SELECT clause, but every time it is referenced there in some way,
the whole query in the FROM clause that returns q is executed again.
This doesn't make sense to me at all and eats performance.
If this wasn't clear enough: for every
q.replaced_serials insert_random_calculation AS some_column
in the SELECT clause there is a new block of
---------------------------------------------------------------------
 ->  Aggregate  (cost=884.23..884.23 rows=1 width=0)
       ->  Nested Loop  (cost=0.00..884.23 rows=1 width=0)
             ->  Index Scan using ix_rma_ticket_serials_replace on
                 rma_ticket_serials rts  (cost=0.00..122.35 rows=190 width=4)
                   Index Cond: (replace = false)
             ->  Index Scan using pk_serials on serials s
                   (cost=0.00..3.51 rows=1 width=4)
                   Index Cond: (s.serial_id = "outer".serial_id)
                   Filter: ((article_no = $0) AND (delivery_id = $1))
---------------------------------------------------------------------
in the EXPLAIN result.
For those who wonder why I do this FROM (SELECT ...): I was searching for
a way to use the result of a subselect for multiple calculations in the
SELECT clause and return those calculation results as individual columns.
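To illustrate, the shape I mean is roughly this (heavily simplified; the
real query and EXPLAIN are at the URL above, and the calculations here are
made up):

   SELECT q.replaced_serials + 1 AS calc_one,   -- made-up calculation
          q.replaced_serials * 2 AS calc_two    -- made-up calculation
   FROM (SELECT count(*) AS replaced_serials
         FROM rma_ticket_serials rts
         WHERE rts.replace = false) q;
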
I tested a bit further and found out that PG behaves the same when q
is a view. This makes me wonder how efficiently the optimizer can work
with views - or even worse - nested views.
Tested and reproduced on PG 7.4.1 linux and 8.0.0 win32.
Thanks in advance,
Hannes Dorbath
--
imos  Gesellschaft fuer Internet-Marketing und Online-Services mbH
Alfons-Feifel-Str. 9 // D-73037 Goeppingen  // Stauferpark Ost
Tel: 07161 93339-14 // Fax: 07161 93339-99 // Internet: www.imos.net
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[PERFORM] coalesce alternative

2005-04-02 Thread ALİ ÇELİK
I have used the coalesce function for null fields, but coalesce is too slow.
I need a fast alternative to coalesce.
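
For example, what I am doing is roughly this (the table and column names
here are placeholders, not my real schema):

   SELECT coalesce(phone_mobile, phone_home, 'unknown') AS phone  -- placeholder names
   FROM customers;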

ALİ ÇELİK



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] How to improve db performance with $7K?

2005-04-02 Thread Dave Cramer
Yeah, 35MB/sec is slow for a RAID controller; the 3ware mirrored is 
about 50MB/sec, and striped is about 100MB/sec.

Dave
PFC wrote:

> With hardware tuning, I am sure we can do better than 35MB per sec. Also

WTF?
My laptop does 19 MB/s (reading 10 KB files, reiser4)!
A recent desktop 7200rpm IDE drive:
# hdparm -t /dev/hdc1
/dev/hdc1:
 Timing buffered disk reads:  148 MB in  3.02 seconds =  49.01 MB/sec
# ll "DragonBall 001.avi"
-r--r--r--  1 peufeu  users  218M mar  9 20:07 DragonBall 001.avi

# time cat "DragonBall 001.avi" > /dev/null
real    0m4.162s
user    0m0.020s
sys     0m0.510s
(the file was not in the cache)
= about 52 MB/s (reiser3.6)
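(That's the 218 MB file divided by the 4.162 s elapsed: 218 / 4.162 ≈ 52 MB/s.)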
So, you have a problem with your hardware...

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org