Re: [PERFORM] Function scan/Index scan to nested loop

2010-05-11 Thread Scott Marlowe
On Mon, May 10, 2010 at 11:32 PM, Carlo Stonebanks stonec.regis...@sympatico.ca wrote: Hello all, A query run twice in succession performs VERY poorly the first time as it iterates through the nested loop. The second time, it rips. Please see SQL, SLOW PLAN and FAST PLAN below. This is

Re: [PERFORM] Function scan/Index scan to nested loop

2010-05-11 Thread Craig Ringer
On 11/05/10 13:32, Carlo Stonebanks wrote: Hello all, A query run twice in succession performs VERY poorly the first time as it iterates through the nested loop. The second time, it rips. Please see SQL, SLOW PLAN and FAST PLAN below. I haven't looked at the details, but the comment you

Re: [PERFORM] Function scan/Index scan to nested loop

2010-05-11 Thread Carlo Stonebanks
Thanks Scott, This is almost always due to caching. First time the data aren't in the cache, second time they are. I had assumed that it was caching, but I don't know from where because of the inexplicable delay. Hardware? O/S (Linux)? DB? From the function, which is IMMUTABLE? I am

[PERFORM] Slow Bulk Delete

2010-05-11 Thread thilo
Hi all! We moved from MySQL to Postgresql for some of our projects. So far we're very impressed with the performance (especially INSERTs and UPDATEs), except for a strange problem with the following bulk delete query: DELETE FROM table1 WHERE table2_id = ? I went through these Wiki pages,
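A common cause of slow bulk deletes keyed on a foreign-key-style column is a missing index on that column, which turns each DELETE into a full-table scan. A minimal sketch of the idea (hypothetical schema, using Python's sqlite3 purely as a stand-in for a PostgreSQL client; on PostgreSQL the equivalent fix is `CREATE INDEX ... ON table1 (table2_id)`):

```python
import sqlite3

# Hypothetical schema modelled on the query in the post:
#   DELETE FROM table1 WHERE table2_id = ?
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY, table2_id INTEGER)")
conn.executemany(
    "INSERT INTO table1 (table2_id) VALUES (?)",
    [(i % 100,) for i in range(10_000)],
)

# Without this index, the DELETE below must scan every row;
# with it, the database can jump straight to the matching rows.
conn.execute("CREATE INDEX idx_table1_table2_id ON table1 (table2_id)")

cur = conn.execute("DELETE FROM table1 WHERE table2_id = ?", (42,))
print(cur.rowcount)  # 100 rows matched and removed
```

On PostgreSQL, per-row foreign-key trigger checks on referencing tables are another frequent culprit; indexing the referencing columns addresses that case too.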

Re: [PERFORM] 8K recordsize bad on ZFS?

2010-05-11 Thread Jignesh Shah
On Fri, May 7, 2010 at 8:09 PM, Josh Berkus j...@agliodbs.com wrote: Jignesh, All: Most of our Solaris users have been, I think, following Jignesh's advice from his benchmark tests to set ZFS page size to 8K for the data zpool. However, I've discovered that this is sometimes a serious

Re: [PERFORM] Slow Bulk Delete

2010-05-11 Thread thilo
Hi Andy! Thanks a lot for your hints! Indeed the problem was on my side. Some Hibernate tuning solved the problem (and I was able to speed up the query using IN). The real problem was that Hibernate uses unprepared queries if you create a native query, but prepares the query if you use JP-QL
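The Hibernate-specific detail aside, the underlying point is language-agnostic: bind parameters instead of interpolating values into the SQL text, so the driver or server can prepare and reuse a plan instead of re-parsing each call. A small illustration of the bound-parameter IN (...) pattern the poster used, again with Python's sqlite3 as a stand-in client:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY, table2_id INTEGER)")
conn.executemany(
    "INSERT INTO table1 (table2_id) VALUES (?)",
    [(i,) for i in range(10)],
)

# Build one placeholder per value, then bind the values -- the SQL text
# stays constant for a given list size, and no values are spliced in.
ids = [1, 2, 3]
placeholders = ",".join("?" for _ in ids)
cur = conn.execute(
    f"DELETE FROM table1 WHERE table2_id IN ({placeholders})", ids
)
print(cur.rowcount)  # 3
```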

[PERFORM] Performance issues when the number of records are around 10 Million

2010-05-11 Thread venu madhav
Hi all, In my database application, I've a table whose records can reach 10M, and insertions can happen at a high rate, around 100 insertions per second at peak times. I configured postgres to run autovacuum on an hourly basis. I have a frontend GUI application in CGI which displays the data

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-11 Thread Kevin Grittner
venu madhav venutaurus...@gmail.com wrote: When I try to get the last twenty records from the database, it takes around 10-15 mins to complete the operation. Making this a little easier to read (for me, at least) I get this: select e.cid, timestamp, s.sig_class, s.sig_priority,

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-11 Thread Shrirang Chitnis
Venu, For starters: 1) You have used e.cid twice in the ORDER BY clause. 2) If you want the last twenty records in the table matching the timestamp criteria, why do you need the offset? 3) Do you have indexes on the sig_id, signature, and timestamp fields? If you do not get a good response after
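Points 2) and 3) go together: fetching the "last twenty" rows via a large OFFSET forces the database to walk and discard every earlier row, whereas an index on the sort column plus ORDER BY ... DESC LIMIT 20 reads only what is needed. A sketch of that pattern over a hypothetical cut-down version of the event table (sqlite3 as a stand-in; the same SQL shape applies on PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (cid INTEGER PRIMARY KEY, timestamp INTEGER)")
conn.executemany(
    "INSERT INTO event (timestamp) VALUES (?)",
    [(1270449180 + i,) for i in range(1000)],
)

# An index on the sort/filter column lets the LIMIT 20 below be satisfied
# by reading the top of the index instead of sorting the whole range.
conn.execute("CREATE INDEX idx_event_timestamp ON event (timestamp)")

rows = conn.execute(
    "SELECT cid, timestamp FROM event "
    "WHERE timestamp >= ? AND timestamp < ? "
    "ORDER BY timestamp DESC LIMIT 20",
    (1270449180, 1273473180),
).fetchall()
print(len(rows))  # 20 newest rows in the range, newest first
```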

Re: [PERFORM] 8K recordsize bad on ZFS?

2010-05-11 Thread Josh Berkus
Sure, but bulk load + random selects is going to *guarantee* fragmentation on a COW system (like ZFS, BTRFS, etc.) as the selects start to write out all the hint-bit-dirtied blocks in random order... i.e. it doesn't take long to make an originally nicely contiguous block random I'm

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-11 Thread Josh Berkus
* select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1, e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp < '1273473180' ORDER BY