Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-25 Thread Robert Haas
On Wed, May 12, 2010 at 1:45 AM, venu madhav <venutaurus...@gmail.com> wrote: [Venu] Yes, autovacuum is running every hour; I can see it in the log messages. All the autovacuum settings are disabled except the one that makes it run every hour. This application runs on an embedded box, so
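
What an "hourly" autovacuum setup typically looks like in postgresql.conf is sketched below; the thread never shows Venu's actual settings, so this excerpt is an assumption:

    # Hypothetical postgresql.conf excerpt (not Venu's actual settings).
    # autovacuum_naptime only controls how often the launcher checks tables;
    # the per-table thresholds below still decide whether a vacuum actually runs.
    autovacuum = on
    autovacuum_naptime = 3600            # seconds: wake up roughly once an hour
    #autovacuum_vacuum_threshold = 50
    #autovacuum_vacuum_scale_factor = 0.2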

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-14 Thread venu madhav
On Wed, May 12, 2010 at 3:17 AM, Jorge Montero <jorge_mont...@homedecorators.com> wrote: First, are you sure you are getting autovacuum to run hourly? Autovacuum will only vacuum when certain configuration thresholds are reached. You can set it to only check for those thresholds every so
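
The thresholds Jorge mentions follow the standard autovacuum trigger rule from the PostgreSQL documentation (not something spelled out in this thread); roughly:

    -- A table is autovacuumed only when its dead-tuple count exceeds
    --   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
    -- With the defaults (50 and 0.2) a 10M-row table needs about
    --   50 + 0.2 * 10,000,000 = 2,000,050
    -- dead rows before autovacuum touches it, no matter how often it wakes up.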

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-14 Thread venu madhav
On Wed, May 12, 2010 at 3:22 AM, Shrirang Chitnis <shrirang.chit...@hovservices.com> wrote: Venu, For starters, 1) You have used e.cid twice in the ORDER BY clause. [Venu] Actually the second cid acts as a secondary sort key when any other column in the table is used for sorting. In the
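
A sketch of the query template Venu seems to be describing; the table and column names are taken from the query quoted elsewhere in the thread, but the template itself is an assumption:

    -- User sorts by some other column: cid is appended as a tiebreaker.
    SELECT e.cid, e.timestamp FROM event e
     ORDER BY e.timestamp DESC, e.cid DESC
     LIMIT 20;

    -- User sorts by cid itself: the appended key duplicates the first one,
    -- which is harmless but redundant.
    SELECT e.cid, e.timestamp FROM event e
     ORDER BY e.cid DESC, e.cid DESC
     LIMIT 20;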

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-14 Thread venu madhav
On Wed, May 12, 2010 at 5:25 PM, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote: venu madhav wrote: AND e.timestamp >= '1270449180' AND e.timestamp < '1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21 offset 10539780 The second column acts as a secondary key for sorting

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-14 Thread venu madhav
On Wed, May 12, 2010 at 7:26 PM, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote: venu madhav <venutaurus...@gmail.com> wrote: If the records are more in the interval, How do you know that before you run your query? I calculate the count first. This and other comments suggest that
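
"I calculate the count first" presumably means a separate count over the same predicate before the paged query; a guess at its shape, using the range from the quoted statement:

    SELECT count(*)
      FROM event e
     WHERE e.timestamp >= '1270449180'
       AND e.timestamp <  '1273473180';
    -- With millions of qualifying rows this count is itself a scan of the whole
    -- range, so it can cost about as much as the page fetch it supports.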

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-13 Thread Kevin Grittner
venu madhav <venutaurus...@gmail.com> wrote: Kevin Grittner <kevin.gritt...@wicourts.gov> wrote: I calculate the count first. This and other comments suggest that the data is totally static while this application is running. Is that correct? No, the data gets added when the application is

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-12 Thread Kevin Grittner
venu madhav wrote: AND e.timestamp >= '1270449180' AND e.timestamp < '1273473180' ORDER BY e.cid DESC, e.cid DESC limit 21 offset 10539780 The second column acts as a secondary key for sorting if the primary sorting key is a different column. For this query both of them are the same.

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-12 Thread Kevin Grittner
venu madhav <venutaurus...@gmail.com> wrote: If the records are more in the interval, How do you know that before you run your query? I calculate the count first. This and other comments suggest that the data is totally static while this application is running. Is that correct? If

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-12 Thread Craig James
On 5/12/10 4:55 AM, Kevin Grittner wrote: venu madhav wrote: we display in sets of 20/30 etc. The user also has the option to browse through any of those records, hence the limit and offset. Have you considered alternative techniques for paging? You might use values at the edges of the page
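
"Values at the edges of the page" is the keyset-pagination idea; a sketch using the thread's column names, with an invented boundary value:

    -- Instead of OFFSET n, remember the cid at the bottom edge of the page the
    -- user is looking at and seek past it (10501234 is made up for illustration):
    SELECT e.cid, e.timestamp
      FROM event e
     WHERE e.timestamp >= '1270449180'
       AND e.timestamp <  '1273473180'
       AND e.cid < 10501234
     ORDER BY e.cid DESC
     LIMIT 20;
    -- Given an index on cid, the server can walk backward from the boundary and
    -- stop after about 20 rows instead of producing and discarding millions.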

[PERFORM] Performance issues when the number of records are around 10 Million

2010-05-11 Thread venu madhav
Hi all, In my database application, I have a table whose records can reach 10M, and insertions can happen at a fast rate, around 100 insertions per second at peak times. I configured postgres to do auto vacuum on an hourly basis. I have a frontend GUI application in CGI which displays the data

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-11 Thread Kevin Grittner
venu madhav <venutaurus...@gmail.com> wrote: When I try to get the last twenty records from the database, it takes around 10-15 mins to complete the operation. Making this a little easier to read (for me, at least) I get this: select e.cid, timestamp, s.sig_class, s.sig_priority,
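
Reassembled from the fragments quoted across this thread, the statement Kevin is reformatting looks roughly like the following (an approximation, not a verbatim copy):

    SELECT e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
           e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
           e.wifi_addr_2, e.view_status, bssid
      FROM event e, signature s
     WHERE s.sig_id = e.signature
       AND e.timestamp >= '1270449180'
       AND e.timestamp <  '1273473180'
     ORDER BY e.cid DESC, e.cid DESC
     LIMIT 21 OFFSET 10539780;
    -- The server has to generate and throw away 10,539,780 ordered rows before
    -- returning the 21 requested ones, which is where the 10-15 minutes go.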

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-11 Thread Shrirang Chitnis
Venu, For starters, 1) You have used e.cid twice in the ORDER BY clause. 2) If you want the last twenty records in the table matching the timestamp criteria, why do you need the offset? 3) Do you have indexes on the sig_id, signature, and timestamp fields? If you do not get a good response after
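
The indexes Shrirang is asking about would look something like this (index names are invented; whether any of them already exist on Venu's system is not stated here):

    CREATE INDEX event_timestamp_idx ON event (timestamp);
    CREATE INDEX event_signature_idx ON event (signature);
    CREATE INDEX signature_sig_id_idx ON signature (sig_id);  -- often covered by the primary key
    -- EXPLAIN ANALYZE of the slow query would show whether the planner uses them.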

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-11 Thread Josh Berkus
* select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name, e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1, e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp < '1273473180' ORDER BY