Re: [PERFORM] Slow Bulk Delete

2010-05-12 Thread Bob Lunney
Thilo,

Just a few thoughts off the top of my head:

1. If you know the ids of the rows you want to delete beforehand, insert them into a table, then run the delete based on a join with this table.
2. Better yet, insert the ids into a table using COPY, then use a join to create a new table wit
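A minimal sketch of the COPY-then-join approach described above; the names (big_table, ids_to_delete, id) and the /tmp/ids.csv file are hypothetical, not from the thread:

    BEGIN;
    -- Stage the ids to delete; COPY bulk-loads far faster than
    -- row-by-row INSERTs.
    CREATE TEMP TABLE ids_to_delete (id bigint PRIMARY KEY);
    COPY ids_to_delete (id) FROM '/tmp/ids.csv' WITH CSV;
    -- Delete via a join instead of a huge IN (...) list.
    DELETE FROM big_table b
    USING ids_to_delete d
    WHERE b.id = d.id;
    COMMIT;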

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-12 Thread Craig James
On 5/12/10 4:55 AM, Kevin Grittner wrote:
> venu madhav wrote:
>> we display in sets of 20/30 etc. The user also has the option to browse through any of those records hence the limit and offset.
> Have you considered alternative techniques for paging? You might use values at the edges of the page t
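A sketch of the keyset ("values at the edges of the page") technique Kevin is alluding to; the table and column names (event, cid) are assumptions for illustration:

    -- Instead of OFFSET, remember the last cid shown on the previous
    -- page and seek past it; an index on cid lets the scan start at
    -- the right row instead of counting and discarding skipped rows.
    SELECT *
    FROM event
    WHERE cid < 1234567        -- last cid from the previous page
    ORDER BY cid DESC
    LIMIT 20;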

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-12 Thread Kevin Grittner
venu madhav wrote:
>>> If the records are more in the interval,
>> How do you know that before you run your query?
> I calculate the count first.

This and other comments suggest that the data is totally static while this application is running. Is that correct?

> If generate all the

Re: [PERFORM] Performance issues when the number of records are around 10 Million

2010-05-12 Thread Kevin Grittner
venu madhav wrote:
>>> AND e.timestamp >= '1270449180'
>>> AND e.timestamp < '1273473180'
>>> ORDER BY
>>> e.cid DESC,
>>> e.cid DESC
>>> limit 21
>>> offset 10539780
> The second column acts as a secondary key for sorting if the primary sorting key is a different column.

For this query bot
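The duplicated e.cid in the ORDER BY adds nothing as a tiebreaker. One hedged rewrite (the table name, the timestamp tiebreaker, and the index are assumptions, not from the thread) gives the planner an index that matches the sort order, so it can stream rows in order rather than sort millions of them. Even then, OFFSET 10539780 still forces the server to generate and discard over ten million rows, which is why the keyset approach sketched earlier is the real fix:

    -- Assumed index matching the sort order (PostgreSQL 8.3+ syntax):
    CREATE INDEX event_cid_ts_idx ON event (cid DESC, timestamp DESC);

    -- First page only; later pages should seek on (cid, timestamp)
    -- values from the previous page rather than use OFFSET.
    SELECT *
    FROM event e
    WHERE e.timestamp >= '1270449180'
      AND e.timestamp < '1273473180'
    ORDER BY e.cid DESC, e.timestamp DESC
    LIMIT 21;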