[PERFORM] Help with extracting large volumes of records across related tables

2004-09-13 Thread Damien Dougan
Hi All, I am having a performance problem extracting a large volume of data from Postgres 7.4.2, and was wondering if there was a more cunning way to get the data out of the DB... This isn't a performance problem with any particular PgSQL operation; it's more a strategy for getting large volumes…
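The replies in this thread pointed at client-side batching rather than one huge fetch. A minimal sketch of that batched-fetch pattern follows; the helper and stub names are illustrative, and a real extraction would use a DB-API server-side (named) cursor so the server, not the client, holds the result set:

```python
def fetch_in_batches(cur, batch_size=10000):
    """Yield rows from a DB-API cursor in fixed-size batches, so the
    client never materializes the whole result set in memory."""
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        yield from rows

# Stand-in cursor so the sketch runs without a database; in practice
# this would be a server-side cursor from a Postgres driver.
class FakeCursor:
    def __init__(self, rows):
        self._rows, self._pos = list(rows), 0

    def fetchmany(self, n):
        chunk = self._rows[self._pos:self._pos + n]
        self._pos += n
        return chunk

batches = list(fetch_in_batches(FakeCursor(range(25)), batch_size=10))
assert batches == list(range(25))
```

The point of the stub is only to show the control flow: `fetchmany` is called repeatedly until it returns an empty list, so memory use is bounded by `batch_size` regardless of the table size.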

Re: [PERFORM] Help with extracting large volumes of records across related tables

2004-09-13 Thread Damien Dougan
Pierre-Frederic, Paul, Thanks for your fast response (especially for the python code and performance figure) - I'll chase this up as a solution - looks most promising! Cheers, Damien

[PERFORM] Index Performance Help

2004-02-05 Thread Damien Dougan
… on mc_actor (cost=0.00..3.02 rows=1 width=39) (actual time=0.001..0.001 rows=0 loops=1) Index Cond: ("outer".mc_parentactor_id = mc_actor.id) Total runtime: 0.428 ms (15 rows) Many thanks, Damien -- Damien Dougan

Re: [PERFORM] Index Performance Help

2004-02-05 Thread Damien Dougan
Thanks Richard. It certainly does appear to be memory related (on a smaller data set of 250K subscribers, all accesses are 1ms). We're going to play with increasing RAM on the machine, and applying the optimisation levels on the page you recommended. (We're also running on a hardware RAID…
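For context, the memory-related settings most often tuned for this kind of problem on 7.4 are sketched below; the values are illustrative placeholders, not recommendations, and 7.4 expresses `shared_buffers` and `effective_cache_size` in 8 KB pages and `sort_mem` in KB:

```
# postgresql.conf (7.4-era parameter names; values illustrative)
shared_buffers = 10000          # ~80 MB of shared buffer cache (8 KB pages)
sort_mem = 8192                 # per-sort working memory, in KB
effective_cache_size = 100000   # planner's estimate of OS cache, in 8 KB pages
```

Raising `effective_cache_size` in particular tells the planner that index pages are likely already cached, which favours index scans like the one shown in the earlier plan.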

Re: [PERFORM] Very Poor Insert Performance

2003-10-29 Thread Damien Dougan
On Wednesday 29 October 2003 2:23 pm, Tom Lane wrote: "Your initial message stated plainly that the problem was in INSERTs; it's not surprising that you got unhelpful advice." But perhaps my use of the term "insert" to describe "upload" was a very bad call given the domain of the list... I assure…
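The usual cure suggested on this list for slow row-at-a-time uploads is `COPY ... FROM STDIN` rather than many `INSERT`s. A small sketch of serializing rows into COPY's default tab-separated text format (the helper name is illustrative; `\N` is COPY's NULL marker, and backslash, tab, and newline must be escaped):

```python
def rows_to_copy_text(rows):
    """Serialize rows into the tab-separated text format accepted by
    COPY ... FROM STDIN, escaping the characters COPY treats specially."""
    def esc(v):
        if v is None:
            return r'\N'                      # COPY's NULL marker
        s = str(v)
        s = s.replace('\\', '\\\\')           # escape backslash first
        s = s.replace('\t', r'\t').replace('\n', r'\n')
        return s
    return ''.join('\t'.join(esc(v) for v in row) + '\n' for row in rows)

payload = rows_to_copy_text([(1, 'alice'), (2, None)])
assert payload == '1\talice\n2\t\\N\n'
```

Feeding one such payload through a single COPY is typically far faster than issuing a separate INSERT per row, since each row avoids its own parse/plan/commit overhead.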