Pierre-Frederic, Paul,
Thanks for your fast response (especially for the python code and
performance figure) - I'll chase this up as a solution - looks most
promising!
Cheers,
Damien
Hi All,
I am having a performance problem extracting a large volume of data from
Postgres 7.4.2, and was wondering if there was a more cunning way to get
the data out of the DB...
This isn't a performance problem with any particular PgSQL operation;
it's more a strategy question for getting large volumes of data out efficiently.
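In case a concrete shape helps: one standard way to keep client memory flat when pulling a large result set is a fetchmany loop rather than a single fetchall. The sketch below is illustrative, not the code Pierre-Frederic posted; the `fetchmany` callable stands in for a real DB-API cursor's `fetchmany` method.

```python
def fetch_in_chunks(fetchmany, chunk_size=10000):
    """Yield rows one at a time, pulling chunk_size rows per call.

    fetchmany mimics DB-API cursor.fetchmany: it returns a list of
    rows, and an empty list once the result set is exhausted.
    """
    while True:
        rows = fetchmany(chunk_size)
        if not rows:          # empty list => no more rows
            return
        yield from rows       # hand rows out one by one
```

With a real driver this would be `fetch_in_chunks(cur.fetchmany)`, so only `chunk_size` rows ever sit in client memory at once.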
Thanks Richard.
It certainly does appear to be memory related (on a smaller data set of
250K subscribers, all accesses are < 1ms).
We're going to play with increasing RAM on the machine, and applying the
optimisation levels on the page you recommended.
(We're also running on a hardware RAID controller.)
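For anyone following the thread, these are the 7.x-era settings we're planning to experiment with first. The values below are purely illustrative for a machine of this size, not recommendations from the list:

```ini
# postgresql.conf -- illustrative values only
shared_buffers = 8192          # in 8KB pages (= 64MB); raise along with RAM
sort_mem = 8192                # per-sort working memory in KB (7.x name)
effective_cache_size = 65536   # hint to the planner about OS cache size
```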
   ->  Index Scan using mc_actor_id_idx on mc_actor  (cost=0.00..3.02
         rows=1 width=39) (actual time=0.001..0.001 rows=0 loops=1)
         Index Cond: ("outer".mc_parentactor_id = mc_actor.id)
 Total runtime: 0.428 ms
(15 rows)
Many thanks,
Damien
On Wednesday 29 October 2003 2:23 pm, Tom Lane wrote:
> Your initial message stated plainly that the problem was in INSERTs;
> it's not surprising that you got unhelpful advice.
But perhaps my use of the term "insert" to describe upload was a very bad call
given the domain of the list...
I assu
On Monday 27 October 2003 8:12 pm, Tom Lane wrote:
> Damien Dougan <[EMAIL PROTECTED]> writes:
> > Has anyone any ideas as to what could be causing the spiraling
> > performance?
>
> You really haven't provided any information that would allow anything
> but guesses.
Hi All,
We've been experiencing extremely poor batch upload performance on our
Postgres 7.3 (and 7.3.4) database, and I've not been able to improve matters
significantly using any suggestions I've gleaned from the mailing list
archives ... so I was wondering if anyone with a bigger brain in this
area could help!
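The usual first answer for slow batch uploads is to replace row-by-row INSERTs with COPY. A minimal sketch of the client side, assuming Python: serialise the rows into the tab-delimited text format that `COPY ... FROM STDIN` accepts, then feed the buffer to the driver (e.g. psycopg2's `cursor.copy_from`). The helper below handles NULLs but, for brevity, not tabs/newlines embedded in values.

```python
import io

def rows_to_copy_buffer(rows):
    """Serialise rows into PostgreSQL COPY text format:
    one line per row, tab-separated columns, \\N for NULL.

    Note: values containing tabs, newlines, or backslashes would
    need escaping; this sketch omits that for clarity.
    """
    buf = io.StringIO()
    for row in rows:
        line = "\t".join("\\N" if v is None else str(v) for v in row)
        buf.write(line + "\n")
    buf.seek(0)  # rewind so the driver can read from the start
    return buf
```

With psycopg2 this would be used as `cur.copy_from(rows_to_copy_buffer(rows), "mytable")` inside a single transaction, which avoids the per-statement overhead of individual INSERTs.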