Hi All,

We've been experiencing extremely poor batch upload performance on our 
Postgres 7.3 (and 7.3.4) database, and I've not been able to improve matters 
significantly using any of the suggestions I've gleaned from the mailing list 
archives ... so I was wondering if anyone with a bigger brain in this area 
could help :)

Our batch upload performs a number of stored procedure calls to insert data 
into the database. Initially this gives quite good performance, but it rapidly 
spirals down to approximately 1 operation per second after a few minutes.

I've got a script that generates stored procedure calls to upload test input 
data, and the script is capable of inserting BEGIN and END at different 
intervals, together with VACUUM ANALYZE commands as well.
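To make the setup concrete, here is a minimal sketch of the kind of generator script described above. The stored procedure names are placeholders, not our real schema, and the interval parameters are just examples; note that VACUUM cannot run inside a transaction block, so the vacuum interval should be a multiple of the commit interval:

```python
def generate_upload_sql(num_ops, commit_every=25, vacuum_every=100):
    """Emit SQL for num_ops operations (5 SP calls each), with
    BEGIN/COMMIT every commit_every operations and a VACUUM ANALYZE
    every vacuum_every operations (after the enclosing COMMIT)."""
    lines = []
    for op in range(num_ops):
        if op % commit_every == 0:
            lines.append("BEGIN;")
        for sp in range(5):  # each operation is 5 stored-procedure calls
            lines.append("SELECT insert_record_%d(%d);" % (sp, op))
        if (op + 1) % commit_every == 0 or op == num_ops - 1:
            lines.append("COMMIT;")
        if (op + 1) % vacuum_every == 0:
            lines.append("VACUUM ANALYZE;")
    return "\n".join(lines)
```

The output can then be piped straight into psql to reproduce the test run.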

I've tried varying the commit interval from every operation to every 5, 10, 
25 and 100 operations (again, each operation is 5 SP calls) without any 
noticeable improvement. Likewise, I've varied the VACUUM ANALYZE interval 
from every 50 to every 100 operations - again without any measurable 
improvement.

top reports that CPU usage is pretty constant at 99%, and there is 
approximately 1GB of free physical memory available to the OS (with 
approximately 1GB of physical memory in use).

I've been running postmaster with fsync switched off.

I also tried running with shared buffers at the default (64), then 128, 256, 
512 and even 1024. Again, with no measurable change.

The typical metrics are (completed operations; each requires 5 SP calls):

1 min: 1036 (1036 operations)
2 min: 1426 (390 operations)
3 min: 1756 (330 operations)
4 min: 2026 (270 operations)
5 min: 2266 (240 operations)
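The per-interval figures in parentheses are just the deltas of the cumulative totals, which makes the steady decline easy to see:

```python
# Cumulative completed operations at each minute, from the figures above.
cumulative = [1036, 1426, 1756, 2026, 2266]

# Per-minute throughput is the difference between successive totals.
per_minute = [cumulative[0]] + [
    b - a for a, b in zip(cumulative, cumulative[1:])
]
print(per_minute)  # [1036, 390, 330, 270, 240]
```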

When left running, it's not too long before the code snails to 1 operation 
per second.

Has anyone any ideas as to what could be causing the spiraling performance?

With approximately 20,000 operations committed in the database, it takes 
about 1 minute to upload a dump of the database. Unfortunately, we cannot use 
the COPY command to upload brand new data - it really has to go through the 
stored procedures to ensure relationships and data integrity across the 
schema (it would be very difficult to develop and maintain code to generate 
COPY commands for inserting new data). And whilst I appreciate that INSERTs 
are inherently slower than COPY, I was hoping for something significantly 
faster than the 1 operation/second that things fairly quickly descend to...

Thanks for any advice!


