Hi!

I need to insert 500,000 records into a table frequently. It's a bulk insertion from my application.
I am getting very poor performance: PostgreSQL inserts very fast up to around tuple 200,000, and after that the insertion becomes really slow.
Looking at the log, I see a lot of transaction log activity, something like:


2004-12-04 11:08:59 LOG:  recycled transaction log file "0000000600000012"
2004-12-04 11:08:59 LOG:  recycled transaction log file "0000000600000013"
2004-12-04 11:08:59 LOG:  recycled transaction log file "0000000600000011"
2004-12-04 11:14:04 LOG:  recycled transaction log file "0000000600000015"
2004-12-04 11:14:04 LOG:  recycled transaction log file "0000000600000014"
2004-12-04 11:19:08 LOG:  recycled transaction log file "0000000600000016"
2004-12-04 11:19:08 LOG:  recycled transaction log file "0000000600000017"
2004-12-04 11:24:10 LOG:  recycled transaction log file "0000000600000018"

How can I configure PostgreSQL to get better performance on these bulk insertions? I have already increased the memory-related settings.
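To make the question concrete, here is a rough sketch of what the load amounts to (hypothetical table and column names, not my real schema):

```sql
-- Hypothetical table; my real schema differs.
CREATE TABLE items (
    id      integer,
    name    text
);

-- Pattern A: row-by-row INSERTs, batched inside one transaction
-- so each row does not pay the cost of its own commit.
BEGIN;
INSERT INTO items (id, name) VALUES (1, 'first');
INSERT INTO items (id, name) VALUES (2, 'second');
-- ... roughly 500,000 rows ...
COMMIT;

-- Pattern B: COPY from a file, which is usually much faster
-- than individual INSERTs for bulk loads.
COPY items (id, name) FROM '/tmp/items.dat';
```

Would moving the application toward something like COPY help here, or is this purely a configuration problem?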

My data:
Conectiva linux kernel 2.6.9
PostgreSQL 7.4.6 - 1.5 GB memory
max_connections = 30
shared_buffers = 30000
sort_mem = 32768
vacuum_mem = 32768
max_fsm_pages = 30000
max_fsm_relations = 1500

The other configurations are default.
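In particular, the checkpoint settings are untouched. If I understand the docs, the recycling messages come from checkpoints cycling through the WAL segments, so these defaults (shown below as they appear in 7.4; this is my assumption about where a fix might lie) may be the problem:

```
# Untouched 7.4 defaults; raising checkpoint_segments is what I would
# try next, since frequent checkpoints force WAL segment recycling.
checkpoint_segments = 3        # each segment is 16 MB of WAL
checkpoint_timeout = 300       # seconds between forced checkpoints
#fsync = true                  # left on; turning it off risks data loss
```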


Cheers,

Rodrigo Carvalhaes


