Hi,

I have a Linux server (Debian) with Postgres 8.3 and I have a problem with a
massive update of about 400,000 updates/inserts.
If I execute about 100,000 everything seems fine, but when I execute 400,000 I
hit the same problem with or without a transaction (I need to do it inside a
transaction): memory and disk usage keep increasing.
With a run of 400,000 inserts/updates the server starts out working well, but
after about 100 seconds of execution RAM usage grows, then swap, and finally
all RAM and swap are used and the execution can't finish.
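
To give an idea of the workload, the run looks roughly like this (table and
column names here are only placeholders, the real statements are generated by
my application):

    BEGIN;
    -- about 400,000 statements like these, all in one transaction
    -- ("items", "item_id" and "qty" are placeholder names):
    UPDATE items SET qty = qty + 1 WHERE item_id = 1;
    INSERT INTO items (item_id, qty) VALUES (100001, 0);
    -- ...
    COMMIT;
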
I have done some tuning on the server; I have modified the following settings
(a query to verify them is shown after the list):
- shared_buffers = 1024MB
- work_mem = 512MB
- effective_cache_size = 2048MB
- random_page_cost = 2.0
- checkpoint_segments = 64
- wal_buffers = 8MB
- max_prepared_transactions = 100
- synchronous_commit = off
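
If it helps, this is the query I run in psql to double-check that these
settings are actually in effect after restarting the server:

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size',
                    'random_page_cost', 'checkpoint_segments', 'wal_buffers',
                    'max_prepared_transactions', 'synchronous_commit');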

What is wrong in this configuration for executing these inserts/updates?

The server has 4 GB RAM, 3 GB swap, and SATA disks in RAID 5.


Thanks
