Well, as I mentioned, I have a dual AMD Opteron 64 2.4 GHz, 15k RPM SCSI disks, and 4 GB of memory.
The disks are pretty fast and the memory should be more than enough. Currently we don't have many concurrent connections.
I run PG 8.0.1 on Fedora Core 3.
When I run the batch job now, one CPU runs at 80-90% and the other at 5-10% max.
John A Meinel wrote:
Hi,

we just got a new dual processor machine and I wonder if there is a way to utilize both processors.

For postgres, you get a max of 1 CPU per connection, so to use both CPUs you need 2 separate connections.
Our DB server is basically fully dedicated to postgres (it's a dual AMD with 4 GB mem).
I have a batch job that periodically loads about 8 million records into a table.
For this I drop the indices, truncate the table, use COPY to insert the data, recreate the indices (4 of them), and vacuum the table.
That is all done through a perl batch job.
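The reload sequence described above can be sketched as the ordered SQL a driver script (Perl in the original job) would send. Python is used here purely for illustration; the table name "big_table", the column names, and the COPY file path are hypothetical placeholders, and the real job has 4 indices rather than the 2 shown.

```python
# Hypothetical index definitions; the real job recreates 4 indices.
INDEXES = {
    "big_table_col1_idx": "CREATE INDEX big_table_col1_idx ON big_table (col1)",
    "big_table_col2_idx": "CREATE INDEX big_table_col2_idx ON big_table (col2)",
}

def reload_statements(table, indexes, copy_path):
    """Return the SQL for one batch reload, in order: drop indices,
    truncate, bulk-load via COPY, recreate indices, vacuum."""
    stmts = [f"DROP INDEX {name}" for name in indexes]
    stmts.append(f"TRUNCATE TABLE {table}")
    stmts.append(f"COPY {table} FROM '{copy_path}'")
    stmts.extend(indexes.values())
    stmts.append(f"VACUUM {table}")
    return stmts
```

Keeping the whole thing as one ordered list makes it easy to see where parallelism could be introduced: the COPY step and the index-recreation step are the two candidates discussed below.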
While this is running, I noticed that only one CPU is really used.
So here are my questions:

Is there a way to utilize both CPUs?
Is it possible to split up the import file and run 2 COPY processes?
Is it possible to create 2 indices at the same time?

You'd want to be a little careful. Postgres uses maintenance_work_mem for vacuum and index creation, so if you have 2 processes doing that, just make sure you aren't running out of RAM and going to swap.
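Splitting the import file for two parallel COPY sessions could be sketched like this (Python for illustration; the chunk-file naming is a made-up convention, and reading the whole file into memory is fine only as a sketch; a production version for an 8-million-row file would stream instead):

```python
def split_import_file(path, n_parts):
    """Split a COPY import file into n_parts chunk files on line
    boundaries, so each chunk can be loaded by its own connection.
    Returns the chunk file paths."""
    with open(path, "rb") as f:
        lines = f.readlines()
    # Ceiling division so the last chunk picks up any remainder.
    per_chunk = (len(lines) + n_parts - 1) // n_parts
    chunks = []
    for i in range(n_parts):
        chunk_path = f"{path}.part{i}"
        with open(chunk_path, "wb") as out:
            out.writelines(lines[i * per_chunk:(i + 1) * per_chunk])
        chunks.append(chunk_path)
    return chunks
```

Each chunk would then be handed to a separate COPY in a separate connection, which is what lets the second CPU participate.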
Would I actually gain anything from that, or is the bottleneck somewhere else?

More likely, the bottleneck would be disk I/O, simply because it is almost always disk I/O. However, without knowing your configuration, how much CPU is used during the operation, etc., it's hard to say.
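If CPU rather than disk does turn out to be the limit, the two index builds (or the two COPYs) can be issued from separate connections at once. A minimal pattern for that, with psycopg2 named only as an example driver (it is not part of the original post), might look like:

```python
from concurrent.futures import ThreadPoolExecutor

def run_in_parallel(statements, run_stmt):
    """Run each statement through run_stmt, one thread per statement.
    run_stmt must open its OWN database connection, since Postgres uses
    one backend (and hence at most one CPU) per connection. With
    psycopg2, for example, run_stmt could be a function that connects
    with a hypothetical DSN, executes the SQL on a cursor, commits,
    and closes the connection. The real work happens server-side, so
    Python threads are sufficient here."""
    with ThreadPoolExecutor(max_workers=len(statements)) as pool:
        # map preserves input order in its results.
        return list(pool.map(run_stmt, statements))
```

With 4 indices, running 2 at a time keeps both CPUs busy without multiplying maintenance_work_mem usage by 4.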
Sorry I wasn't a lot of help. You should probably post your postgres version, and more information about how much CPU load there is while your load is running. (Perl is a given here for the batch job.)
If anyone has some experience or ideas, any hints or help on this would be appreciated.
---------------------------(end of broadcast)--------------------------- TIP 8: explain analyze is your friend