Alex wrote:

> We just got a new dual-processor machine, and I wonder if there is a way to utilize both processors.

> Our DB server is basically fully dedicated to Postgres (it's a dual AMD with 4 GB of memory).

> I have a batch job that periodically loads about 8 million records into a table.
> For this I drop the indices, truncate the table, use COPY to insert the data, recreate the indices (4 of them), and vacuum the table.

> That is all done through a Perl batch job.

> While this is running, I noticed that only one CPU is really being used.

> So here are my questions:

> Is there a way to utilize both CPUs?

With Postgres you get at most one CPU per connection, so to use both processors you need at least two concurrent connections.
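For reference, the load sequence described above could be sketched in SQL roughly as follows; the table, column, and file names here are made up for illustration (the real job drives these statements from Perl):

```sql
-- Hypothetical names throughout; adjust to the real schema.
DROP INDEX mytable_idx1, mytable_idx2, mytable_idx3, mytable_idx4;
TRUNCATE mytable;
COPY mytable FROM '/path/to/import.dat';  -- bulk-load the ~8 million rows
CREATE INDEX mytable_idx1 ON mytable (col1);
CREATE INDEX mytable_idx2 ON mytable (col2);
CREATE INDEX mytable_idx3 ON mytable (col3);
CREATE INDEX mytable_idx4 ON mytable (col4);
VACUUM ANALYZE mytable;
```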

> Is it possible to split up the import file and run two COPY processes?
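Yes, in principle. As a rough sketch (file and table names are hypothetical), GNU split's `-n l/2` option divides a file into two line-aligned halves, so COPY records are never broken across chunks:

```shell
# Hypothetical sketch: split the COPY input in two and load the halves in parallel.
seq 1 100000 > import.dat        # stand-in for the real 8-million-row file
split -n l/2 import.dat part_    # produces part_aa and part_ab, split on line boundaries
# Then run two concurrent sessions, one COPY each (psql calls shown commented out):
#   psql -c "\copy mytable FROM 'part_aa'" &
#   psql -c "\copy mytable FROM 'part_ab'" &
#   wait
wc -l part_aa part_ab            # sanity check: the halves add up to the original
```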

> Is it possible to create two indices at the same time?

You'd want to be a little careful. Postgres uses maintenance_work_mem for vacuum and index creation, so if you have two processes doing that at once, make sure you aren't running out of RAM and going to swap.
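To make that concrete: index builds and vacuum are governed by maintenance_work_mem in postgresql.conf, and the value below is purely illustrative for a 4 GB machine, not a recommendation:

```
# postgresql.conf (illustrative value only)
maintenance_work_mem = 256MB   # used per session by CREATE INDEX and VACUUM;
                               # two concurrent builds can use ~2x this amount
```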

> Would I actually gain anything from that, or is the bottleneck somewhere else?

More likely, the bottleneck is disk I/O, simply because it almost always is. However, without knowing your configuration, how much CPU is used during the operation, etc., it's hard to say.
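One quick way to check, assuming a Linux box with the usual procps/sysstat tools, is to watch CPU idle and I/O-wait while the load runs; high `wa` alongside one saturated CPU points at the disks rather than the processors:

```shell
# Sample system-wide stats a few times while the COPY is running.
vmstat 1 3        # watch the 'wa' (I/O wait) and 'id' (idle) columns
# iostat -x 5     # per-device utilization, if the sysstat package is installed
```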

> (Perl is a given here for the batch job.)

> If anyone has experience or ideas, any hints or help on this would be appreciated.


Sorry I wasn't more help. You should probably post your Postgres version and more information about how much CPU load there is while the job is running.

