You can wait for processes to finish as follows:

# launch 3 processes in the background, capturing each PID
sh -c './' & PID1=$!
sh -c './' & PID2=$!
sh -c './' & PID3=$!
# then wait for each one
wait $PID1
wait $PID2
wait $PID3

My feeling is that running the processes in parallel should generally reduce the overall processing time, but if there are contention problems it could conceivably get much worse.
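A minimal runnable sketch of the pattern above, with stand-in commands (sleep/false here, since the real scripts are not shown) and one addition: `wait $PID` returns that job's exit status, so failures can be counted after the fact.

```shell
# Stand-in background jobs; the third fails deliberately for illustration.
sleep 1 & PID1=$!
sleep 1 & PID2=$!
false   & PID3=$!

# "wait <pid>" returns the exit status of that job.
FAILED=0
for PID in $PID1 $PID2 $PID3; do
    if ! wait "$PID"; then
        FAILED=$((FAILED + 1))
    fi
done
echo "failed jobs: $FAILED"
```

A bare `wait` with no arguments would also block until all children finish, but per-PID waits are what let you inspect each exit status.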

----- Original Message ----- From: "Alex" <[EMAIL PROTECTED]>
To: "John A Meinel" <[EMAIL PROTECTED]>
Cc: <>
Sent: Thursday, February 10, 2005 12:00 AM
Subject: Re: [PERFORM] How can I make use of both CPUs in a dual processor

Thanks John.

Well, as I mentioned, I have a dual AMD Opteron 64 2.4GHz, 15k RPM SCSI disks, and 4GB of memory.
The disks are pretty fast and memory should be more than enough. Currently we don't have many concurrent connections.

I run PG 8.0.1 on Fedora Core 3.

When I run the batch job now, one CPU runs at 80-90% and the other at 5-10% max.

John A Meinel wrote:

Alex wrote:

We just got a new dual-processor machine and I wonder if there is a way to utilize both processors.

Our DB server is basically fully dedicated to postgres. (It's a dual AMD with 4GB mem.)

I have a batch job that periodically loads about 8 million records into a table.
For this I drop the indices, truncate the table, use COPY to insert the data, recreate the indices (4 of them), and vacuum the table.

That is all done through a perl batch job.
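The reload sequence described above can be sketched as a SQL script. The table name, index names, columns, and data file path below are hypothetical placeholders; in practice the script would be fed to psql (e.g. `psql -f reload.sql mydb`) — here it is only written out and printed.

```shell
# Sketch of the reload sequence: drop indices, truncate, COPY, reindex,
# vacuum. All object names (big_table, big_table_idx1..4, data.txt) are
# hypothetical stand-ins.
cat > reload.sql <<'EOF'
DROP INDEX big_table_idx1;
DROP INDEX big_table_idx2;
DROP INDEX big_table_idx3;
DROP INDEX big_table_idx4;
TRUNCATE TABLE big_table;
\copy big_table FROM 'data.txt'
CREATE INDEX big_table_idx1 ON big_table (col1);
CREATE INDEX big_table_idx2 ON big_table (col2);
CREATE INDEX big_table_idx3 ON big_table (col3);
CREATE INDEX big_table_idx4 ON big_table (col4);
VACUUM ANALYZE big_table;
EOF
# In a real run:  psql -f reload.sql mydb
cat reload.sql
```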

While this is running, I noticed that only one CPU is really being used.

So here are my questions:

Is there a way to utilize both CPUs?

For postgres, you get a max of 1 CPU per connection, so to use both CPUs you need at least 2 concurrent connections.

Is it possible to split up the import file and run 2 copy processes?

Is it possible to create 2 indices at the same time?

You'd want to be a little careful. Postgres uses maintenance_work_mem for vacuum and index creation, so if you have 2 processes doing it, just make sure you aren't running out of RAM and going to swap.
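A dry-run sketch of the split-and-parallel-load idea: `split` divides the input file into chunks, and each chunk would then be loaded over its own connection, so each loader gets its own backend (and thus its own CPU). The psql invocation is shown only in a comment; the file and table names are hypothetical stand-ins.

```shell
# Dry run: create a stand-in import file and split it into 2 chunks.
seq 1 1000 > data.txt
split -l 500 data.txt chunk.       # produces chunk.aa and chunk.ab

for CHUNK in chunk.*; do
    # In a real run, each loader would be a separate connection, e.g.:
    #   psql -c "\copy big_table FROM '$CHUNK'" mydb &
    echo "loading $CHUNK" &
done
wait                               # block until all background loaders finish
```

The same `&` / `wait` pattern applies to building the indices afterwards: two `psql -c "CREATE INDEX ..."` commands backgrounded in parallel, subject to the maintenance_work_mem caution above.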

Would I actually gain anything from that, or is the bottleneck somewhere else?

More likely, the bottleneck would be disk I/O, simply because it almost always is. However, without knowing your configuration, how much CPU is used during the operation, etc., it's hard to say.

(perl is a given here for the batch job)

If anyone has some experience or ideas... any hints or help on this would be appreciated.


Sorry I wasn't a lot of help. You should probably post your postgres version, and more information about how much CPU load there is while your load is running.


---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend
