You can use mpirun.
On Mon, Jan 10, 2011 at 8:04 PM, Tena Sakai wrote:
> Hi,
>
> I am an MPI newbie. My Open MPI is v1.4.3, which I compiled
> on a Linux machine.
>
> I am using a language called R, which has an MPI interface/package.
> It appears that it is happy, on
Hi Lewis,
On Thu, Sep 23, 2010 at 9:38 AM, Lewis, Ambrose J. wrote:
> Hi All:
>
> I’ve written an Open MPI program that “self schedules” the work.
>
> The master task is in a loop chunking up an input stream and handing off
> jobs to worker tasks. At first the master
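The self-scheduling loop described above — hand each worker one chunk, then feed it a fresh chunk as soon as its result comes back — is usually written in Open MPI with MPI_Send/MPI_Recv and MPI_ANY_SOURCE on the master side. The scheduling logic itself can be sketched without MPI; this is an illustrative Python thread-based sketch (all names are mine, not from the original program):

```python
# Sketch of the "self-scheduling" (master/worker) pattern: a shared task
# queue plays the role of the master handing out chunks; each worker pulls
# a new chunk as soon as it finishes the previous one, so faster workers
# naturally process more chunks. The per-chunk computation is a stand-in.
import queue
import threading

def run_self_scheduler(chunks, n_workers):
    tasks = queue.Queue()
    results = queue.Queue()
    for i, chunk in enumerate(chunks):
        tasks.put((i, chunk))

    def worker():
        while True:
            try:
                i, chunk = tasks.get_nowait()
            except queue.Empty:
                return  # no work left; worker retires
            # Stand-in for the real per-chunk computation.
            results.put((i, chunk * 2))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Reassemble results in input order, whatever order workers finished in.
    out = [None] * len(chunks)
    while not results.empty():
        i, r = results.get()
        out[i] = r
    return out

print(run_self_scheduler([1, 2, 3, 4, 5], n_workers=3))  # [2, 4, 6, 8, 10]
```

The MPI version replaces the queue with the master posting a receive from MPI_ANY_SOURCE and sending the next chunk back to whichever rank just answered.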
Saygin,
You can use the mpstat tool to see the load on each core at runtime.
Do you know exactly which calls are taking the longest?
You can run just those two computations (one at a time) on a different
machine and check whether the other machines show similar or lower
computation times.
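For the per-core load check, `mpstat -P ALL 1` prints one line per core each second. The timing comparison suggested above — run each suspect computation in isolation and compare wall-clock times across machines — can be sketched like this (`comp_a` is a stand-in for the real call, not code from the thread):

```python
# Hedged sketch: time one computation at a time so per-machine results
# can be compared. Replace comp_a with the actual slow call.
import time

def time_call(fn, *args):
    """Run fn(*args) once and return (result, elapsed_seconds)."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - t0

def comp_a(n):  # stand-in for the first slow computation
    return sum(i * i for i in range(n))

result, elapsed = time_call(comp_a, 100_000)
print(f"comp_a: {elapsed:.4f} s")
```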
Hi All,
I have written a program where the MPI master sends and receives large
amounts of data, i.e. from 1 KB to 1 MB per message.
The amount of data sent with each call is different.
The program runs well with 5 slaves, but when I try to
run the same program with 9 slaves,
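When the message size differs from call to call, the receiver has to learn each size before posting the receive; in MPI the usual idiom is MPI_Probe followed by MPI_Get_count, then a matching MPI_Recv into a buffer of that size. The same idea expressed as a plain-Python byte-stream sketch is a length prefix (everything here is illustrative, not code from the thread):

```python
# Length-prefix framing: each payload is preceded by a 4-byte big-endian
# size header, so the reader always knows how much to consume next —
# the byte-stream analogue of probing for the incoming message size.
import struct

def frame(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

def unframe(stream: bytes):
    """Yield the payloads back out of a concatenation of frames."""
    offset = 0
    while offset < len(stream):
        (size,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        yield stream[offset:offset + size]
        offset += size

msgs = [b"x" * 1024, b"y" * 500_000, b"z"]   # 1 KB up to ~0.5 MB
stream = b"".join(frame(m) for m in msgs)
assert list(unframe(stream)) == msgs
```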
On Mon, 26/04/2010 at 15:28 -0400, Pooja Varshneya wrote:
Hi All,
I am using OpenMPI 1.4 on a cluster of Intel quad-core processors
running Linux and connected by ethernet.
In an application, I am trying to send and receive large messages of
sizes ranging from 1 KB up to 500 MB.
The application works fine if the message sizes are within 1 MB
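When transfers only succeed below about 1 MB, one workaround worth trying (whatever the underlying cause turns out to be) is to move the data in fixed-size pieces — in MPI, a loop of sends and receives over sub-buffers rather than one huge message. The split/reassemble bookkeeping, as a hedged Python sketch with illustrative names:

```python
# Split a large buffer into fixed-size pieces and put it back together.
# In the MPI version, each piece would be one MPI_Send/MPI_Recv pair.
CHUNK = 1 << 20  # 1 MiB per piece (an assumed, tunable threshold)

def split(buf: bytes, chunk: int = CHUNK):
    return [buf[i:i + chunk] for i in range(0, len(buf), chunk)]

def reassemble(pieces):
    return b"".join(pieces)

data = bytes(range(256)) * 10_000   # ~2.5 MB test buffer
pieces = split(data)
assert len(pieces) == 3             # two full 1 MiB pieces plus a remainder
assert reassemble(pieces) == data
```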