On Fri, 29 Mar 2002, Senthil Kandasamy wrote:
> So I suppose that I should leave the lamboot command inside every
> script I run and not worry about lamboot outside the PBS environment at
> all!!
Correct.
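Since lamboot has to happen inside the batch job itself, a minimal PBS script might look something like this (just a sketch -- the resource line, program name, and the exact way your site hands the node list to lamboot are placeholders for your setup):

```shell
#!/bin/sh
#PBS -l nodes=2:ppn=2        # ask PBS for 2 nodes with 2 CPUs each

# Boot LAM across the nodes PBS allocated to this job.
# $PBS_NODEFILE is the machine list that PBS provides to the job.
lamboot $PBS_NODEFILE

# Run the MPI job (see the mpirun placement examples below).
mpirun C myprogram

# Tear the LAM universe down when the job finishes.
lamhalt
```

Because everything is inside the script, resubmitting with a different `-l nodes=...:ppn=...` line is all it takes to change the size of the run.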
> I do have a few other questions that I hope you can answer. I have two
> slave nodes and four processors. PBS recognized all of them at setup.
Also correct.
> Now using the mpirun -np 2 option, I see that the process runs on two
> processors of the same node. How do I make the process run on both the
> slave nodes and on all four processors? I tried a variety of options but
> none of them seem to work. I am pretty sure that it is fairly easy but I
> don't seem to get it. All I need to do is control the number of
> processors the job runs on.
LAM's mpirun has a variety of options for placing jobs on remote
nodes. See "man mpirun" or "mpirun -h" for lots of information on this.
Here are some examples. Let's assume that you have lambooted 2 nodes,
each with 2 CPUs:
- Option 1: "mpirun C myprogram". This runs one copy of "myprogram"
on each CPU that you lambooted -- hence, you'll get 4 copies. This is
generally the "best" way to do it under PBS, because you may submit the
same PBS script with different numbers of nodes/CPUs each time, and this
one "C" syntax will always run one copy per CPU.
- Option 2: "mpirun -np 4 myprogram". This runs 4 copies of
myprogram, allocating CPUs in round-robin order. In this specific
case, it exactly fills the number of CPUs that you have.
- Option 3: "mpirun c0-3 myprogram". This runs one copy of myprogram on
each of CPUs 0 through 3.
- Option 4: "mpirun n0,n0,n1,n1 myprogram". This runs two copies of
myprogram on node 0 and two more copies on node 1.
....etc.
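Whichever placement syntax you use, you can check what LAM actually booted before launching (lamnodes is part of the standard LAM tool set; the exact output format varies by version, so treat this as illustrative):

```shell
# List the nodes and CPU counts in the current LAM universe.
# With 2 nodes of 2 CPUs each, you should see n0 and n1, each with 2 CPUs.
lamnodes

# Then launch one process per booted CPU -- 4 processes in this example.
mpirun C myprogram
```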
Although LAM's mpirun is quite flexible and offers about 20 gabillion ways
to do it, I'd recommend the "mpirun C myprogram" method -- the "C" syntax
is there almost specifically for the PBS/batch-queue case.
Jeff Squyres
[EMAIL PROTECTED]
http://www.lam-mpi.org/
_______________________________________________
Oscar-users mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/oscar-users