I think you actually have a few options:

1. If I'm reading the original mail right, I don't think you need mpirun/mpiexec at all. When you submit a scripted job to PBS, the script runs on the "mother superior" node -- the first node that was allocated to you. In this case, it's your only node. Since you're already running on the target node, there's nothing special you need to do to launch your jobs; you can just do something like this:

# Launch app 1
program-executable <inputfile1 >outputfile1 &
# Launch app 2
program-executable <inputfile2 >outputfile2 &
# Wait for both to finish
wait

Note that the following is *not* sufficient, unless you are absolutely sure that the 2nd executable will take longer than the first:

# Launch app 1
program-executable <inputfile1 >outputfile1 &
# Launch and wait for app 2
program-executable <inputfile2 >outputfile2

If the first app keeps running beyond the second, your script will end and PBS will kill the job (including the first app).
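The wait-for-everything pattern can also be made explicit by waiting on each PID and checking its exit status. A minimal sketch, with `true` and `false` standing in for the real program-executable invocations:

```shell
# Launch both apps in the background, then wait for *each* one and
# collect its exit status.  "true" and "false" are stand-ins for
# the real program-executable invocations.
true &          # app 1 (stand-in)
pid1=$!
false &         # app 2 (stand-in)
pid2=$!

wait "$pid1"; status1=$?
wait "$pid2"; status2=$?
echo "app1 exited $status1, app2 exited $status2"
```

This way the script cannot exit before either app does, and you also find out if one of them failed.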

2. If you really want to use mpiexec, you can use a single command line to launch both executables. Note, however, that mpiexec is currently hard-coded to redirect stdin only to the *first* executable (this may change in future releases, but that's the way it is right now). So you'd probably want to change your executables to read from / write to files directly instead of stdin/stdout.

Incidentally, using a single command line is the *only* way to make OMPI aware of multiple executables and therefore launch in a non-oversubscribed fashion. That's not much of a factor here since you're only running on a single node; it would matter more if you had grabbed a bunch of nodes and were trying to launch individual jobs across them.

But for completeness, you could do the following (subject to the stdin / stdout limitation listed above):

        mpirun -np 1 executable1 [argv1] : -np 1 executable2 [argv2]
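Because of that stdin limitation, one workaround is to do the redirection inside the process that mpiexec launches. A minimal sketch (the `run_one` helper and its argument order are my invention, not anything Open MPI provides):

```shell
# run_one PROG INFILE OUTFILE
# Hypothetical helper: performs the stdin/stdout redirection itself,
# inside the launched process, so it no longer matters which process
# mpiexec forwards its own stdin to.
run_one() {
    "$1" <"$2" >"$3"
}
```

Saved as an executable script (say, run-one.sh, with `run_one "$@"` at the bottom), it could then be launched with the single-command-line form above:

        mpirun -np 1 run-one.sh program-executable inputfile1 outputfile1 : -np 1 run-one.sh program-executable inputfile2 outputfile2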

Hope that helps!


On Apr 24, 2007, at 9:36 AM, Edmund Sumbar wrote:

John Borchardt wrote:
I was hoping someone could help me with the following situation. I have a program which has no MPI support that I'd like to run "in parallel" by running a portion of my total task on N CPUs of a PBS/Maui/Open-MPI cluster. (The algorithm is such that there is no real need for MPI; I am just as well-off running N processes on N CPUs as I would be adding MPI support to my program and then running on N CPUs.)

So it's easy enough to set up a Perl script to submit N jobs to the queue to run on N nodes. But my cluster has two CPUs per node, and I am not RAM-limited, so I'd like to run two serial jobs per node, one on each node CPU. From what my admin tells me, I must use the mpiexec command to run my program so that the scheduler knows to run my program on the nodes which it has assigned to me.

In my PBS script (this is one of N/2 similar scripts),

#!/bin/bash
#PBS -l nodes=1:ppn=2
#PBS -l walltime=1:00:00:00
mpiexec -pernode program-executable <inputfile1 >outputfile1
mpiexec -pernode program-executable <inputfile2 >outputfile2

Hi,

To run two serial programs on one dual-cpu node, try

#!/bin/bash
#PBS -l nodes=1:ppn=2
#PBS -l walltime=1:00:00:00
program-executable <inputfile1 >outputfile1 &
program-executable <inputfile2 >outputfile2
wait

--
Ed[mund [Sumbar]]

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


--
Jeff Squyres
Cisco Systems
