Steve,

You need to modify the SGE adapter.
I created a web page that describes this:

    http://www.grid.lrz.de/en/mware/globus/download_preamble.html

Let me know if you have questions.

Gabriel

On May 6, 2008, at 4:26 PM, Steve White wrote:
Gabriel,

I have had no luck with this.  Where is this "preamble" documented?
I did several web searches...

I tried the JDD below, where mpihello is a simple MPI job that prints
out the MPI rank of each host.  I get its output, but no output
from the "preamble".  What am I missing?

<job>
  <executable>mpihello</executable>
  <directory>${GLOBUS_USER_HOME}/mpitest</directory>

  <stdout>${GLOBUS_USER_HOME}/doit.stdout</stdout>
  <stderr>${GLOBUS_USER_HOME}/doit.stderr</stderr>

  <count>8</count>
  <jobType>mpi</jobType>

  <extensions>
    <preamble>
      echo "Running PREAMBLE"
    </preamble>
  </extensions>
</job>


On May 6, 2008, Gabriel Mateescu wrote:
Hello,

It is not hard to accomplish that.
Think of it this way: what you want is a
kind of "preamble", i.e., a set of commands that
precede the job execution.

This can be accomplished using the Extensions
element of the job description.

For example, assume you want to load a certain
module, which sets things such as the path
to the MPI libraries.

Then your job description will look like this:

<job>
  <executable>/path/to/mpi/program</executable>
  ...
  <extensions>
    <preamble>
      module load mpi-intel
      cd $HOME
    </preamble>
  </extensions>
  ...
</job>


The changes required to the LRMS adapter are
not difficult to make. I will post the changes for
SGE and PBS.
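
Roughly, once the adapter is modified, the batch script that the SGE job manager generates would look something like the sketch below. This is illustrative only: the exact directives and the mpirun invocation depend on the adapter and the site configuration, and the parallel-environment name "mpi" is an assumption.

```shell
#!/bin/sh
#$ -S /bin/sh
#$ -pe mpi 8
# Lines taken from <extensions><preamble> are emitted here, so they
# run exactly once on the master node, before any MPI process starts:
module load mpi-intel
cd $HOME
# Only then is the job itself launched across the allocated slots:
mpirun -np 8 /path/to/mpi/program
```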


Gabriel


On May 5, 2008, at 5:08 PM, Steve White wrote:
Hi,

We want to execute a script in a cluster job submission via
globusrun-ws, but it should run exactly once, after stage-in,
before processes are started on the compute nodes.

The purpose is to do some pre-run housekeeping in the user's home
directory.

We thought we could do this by submitting a small script in the JDD
executable section, but it now appears that (for MPI jobs, anyway)
this is executed on each compute node (as if it were the argument
to mpirun in a conventional cluster job submission).

So we are, in a sense, trying to put something between the job
description document and the computation processes.

This seems like a very natural thing to do: in conventional batch
systems it is done in a batch script. One can think of messy
solutions where the script detects whether it has been run before,
but that can't be the right way.

What is the right way to do this in a Globus job submission?

