[EMAIL PROTECTED] wrote on 05/05/2008 05:08:16 PM:

> Hi,
> 
> We want to execute a script in a cluster job submission via
> globusrun-ws, but it should run exactly once, after stage-in, before
> processes are started on the compute nodes.
> 
> The purpose is to do some pre-run housekeeping in the user's home
> directory.
> 
> We thought we could do this by submitting a small script in the JDD
> executable section, but it now appears that (for MPI jobs anyway) this
> is executed on each compute node (as if it were the argument to mpirun
> in a conventional cluster job submission).
> 
> So we are in a sense trying to put something between the Job Description
> document and the computation processes.
> 
> This seems like a very natural thing to do: in conventional batch
> systems it is done in a batch script. One can think of messy solutions
> where the script detects whether it has been run before... but that
> can't be the right way.
> 
> What is the right way to do this in a Globus job submission?

I agree that what you want is very natural. I don't think there is a right 
way just yet; see my report at 
http://bugzilla.mcs.anl.gov/globus/show_bug.cgi?id=5698

Meanwhile, we have implemented a "messy solution" in our project:
https://bi.offis.de/wisent/tiki-index.php?page=Condor-GT4-BigJobs
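For the curious, the core idea of such a "messy" run-once guard can be 
sketched as follows. This is a simplified illustration, not the actual 
Condor-GT4-BigJobs code; the JOBID variable and the housekeeping step are 
placeholders you would adapt to your site:

```python
import os
import time

# Every process launched from the JDD executable runs this wrapper, but
# only the one that wins the mkdir race performs the housekeeping.
# Directory creation is atomic even on NFS, which is why mkdir is used
# here instead of flock-style locking.
jobid = os.environ.get("JOBID", "demo")  # unique per-job tag, site-specific
lockdir = os.path.join(os.environ.get("TMPDIR", "/tmp"), "prerun-lock-" + jobid)
donefile = os.path.join(lockdir, "done")

try:
    os.mkdir(lockdir)
    won_race = True
except OSError:  # directory already exists: another process won the race
    won_race = False

if won_race:
    print("one-time housekeeping runs here")  # replace with the real pre-run work
    open(donefile, "w").close()               # signal the waiting processes
else:
    while not os.path.exists(donefile):       # poll until housekeeping is done
        time.sleep(1)

# afterwards, hand over to the real computation, e.g.:
# os.execvp(real_binary, [real_binary] + args)
```

The polling loop is exactly the part that gets slow on NFS with many 
processes, which is what motivated the broadcast variant below.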

We also have an updated version which relies on a UDP broadcast for 
interprocess synchronization rather than file locking. It improves startup 
time compared to the NFS-based implementation, especially when you have 
many processes. However, it's not published yet. Let me know if you are 
interested in trying it out.
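Very roughly, the broadcast variant replaces "poll a file on NFS" with a 
datagram: the process that performed the housekeeping announces completion 
once, and everyone else blocks on a socket until the announcement arrives. 
A hypothetical sketch only (the port, the message format, and the function 
names are made up here; the real implementation is the unpublished one 
mentioned above):

```python
import socket

PORT = 9999  # arbitrary; pick an unused port on your cluster


def announce_done(addr="<broadcast>"):
    """Called once by the process that finished the housekeeping."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(b"DONE", (addr, PORT))
    s.close()


def wait_for_done(timeout=300.0):
    """Called by every other process instead of polling a lock file.

    Note: if several waiting processes share one node, they would also
    need SO_REUSEPORT (or per-process ports) to bind simultaneously.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    s.settimeout(timeout)
    try:
        data, _ = s.recvfrom(16)
    finally:
        s.close()
    return data == b"DONE"
```

The waiters block in the kernel rather than hammering the shared 
filesystem once per second per process, which is where the startup-time 
improvement comes from.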

Regards,
Jan Ploski

--
Dipl.-Inform. (FH) Jan Ploski
OFFIS
Betriebliches Informationsmanagement
Escherweg 2  - 26121 Oldenburg - Germany
Phone: +49 441 9722 - 184 Fax: +49 441 9722 - 202
E-Mail: [EMAIL PROTECTED] - URL: http://www.offis.de
