On 1/19/12 5:41 AM, "Reuti" <[email protected]> wrote:

>Hi,
>
>On 19.01.2012 at 01:40, Fernanda Foertter wrote:
>
>> Thanks guys.  I got this process to work with holds and dependencies
>> using a sample script... but:
>> 
>> The user's code uses system calls to run executables from within the
>> main code.
>
>Is the idea to spread (i.e. fork) all the necessary processes, so that
>in essence the first job acts as a starter (controller) for all the
>others, and you want to put this scheme into SGE?
>
>So, instead of using -hold_jid, you prefer to release the hold by hand
>(sketched below), which, as you correctly state, means that the node
>must be a submit host.
>
>Does the first job run for a long time? I could also imagine using
>`qrsh -inherit ...` in a loop over the list of granted slots (i.e. the
>first job is already submitted as a parallel one) and starting all the
>follow-up tasks this way, replacing the system calls (see the second
>sketch below). This would fit perfectly into SGE, and the nodes
>wouldn't need to be submit hosts.
>
>-- Reuti
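
For concreteness, a minimal sketch of the manual-release scheme described
above: submit the follow-up job with a user hold, then release it by hand
from a submit host. The script name and the use of -terse (which makes
qsub print only the job id) are just for illustration:

    # Submit the follow-up job with a user hold; -terse prints only the job id.
    JOBID=$(qsub -terse -h followup.sh)

    # Later, from the controlling job (which must run on a submit host),
    # release the hold once the prerequisite work has finished.
    qrls $JOBID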
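
And a minimal sketch of the `qrsh -inherit ...` loop, assuming the
controller job was submitted into a parallel environment so that
$PE_HOSTFILE is set; the path to task.sh is a placeholder:

    # $PE_HOSTFILE has one line per granted host:
    #   <hostname> <slots> <queue> <processor range>
    while read host slots queue rest; do
        for i in $(seq 1 $slots); do
            # Start one task per granted slot under SGE's control,
            # instead of a plain system()/fork call.
            qrsh -inherit -nostdin $host /path/to/task.sh &
        done
    done < $PE_HOSTFILE
    wait    # wait for all the tasks started via qrsh -inherit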

We do something similar, using qsub -sync y rather than qrsh. The downside
is that you're stuck with a controlling process running the qsub or qrsh
on the submit node, and if the submit node goes down, your workflow
breaks. Or am I missing something?
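
Roughly, a minimal sketch of that pattern (the step scripts are
placeholders for the real jobs):

    # With -sync y, qsub blocks until the job finishes and exits with the
    # job's exit status, so a controller script can chain the steps:
    qsub -sync y step1.sh || exit 1
    qsub -sync y step2.sh

The controller script itself is what has to stay alive on the submit node
for the whole workflow.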

John

