Hey all,
 A day late, but here is my follow-up as promised. I was able to get data
into the EpilogSlurmctld script using spank_job_env as Mark mentioned. It
turned out to be as simple as including <src/common/env.h> and adding this
to my job_desc_msg:

  env_array_overwrite( &job_desc_msg.spank_job_env, "SGI_SLURM_DONE_DIR",
                       doneDir );
  job_desc_msg.spank_job_env_size = 1;



Thanks for the help!
-Brandon



On 10/21/11 5:24 PM, "Evans, Brandon" <[email protected]> wrote:

>Initial attempts at using spank_job_env didn't work, but I didn't dig
>very deep.  When I was looking in the code for slurmctld
> it looked like there were only a few explicitly listed variables exported
>to the EpilogSlurmctld, but I will have another look later.
>Patching slurm wasn't necessarily at the top of my solution list, but if
>that is the proper solution, I'll give it a go. ;-)
>
>Thanks for the replies.  I'll follow up sometime Monday.
>-Brandon
>________________________________________
>From: Moe Jette [[email protected]]
>Sent: Friday, October 21, 2011 3:59 PM
>To: Evans, Brandon; [email protected]
>Subject: Re: [slurm-dev] Sending data from job to EpilogSlurmctld script?
>
>If you want to submit a SLURM patch to export more environment
>variables I'd be happy to include that in the next major release of
>SLURM and you could use it as a local patch until then.
>
>Quoting "Mark A. Grondona" <[email protected]>:
>
>> On Fri, 21 Oct 2011 15:17:41 -0700, "Evans, Brandon"
>> <[email protected]> wrote:
>>> Hey all,
>>>   I've written a small wrapper in C that submits jobs via
>>> slurm_submit_batch_job().  Every job I submit needs to run another
>>>script
>>> after the job has run. Among other things this script needs to write
>>> variables such as SLURM_JOB_NAME, SLURM_JOB_DERIVED_EC and
>>> SLURM_JOB_PARTITION to a particular directory on a shared filesystem.
>>>Now
>>> those variables are only available in scripts launched by
>>>EpilogSlurmctld.
>>
>> I'm not sure if these variables are exported to the Slurmctld Epilog,
>> but there is a "job control" environment for the normal prolog/epilog,
>> which is a set of extra environment variables passed to these scripts
>> and not to the job itself.
>>
>> I think this environment can be set in the
>>
>>          char **spank_job_env;   /* environment variables for job
>> prolog/epilog
>>                                  * scripts as set by SPANK plugins */
>>         uint32_t spank_job_env_size; /* element count in spank_env */
>>
>> members of job_desc_msg_t. The variables in the prolog/epilog are
>> always prefixed with SPANK_, so if you set "SGI_DONE_DIR" in
>> spank_job_env it would appear as SPANK_SGI_DONE_DIR in the scripts.
>>
>> I have never tried setting these vars directly, but you might be able
>> to reuse the code in src/srun/opt.c:set_spank_job_env().
>>
>> Then you'd have to test to see if these vars appear in Slurmctld Epilog.
>>
>> mark
>>
>>
>>>    My problem is I need to pass additional data to the script
>>>launched
>>> by EpilogSlurmctld.  I could use the shared filesystem to do this, but
>>> that just doesn't feel right. So what are my options?
>>>
>>> I've tried several things including:
>>> - exporting a variable in the command sent to slurm_submit_batch_job()
>>>(
>>> char *shebang = "#!/bin/bash\nexport SGI_SLURM_DONE_DIR=";). But the
>>> SGI_SLURM_DONE_DIR doesn't get passed to EpilogSlurmctld.
>>>
>>> - I've tried various methods of setting job_desc_msg.environment, but
>>>that
>>> doesn't seem to do anything.  Or, it's more likely I was doing it
>>>wrong.
>>>
>>>
>>>
>>>
>>> Here is the code I'm using to submit the job:
>>>http://pastebin.com/DThfeVZT
>>> Note: the doneDir data is what I'm trying to pass to the
>>>EpilogSlurmctld
>>> script.
>>>
>>>
>>> I'm open to any and all suggestions.
>>>
>>> Thanks,
>>> Brandon
>>>
>>>
>>
>
>
>
>