Hey all,
  I've written a small wrapper in C that submits jobs via
slurm_submit_batch_job().  Every job I submit needs to run another script
after the job has run. Among other things, this script needs to write
variables such as SLURM_JOB_NAME, SLURM_JOB_DERIVED_EC and
SLURM_JOB_PARTITION to a particular directory on a shared filesystem.  Now
those variables are only available in scripts launched by EpilogSlurmctld.
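For context, the epilog step looks roughly like this (a sketch only: the
done-directory path and file naming are placeholders, not my real setup; the
SLURM_JOB_* variables are the ones EpilogSlurmctld sets):

```shell
#!/bin/bash
# Sketch of the EpilogSlurmctld script.  SGI_SLURM_DONE_DIR is the extra
# value I want to pass in; the fallback path here is just a placeholder.
DONE_DIR="${SGI_SLURM_DONE_DIR:-/tmp/slurm-done}"
mkdir -p "$DONE_DIR"
{
    echo "name=$SLURM_JOB_NAME"
    echo "derived_ec=$SLURM_JOB_DERIVED_EC"
    echo "partition=$SLURM_JOB_PARTITION"
} > "$DONE_DIR/job-$SLURM_JOB_ID"
```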

   My problem is that I need to pass additional data to the script launched
by EpilogSlurmctld.  I could use the shared filesystem to do this, but
that just doesn't feel right. So what are my options?

I've tried several things including:
- exporting a variable in the script sent to slurm_submit_batch_job()
  (char *shebang = "#!/bin/bash\nexport SGI_SLURM_DONE_DIR=";), but
  SGI_SLURM_DONE_DIR doesn't get passed to EpilogSlurmctld.

- setting job_desc_msg.environment in various ways, but that doesn't
  seem to do anything.  Or, more likely, I was doing it wrong.

Here is the code I'm using to submit the job: http://pastebin.com/DThfeVZT
Note: the doneDir data is what I'm trying to pass to the EpilogSlurmctld
script.


I'm open to any and all suggestions.

Thanks,
Brandon

