[slurm-dev] Re: inject arbitrary env variables in Slurm job

2017-01-26 Thread Douglas Jacobsen
Another way is to use a job_submit plugin, a Lua-based one in particular;
that gives you a great deal of control, and it runs at job submission
time.

You can modify the job_request.env array to manipulate environment variables.


Doug Jacobsen, Ph.D.
NERSC Computer Systems Engineer
National Energy Research Scientific Computing Center <http://www.nersc.gov>
dmjacob...@lbl.gov

- __o
-- _ '\<,_
--(_)/  (_)__


On Thu, Jan 26, 2017 at 3:08 PM, Peter A Ruprecht <
peter.rupre...@colorado.edu> wrote:

> Ah, I can't believe I overthought this problem and overlooked just using
> the Prolog.  Thanks for the pointer.  Also thanks to the offline responders
> who suggested a spank plugin.
>
>
>
> This list is great!
>
>
>
> Pete
>
>
>
> *From: *Lyn Gerner <schedulerqu...@gmail.com>
> *Reply-To: *slurm-dev <slurm-dev@schedmd.com>
> *Date: *Thursday, January 26, 2017 at 4:05 PM
> *To: *slurm-dev <slurm-dev@schedmd.com>
> *Subject: *[slurm-dev] Re: inject arbitrary env variables in Slurm job
>
>
>
> Hi Pete,
>
>
>
> Follow the link from the Documentation page to the Prolog and Epilog Guide
> for how to inject a customized env variable; see also the Environment
> Variables section of the sbatch man page for the Slurm env vars relevant
> to the number of cores.
>
>
>
> Regards,
>
> Lyn
>
>
>
> On Thu, Jan 26, 2017 at 11:58 AM, Peter A Ruprecht <
> peter.rupre...@colorado.edu> wrote:
>
> Hi everyone,
>
>
>
> I'm trying to figure out a way to inject environment variables into the
> environment that a Slurm job runs in, depending on the characteristics of
> the job.
>
>
>
> Here's the background:  our new cluster has Omni-Path interconnect, which
> uses hardware contexts that are associated with each MPI process or rank on
> the node.  We allow node sharing and in some cases when there are multiple
> MPI jobs on the same node (don't ask…) one job apparently uses up too many
> contexts and the other job crashes.
>
>
>
> So I'd like to set the PSM2_SHAREDCONTEXTS_MAX environment variable to an
> appropriate value for each job based on the number of cores or contexts
> available on the node and the number of cores requested by the job.
> Presumably the job_submit script would be the logical place to do this but
> I can't figure out how to set environment variables for the job in it.
>
>
>
> Any suggestions if this is the right track?  Other ideas?
>
>
>
> Thanks,
>
> Pete Ruprecht
>
> CU-Boulder Research Computing
>
>
>


[slurm-dev] Re: inject arbitrary env variables in Slurm job

2017-01-26 Thread Lyn Gerner
Hi Pete,

Follow the link from the Documentation page to the Prolog and Epilog Guide
for how to inject a customized env variable; see also the Environment
Variables section of the sbatch man page for the Slurm env vars relevant
to the number of cores.
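[As a rough illustration of the approach above: a TaskProlog script can inject variables into a job's environment by printing lines of the form "export NAME=value" to stdout, per the Prolog and Epilog Guide. The sketch below assumes SLURM_CPUS_ON_NODE is available in the prolog's environment, and the one-context-per-core policy is purely hypothetical; the right cap for PSM2_SHAREDCONTEXTS_MAX depends on your hardware.]

```shell
#!/bin/sh
# Sketch of a TaskProlog script. Slurm's TaskProlog can inject
# variables into the job's environment by printing lines of the
# form "export NAME=value" to standard output.

# Hypothetical policy: cap the job's shared PSM2 contexts at the
# number of cores Slurm allocated to it on this node.
# SLURM_CPUS_ON_NODE is assumed set by Slurm; default to 1 so the
# sketch also runs outside a job.
psm2_export_line() {
    cores="${1:-${SLURM_CPUS_ON_NODE:-1}}"
    echo "export PSM2_SHAREDCONTEXTS_MAX=${cores}"
}

psm2_export_line
```

[The script would be wired up via TaskProlog= in slurm.conf; only the "export ..." lines it prints affect the job's environment.]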

Regards,
Lyn

On Thu, Jan 26, 2017 at 11:58 AM, Peter A Ruprecht <
peter.rupre...@colorado.edu> wrote:

> Hi everyone,
>
>
>
> I'm trying to figure out a way to inject environment variables into the
> environment that a Slurm job runs in, depending on the characteristics of
> the job.
>
>
>
> Here's the background:  our new cluster has Omni-Path interconnect, which
> uses hardware contexts that are associated with each MPI process or rank on
> the node.  We allow node sharing and in some cases when there are multiple
> MPI jobs on the same node (don't ask…) one job apparently uses up too many
> contexts and the other job crashes.
>
>
>
> So I'd like to set the PSM2_SHAREDCONTEXTS_MAX environment variable to an
> appropriate value for each job based on the number of cores or contexts
> available on the node and the number of cores requested by the job.
> Presumably the job_submit script would be the logical place to do this but
> I can't figure out how to set environment variables for the job in it.
>
>
>
> Any suggestions if this is the right track?  Other ideas?
>
>
>
> Thanks,
>
> Pete Ruprecht
>
> CU-Boulder Research Computing
>