Hi,

I'm fairly new to Slurm and have a question I couldn't solve yet by
googling or searching through older posts in this mailing list:

We are already using Slurm for our cluster and now plan to add features
which require root access before and after a job allocation (such as
setting special CPU flags or initializing a RAM FS).
Each user should be able to run something like 'srun
--someplugins=cpuflags,ramfs someprogram', which executes the bash scripts
cpuflags and ramfs before someprogram starts.

Currently I have two ideas to implement this: Prolog/Epilog scripts or a
SPANK plugin. But both concepts only work to a certain degree:

1. SPANK plugin
I implemented a plugin which does nothing more than read the parameters
passed via --someplugins and then execute the corresponding bash scripts
before and after the job allocation.
This already matches the desired behavior, except that we want to drain
nodes on failure. Unfortunately, when I return an error from the SPANK
plugin, the node still runs the job and the epilog, and then returns to
the idle state.


2. Prolog/Epilog
An even simpler approach would be a prolog/epilog script which reads
some environment variables set by the user and starts the corresponding
bash scripts. The prolog already behaves in the desired way and drains a
node on error. But I don't know how to pass variables or parameters to
the prolog script (which is run by slurmd).
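For context, the slurm.conf wiring I have in mind is just the standard hooks (the script paths are made-up examples); if I read the docs correctly, PrologFlags=Alloc makes the prolog run already at allocation time rather than at first task launch:

```
# slurm.conf (example paths)
Prolog=/etc/slurm/prolog.sh
Epilog=/etc/slurm/epilog.sh
PrologFlags=Alloc
```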

So is there a way to pass environment variables to the prolog configured
in slurm.conf?

I wonder how other people have solved similar problems; I'm certain
there are some creative solutions out there.

Thank you
Malte
