Zhang,

I understand what you mean, but I don't think there is a way to configure
Slurm to use the running node's environment rather than the submit node's -
since you, by definition, will not know which node your job will run on. Some
Slurm clusters have 1000s of nodes.

I think that's another reason for user accounts to be synchronised across the
cluster.

While not the case in every setup, these types of clusters often use a shared
filesystem, or a partially shared one - and /home is one of the shared spaces
for exactly this reason.

Have you tried adding the .bashrc to that user's home directory on each
machine? Also, the job may not run in a login shell, so maybe start the sbatch
script with #!/bin/bash -l ?
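
For example, a minimal sketch (the job name and partition here are just
placeholders - adjust to whatever your cluster actually uses):

    #!/bin/bash -l
    #SBATCH --job-name=env-test
    #SBATCH --partition=compute
    #SBATCH --nodes=1
    # The -l flag makes bash behave as a login shell, so /etc/profile and
    # ~/.bash_profile (and anything they source, such as ~/.bashrc) are read
    # on the node the job actually runs on.
    env

Submit it with sbatch and compare the env output with what you see when you
ssh to the node directly.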

I think it's important to understand that shells and environments - all of
them, as a concept - are harder than they seem. They are relatively easy to
explain, but get complex quickly, which is why you are seeing this issue.

Also, part of the reason for using sbatch with batch files is to take away
some of that pain.

Of course, there is another option - you can put things you would like to be
persistent in /etc/profile.d/env.sh or /etc/environment on all your nodes, so
that they are available in every environment.
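
Something along these lines would do it (the variable name and path below are
purely illustrative):

    # /etc/profile.d/env.sh - sourced via /etc/profile by login shells on
    # every node
    export SHARED_TOOLS=/opt/shared/tools   # hypothetical install location
    export PATH=$SHARED_TOOLS/bin:$PATH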

Remember: this problem is unlikely to be a fault in SLURM and more likely
to be that environments and shells are hard.

Cheers
L.



------
"The antidote to apocalypticism is *apocalyptic civics*. Apocalyptic civics
is the insistence that we cannot ignore the truth, nor should we panic
about it. It is a shared consciousness that our institutions have failed
and our ecosystem is collapsing, yet we are still here — and we are
creative agents who can shape our destinies. Apocalyptic civics is the
conviction that the only way out is through, and the only way through is
together. "

Greg Bloom @greggish
https://twitter.com/greggish/status/873177525903609857

On 15 September 2017 at 11:39, Chaofeng Zhang <zhang...@lenovo.com> wrote:

> Hi Lachlan,
>
>
>
> I am OK if I need some additional variables; I can use module load to load
> the module. But /etc/profile and /home/user1/.bashrc already define many
> variables - I think these are default variables - and currently I also need
> to source them every time before using them, which is not reasonable from
> my view.
>
> Is there a way to configure Slurm to use the running node's env, not the
> submit node's env?
>
>
>
> Thanks.
>
> From: Lachlan Musicman [mailto:data...@gmail.com]
> Sent: Friday, September 15, 2017 6:55 AM
> To: slurm-dev <slurm-dev@schedmd.com>
> Subject: [slurm-dev] Re: why the env is the env of submit node, not the
> env of job running node.
>
>
>
> On 14 September 2017 at 19:41, Chaofeng Zhang <zhang...@lenovo.com> wrote:
>
> On node A, I submit a job file using the sbatch command and the job runs on
> node B, but you will find that the output is not the env of node B, it is
> the env of node A.
>
>
>
> #!/bin/bash
> #SBATCH --job-name=mnist10
> #SBATCH --partition=compute
> #SBATCH --workdir=/home/share
> #SBATCH --nodes=1
> #SBATCH --ntasks-per-node=1
> #SBATCH --cpus-per-task=1
> env
>
>
>
> Zhang,
>
> That is how it's meant to work.
>
> If you need a special env, you are meant to set it up in the sbatch script.
>
> We (where I work) use Environment Modules to do this:
>
> module load python3
>
> module load java/java-1.8-jre
>
> module load samtools/1.5
>
>
>
> All of these are prepended to the PATH in the resulting env.
>
> But you could do anything you want - it is just a shell script after all
>
> SET_VAR="/path/"
>
> NEW_PATH=$SET_VAR:$PATH
>
> etc....
>
>
>
> Cheers
>
> L.
>
>
>
