Marcin,

Thanks for the bindtmp link. Reading the code, I see that it looks up users
in /etc/passwd directly. We are using sssd for auth - I presume that means
this plugin will not work for us?
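
A quick sanity check I plan to run (a sketch only, nothing verified yet):
if the plugin parses /etc/passwd directly it will miss sssd-backed users,
whereas anything that goes through NSS (getpwnam()/getent) should still
see them:

 #!/bin/bash
 # For an sssd-only account the direct file lookup should fail
 # while the NSS-aware lookup succeeds.
 grep -q "^${USER}:" /etc/passwd || echo "${USER} not in /etc/passwd"
 getent passwd "${USER}" > /dev/null || echo "${USER} not visible via NSS"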

cheers
L.

------
The most dangerous phrase in the language is, "We've always done it this
way."

- Grace Hopper

On 27 June 2016 at 01:10, Marcin Stolarek <stolarek.mar...@gmail.com> wrote:

> This was discussed a number of times before. You can check the list
> archive, or start, for instance, with:
> https://github.com/fafik23/slurm_plugins/tree/master/bindtmp
>
> cheers
> marcin
>
> 2016-06-24 7:22 GMT+02:00 Lachlan Musicman <data...@gmail.com>:
>
>> We are transitioning from Torque/Maui to SLURM and have only just noticed
>> that SLURM puts all files in /tmp and doesn't create a per-job/per-user TMPDIR.
>>
>> On searching, we have found a number of options for creating TMPDIR on
>> the fly using SPANK plugins, Lua, and prolog/epilog scripts.
>>
>> I am looking for something relatively benign, since we are still
>> learning the new paradigm.
>>
>> One thing in particular: our /tmp directories are on node-local SSDs
>> (for speed) rather than on a shared filesystem, so we will also need to
>> remove the temporary directories when jobs finish.
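>>
>> Roughly what I have in mind (a sketch only - the script names are mine
>> and nothing is tested): a Prolog/Epilog pair in slurm.conf that creates
>> and removes a per-job directory on the local SSD. The slurm.conf man
>> page says SLURM_JOB_ID and SLURM_JOB_UID are in these scripts'
>> environment:
>>
>>  #!/bin/bash
>>  # prolog.sh - runs as root on each allocated node before the job starts
>>  mkdir -p "/tmp/slurm_${SLURM_JOB_ID}"
>>  chown "${SLURM_JOB_UID}" "/tmp/slurm_${SLURM_JOB_ID}"
>>
>>  #!/bin/bash
>>  # epilog.sh - runs as root on each node once the job ends
>>  rm -rf "/tmp/slurm_${SLURM_JOB_ID}"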
>>
>> So I was looking at srun's --prolog and --task-prolog options, doing a
>> little testing on how I might export TMPDIR.
>>
>> I had a very simple test:
>>
>> srun --prolog=/data/pro.sh --task-prolog=/data/t-pro.sh -l hostname
>>
>>  pro.sh:
>>
>>  #!/bin/bash
>>  echo "PROLOG: this is from the prologue. currently on `hostname`"
>>
>>  t-pro.sh:
>>
>>  #!/bin/bash
>>  echo "TASK-PROLOG: this is from the task-prologue. currently on `hostname`"
>>
>> /data is a shared filesystem and is the WORKDIR.
>>
>> I'm getting results from --prolog but not from --task-prolog.
>> Running this instead:
>>
>> srun --task-prolog=/data/t-pro.sh -l hostname
>>
>> I confirm there is still no output from the task prolog.
>>
>> What am I doing wrong?
>>
>> (both scripts have a+x)
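>>
>> One guess, from re-reading the srun man page: slurmstepd seems to parse
>> the task prolog's standard output rather than passing it through,
>> acting only on lines of the form "export NAME=value" (set a variable in
>> the task's environment) and "print ..." (write the rest of the line to
>> the task's stdout). If I've read that right, something like this should
>> both show output and set TMPDIR, though I haven't verified it:
>>
>>  #!/bin/bash
>>  # t-pro.sh - stdout is interpreted by slurmstepd, not shown verbatim
>>  echo "print TASK-PROLOG: running on $(hostname)"
>>  echo "export TMPDIR=/tmp/slurm_${SLURM_JOB_ID}"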
>>
>> cheers
>> L.
>>
>> ------
>> The most dangerous phrase in the language is, "We've always done it this
>> way."
>>
>> - Grace Hopper
>>
>
>
