Trevor,
if you have the option, set up an extra machine that
a) manages users via LDAP
b) exports user homes via NFS
c) exports some scratch space (though that won't scale performance-wise)
and you sidestep both of the topics you asked about on this list.
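A minimal sketch of the NFS export side (the paths and the subnet are
placeholders, adjust them to your network):

  # on the file server, /etc/exports might contain:
  #   /home     192.168.1.0/24(rw,sync,no_subtree_check)
  #   /scratch  192.168.1.0/24(rw,async,no_subtree_check)
  # then activate the exports:
  exportfs -ra
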
Regards,
Uwe
On 07.05.2015 at 20:27, Trevor Gale wrote:
>
> Thank you for your detailed response. I think my main issue is that I’m very
> new to Slurm, and clusters in general. I plan on setting up a global file
> system across my desktops, and was wondering what software you would
> recommend. I saw that the Slurm documentation mentions Lustre and NFS, but I
> have no experience with either, so I was curious what you would suggest.
>
> Thanks,
> Trevor
>
>> On May 7, 2015, at 7:28 PM, Uwe Sauter <[email protected]> wrote:
>>
>>
>> Trevor,
>>
>> I don't know what your intent is or what machine you are preparing yourself
>> for, but in general login nodes and compute nodes share a common filesystem,
>> which makes moving data around (inside the cluster) unnecessary.
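>>
>> For example (a minimal sketch; "fileserver" is a placeholder hostname):
>>
>>   # on each node, /etc/fstab might contain:
>>   #   fileserver:/home  /home  nfs  defaults  0  0
>>   # or, to test by hand:
>>   mount -t nfs fileserver:/home /home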
>>
>> If you really need to move data from node-local space back to the login
>> node, there are several ways to do so:
>>
>> * Export some part of the login node's filesystem to your compute nodes.
>> * Put an scp/rsync step into your job script. (Make sure your SSH keys are
>>   placed in the authorized_keys file; see the sketch after this list.)
>> * Run a "data mover" job that depends on your compute job (and on the node
>>   where the compute job ran); also sketched below.
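>>
>> A rough sketch of the last two options (hostnames, paths and script names
>> are made up; adapt them to your setup). At the end of the job script:
>>
>>   #!/bin/bash
>>   #SBATCH --job-name=compute
>>   # run the actual program, writing to node-local scratch space
>>   srun ./my_program > /tmp/result.$SLURM_JOB_ID
>>   # copy the result back; needs your SSH public key in
>>   # ~/.ssh/authorized_keys on "loginnode"
>>   rsync -a /tmp/result.$SLURM_JOB_ID loginnode:results/
>>
>> Or as a separate "data mover" job that only starts once the compute job
>> has finished successfully and is pinned to the node where it ran:
>>
>>   jobid=$(sbatch compute.sh | awk '{print $4}')
>>   sbatch --dependency=afterok:$jobid --nodelist=node01 mover.sh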
>>
>> There are likely more solutions to your problem, but before you go any
>> further it would be good to put some thought into your setup. Does it
>> represent what you are trying to achieve?
>>
>> Regards,
>>
>> Uwe
>>
>>
>>
>> On 07.05.2015 at 15:19, Trevor Gale wrote:
>>>
>>> Hello,
>>>
>>> I’m currently running one desktop computer as a controller and one as a
>>> compute node for testing. I’m running a simple test script: I allocate
>>> resources with salloc, copy the script to my node with sbcast, and execute
>>> it with srun. The problem I’m having is that I want the output of the
>>> programs I run to come back to the head node (or be written there in the
>>> first place) after execution, but all of my outputs end up on the node
>>> they execute on. Does Slurm support any method of output collection, or is
>>> there some configuration I can change to move all the outputs to the head
>>> node? This seems like an issue that other users would encounter; does
>>> anyone have a good method for fixing this?
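>>>
>>> Roughly what I’m doing (the script name is just an example):
>>>
>>>   salloc -N1
>>>   sbcast ./test.sh /tmp/test.sh
>>>   srun /tmp/test.sh    # the script writes its results to local disk,
>>>                        # so they stay on the compute node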
>>>
>>> Thanks,
>>> Trevor
>>>