Completely as an aside, the next question then is 'Aha, but what happens
when you have new users on the cluster?'
I am currently working with sssd authentication and with the pam_mkhomedir
plugin.
I guess if an MPI job is launched using ssh then pam_mkhomedir would
automatically create the home directory.
Well I DID say that you need 'what looks like a home directory'.
So yes indeed you prove, correctly, that this works just fine!
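For anyone following along, a minimal sketch of how pam_mkhomedir is commonly wired up. The file path and options below are typical Linux defaults, not taken from this thread; adjust for your distribution:

```
# /etc/pam.d/system-auth (RHEL/CentOS) or /etc/pam.d/common-session (Debian)
# Create the user's home directory on first login -- including the first
# time an MPI launcher sshes into a node as that user.
session     optional    pam_mkhomedir.so skel=/etc/skel umask=0077
```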
On 12 May 2018 at 20:17, Eric F. Alemany wrote:
>
> Hi John,
>
> No worries at all. I take all ideas, comments and advice with the greatest
>
Hi,
Home directories are also shared among all nodes (and the head node as
well). The base system (that is, the local drive, including the OS, system
files, etc.) is identical and cloned among all the hosts. If I need to
install some light package (the dev version of a library, for example) I
Eric F. Alemany
System Administrator for Research
Division of Radiation & Cancer Biology
Department of Radiation Oncology
Stanford University School of Medicine
Stanford, California 94305
Hey folks,
I'm going to be unsubscribing from slurm-users for a while as I'll be
travelling to the US & UK for a number of weeks & I don't want to drown in
email.
I'll be back...
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
Eric, I'm sorry to be a little prickly here.
Each node has an independent home directory for the user?
How then do applications update dot files?
How then, for instance, would users edit their .bashrc file to
bring Anaconda into their paths?
Before anyone says it: yes, a proper Modules system would handle this.
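To illustrate the Modules point, here's a minimal modulefile (Tcl syntax, as used by Environment Modules and Lmod) that puts Anaconda on the PATH without anyone touching .bashrc. The install prefix /opt/anaconda3 and the modulefile path are made-up examples:

```tcl
#%Module1.0
## Hypothetical modulefile, e.g. /usr/share/modulefiles/anaconda/3
module-whatis "Anaconda Python distribution (example prefix)"
prepend-path PATH /opt/anaconda3/bin
```

Users then run "module load anaconda/3" instead of editing dot files on every node.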
On Friday, 11 May 2018 11:15:49 PM AEST Mahmood Naderan wrote:
> Excuse me... I see the output of squeue which says
> 170 IACTIVE bash mahmood PD 0:00 1 (AssocGrpMemLimit)
>
> I don't understand why the memory limit is reached?
That's based on what your job requests, not what it actually uses.
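For what it's worth, the limit and the request can be compared with standard Slurm commands. The user name, job id and partition below are just taken from the quoted squeue output; GrpTRES is where per-association memory limits live in recent Slurm versions:

```shell
# Show the association's group limits (look for mem=... in GrpTRES)
sacctmgr show assoc where user=mahmood format=User,Account,Partition,GrpTRES

# Show what the pending job asked for
scontrol show job 170 | grep -iE 'mem|tres'

# If the request exceeds the group limit, ask for less, e.g.:
salloc -p IACTIVE --mem=4G
```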
Hey Prentice,
On Friday, 11 May 2018 6:23:06 AM AEST Prentice Bisbal wrote:
> They would like to have their submission framework automatically
> detect if there's a reservation that may interfere with their jobs, and
> act accordingly.
As an additional data point there is also srun's
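As a rough sketch of what such a submission framework could check, both of these commands exist in stock Slurm; the time limit, partition and script name are invented placeholders:

```shell
# List reservations with their start/end times and node lists,
# so the framework can see whether a window overlaps the job.
scontrol show reservations

# Validate a job and get an estimated start time without submitting it
sbatch --test-only --time=01:00:00 -p IACTIVE job.sh
```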
On Saturday, 12 May 2018 3:35:39 PM AEST Mahmood Naderan wrote:
> Although I specified one compute node in an interactive partition, the
> salloc doesn't ssh to that node.
salloc doesn't do that.
We use a two-line script called "sinteractive" to do this; it's really simple:
#!/bin/bash
exec srun "$@" --pty -u "${SHELL}" -il