Hmm.. I will have to investigate pam_slurm_adopt .
David William Botsch
Programmer/Analyst
@CNFComputing
bot...@cnf.cornell.edu
On October 17, 2018 12:03:02 AM Chris Samuel wrote:
On Wednesday, 17 October 2018 12:04:05 AM AEDT Jeffrey Frey wrote:
> Make sure you're using RSA keys in users' accounts
We use SSH's host-based authentication instead (along with pam_slurm_adopt on
compute nodes, so users can only get into nodes they have a job on).
X11 forwarding works here.
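For anyone curious, the relevant bits are roughly the following (a sketch, not
our exact config; shosts.equiv and ssh_known_hosts also need the node names and
host keys):

```
# /etc/ssh/sshd_config on the compute nodes
HostbasedAuthentication yes

# /etc/ssh/ssh_config on all nodes (client side)
HostbasedAuthentication yes
EnableSSHKeysign yes
```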
-
Hi all,
I'm using slurm 17.02.8, and when I query pending jobs with sprio or sacct I
get different results for the slurm priority. Can someone explain the
difference, or at least point me to the document where this is discussed,
because I must have missed it.
sprio example:
$ sprio -j 340
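For comparison, the two queries I'm running look like this (the job ID is just
an example, and Priority is the sacct format field I'm using):

```
$ sprio -j 340 -l
$ sacct -j 340 --format=JobID,Priority
```

My understanding (which may be wrong) is that sprio reports the live value
computed by the priority/multifactor plugin, while sacct reports what is stored
in the accounting database, so the two may be sampled at different times.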
So I got what I wanted working with RSA keys (and making sure to put the
public RSA key in ~/.ssh/authorized_keys), and of course that prolog
statement in slurm.conf.
What I ended up doing was just creating my own separate script, analogous
to cluster-env, to create the RSA keys. I'm trying not to str
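In outline the script does something like this (a sketch only; it writes to a
scratch directory here so it is safe to run anywhere, whereas the real script
targets each user's ~/.ssh):

```shell
# Sketch of a standalone per-user key-setup script (names are illustrative):
sshdir="./demo_ssh"
mkdir -p "$sshdir" && chmod 700 "$sshdir"
key="$sshdir/cluster"
if [ ! -f "$key" ]; then
    # generate an RSA keypair with no passphrase
    ssh-keygen -q -t rsa -b 2048 -N '' -f "$key"
    # authorize it for node-to-node logins within the cluster
    cat "$key.pub" >> "$sshdir/authorized_keys"
    chmod 600 "$sshdir/authorized_keys"
fi
```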
On Tuesday, 16 October 2018, at 09:30:13 (-0400),
Dave Botsch wrote:
> Hrm... it looks like the default install of OHPC went with DSA keys
> instead:
>
> .ssh]$ cat config
> # Added by Warewulf 2018-10-08
> Host *
> IdentityFile ~/.ssh/cluster
> StrictHostKeyChecking=no
> $ file cluster
>
Dave,
My platform is Rocks with CentOS 7.0. It may not be exactly your case,
but it may help you with some ideas on what to do. I used
https://github.com/hautreux/slurm-spank-x11 and here is the guide
that Ian Mortimer pointed me to:
There should be a binary slurm-spank-x11 and a library x11.so whic
Ok. Progress.
Per:
https://bugs.schedmd.com/show_bug.cgi?id=4721
I was missing PrologFlags=x11 in slurm.conf.
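For the archives, the line is just the following (my understanding is that
Slurm's built-in X11 forwarding needs 17.11+ built against libssh2, and a
daemon restart after changing the flag):

```
# slurm.conf
PrologFlags=x11
```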
So now my issue is just that X11 forwarding doesn't work... slurmd.log:
[2018-10-16T11:38:34.136] [27.extern] error: ssh public key
authentication failure: Username/PublicKey combinat
Hmm..
my hostname is already set to the short hostname per the output of
"hostname".
On Tue, Oct 16, 2018 at 03:58:03PM +0100, Tina Friedrich wrote:
> Regular ssh forwarding worked just fine with the long hostnames. It's just
> Slurm's version thereof that doesn't. (I think same for DSA/RSA ke
Regular ssh forwarding worked just fine with the long hostnames. It's just
Slurm's version thereof that doesn't. (I think the same for DSA/RSA keys etc.)
Tina
On Tuesday, 16 October 2018 09:31:17 BST Dave Botsch wrote:
> That's not the issue, here (though I have experienced that before).
> Regular s
I'm not aware of one. This may be worth a feature request to the devs
at bugs.schedmd.com
-Paul Edmon-
On 10/16/18 7:29 AM, Antony Cleave wrote:
Hi All
Yes, I realise this is almost certainly the intended outcome. I have
wondered this for a long time but only recently got round to testing
Sadly that did not make a difference.
On Mon, Oct 15, 2018 at 09:31:26PM -0400, R. Paul Wiegand wrote:
> I believe you also need:
>
> X11UseLocalhost no
>
>
>
> > On Oct 15, 2018, at 7:07 PM, Dave Botsch wrote:
> >
> > Hi.
> >
> > X11 forwarding is enabled and works for normal ssh.
> >
> >
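What I had tried there, for reference (a sketch of the sshd change on the
login/submit host, followed by a reload):

```
# /etc/ssh/sshd_config
X11Forwarding yes
X11UseLocalhost no
```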
At least by itself, switching to rsa keys did not fix it.
Used ssh-keygen to create an RSA key and edited .ssh/config to point to
that instead of to the DSA key. So unless srun is bypassing that
.ssh/config... nope.
On Tue, Oct 16, 2018 at 09:04:05AM -0400, Jeffrey Frey wrote:
> Make sure you're
Hi.
Reminder :)
On Tue, Oct 16, 2018 at 07:34:59AM +0330, Mahmood Naderan wrote:
> Dave,
> With previous versions, I followed some steps with the help of guys here.
> Don't know about newer versions.
>
> Please send me a reminder in the next 24 hours and I will send you the
> instructions. At th
That's not the issue, here (though I have experienced that before).
Regular ssh forwarding works fine.
On Tue, Oct 16, 2018 at 09:47:21AM +0100, Tina Friedrich wrote:
> I had an issue getting x11 forwarding via SLURM (srun/sbatch) to work; ssh
> worked fine. Tracked it down to the host name setti
Hrm... it looks like the default install of OHPC went with DSA keys
instead:
.ssh]$ cat config
# Added by Warewulf 2018-10-08
Host *
IdentityFile ~/.ssh/cluster
StrictHostKeyChecking=no
$ file cluster
cluster: PEM DSA private key
Now I have to find where that's configured since it autocr
On 10/16/2018 03:04 PM, Jeffrey Frey wrote:
> Make sure you're using RSA keys in users' accounts -- we'd started setting-up
> ECDSA on-cluster keys as we built our latest cluster but libssh at that point
> didn't support them. And since the Slurm X11 plugin is hard-coded to only
> use ~/.ssh
Make sure you're using RSA keys in users' accounts -- we'd started setting up
ECDSA on-cluster keys as we built our latest cluster, but libssh at that point
didn't support them. And since the Slurm X11 plugin is hard-coded to only use
~/.ssh/id_rsa, that further tied us to RSA. It would be nice
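So in practice that means making sure each user has an RSA key at exactly that
path, e.g. (a sketch; generating with no passphrase):

```
$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
```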
Hi Roland,
That website is improperly configured. My Firefox browser says:
qlustar.com uses an invalid security certificate. The certificate is
only valid for the following names: docs.qlustar.com, www.qlustar.com
Error code: SSL_ERROR_BAD_CERT_DOMAIN
/Ole
On 10/16/2018 02:27 PM, Roland F
Hi all,
we just released Qlustar 10.1, the full-fledged Cluster OS for HPC
and storage with a Slurm GUI. This is our first release also supporting
CentOS with OpenHPC integration.
The download button is at https://qlustar.com/download
Enjoy,
Roland
---
https://www.q-leap.com / https://ql
Hi All
Yes, I realise this is almost certainly the intended outcome. I have
wondered this for a long time but only recently got round to testing it on
a safe system.
The process is simple:
run a lot of jobs,
let decay take effect,
change the setting,
restart dbd and ctld,
run another job with debug2 on th
Just a tip: make sure that the kernel has support for constraining swap
space. I believe we once had to reinstall one of our clusters because we
had forgotten to check that. You should get an error in slurmd.log if it
cannot set the swap limit (i.e., if the kernel does not support it).
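A quick way to check on a node (a cgroup v1 sketch; on Debian/Ubuntu kernels
swap accounting is typically compiled in but disabled unless the kernel is
booted with swapaccount=1):

```
$ grep -o 'swapaccount=[01]' /proc/cmdline
$ ls /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes
```

If the memory.memsw.* files are missing, the kernel cannot constrain swap.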
--
I had an issue getting X11 forwarding via Slurm (srun/sbatch) to work; ssh
worked fine. Tracked it down to the hostname setting on the nodes; as per the
RedHat/CentOS default, the hostname was set to the fully qualified name. Turns
out Slurm's X11 forwarding doesn't work with that; setting the hostnames
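The fix on our side was along these lines (node001 is just a placeholder; keep
the FQDN resolvable via /etc/hosts or DNS):

```
$ hostnamectl set-hostname node001
$ hostname
node001
```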
Bill, you know this already. But permit me an observation from PBSPro.
Turn up the logging level to maximum on the nodes. Tail the Slurm log and
start a job.
Look HARD at exactly what the log is telling you - and as Richard Feynman
says, you are the easiest person to fool.
Don't take the log to say w
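Concretely, something like this (the log path is whatever your slurm.conf
points at; node verbosity comes from SlurmdDebug in slurm.conf plus a
reconfigure):

```
$ scontrol setdebug debug2             # raise slurmctld verbosity on the fly
$ tail -f /var/log/slurm/slurmd.log    # on the node; or run: slurmd -D -vvv
```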
Rather dumb question from me - have you checked that those processes are
running within a cgroup?
I have no experience constraining swap usage with cgroups, so sorry if I
am adding nothing to the debate here.
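E.g., on the compute node while a job task is running (the cgroup path shown
is just what I would expect task/cgroup to produce, with uid and job id
varying):

```
$ grep memory /proc/<task_pid>/cgroup
```

With task/cgroup active the line should contain something like
slurm/uid_<uid>/job_<jobid>; if it only shows "/", the process is not
constrained.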
On Tue, 16 Oct 2018 at 04:49, Bill Broadley wrote:
>
> Greetings,
>
> I'm using ubun
On 10/16/2018 01:07 AM, Dave Botsch wrote:
> Hi.
>
> X11 forwarding is enabled and works for normal ssh.
I faced the same issue, with ssh X11 working as expected on compute nodes,
but not with srun --x11.
I patched Slurm locally to make it work.
What you can try to see if it is the same issue: