Hello, slurm-users:
Has anyone encountered an error like mine? I have just installed and
configured Slurm with a slurm-control node and a slurm-compute node,
both of which run Ubuntu. When I tried to run the
command below:
./srun /bin/hostname
An error happened as foll
You are invited to submit an abstract of a tutorial, technical
presentation or site report to be given at the Slurm User Group Meeting
2018. This event is sponsored and organized by CIEMAT and SchedMD. This
international event is open to everyone who wants to:
- Learn more about Slurm, a hig
Yar, not AT ALL to pick holes in what you have done, please please.
The way to configure PAM modules these days should be pam-auth-update:
http://manpages.ubuntu.com/manpages/artful/man8/pam-auth-update.8.html
The directory /usr/share/pam-configs contains files supplied by a package
when it is i
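For illustration, a profile file in /usr/share/pam-configs follows the pam-auth-update format sketched below. The module name `pam_mymodule.so` and all field values here are hypothetical, not taken from any real package:

```
Name: My example module
Default: yes
Priority: 256
Session-Type: Additional
Session:
        optional        pam_mymodule.so
```

Running `pam-auth-update` then merges enabled profiles into the files under /etc/pam.d.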
Hi,
We encountered this issue some time ago (see:
https://www.mail-archive.com/slurm-dev@schedmd.com/msg06628.html). You
need to add pam_systemd to the slurm pam file, but pam_systemd will
try to take over Slurm's cgroups. Our current solution is to add
pam_systemd to the slurm pam file, but i
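For reference, the change being discussed amounts to adding a line like the following to Slurm's PAM file (commonly /etc/pam.d/slurm, though the path may differ per distribution); this is a sketch of the idea, not the exact file from the linked thread:

```
session    optional    pam_systemd.so
```

With this in place, pam_systemd sets up the user's session environment (e.g. XDG_RUNTIME_DIR), but, as noted above, it may also move the task into systemd's cgroup hierarchy, which conflicts with Slurm's own cgroup management.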
Your problem is that you are listening to Lennart Poettering...
I cannot answer your question directly. However I am doing work at the
moment with PAM and sssd.
Have a look at the directory which contains the unit files. Go to
/lib/systemd/system.
See that nice file named -.slice? Yes, that file is
Hi,
we're currently in the process of migrating from RHEL6 to 7, which also
brings us the benefit of having systemd. However, we are observing
problems with user applications that use e.g. XDG_RUNTIME_DIR, because
SLURM apparently does not really run the user application through the
PAM stack
Hi Juan,
"Juan A. Cordero Varelaq" writes:
> Dear Slurm users,
>
> Is it possible to allocate more resources for a current job on an interactive
> shell? I just allocate (by default) 1 core and 2Gb RAM:
>
> srun -I -p main --pty /bin/bash
>
> The node and queue where the job is located has 120 G
Pär Lindfors writes:
> Does anybody know what parallel debugging use case this refers to?
>
> I did a small test and stripped all files from Slurm packages on a few
> compute nodes, and could still successfully use Allinea DDT to launch
> and debug an MPI application using srun and PMI2.
Just a
Prentice Bisbal writes:
> if job_desc.pn_min_mem > 65536 then
> slurm.user_msg("NOTICE: Partition switched to mque due to memory
> requirements.")
> job_desc.partition = 'mque'
> job_desc.qos = 'mque'
> return slurm.SUCCESS
> end
Somewhat off-topic, but: So, does slurm.user_msg(
Dear Slurm users,
Is it possible to allocate more resources for a current job on an
interactive shell? I just allocate (by default) 1 core and 2 GB RAM:
srun -I -p main --pty /bin/bash
The node and queue where the job is located has 120 GB and 4 cores
available.
I just want to use more core
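One way to get there is to request the resources up front when starting the interactive shell, since a running allocation generally cannot be grown. A sketch using standard srun flags (the core and memory values below are just examples):

```shell
# request 4 cores and 8 GB of memory for the interactive shell
srun -I -p main --cpus-per-task=4 --mem=8G --pty /bin/bash
```

For a job that is still pending, `scontrol update` can adjust the request before it starts; once the job is running, the usual approach is to cancel and resubmit with larger limits.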