Hey Sam,

As you can see, apart from cgroup_disable=memory there is no cgroup_disable in there anywhere:
[root@frodo ~]# cat /proc/cmdline
init=/sbin/bootcpuset root=/dev/md0 rd.luks=0 rd.lvm=0 LANG=en_US.UTF-8 rd_MD_UUID=6fdc973b:49b47f48:e87de278:ae21f48c SYSFONT=latarcyrheb-sun16 crashkernel=512M KEYBOARDTYPE=pc KEYTABLE=us rd.dm=0 ipmi_si.trydefaults=0 ipmi_si.trydmi=0 rcutree.rcu_cpu_stall_suppress=1 add_efi_memmap nortsched processor.max_cstate=1 intel_idle.max_cstate=1 nmi_watchdog=0 ipmi_si.tryacpi=0 nobau log_buf_len=8M pci=hpiosize=0,hpmemsize=0,nobar earlyprintk=ttyS0,115200n8 relax_domain_level=2 cgroup_disable=memory nohz=off pcie_aspm=on highres=off rd.shell udev.children-max=64 console=ttyS0,115200n8

But thanks for your help so far :)

Adam

On 19.07.2016 at 19:15, Sam Gallop (NBI) wrote:
Hi Adam,

Sorry for not replying sooner.  What does the output of 'cat /proc/cmdline' 
show?  Does it have cgroup_disable in there anywhere?

---
Sam Gallop


-----Original Message-----
From: A. Podstawka [mailto:adam.podsta...@dsmz.de]
Sent: 13 July 2016 12:24
To: slurm-dev <slurm-dev@schedmd.com>
Subject: [slurm-dev] RE: SGI UV2000 with SLURM


Hi Samuel,

[root@frodo ~]# slurmd -C
ClusterName=(null) NodeName=frodo CPUs=128 Boards=8 SocketsPerBoard=16
CoresPerSocket=8 ThreadsPerCore=1 RealMemory=1998969 TmpDisk=273505
UpTime=19-01:49:04

This seems equivalent to the config line in slurm.conf.

We are running RHEL7 with the /sbin/bootcpuset, but I think I need to use the 
default init system from RHEL7 and cgroups instead of SGI's own bootcpuset 
binary.
My problem is that with the bootcpuset active, SLURM can only start/run jobs 
inside of the (very small: cpu=0-7) bootcpuset.
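
For reference, the usual way to let SLURM manage cpusets itself seems to be the 
task/cgroup plugin. A minimal sketch of what I understand would be needed (the 
Constrain* options below are only guesses, not our actual config):

# slurm.conf
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup

# cgroup.conf
CgroupAutomount=yes
ConstrainCores=yes
ConstrainRAMSpace=yes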

Any good advice, hints, or links to a good cgroups manual?

Thanks
Adam

On 08.07.2016 at 12:51, Sam Gallop (NBI) wrote:
Hi Adam,

What does 'slurmd -C' say?  If you don't have SLURM installed yet, running the 
SGI command 'topology' or 'lscpu' may give you an insight into what should be 
specified.
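
For example, something along these lines (assuming the usual util-linux lscpu) 
pulls out the counts that map onto the slurm.conf keywords:

lscpu | egrep '^CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)'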

If you are having trouble with cgroups, have you checked that they haven't been 
disabled via a kernel parameter?
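
A quick way to check is something like (assuming GNU grep):

grep -o 'cgroup_disable=[^ ]*' /proc/cmdline

If that prints nothing, no cgroup controller is disabled on the kernel command line.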

---
Samuel Gallop
Computing infrastructure for Science
CiS Support & Development

NBI Partnership Ltd.
Norwich Research Park
Colney Lane, Norwich
NR4 7UH

The NBI Partnership Ltd provides non-scientific services to the
Earlham Institute, the Institute of Food Research, the John Innes
Centre and The Sainsbury Laboratory

-----Original Message-----
From: A. Podstawka [mailto:adam.podsta...@dsmz.de]
Sent: 08 July 2016 08:05
To: slurm-dev <slurm-dev@schedmd.com>
Subject: [slurm-dev] SGI UV2000 with SLURM


Hi,

Just a small question: is anyone here using an SGI UV2000 with SLURM?

We want to migrate from SGE to SLURM, but are having some trouble getting SLURM 
with cgroups (cpusets) running correctly on our SGI UV2000.

Our NodeName line:
NodeName=frodo NodeAddr=172.18.250.11 CPUs=128 RealMemory=1998969
CoresPerSocket=8 ThreadsPerCore=1 State=DOWN

Is this correct for an SGI UV with 8 "nodes", 2 sockets per node, and 8 cores 
per socket? As this is an aggregated system, I have a hard time getting it 
right.
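
Would something like the following be closer? (Just a guess based on the 
geometry above, i.e. 8 boards x 2 sockets x 8 cores x 1 thread = 128 CPUs; 
untested.)

NodeName=frodo NodeAddr=172.18.250.11 Boards=8 SocketsPerBoard=2 CoresPerSocket=8 ThreadsPerCore=1 RealMemory=1998969 State=DOWN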


Thanks in advance
Adam

--
Adam Podstawka
Senior Systemadministrator
Informatician - Bioinformatics
Leibniz Institute DSMZ-German Collection of Microorganisms and Cell
Cultures Inhoffenstr. 7 B
38124 Braunschweig
Germany
www.dsmz.de

Director: Prof. Dr. Jörg Overmann
Local court: Braunschweig HRB 2570
Chairman of the management board: RD Dr. David Schnieders

DSMZ - A member of the Leibniz Association www.leibniz-gemeinschaft.de
