The SLURM code as currently written cannot do what you want today, but could
be modified to do so. The code that would need to change is in
src/plugins/select/cons_res/job_test.c, in the function _select_nodes().
You want to limit the CPU count to 1 for the selected node. Something of
this sort should work:
	/* if successful, sync up the core_map with the node_map, and
	 * create a cpus array */
	if (rc == SLURM_SUCCESS) {
		cpus = xmalloc(bit_set_count(node_map) * sizeof(uint16_t));
		start = 0;
		a = 0;
		for (n = 0; n < cr_node_cnt; n++) {
			if (bit_test(node_map, n)) {
+				if (this is the node we want to restrict)
+					cpus[a++] = MIN(1, cpu_cnt[n]);
+				else
					cpus[a++] = cpu_cnt[n];
				if (cr_get_coremap_offset(n) != start) {
					bit_nclear(core_map, start,
						   (cr_get_coremap_offset(n)) - 1);
				}
				start = cr_get_coremap_offset(n + 1);
			}
		}
		if (cr_get_coremap_offset(n) != start) {
			bit_nclear(core_map, start,
				   cr_get_coremap_offset(n) - 1);
		}
	}
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Morris "Moe" Jette [email protected] 925-423-4856
Integrated Computational Resource Management Group fax 925-423-6961
Livermore Computing Lawrence Livermore National Laboratory
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
________________________________________
From: [email protected] [[email protected]] On Behalf
Of Ramiro Alba [[email protected]]
Sent: Thursday, February 24, 2011 12:19 AM
To: [email protected]
Subject: [slurm-dev] Limiting core allocation at only one node
Hi all,
I wonder if it is possible to use 'salloc' to reserve cores from a
certain partition, BUT imposing a certain node (from that partition) and
using ONLY one core from this node.
This 'strange' allocation is explained by the need to use one server to
open connections to desktop graphics clients (ParaView), while using a
pool of nodes to do the rendering tasks.
I tried the following config:
NodeName=jff[201-299,300-328] Weight=10 Procs=8 RealMemory=16000 \
    Sockets=2 CoresPerSocket=4 ThreadsPerCore=1 State=UNKNOWN
NodeName=jff Weight=1 Procs=1 RealMemory=8000 Sockets=1 \
    CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
PartitionName=global Nodes=jff[201-299,300-328] Default=YES \
    MaxTime=INFINITE State=UP
PartitionName=paraview \
    Nodes=jff,jff[203,217-221,223-231,252,253,264,302] Default=NO \
    MaxTime=INFINITE State=UP
Then at the command line:
NCORES=16
COMMAND="mpirun -np $NCORES pvserver --server-port=$PORT"
salloc --partition=paraview --nodelist=jff --ntasks=$NCORES \
    --immediate=5 $COMMAND
But it is always allocating 7-8 cores on the 'jff' node. Ideally, I would
like to allocate ONE core at a time from 'jff', up to a maximum of 4. Is
it possible to do that?
Thanks in advance
Regards
--
Ramiro Alba
Centre Tecnològic de Transferència de Calor
http://www.cttc.upc.edu
Escola Tècnica Superior d'Enginyeries
Industrial i Aeronàutica de Terrassa
Colom 11, E-08222, Terrassa, Barcelona, Spain
Tel: (+34) 93 739 86 46
--