Moe helped offline in understanding some of the ways this could work.
By defining DefMemPerCPU= in slurm.conf, users who submit jobs get this
value assigned by default. By also adding MaxMemPerCPU=, when a user
requests more than a core's associated memory, the allocated core count
is increased accordingly to cover the requested memory.
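If I follow that suggestion correctly, the two knobs would sit together in slurm.conf roughly like this (the 4000 MB value is just illustrative, matching the default mentioned below):

```conf
# slurm.conf fragment (illustrative values)
DefMemPerCPU=4000   # default MB per allocated CPU when a job requests no memory
MaxMemPerCPU=4000   # per-CPU ceiling; a larger --mem request raises the CPU count instead
```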
Internal discussions came to the conclusion that this may not be the
correct approach. In one example, suppose a user requested a single
core and half the memory. Since we share nodes, other jobs requesting
much less memory but requiring cores could also share this node. For a
12-core node that would leave 11 cores still available, provided their
memory use fits under the remaining 50%.
So this brings up the question, again, of exactly what constitutes a
node. I count four resource dimensions: cores, memory, local disk, and
IB bandwidth. Memory and core counts are easily determined; the other
two, not so easily. So let's take the easy way out and just consider
cores and memory.
On a machine with 20 cores, to make the math easier, I can take the
above example:
job1 - 1 core, half the memory
This job uses 50% of the memory and 5% of the cores.
job2 - 19 cores, the other half of the memory
This job uses 50% of the memory and 95% of the cores.
Normalizing over the two values for cores and memory, I could calculate
that
job1 = 27.5% of the machine ((50+5)/2) and
job2 = 72.5% ((50+95)/2).
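The arithmetic above can be sketched as a small helper (the function name and interface are my own, purely for illustration):

```python
def node_fraction(cores_used, total_cores, mem_used, total_mem):
    """Average the core and memory fractions into one utilization percentage."""
    core_pct = 100.0 * cores_used / total_cores
    mem_pct = 100.0 * mem_used / total_mem
    return (core_pct + mem_pct) / 2

# 20-core node: job1 takes 1 core and half the memory,
# job2 takes the remaining 19 cores and the other half.
job1 = node_fraction(1, 20, 0.5, 1.0)   # (5 + 50) / 2  -> 27.5
job2 = node_fraction(19, 20, 0.5, 1.0)  # (95 + 50) / 2 -> 72.5
```

With this normalization the two jobs together account for exactly 100% of the node, which is what a utilization query wants.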
So I believe we have a solution based on two parameters which will
satisfy at least a query of per-node utilization, which is always the
question asked by the management above. I have no idea whether anything
like this gets figured into the fairshare components of Slurm, but I
suspect it just uses CPU time as the reference.
Bill
On 03/19/2014 02:37 PM, Bill Wichser wrote:
I have a cluster with three kinds of memory per node: 4G/core, 8G/core
and 6.5G/core.
In the slurm.conf file, DefMemPerCPU=4000 accounts for the worst case. I
define RealMemory as the actual memory in my NodeName definitions.
Also defined in slurm.conf are:
SelectType=select/cons_res
SelectTypeParameters=CR_Core_Memory,CR_CORE_DEFAULT_DIST_BLOCK
so we can allocate according to job script requirements.
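Pulled together, the settings described above would look roughly like this in slurm.conf (the NodeName line is an illustrative sketch, not my actual definition):

```conf
DefMemPerCPU=4000
SelectType=select/cons_res
SelectTypeParameters=CR_Core_Memory,CR_CORE_DEFAULT_DIST_BLOCK
# illustrative only: a 12-core node at 4G/core
NodeName=node[01-12] CPUs=12 RealMemory=48000
```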
So far, so good.
Let's suppose that a user requests a single core with a --mem=8000
requirement. There are lots of options for which node this might be
scheduled on, so my question is: how do you account for this? Should I
even bother?
In the past, using Torque, we would require users to request enough
cores to cover the memory usage and to add an attribute like :mem48 to
distinguish which nodes to choose from the pool. Naturally they would
either get this wrong, not allocate enough, or not care! But this was
important when it came to doing system accounting, as we calculated this
value strictly from core usage.
With Slurm, the consumable resources plugin seems to work just as
expected. Using cgroups limits users to exactly what they requested and
is a wonderful feature. But this changes the way that we will need to do
accounting, and I am just looking for advice or guidance on how others
are doing so.
Thanks,
Bill