Err... Wrong...

On 2013-01-18 13:59, Magnus Jonsson wrote:
Hi!

I'm experimenting with CR_ALLOCATE_FULL_SOCKET and found some weird
behaviour.

I'm currently running git/master, but I have seen the same behaviour on
2.4.3 with the #define.

My slurm.conf:

SelectType=select/cons_res
SelectTypeParameters=CR_Socket_Memory,CR_CORE_DEFAULT_DIST_BLOCK,CR_ALLOCATE_FULL_SOCKET


This is my submit script (the important parts):

#SBATCH -n1
#SBATCH --ntasks-per-socket=1

This gives me (from scontrol show job):

    NumNodes=1 NumCPUs=2 CPUs/Task=1 ReqS:C:T=*:*:*
      Nodes=t-cn1033 CPU_IDs=42-43 Mem=5000

This is the correct output (but still the wrong behaviour :-). Copy'n'paste is hard sometimes...

If I submit:

#SBATCH -n6
#SBATCH --ntasks-per-socket=3

it gives me (from scontrol show job):

    NumNodes=1 NumCPUs=6 CPUs/Task=1 ReqS:C:T=*:*:*
      Nodes=t-cn1033 CPU_IDs=36-38,42-44 Mem=15000

I think this is caused by how the ntasks-per-socket code selects nodes
in job_test.c of the cons_res plugin.
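
To illustrate what I mean, here is a toy model (only my guess at what is
happening, not the actual cons_res code): if the per-socket task limit is
applied before any full-socket rounding, the numbers from the -n6 example
drop out directly.

/* toy-per-socket.c -- toy model of the suspected issue, NOT the real
 * job_test.c logic: the per-socket task limit caps how many cores get
 * marked on each socket, so the allocation never grows to whole
 * sockets even though CR_ALLOCATE_FULL_SOCKET is set. */
#include <stdio.h>

#define SOCKETS          8
#define CORES_PER_SOCKET 6

int main(void)
{
    int ntasks = 6, ntasks_per_socket = 3;   /* the -n6 example above */
    int allocated = 0, sockets_used = 0;

    for (int s = 0; s < SOCKETS && allocated < ntasks; s++) {
        int on_this_socket = 0;
        while (on_this_socket < ntasks_per_socket && allocated < ntasks) {
            on_this_socket++;                 /* one core marked per task */
            allocated++;
        }
        if (on_this_socket > 0)
            sockets_used++;
    }

    /* Prints "6 CPUs on 2 sockets"; with full-socket allocation I
     * would have expected 2 * 6 = 12 CPUs. */
    printf("%d CPUs on %d sockets (full sockets would be %d CPUs)\n",
           allocated, sockets_used, sockets_used * CORES_PER_SOCKET);
    return 0;
}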

I will look into the code and see if I can fix this somehow; otherwise I
can help test patches.

I have a small part of our cluster available for testing right now
(2 nodes, 8 sockets/node, 6 cores/socket).
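
For reference, the test nodes are defined in slurm.conf with something like
(memory settings left out):

NodeName=t-cn1033 Sockets=8 CoresPerSocket=6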

Best regards,
Magnus


--
Magnus Jonsson, Developer, HPC2N, UmeĆ„ Universitet
