To Rod Schultz.

I use sbatch. When I try

sbatch -a 0-19 -p mpi -n 20 -N 10 -ntasks-per-node=2 t40z0600s1.sh

or

sbatch -a 0-19 -p mpi -ntasks-per-node=2 t40z0600s1.sh

it says

sbatch: error: Invalid numeric value "tasks-per-node=2" for number of tasks.

Any suggestions?

Thanks anyway.
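As a side note on the arithmetic: spreading 100 single-threaded tasks over 22 nodes as evenly as possible gives 4-5 tasks per node, which is what the srun line Rod suggests below aims for. A small sketch of that split in plain shell (no cluster needed):

```shell
# Even spread of NTASKS tasks over NNODES nodes:
NTASKS=100
NNODES=22
base=$((NTASKS / NNODES))    # minimum number of tasks on every node
extra=$((NTASKS % NNODES))   # number of nodes that carry one extra task
echo "$extra nodes run $((base + 1)) tasks, $((NNODES - extra)) nodes run $base tasks"
# prints: 12 nodes run 5 tasks, 10 nodes run 4 tasks
```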


2014-07-29 2:11 GMT+11:00 Rod Schultz <[email protected]>:

>  Slurm’s default strategy is to fill entire nodes. That is why you are
> getting some nodes with 8 tasks and some idle nodes.
>
>
>
> Try srun -n 100 -N 22 --ntasks-per-node=5 -l hostname
>
>
>
> -n 100 means start 100 tasks.
>
> -N 22 means use all 22 nodes.
>
> --ntasks-per-node=5 caps the count at 5 tasks per node, distributing them evenly.
>
> -l prepends the task number to each line of output, so you can see the distribution.
>
> *From:* Marcin Stolarek [mailto:[email protected]]
> *Sent:* Monday, July 28, 2014 7:17 AM
> *To:* slurm-dev
> *Subject:* [slurm-dev] Re: even CPU load
>
>
> 2014-07-28 8:00 GMT+02:00 Леонид Коньков <[email protected]>:
>
> Hi.
>
> I want my CPUs to be loaded as evenly as possible. I have 22 nodes
> (motherboards), each with 1 CPU of 8 cores. (My English is far from
> perfect, and I'm not sure what is what in your terminology.) I want to
> run 100 single-threaded tasks with 4-5 tasks per CPU. But the tasks always
> fill all 8 cores of a CPU and leave some CPUs idle.
>
> You mean it doesn't leave any idle CPU? Do you want the node to remain
> responsive and available for interactive work while tasks are running?
>
>
>
> --distribution=cyclic and --hint=memory_bound don't help.
>
> Leo.
>
>
>
>
> If I understood you correctly, you can limit the slurmd cpuset to a subset
> of your cores, so jobs running under that cpuset won't use more than the
> cores assigned to cgroup/cpuset/slurm/.
>
> cheers,
>
> marcin
>
>
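Marcin's cpuset idea above might look roughly like the following. This is only a sketch under assumptions: cgroup v1 mounted at /sys/fs/cgroup, an existing cpuset named slurm (matching the cgroup/cpuset/slurm/ path he mentions), and root access; the exact path and mechanism depend on your setup.

```shell
# Hypothetical: pin the slurm cpuset to cores 0-4 so jobs launched under it
# never occupy cores 5-7 (assumes cgroup v1 and an existing slurm cpuset).
echo 0-4 > /sys/fs/cgroup/cpuset/slurm/cpuset.cpus
cat /sys/fs/cgroup/cpuset/slurm/cpuset.cpus
```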
