Hi Rob,

On 2022-11-10 at 21:21 +01, Rob Sargent <robjsarg...@gmail.com> wrote:
> I do this in a Slurm batch script to get the number of jobs I want to
> run (it turns out it's better for me not to load the full
> hyper-threaded count):
>
>    cores=$(grep -c processor /proc/cpuinfo)  # logical CPUs, incl. hyper-threads
>    cores=$(( cores / 2 ))                    # keep only the physical cores
>
>    parallel --jobs $cores etc :::: <file with list of jobs>
>
> or sometimes I run the same job many times with
>
>    parallel --jobs $cores etc ::: {1..300}

I apologize if I am missing something, but I don't see how this solves the
problem of distributing jobs to different hosts (nodes), where each host may
have a different number of CPUs or cores.
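
If I understand the GNU parallel manual correctly, what I am after would look
something like the sketch below, where the N/ prefix on an --sshlogin sets the
number of job slots to use on that host (node01 and node02 are made-up
hostnames, and "etc" stands for the actual command, as in your example):

   parallel --sshlogin 16/node01,8/node02 etc :::: <file with list of jobs>

But the core count differs from host to host in my case, which is exactly the
part I don't see how to handle with the /proc/cpuinfo approach above.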

  -k.

