On 11/11/22 00:05, Ken Mankoff wrote:
Hi Rob,

On 2022-11-10 at 21:21 +01, Rob Sargent <[email protected]> wrote:
I do this, in a Slurm bash script, to get the number of jobs I want to
run (it turns out it's better for me not to load the full hyper-threaded
count):

    cores=`grep -c processor /proc/cpuinfo`
    cores=$(( $cores / 2 ))

    parallel --jobs $cores etc :::: <file with list of jobs>

or sometimes run the same job many times with

    parallel --jobs $cores etc ::: {1..300}
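
As an aside, parallel can also do the halving itself: --jobs accepts a
percentage of the CPUs it detects, so the sketch below should be roughly
equivalent ("etc" again stands in for the real command); whether the
percentage refers to physical cores or to hyper-threads depends on the
parallel version.

    parallel --jobs 50% etc :::: <file with list of jobs>
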
I apologize if I am missing something, but I don't see how this solves
distributing jobs to different hosts (nodes), where each host may have a
different number of CPUs or cores.

   -k.
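
For the uneven-hosts part, parallel's ssh login file lets each entry
carry its own slot count, so something like the sketch below would run up
to 16 jobs on one node, 8 on another, and also use the local machine.
Hostnames and file names here are made up, and it assumes parallel can
ssh to the nodes without a password:

    # hostfile.txt: "N/host" caps the jobs run on that host, ":" is the
    # local machine (no ssh)
    printf '%s\n' 16/node01 8/node02 : > hostfile.txt

    parallel --sshloginfile hostfile.txt etc :::: joblist.txt
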

Definition of "job" is part of the problem.  Mine either take over the host (internally multi-threaded) or I use the above to keep the machine busy with a list of jobs (or same job n times) I asked the local cluster folks if I could queue up all my jobs and request just on core for each job but they preferred I let parallel keep the machine busy:  one slurm job versus hundreds.

How do you mix Slurm and a parallel hostfile?
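
One way to combine them, sketched below: ask Slurm for whole nodes, turn
the allocation into an ssh login file, and let parallel spread the job
list across it.  This assumes passwordless ssh between the allocated
nodes and a shared filesystem; file names are made up, and "etc" again
stands in for the real command.

    #!/bin/bash
    #SBATCH --nodes=3

    # expand the Slurm allocation into one hostname per line
    scontrol show hostnames "$SLURM_JOB_NODELIST" > hostfile.txt

    # parallel logs in to each node and by default runs one job per CPU
    # it detects there, so nodes with different core counts sort
    # themselves out; --workdir . keeps remote jobs in the submit dir
    parallel --sshloginfile hostfile.txt --workdir . etc :::: joblist.txt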
