Some operations in wb_command automatically use OpenMP without it being
enabled by an argument, using as many cores as they can find.  You can
change the number of threads it uses by setting the environment variable
OMP_NUM_THREADS.  I'm less sure about multithreading in the other
software the scripts use.
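For example, a minimal sketch of tying OpenMP to the slots the scheduler granted (NSLOTS is set by SGE/OGE inside a job; the fallback of 1 outside the scheduler is my choice, not anything the pipeline mandates):

```shell
#!/bin/sh
# Cap OpenMP-enabled wb_command operations by setting OMP_NUM_THREADS
# before the command runs.  Here we tie it to SGE's NSLOTS, falling
# back to a single thread when running outside the scheduler.
export OMP_NUM_THREADS="${NSLOTS:-1}"
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

Any wb_command invocation later in the same script then inherits the cap from the environment.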

As I understand it, in our cluster, we use the job control system to
restrict the number of cores the process can see, in order to reduce things
like cross-socket memory access.  This won't help with a fixed number of
threads set in a script, but it does let us keep a single process
restricted to the amount of hardware we want it to use.
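The scheduler-side restriction described above might look like this in an SGE/OGE submission script; note the parallel environment name ("smp") and the core-binding request are site-specific assumptions, and the launcher script name is hypothetical:

```shell
#!/bin/sh
#$ -pe smp 8          # request 8 slots (PE name varies by site)
#$ -binding linear:8  # ask Grid Engine to bind the job to 8 contiguous cores

# Match any script-level thread counts to the granted slots rather
# than hardcoding a number.
export OMP_NUM_THREADS="${NSLOTS:-1}"
./run_pipeline.sh     # hypothetical pipeline launcher
```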

Tim


On Fri, Jun 19, 2015 at 1:04 PM, m s <mgstauff...@gmail.com> wrote:

> Version 3.4.0
>
> Hi,
>
> We're wondering if there's a clear global way to manage how many compute
> threads the HCP pipeline uses. We're running on a Rocks/SGE(OGE) cluster.
>
> My colleague recently ran the perfusion pre-processing pipeline with 9
> slots/cores allocated via SGE/OGE and we observed the job's cpu usage on
> the compute node to which the job was assigned fluctuate in steps between
> 200%, 800% and 900%.
>
> We found one spot in an HCP script where the -openmp argument is
> explicitly passed with a hardcoded value of 8. We've changed that to use
> the SGE/OGE NSLOTS env var. But it seems there may be other spots where
> this is hardcoded to 8, or where other threading options are set?
>
> Thanks,
> Michael
>
> _______________________________________________
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
