Gengyan,
 
My understanding is as follows.  (Any OpenMP expert who sees holes in my
understanding should feel free to correct me...please.)
 
If the compiled program/binary in use (e.g. eddy or wb_command) has been
compiled with the correct OpenMP-related switches, then by default that
program will use multi-threading in the places where multi-threading was
called for in the source code. It will use at most as many threads as
there are processing cores on the system on which the program is
running.
 
So, if the machine you are using has 8 cores, then a properly compiled
OpenMP program will use up to 8 threads (parallel executions).  But this
assumes that the code has been written with that many potential threads
of independent execution and compiled and linked with the correct OpenMP
switches and OpenMP libraries.
 
For programs like eddy and wb_command, this proper compiling and linking
to use OpenMP should already have been done for you.
 
The only other thing that I know of that can limit the number of threads
(besides the actual source code) is the setting of the environment
variable OMP_NUM_THREADS. If this variable is set to a numeric value
(e.g. 4), then the number of threads is limited to that maximum
regardless of how many threads the code is written to support.
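For example, in bash you could cap any subsequently launched OpenMP
program at 4 threads like this:

```shell
# Cap OpenMP programs launched from this shell at 4 threads.
export OMP_NUM_THREADS=4

# Any OpenMP-enabled program started now (eddy, wb_command, ...) will
# use at most 4 threads; unset the variable to restore the default.
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS}"
```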
 
In reality, I believe the behavior of running "as many threads as
available cores" when the OMP_NUM_THREADS variable is not set depends
upon the compiler used. But the GNU Compiler Collection and, I believe,
the Intel compiler family behave this way.  The Visual Studio 2015
compilers have a similar behavior.
 
So...if the machine you are running on has multiple cores, and
OMP_NUM_THREADS is not set, the code should be automatically using multi-
threading for you.
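You can quickly check both conditions from the shell (nproc is a GNU
coreutils tool, so this assumes a typical Linux system):

```shell
# How many cores does the current environment expose?
nproc

# Is a thread cap set?  Prints "unset" when OMP_NUM_THREADS is not set,
# in which case OpenMP defaults to (up to) the core count above.
echo "${OMP_NUM_THREADS:-unset}"
```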
 
There is another caveat here. If you are submitting jobs to a cluster
with a job scheduler (such as Sun Grid Engine (SGE) or PBS), you should
be careful to request multiple cores for your job.  If the "hardware"
you request for running your job is 1 node with 1 processor (i.e.
core), then even if the actual machine has multiple cores, only 1 of
those cores will be allocated to your job.  So the running environment
will "look like" a single-core processor, and only 1 thread at a time
could run.
 
As an example for PBS, if you were to specify the following in the PBS
header for your job:
 
#PBS -l nodes=1:ppn=1
 
Then you would only get single threading because Processors Per Node
(ppn) is specified as 1.
 
Whereas specifying
 
#PBS -l nodes=1:ppn=4
 
would allow up to 4 threads to run simultaneously.
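Putting that together, the top of a multi-threaded PBS job script might
look like the sketch below (the walltime value and the choice of 4
cores are just illustrative):

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=4        # request 1 node with 4 cores
#PBS -l walltime=24:00:00    # illustrative time limit

# Match the OpenMP thread count to the cores requested above, so the
# program does not try to run more threads than the scheduler allocated.
export OMP_NUM_THREADS=4
```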
 
I'm not as familiar with specifying the number of cores for an SGE
cluster job, but I *think* the -pe smp <num_slots> option for the qsub
command is how the number of cores is specified.
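If that is right, an SGE job script would carry the equivalent request
in its embedded qsub directives, something like the sketch below. (The
parallel environment name "smp" is site-specific, so check with your
cluster administrator which PE names are configured.)

```shell
#!/bin/bash
#$ -pe smp 4    # request 4 slots (cores); the PE name is site-specific
#$ -V           # export the submission environment to the job

# As with PBS, match the OpenMP thread count to the slots requested.
export OMP_NUM_THREADS=4
```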

Tim
 
On Mon, May 23, 2016, at 14:15, Glasser, Matthew wrote:
> You could try running a wb_command like smoothing on a random dense
> timeseries dataset.  It should use multiple cores if everything is
> working correctly with that.
>
> Peace,
>
> Matt.
>
> From: <[email protected]> on behalf of Gengyan Zhao <[email protected]>
> Date: Monday, May 23, 2016 at 10:57 AM
> To: "hcp-[email protected]" <[email protected]>
> Subject: [HCP-Users] Questions about Run Diffusion Preprocessing Parallelly
>
> Hello HCP Masters,
>
> I have a question about the "DiffusionPreprocessingBatch.sh". I want
> to run it in a multi-thread manner and involve as many cores as
> possible. There is a line in the script saying:
>
> #Assume that submission nodes have OPENMP enabled (needed for eddy -
> at least 8 cores suggested for HCP data)
>
> What shall I do to enable OPENMP? or OPENMP is ready to go? My current
> state is that the pipeline has just been run with SGE on a Ubuntu
> 14.04 machine having 32 cores. Thank you very much.
>
> Best,
> Gengyan
>
> Research Assistant
> Medical Physics, UW-Madison
>
> _______________________________________________
>  HCP-Users mailing list [email protected]
>  http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>

> The materials in this message are private and may contain Protected
> Healthcare Information or other information of a sensitive nature. If
> you are not the intended recipient, be advised that any unauthorized
> use, disclosure, copying or the taking of any action in reliance on
> the contents of this information is strictly prohibited. If you have
> received this email in error, please immediately notify the sender via
> telephone or return mail.
--
 Timothy B. Brown
 Business & Technology Application Analyst III
 Pipeline Developer (Human Connectome Project)
 tbbrown(at)wustl.edu
________________________________________



