RE: [otb-users] using all cores with multithreaded filters

2018-09-14 Thread Poughon Victor
Hi Julien,

Did you try:

export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=8

This is documented in
https://gitlab.orfeo-toolbox.org/orfeotoolbox/otb/blob/develop/Documentation/Cookbook/rst/AdvancedUse.rst
along with other environment variables that affect OTB.
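For example, the variable can be exported in the shell before launching the filter binary. This is a minimal sketch; it assumes your binary initializes ITK's default multi-threader, which reads this variable at process startup:

```shell
# Set the ITK global default thread count for the current shell session;
# any ITK/OTB filter launched afterwards from this shell inherits it.
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=8

# Verify the value is set before launching the filter from the same shell.
echo "$ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS"
```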

Victor Poughon

From: otb-users@googlegroups.com On behalf of
Julien Radoux
Sent: Thursday, 13 September 2018 11:09
To: otb-users
Subject: [otb-users] using all cores with multithreaded filters


--
--
Check the OTB FAQ at
http://www.orfeo-toolbox.org/FAQ.html

You received this message because you are subscribed to the Google
Groups "otb-users" group.
To post to this group, send email to otb-users@googlegroups.com
To unsubscribe from this group, send email to
otb-users+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/otb-users?hl=en
---
You received this message because you are subscribed to the Google Groups 
"otb-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to otb-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.



[otb-users] using all cores with multithreaded filters

2018-09-13 Thread Julien Radoux
We have a few custom filters that use ThreadedGenerateData. We did not
realize it immediately, but it seems that parallel computing is not
working properly with our new configuration:

openSUSE 15.0, OTB 6.7 (git), gcc 7.3.1

When we launch a filter with ThreadedGenerateData, the number of threads
correctly equals the number of cores (8 threads for 8 cores in my case), but
only one of them is actually used. This is not the case when we use the
otbcli_* applications.

Diagnostic:
$ taskset -p $(pgrep LWSmoothing)
pid 29938's current affinity mask: 1
$ htop # => process total: 100%, 8 sub-processes: 12%

If we set the affinity mask when launching the process, nothing changes:
$ taskset ff /usr/local/lifewatch/tools/LWSmoothing ...
$ taskset -p $(pgrep LWSmoothing)
pid 30241's current affinity list: 0

If we set the affinity mask after the process has started, it works as expected (800% CPU):
$ taskset -p ff $(pgrep LWSmoothing)
pid 30312's current affinity mask: 1
pid 30312's new affinity mask: ff
$ htop # => ok, 800% 
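For reference, the same taskset pattern can be reproduced on any running process. This is a minimal sketch that uses the current shell ($$) instead of pgrep'ing a specific binary, so it can be tried without LWSmoothing; it assumes a Linux system with util-linux taskset installed:

```shell
# Print the affinity mask of the current shell process.
taskset -p $$

# Widen the mask to cores 0-7 (ff), as done above for LWSmoothing.
# The kernel intersects the requested mask with the CPUs actually present,
# so this also works on machines with fewer than 8 cores.
taskset -p ff $$

# Read the mask back to confirm the change took effect.
taskset -p $$
```

Note that child processes inherit the affinity mask of their parent at fork/exec time, which is why setting the mask on the shell before launching a program also constrains (or widens) the program itself.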

Does anyone have an idea about the reason for this difference? Is there
something to change in the .cxx or .txx files, or in the compiler configuration?

Thanks,

Julien
