Hi Stephen,

> Thanks, I have this more or less working. I have not set the env variable
> ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS but I will try that; I seem to be
> getting about 4 times that many threads running.
>
 
If you do not set the right number of threads, you can end up with more 
threads than CPUs, which leads to poor performance (the goal is roughly 
one thread per CPU core).
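
For example, something along these lines (an untested sketch; adjust the 
process count and arguments to your setup, and note that with Open MPI the 
-x option forwards the variable to the launched processes):

    # Sketch: limit each MPI process to a single ITK thread
    export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=1
    # -x forwards the variable to the processes started by mpirun (Open MPI)
    mpirun -np 4 -x ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS \
        otbcli_MeanShiftSmoothing <your usual arguments>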



> Below are various problems I've run into. Some of these might be code 
> bugs, or config issues, or who knows what :)
>
> I get an error with --bind-to socket
>
> mpirun -np 4 --bind-to socket otbcli_MeanShiftSmoothing -in /u/ror/
> buildings/data/naip/doqqs/2014/33118/m_3311805_se_11_1_20140513.tif -fout 
> /u/ror/buildings/tmp/test1-smooth.tif -foutpos /u/ror/buildings/tmp/test1-
> smoothpos.tif -spatialr 24 -ranger 36 -ram 102400
> Unexpected end of /proc/mounts line `overlay / overlay 
> rw,seclabel,relatime,lowerdir=/var/lib/docker/overlay2/l/JPC7E5F4RB77LOK22ETL5FMEPN:/var/lib/docker/overlay2/l/DM3Q73J52BCAIEZVAQZGAMXLCX:/var/lib/docker/overlay2/l/WC5LQTPG4RBGOUEZ7KBJZLUB2R:/var/lib/docker/overlay2/l/BESSO2WOBICH2P4GSVX7VSCGG6:/var/lib/docker/overlay2/l/FMSJDZMFK67RHOIIZOLKOICAHI:/var/lib/docker/overlay2/l/U7AFHXIVI6KAKUO2VJMZWLQOHH:/var/lib/docker/overlay2/l/EIRHWP2GOK3F2PH7SHY4FK6J6P,upperdir=/var/lib/docker/overlay2/73d138b0a2dadf534a9d9c7d2ed894484515bfe3d2f1807a2b8'
> --------------------------------------------------------------------------
> WARNING: Open MPI tried to bind a process but failed.  This is a
> warning only; your job will continue, though performance may
> be degraded.
>
>   Local host:        optane30
>   Application name:  /usr/bin/otbcli_MeanShiftSmoothing
>   Error message:     failed to bind memory
>   Location:          odls_default_module.c:639
>
> --------------------------------------------------------------------------
>
>
>
From your logs, it looks like you are running inside a virtual environment 
(Docker?). My first impression is that MPI fails to bind processes in that 
environment; however, I have never used MPI in such a configuration.
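
If the binding warning is a problem, one thing that might be worth trying 
(I have not tested this inside Docker) is to disable binding entirely:

    # Untested suggestion: let the MPI processes run unbound inside the container
    mpirun -np 4 --bind-to none otbcli_MeanShiftSmoothing <your usual arguments>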
 

> But the job runs to completion. When I try to run otbcli_LSMSVectorization 
> under mpi it fails. The same command runs fine without mpi. If this 
> command shouldn't run under mpi, you  might want to add a check and report 
> to the user, or just internally disable mpi.
>

Indeed, LSMSVectorization does not currently support MPI.
You are absolutely right: we should add a check to prevent running 
applications that do not work with MPI.
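
In the meantime, since the command runs fine without MPI, a workaround 
(sketch with placeholder arguments) is to run only the smoothing step under 
mpirun and call the vectorization step directly:

    # MPI-enabled step
    mpirun -np 4 --bind-to socket otbcli_MeanShiftSmoothing <your usual arguments>
    # LSMSVectorization does not support MPI yet, so run it as a normal process
    otbcli_LSMSVectorization <your usual arguments>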

Thank you for this useful feedback; we will work on improving the MPI 
feature and on providing more documentation!
Rémi

-- 
Check the OTB FAQ at
http://www.orfeo-toolbox.org/FAQ.html

You received this message because you are subscribed to the Google
Groups "otb-users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/otb-users?hl=en