Hi Stephen,

I would check the MPI speedup by measuring the elapsed time for each run, 
using the same otbcli parameters (number of iterations, radius, ...):
mpirun -np 1 --bind-to-socket ... 
mpirun -np 2 --bind-to-socket ...
mpirun -np 3 --bind-to-socket ...
mpirun -np 4 --bind-to-socket ...
The ideal speedup is linear (i.e. the elapsed time is divided by the 
number of MPI processes).
Another piece of advice: leave the ram parameter at its default value for now.
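To make the comparison concrete, here is a small sketch that computes the speedup and parallel efficiency from the elapsed times you measure. The timings in the example dictionary are made-up placeholders; substitute the wall times you get from your own mpirun runs.

```python
# Sketch: compare measured MPI timings against ideal linear speedup.
# speedup(p) = T(1) / T(p); efficiency(p) = speedup(p) / p (1.0 = ideal).

def speedup(t1, tp):
    """Speedup of a p-process run relative to the single-process run."""
    return t1 / tp

def efficiency(t1, tp, p):
    """Parallel efficiency: 1.0 means perfectly linear scaling."""
    return speedup(t1, tp) / p

# Hypothetical elapsed seconds for mpirun -np 1..4 (replace with your data).
timings = {1: 120.0, 2: 63.0, 3: 44.0, 4: 35.0}
t1 = timings[1]
for p in sorted(timings):
    print(f"np={p}: speedup={speedup(t1, timings[p]):.2f}, "
          f"efficiency={efficiency(t1, timings[p], p):.2f}")
```

If the efficiency drops well below 1.0 as you add processes, the extra ranks are not getting their own cores, or the workload does not parallelize as expected.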

Your issue seems related to parallel application execution with MPI rather 
than to OTB itself. I would look for MPI hello-world examples and diagnostic 
tools on the web to check that resources are allocated properly. (Personally 
I use Slurm on clusters to submit my jobs, but I have never deployed a 
scheduler myself.)
Keep us updated,

Rémi

-- 
Check the OTB FAQ at
http://www.orfeo-toolbox.org/FAQ.html

You received this message because you are subscribed to the Google
Groups "otb-users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/otb-users?hl=en