> On 10 Apr 2018, at 01:04, Noam Bernstein <noam.bernst...@nrl.navy.mil> wrote:
> 
>> On Apr 9, 2018, at 6:36 PM, George Bosilca <bosi...@icl.utk.edu> wrote:
>> 
>> Noam,
>> 
>> I have a few questions for you. According to your original email, you are 
>> using OMPI 3.0.1 (but the hang can also be reproduced with 3.0.0).
> 
> Correct.
> 
>> Also, according to your stack trace, I assume it is x86_64, compiled with 
>> icc.
> 
> x86_64, yes, but gcc + ifort.  I can test with gcc + gfortran if that’s 
> helpful.

Was there any reason not to choose icc + ifort?

-- Reuti


> 
>> Is your application multithreaded? How did you initialize MPI (with which 
>> level of threading)? Can you send us the opal_config.h file, please?
> 
> No, no multithreading, at least not intentionally.  I can run with 
> OMP_NUM_THREADS explicitly set to 1 if you’d like to exclude that as a 
> possibility.  opal_config.h is attached, from ./opal/include/opal_config.h 
> in the build directory.
> 
>                                                                       Noam
> 
> 
> 
> 
> Noam Bernstein, Ph.D.
> Center for Materials Physics and Technology
> U.S. Naval Research Laboratory
> T +1 202 404 8628  F +1 202 404 7546
> https://www.nrl.navy.mil
> <opal_config.h>
> _______________________________________________
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
