FWIW, you might want to try comparing sm and vader:
mpirun --mca btl self,sm ...
And with and without knem
(modprobe knem should do the trick)
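A minimal comparison sketch (assuming a build where both btls are available; ./pingpong here stands in for whatever benchmark produced the curve):

  mpirun -np 2 --mca btl self,sm ./pingpong
  mpirun -np 2 --mca btl self,vader ./pingpong

then repeat both runs with knem loaded (modprobe knem) and unloaded (modprobe -r knem), assuming Open MPI was built with knem support.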

 Cheers,

Gilles

Vincent Diepeveen <d...@xs4all.nl> wrote:
>
>You're sending absurdly huge messages here, so what you're really
>testing in this manner is the memory bandwidth of your system.
>
>As soon as a message gets larger than your CPU's L2 or L3 cache, it
>falls out of cache and has to be copied several times through RAM,
>so the bandwidth drops.
>
>This has nothing to do with Open MPI, I'd say.
>
>
>
>
>On Thu, 10 Mar 2016, BRADLEY, PETER C PW wrote:
>
>> 
>> I’m curious what causes the hump in the pingpong bandwidth curve when
>> running on shared memory.  Here’s an example running on a fairly antiquated
>> single-socket, 4-core laptop with Linux (2.6.32 kernel).  Is this a cache
>> effect?  Something in Open MPI itself, or a combination?
>>
>> [inline image: bandwidth_onepair_onenode.png]
>>
>> Pete
>>
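
For reference, a minimal ping-pong bandwidth sketch along the lines of the
measurement discussed above (the message sizes, iteration count, file name,
and mpirun flags are illustrative assumptions, not Pete's actual benchmark);
plotting its output against the machine's L2/L3 sizes should show where the
bandwidth drops once messages no longer fit in cache:

/* pingpong.c -- minimal ping-pong bandwidth sketch (illustrative only).
 * Build:  mpicc -O2 pingpong.c -o pingpong
 * Run:    mpirun -np 2 --mca btl self,vader ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, iters = 100;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* sweep message sizes from 1 byte to 16 MB */
    for (long len = 1; len <= (1L << 24); len *= 2) {
        char *buf = malloc(len);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {          /* ping ... */
                MPI_Send(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, len, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {   /* ... pong */
                MPI_Recv(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, len, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t = MPI_Wtime() - t0;
        if (rank == 0)                /* 2*len bytes move per iteration */
            printf("%10ld bytes  %10.2f MB/s\n",
                   len, 2.0 * iters * len / t / 1.0e6);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}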
