>
> /* feel free to reboot your nodes and see if ibstat still shows the
> adapters as active */
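> A quick way to check, for instance (exact output format depends on your
> OFED stack), is something like:
>
>   ibstat | grep -E 'State|Rate'
>
> The ports in use should report "State: Active" and the expected link rate.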
>
>
> Note you might also use --mca pml ob1 in order to make sure neither mxm
> nor ucx is used.
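>
> For instance, a minimal sketch (hostfile and benchmark path are
> placeholders):
>
>   mpirun -np 2 --hostfile hosts --mca pml ob1 --mca btl self,vader,openib ./osu_latency
>
> With the pml pinned to ob1, neither mxm nor ucx can be selected.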
>
>
> Cheers,
>
>
> Gilles
>
>
>
> On 5/15/2018 10:45 A
ter than IB. That being said, it should not matter because
> shared memory is there to cover this case.
>
> Add "--map-by node" to your mpirun command to measure the bandwidth
> between nodes.
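>
> For example (assuming the OSU bandwidth benchmark and a hostfile listing
> both nodes):
>
>   mpirun -np 2 --hostfile hosts --map-by node ./osu_bw
>
> "--map-by node" places the two ranks on different nodes, so the benchmark
> measures the network path rather than shared memory.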
>
> George.
>
>
>
> On Mon, May 14, 2018 at 5:04 AM, Blade Shie
the IMB test gives good results for IB, so you must
> have IB working properly.
> Therefore I am an idiot...
>
>
>
> On 14 May 2018 at 11:04, Blade Shieh <bladesh...@gmail.com> wrote:
>
>>
>> Hi, Nathan:
>> Thanks for your reply.
>> 1) It was
Hi, Nathan:
Thanks for your reply.
1) It was my mistake not to notice the usage of osu_latency. Now it works
well, but the results are still poorer with openib.
2) I did not use sm or vader because I wanted to compare the performance of
tcp and openib. Besides, I will run the application on a cluster, so vader is
not the main concern here.
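For reference, selecting one BTL at a time can be done with commands along
these lines (a sketch only; process count, hostfile and binary path are
placeholders rather than the exact command lines used):

  mpirun -np 32 --hostfile hosts --mca btl self,tcp ./CAMx
  mpirun -np 32 --hostfile hosts --mca btl self,openib ./CAMx
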
/*** The problem ***/
I have a cluster with 10GE Ethernet and 100Gb InfiniBand. While running my
application, CAMx, I found that the performance with IB is not as good as
with Ethernet. That is confusing, because IB latency and bandwidth are
undoubtedly better than Ethernet's, which is