Hello Cooper
Could you rerun your test with the following environment variable set:
export OMPI_MCA_coll=self,basic,libnbc
and see if that helps?
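In case it helps, here is a minimal sketch of how that selection can be applied either via the environment or directly on the mpirun command line (the benchmark binary name `./allreduce_benchmark` is just a placeholder for your test program):

```shell
# Restrict Open MPI's collective components to self, basic, and libnbc
# for this shell, then launch the benchmark:
export OMPI_MCA_coll=self,basic,libnbc
mpirun -np 96 ./allreduce_benchmark

# Equivalently, pass the same setting as an MCA parameter for a single run:
mpirun --mca coll self,basic,libnbc -np 96 ./allreduce_benchmark
```

Both forms steer Open MPI away from the default tuned collective component, which is useful for checking whether the slowdown comes from a particular collective algorithm selection.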
Also, what type of interconnect are you using - Ethernet, InfiniBand, ...?
Howard
2017-09-19 8:56 GMT-06:00 Cooper Burns :
> Hello,
>
> I have been running some
. . . My sincerest apologies, I have gotten utterly mixed up; this is all
new territory for me.
On Tue, Sep 19, 2017 at 10:47 PM, r...@open-mpi.org wrote:
> Err...you might want to ask the MPICH folks. This is the Open MPI mailing
> list :-)
>
> On Sep 19, 2017, at 7:38 AM, Aragorn Inocencio <
Hello,
I have been running some simple benchmarks and saw some strange behaviour:
All tests are done on 4 nodes with 24 cores each (a total of 96 MPI processes).
When I run MPI_Allreduce() I see the run time spike (about 10x) when I
go from reducing a total of 4096 KB to 8192 KB, for example, when c
Err...you might want to ask the MPICH folks. This is the Open MPI mailing list
:-)
> On Sep 19, 2017, at 7:38 AM, Aragorn Inocencio
> wrote:
>
> Good evening,
>
> Thank you for taking the time to develop and assist in the use of this tool.
>
> I am trying to install the latest mpich-3.2 vers