Hello, Alina!
I use "OSU MPI Multiple Bandwidth / Message Rate Test v4.4.1".
I downloaded it from the website: http://mvapich.cse.ohio-state.edu/benchmarks/
I have attached "osu_mbw_mr.c" to this letter.
Best regards,
Timur
Thursday, June 18, 2015, 18:23 +03:00 from Alina Sklarevich
With '--bind-to socket' I get the same results as with '--bind-to core': 3813 MB/s.
I have attached the ompi_yalla_socket.out and ompi_yalla_socket.err files to this
letter.
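As a quick cross-check of where the ranks actually land, Open MPI's mpirun can print the binding of every rank with --report-bindings (a sketch; the process count and the benchmark path are illustrative):

mpirun -np 32 --hostfile hostlist --bind-to socket --report-bindings ./osu_mbw_mr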
Tuesday, June 16, 2015, 18:15 +03:00 from Alina Sklarevich:
>Hi Timur,
>
>Can you please try running
Hello, Alina!
If I use --map-by node, I get only intranode communications in osu_mbw_mr, so I use --map-by core instead.
I have 2 nodes; each node has 2 sockets with 8 cores per socket.
When I run osu_mbw_mr on 2 nodes with 32 MPI processes (the command is below), I
expect to see the internode bandwidth.
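To make the placement explicit, here is a minimal sketch of the two launches (the benchmark path and the mpirun options other than --map-by are illustrative; osu_mbw_mr pairs rank i with rank i+16, so the first 16 ranks send and the last 16 receive):

# --map-by core fills node 1 with ranks 0-15 and node 2 with ranks 16-31,
# so every pair (i, i+16) crosses the network:
mpirun -np 32 --hostfile hostlist --map-by core ./osu_mbw_mr

# --map-by node places ranks round-robin across the nodes, so ranks 0 and 16
# land on the same node and the pairs stay intranode:
mpirun -np 32 --hostfile hostlist --map-by node ./osu_mbw_mr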
Hello, Alina.
1. Here is my ompi_yalla command line:
$HPCX_MPI_DIR/bin/mpirun -mca coll_hcoll_enable 1 -x HCOLL_MAIN_IB=mlx4_0:1 \
    -x MXM_IB_PORTS=mlx4_0:1 -x MXM_SHM_KCOPY_MODE=off \
    --mca pml yalla --hostfile hostlist $@
echo $HPCX_MPI_DIR
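For completeness, this is roughly how such a wrapper would be invoked (the script name run_yalla.sh is an assumption; the script simply forwards its arguments to mpirun via $@):

./run_yalla.sh -np 32 --map-by core ./osu_mbw_mr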
Hi, Mike!
I have Intel MPI v4.1.2 (impi).
I built Open MPI 1.8.5 with MXM and hcoll (ompi_yalla).
I built Open MPI 1.8.5 without MXM and hcoll (ompi_clear).
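For reference, a rough sketch of how the two Open MPI 1.8.5 builds could be configured (the install prefixes and the MXM/hcoll locations are assumptions; --with-mxm and --with-hcoll are the standard configure switches):

# ompi_yalla: Open MPI 1.8.5 built against Mellanox MXM and hcoll
./configure --prefix=/opt/ompi-1.8.5-yalla \
    --with-mxm=/opt/mellanox/mxm --with-hcoll=/opt/mellanox/hcoll
make -j && make install

# ompi_clear: plain Open MPI 1.8.5 without MXM and hcoll
./configure --prefix=/opt/ompi-1.8.5-clear
make -j && make install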
I ran the OSU point-to-point osu_mbw_mr test with these MPIs.
You can find the benchmark results in the attached file (mvs10p_mpi.xls, sheet
osu_mbr_mr).
On 64 nodes