Lenny Verkhovsky wrote:

Sorry, guys, I tested it on the 1.3 branch; the trunk version (1.4a1r20980) seems to be fixed.

Great.

BUT,

the default value of mpool_sm_min_size in 1.4a1r20980 is 67108864 (64 MB).

When I set it to 0, there is a performance degradation. Is that OK?

Depends on what matters to you!  :^)

Anyhow:

1) I think many bandwidth tests won't see this problem, but osu_bw is different since it pumps so many messages into the system concurrently (see the sketch after this list).

2) For the sake of osu_bw, I think leaving the default at 64M is good.
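
To illustrate point 1: osu_bw keeps a whole window of non-blocking sends in flight before it waits on any of them, so the sm BTL has to buffer many fragments at once. Here is a minimal sketch of that pattern -- the window size, message size, iteration count, and the small ack message are illustrative values I picked, not OMB's actual defaults:

/* Sketch of a windowed bandwidth loop in the style of osu_bw.
 * WINDOW_SIZE, MSG_SIZE, and ITERATIONS are illustrative, not
 * the values the real benchmark uses. Run with exactly 2 ranks. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define WINDOW_SIZE 64          /* sends kept in flight at once (assumed) */
#define MSG_SIZE    (64 * 1024) /* bytes per message (assumed) */
#define ITERATIONS  100

int main(int argc, char **argv)
{
    int rank;
    char *buf;
    MPI_Request reqs[WINDOW_SIZE];
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(MSG_SIZE);

    t0 = MPI_Wtime();
    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            /* Pump a whole window of messages into the BTL before
             * waiting on any of them -- this is what stresses the
             * shared-memory pool. */
            for (int j = 0; j < WINDOW_SIZE; j++)
                MPI_Isend(buf, MSG_SIZE, MPI_CHAR, 1, 0,
                          MPI_COMM_WORLD, &reqs[j]);
            MPI_Waitall(WINDOW_SIZE, reqs, MPI_STATUSES_IGNORE);
            /* wait for a short ack from the receiver */
            MPI_Recv(buf, 4, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            for (int j = 0; j < WINDOW_SIZE; j++)
                MPI_Irecv(buf, MSG_SIZE, MPI_CHAR, 0, 0,
                          MPI_COMM_WORLD, &reqs[j]);
            MPI_Waitall(WINDOW_SIZE, reqs, MPI_STATUSES_IGNORE);
            MPI_Send(buf, 4, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("%.2f MB/s\n",
               (double)MSG_SIZE * WINDOW_SIZE * ITERATIONS
               / (1024.0 * 1024.0) / (t1 - t0));

    free(buf);
    MPI_Finalize();
    return 0;
}

Every iteration keeps a full window of fragments outstanding at the same time, which is the part that leans on the shared-memory pool.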

$ LD_LIBRARY_PATH=~/work/svn/ompi/trunk/build_x86-64/install/lib/ install/bin/mpirun -np 2 -mca btl sm,self -mca mpool_sm_min_size 0 ~/work/svn/hpc/tools/benchmarks/OMB-3.1.1/osu_bw
# OSU MPI Bandwidth Test v3.1.1
# Size Bandwidth (MB/s)
1 1.20
2 3.39
4 6.93
8 14.09
16 27.80
32 50.58
64 101.08
128 173.23
256 257.81
512 436.86
1024 674.51
2048 856.80
4096 573.87
8192 607.55
16384 660.58
32768 685.23
65536 687.45
131072 690.52
262144 687.48
524288 676.77
1048576 675.74
2097152 676.89
4194304 677.28
lennyb@dellix7 ~/work/svn/ompi/trunk/build_x86-64 $ LD_LIBRARY_PATH=~/work/svn/ompi/trunk/build_x86-64/install/lib/ install/bin/mpirun -np 2 -mca btl sm,self ~/work/svn/hpc/tools/benchmarks/OMB-3.1.1/osu_bw
# OSU MPI Bandwidth Test v3.1.1
# Size Bandwidth (MB/s)
1 1.72
2 3.70
4 7.43
8 13.45
16 29.83
32 52.66
64 105.08
128 181.16
256 288.16
512 426.83
1024 690.21
2048 867.00
4096 567.53
8192 667.35
16384 806.97
32768 892.95
65536 989.62
131072 1009.25
262144 1018.35
524288 1037.32
1048576 1048.75
2097152 1057.51
4194304 1062.16
