Re: [OMPI devel] trunk build failed when configured with --disable-hwloc

2012-02-14  Paul H. Hargrove
On 2/14/2012 5:10 PM, Paul H. Hargrove wrote: I have configured the ompi-trunk (from last night's tarball: 1.7a1r25913) with --without-hwloc. Having done so, I see the following failure at build time: CC rmaps_rank_file_component.lo

[OMPI devel] trunk build failed when configured with --disable-hwloc

2012-02-14  Paul H. Hargrove
I have configured the ompi-trunk (from last night's tarball: 1.7a1r25913) with --without-hwloc. Having done so, I see the following failure at build time: CC rmaps_rank_file_component.lo
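(For context: the usual shape of a fix for this class of failure, not the actual patch, is to guard hwloc-dependent code so it compiles away when topology support is configured out. The sketch below reuses the OPAL_HAVE_HWLOC macro name as Open MPI did in that era; the stand-in #define and the surrounding function are invented for illustration.)

    /* Guard-pattern sketch: hwloc calls must disappear cleanly when
     * configure disables topology support. */
    #ifndef OPAL_HAVE_HWLOC
    #define OPAL_HAVE_HWLOC 0       /* stand-in for the configure-set value */
    #endif

    #if OPAL_HAVE_HWLOC
    #include <hwloc.h>
    #endif

    static int bind_self_to_core(void)
    {
    #if OPAL_HAVE_HWLOC
        hwloc_topology_t topo;
        hwloc_topology_init(&topo);     /* real binding logic would follow */
        hwloc_topology_load(topo);
        hwloc_topology_destroy(topo);
        return 0;
    #else
        return -1;                      /* no topology support compiled in */
    #endif
    }

    int main(void) { return bind_self_to_core() ? 1 : 0; }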

[OMPI devel] the dangers of configure probing argument counts

2012-02-14  Paul H. Hargrove
There was recently a fair amount of work done in hwloc to get configure to work correctly for a probe that was intended to determine how many arguments appear in a specific function prototype. The "issue" was that the C spec doesn't require that the C compiler issue an error for either
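(To make the failure mode concrete, here is a sketch of such a conftest program, using the 3-argument glibc sched_setaffinity as an example target; this is an illustration, not the actual hwloc test. The probe redeclares the function with one candidate prototype and checks whether compilation fails. Because a conflicting redeclaration only obliges the compiler to emit a diagnostic, a compiler that warns rather than errors makes the probe "succeed" for every candidate argument count.)

    /* conftest.c sketch: try one candidate prototype and see whether the
     * translation unit is rejected. */
    #define _GNU_SOURCE
    #include <sched.h>

    /* If the system prototype has a different argument count, this
     * redeclaration violates a constraint -- but the C standard only
     * requires a diagnostic, not a hard error, which is the trap the
     * thread above describes. */
    int sched_setaffinity(pid_t pid, size_t cpusetsize, const cpu_set_t *mask);

    int main(void) { return 0; }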

Re: [OMPI devel] MVAPICH2 vs Open-MPI

2012-02-14  Rolf vandeVaart
There are several things going on here that make their library perform better. With respect to inter-node performance, both MVAPICH2 and Open MPI copy the GPU memory into host memory first. However, they are using special host buffers and a code path that allows them to copy the data
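(A minimal sketch of the staging idea, my illustration rather than the MVAPICH2 or Open MPI code: device memory is copied through a page-locked host buffer so the network layer can send from host memory without registering a buffer on the critical path. cudaMallocHost and cudaMemcpyAsync are standard CUDA runtime calls; the buffer size and function names are invented.)

    #include <cuda_runtime.h>
    #include <stddef.h>

    #define STAGE_SZ (1 << 20)

    static void *stage;             /* pinned staging buffer, allocated once */
    static cudaStream_t stream;

    void staging_init(void)
    {
        /* cudaMallocHost returns page-locked memory that both the GPU DMA
         * engine and an RDMA-capable NIC can address efficiently. */
        cudaMallocHost(&stage, STAGE_SZ);
        cudaStreamCreate(&stream);
    }

    /* Copy one chunk of device memory into the pinned buffer; the send
     * path would then hand `stage` to the network layer. */
    void stage_from_gpu(const void *devbuf, size_t len)
    {
        cudaMemcpyAsync(stage, devbuf, len, cudaMemcpyDeviceToHost, stream);
        cudaStreamSynchronize(stream);  /* complete before posting the send */
    }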

[OMPI devel] MVAPICH2 vs Open-MPI

2012-02-14  Rayson Ho
See pp. 38-40: MVAPICH2 outperforms Open-MPI in every test. So is it something they are doing to optimize for CUDA & GPUs that is not in OMPI, or did they specifically tune MVAPICH2 to make it shine?

Re: [OMPI devel] poor btl sm latency

2012-02-14  Matthias Jurenz
I've built Open MPI 1.5.5rc1 (tarball from the web) with CFLAGS=-O3. Unfortunately, again without any effect. Here are some results with binding reports enabled: $ mpirun *--bind-to-core* --report-bindings -np 2 ./all2all_ompi1.5.5 [n043:61313] [[56788,0],0] odls:default:fork binding child [[56788,1],1]
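(For anyone reproducing this kind of measurement, a minimal two-rank ping-pong, a generic sketch rather than the author's all2all_ompi1.5.5 benchmark, isolates the shared-memory latency when run as mpirun --bind-to-core --report-bindings -np 2 ./pingpong:)

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, i, iters = 100000;
        char byte = 0;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("one-way latency: %.2f us\n",
                   (t1 - t0) * 1e6 / (2.0 * iters));

        MPI_Finalize();
        return 0;
    }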

Re: [OMPI devel] Question about opal/mca/memory/linux licensing

2012-02-14  Denis Nagorny
2012/2/14 Jeff Squyres wrote: > I assume you're referring to the ptmalloc implementation under opal/mca/memory/linux, right? Yes, you are right. > Specifically, see opal/mca/memory/linux/README-ptmalloc.txt It seems that

Re: [OMPI devel] Question about opal/mca/memory/linux licensing

2012-02-14  Jeff Squyres
On Feb 14, 2012, at 6:09 AM, Denis Nagorny wrote: > Investigating the memory management implementation in Open MPI I found that opal's memory module is licensed under Lesser GPL terms. I assume you're referring to the ptmalloc implementation under opal/mca/memory/linux, right? If so, please read

[OMPI devel] Question about opal/mca/memory/linux licensing

2012-02-14  Denis Nagorny
Hello, While investigating the memory management implementation in Open MPI I found that opal's memory module is licensed under Lesser GPL terms. This subsystem is linked into the Open MPI library. As far as I know, this should impose the Lesser GPL license on libopen-rte.so and libopen-pal.so. Could anybody

Re: [hwloc-devel] hwloc 1.3.2rc2 released

2012-02-14  Paul H. Hargrove
On 2/13/2012 1:30 PM, Jeff Squyres wrote: Due to the volume of off-list emails, I'm kinda expecting this rc to be good / final. However, please do at least some cursory testing so that we can be sure. I disregarded the "cursory" and ran on 61 arch/os/compiler combinations. I can see only 2