Re: [OMPI users] EXTERNAL: Re: Trouble compiling 1.4.3 with PGI 10.9 compilers

2011-09-26 Thread Jeff Squyres
On Sep 26, 2011, at 6:53 PM, Blosch, Edwin L wrote: > Actually I can download OpenMPI 1.5.4, 1.4.4rc3 or 1.4.3 - and ALL of them build just fine. > Apparently what isn't working is the version of 1.4.3 that I have downloaded and copied from place to place, i.e. timestamps on files may

Re: [OMPI users] EXTERNAL: Re: Trouble compiling 1.4.3 with PGI 10.9 compilers

2011-09-26 Thread Blosch, Edwin L
Actually I can download OpenMPI 1.5.4, 1.4.4rc3 or 1.4.3 - and ALL of them build just fine. Apparently what isn't working is the version of 1.4.3 that I have downloaded and copied from place to place, i.e. timestamps on files may have changed (otherwise the files are the same). It seems
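
If the real culprit is timestamp skew in the copied tree (the Autotools dependency order would then make configure/make try to regenerate files), a minimal way to rule it out is to re-extract the pristine tarball instead of copying an already-unpacked tree, since tar restores the packaged modification times. A sketch, with an assumed tarball name and install prefix:

$ tar xjf openmpi-1.4.3.tar.bz2       # re-extract; tar preserves the original timestamps
$ cd openmpi-1.4.3
$ ./configure CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90 --prefix=$HOME/opt/openmpi-1.4.3-pgi
$ make all install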

Re: [OMPI users] Segfault on any MPI communication on head node

2011-09-26 Thread Phillip Vassenkov
Yep, Fedora Core 14 and OpenMPI 1.4.3. On 9/24/11 7:02 AM, Jeff Squyres wrote: Are you running the same OS version and Open MPI version between the head node and regular nodes? On Sep 23, 2011, at 5:27 PM, Vassenkov, Phillip wrote: Hey all, I’ve been racking my brains over this for several
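
To narrow this down, a sketch (not from the thread; the host names head and node01 are placeholders) is to build Open MPI's bundled ring example and run it first on the head node alone, then spanning the head node and one compute node:

$ mpicc -o ring_c examples/ring_c.c           # ring_c.c ships in the Open MPI source tree
$ mpirun -np 2 --host head,head ./ring_c      # head node only
$ mpirun -np 2 --host head,node01 ./ring_c    # head node plus one compute node

If only the second run segfaults, the problem likely lies in the inter-node transport or a version mismatch rather than in the application itself.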

[OMPI users] VampirTrace integration with VT_GNU_NMFILE environment variable

2011-09-26 Thread Rocky Dunlap
According to the VampirTrace documentation, it is possible to create a symbol list file in advance and set the name of the file in the environment variable VT_GNU_NMFILE. For example, you might do this: $ nm hello > hello.nm $ export VT_GNU_NMFILE="hello.nm" I have set up a symbol list file as
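
Filling in the surrounding steps, a rough sketch of the whole workflow (hello.c is hypothetical, and mpicc-vt is the VampirTrace compiler wrapper installed by Open MPI's VT integration; adjust to your setup):

$ mpicc-vt -o hello hello.c        # build an instrumented executable
$ nm hello > hello.nm              # dump its symbol table, as in the documentation
$ export VT_GNU_NMFILE="hello.nm"  # point VampirTrace at the prepared symbol list
$ mpirun -np 4 ./hello             # the resulting traces resolve symbols from hello.nm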

Re: [OMPI users] RE : RE : Latency of 250 microseconds with Open-MPI 1.4.3, Mellanox Infiniband and 256 MPI ranks

2011-09-26 Thread Yevgeny Kliteynik
On 26-Sep-11 11:27 AM, Yevgeny Kliteynik wrote: > On 22-Sep-11 12:09 AM, Jeff Squyres wrote: >> On Sep 21, 2011, at 4:24 PM, Sébastien Boisvert wrote: >> What happens if you run 2 ibv_rc_pingpong's on each node? Or N ibv_rc_pingpongs? >>> With 11 ibv_rc_pingpong's

Re: [OMPI users] RE : RE : Latency of 250 microseconds with Open-MPI 1.4.3, Mellanox Infiniband and 256 MPI ranks

2011-09-26 Thread Yevgeny Kliteynik
On 22-Sep-11 12:09 AM, Jeff Squyres wrote: > On Sep 21, 2011, at 4:24 PM, Sébastien Boisvert wrote: >>> What happens if you run 2 ibv_rc_pingpong's on each node? Or N ibv_rc_pingpongs? >> With 11 ibv_rc_pingpong's >> http://pastebin.com/85sPcA47 >> Code to do that =>
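
For reference, a sketch of how several concurrent ibv_rc_pingpong pairs could be launched (node01/node02 are placeholder host names; each pair needs its own TCP port, so the default 18515 is offset per pair):

# on node01, start 11 servers
$ for i in $(seq 0 10); do ibv_rc_pingpong -p $((18515 + i)) & done
# on node02, start the matching clients
$ for i in $(seq 0 10); do ibv_rc_pingpong -p $((18515 + i)) node01 & done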