Hello again OpenMPI folk, been a while.
Have just come to build OpenMPI 1.8.1 within a PkgSrc environment for
our ArchLinux machines (yes, we used to be NetBSD, yes).
Latest PkgSrc build was for 1.6.4.
The 1.6.4 PkgSrc build required 4 patches, 3 of which were PkgSrc-specific
and just defined a
Kevin,
thanks for providing the patch.
I pushed it into the trunk:
https://svn.open-mpi.org/trac/ompi/changeset/32253
and made a CMR so it can be available in v1.8.2:
https://svn.open-mpi.org/trac/ompi/ticket/4793
Thanks,
Gilles
On 2014/07/17 13:32, Kevin Buckley wrote:
> I have been informe
On 07/17/2014 06:32 AM, Kevin Buckley wrote:
=> Checking for portability problems in extracted files
ERROR: [check-portability.awk] => Found test ... == ...:
ERROR: [check-portability.awk] configure: if test "$enable_oshmem" ==
"yes" -a "$ompi_fortran_happy" == "1" -a \
Autoconf also avoids th
Rolf,
I committed r2389.
MPI_Win_allocate_shared is now invoked on a single-node communicator.
Cheers,
Gilles
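For anyone following along, a minimal sketch of the pattern Gilles describes (my own illustration, not the committed code): derive a single-node communicator with MPI_Comm_split_type and pass it to MPI_Win_allocate_shared, which is only valid on a communicator whose processes can share memory. Building and running this requires an MPI installation (mpicc/mpirun).

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm node_comm;
    MPI_Win  win;
    int     *base;

    MPI_Init(&argc, &argv);

    /* Split MPI_COMM_WORLD into communicators whose ranks share memory
       (i.e. one communicator per node). */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    /* Legal only on a communicator whose processes are on the same node. */
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            node_comm, &base, &win);

    *base = 42;  /* each rank writes its own segment */

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```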
On 2014/07/16 22:59, Rolf vandeVaart wrote:
> Sounds like a good plan. Thanks for looking into this Gilles!
>
> From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Gilles
> GO
Are these also called for shared libraries?
George.
On Wed, Jul 16, 2014 at 3:36 PM, Paul Hargrove wrote:
>
> On Wed, Jul 16, 2014 at 7:36 AM, Nathan Hjelm wrote:
>
>> Correction. xlc does support the destructor function attribute. The odd
>> one out is PGI.
>>
>
> Are the Solaris Studio c
On Thu, Jul 17, 2014 at 5:55 PM, George Bosilca wrote:
> Are these also called for shared libraries?
>
> George.
If you are asking specifically about Solaris w/ the vendor compilers, then
apparently Yes:
-bash-3.00$ cat test.c
#include <stdio.h>
int X = 0;
__attribute__((__constructor__)) void hello(void)
{
    printf("hello\n");
}
I think Case #1 is only a partial solution, as it only solves the example
attached to the ticket. Based on my reading of the tool chapter, calling
MPI_T_init after MPI_Finalize is legit, and this case is not covered by the
patch. But this is not the major issue I have with this patch. From a
coding
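The case described above could be exercised with a toy program along these lines (my own reconstruction, not the ticket's reproducer): the MPI-3 tools interface has its own lifetime, and the standard permits MPI_T_init_thread to be called before MPI_Init and after MPI_Finalize. Running it requires an MPI installation.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int provided;

    MPI_Init(&argc, &argv);
    MPI_Finalize();

    /* The tools interface is initialized independently of MPI itself;
       the standard allows this even after MPI_Finalize. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_finalize();
    return 0;
}
```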
As I said, I don't know which solution is the one to follow - they both have
significant "ick" factors, though I wouldn't go so far as to characterize
either of them as "horrible". Not being "clean" after calling MPI_Finalize
seems just as strange.
Nathan and I did discuss the init-after-finali