Re: [OMPI users] OpenMPI 1.8.4rc3, 1.6.5 and 1.6.3: segmentation violation in mca_io_romio_dist_MPI_File_close

2015-01-14 Thread Rob Latham



On 12/17/2014 07:04 PM, Eric Chamberland wrote:

Hi!

Here is a "poor man's fix" that works for me (the idea is not from me,
thanks to Thomas H.):

char *lCwd = getcwd(0, 0);                            /* save the current working directory */
chdir(lPathToFile);                                   /* step into the file's directory     */
MPI_File_open(..., lFileNameWithoutTooLongPath, ...); /* open via the short relative name   */
chdir(lCwd);                                          /* step back                          */
...

I think there are some limitations, but it works very well for our
uses, at least until a "real" fix is proposed...


Thanks for the bug report and test cases.  I just pushed two fixes to
master that resolve the problem you were seeing:


http://git.mpich.org/mpich.git/commit/ed39c901
http://git.mpich.org/mpich.git/commit/a30a4721a2

==rob

--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA


Re: [OMPI users] Problems compiling OpenMPI 1.8.4 with GCC 4.9.2

2015-01-14 Thread Novosielski, Ryan
It's worth noting that the suggested workaround solved the problem.

I'm running on RHEL5, which appears to be where the problem actually
comes from.

--
 *Note: UMDNJ is now Rutgers-Biomedical and Health Sciences*
 || \\UTGERS  |-*O*-
 ||_// Biomedical | Ryan Novosielski - Senior Technologist
 || \\ and Health | novos...@rutgers.edu - 973/972.0922 (2x0922)
 ||  \\  Sciences | OIRT/High Perf & Res Comp - MSB C630, Newark
  `'





Re: [OMPI users] Problems compiling OpenMPI 1.8.4 with GCC 4.9.2

2015-01-14 Thread Ray Sheppard

Gilles,
  The issue you pointed Ryan to was with GCC 4.8.2, not 4.9.2.  I just
built version 1.8.4 on a RHEL6 machine yesterday with GCC 4.9.2 and no
special switches.

Ray



--
 Respectfully,
   Ray Sheppard
   rshep...@iu.edu
   http://rt.uits.iu.edu/systems/SciAPT
   317-274-0016

   Principal Analyst
   Senior Technical Lead
   Scientific Applications and Performance Tuning
   Research Technologies
   University Information Technological Services
   IUPUI campus
   Indiana University

   My "pithy" saying:  Science is the art of translating the world
   into language. Unfortunately, that language is mathematics.
   Bumper sticker wisdom: Make it idiot-proof and they will make a
   better idiot.



Re: [OMPI users] Problems compiling OpenMPI 1.8.4 with GCC 4.9.2

2015-01-14 Thread Novosielski, Ryan
Thank you. I did a search, but somehow did not turn that up. I guess I had 
looked for GCC 4.9.

 *Note: UMDNJ is now Rutgers-Biomedical and Health Sciences*
|| \\UTGERS  |-*O*-
||_// Biomedical | Ryan Novosielski - Senior Technologist
|| \\ and Health | novos...@rutgers.edu- 
973/972.0922 (2x0922)
||  \\  Sciences | OIRT/High Perf & Res Comp - MSB C630, Newark
`'



Re: [OMPI users] Valgrind reports a plenty of Invalid read's in osc_rdma_data_move.c

2015-01-14 Thread Nathan Hjelm

Have you turned on Valgrind support in Open MPI? That is required to
quiet these bogus warnings.
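For reference, a rebuild sketch with the flags that enable Open MPI's memchecker component (the prefix path is illustrative, and the flags are worth verifying against ./configure --help for your version):

```shell
# Rebuild Open MPI with Valgrind/memchecker support so buffer state is
# communicated to Valgrind and these false positives are suppressed.
# --with-valgrind is only needed if valgrind.h is in a non-default location.
./configure --prefix=$HOME/openmpi-1.8.4-dbg \
            --enable-debug \
            --enable-memchecker \
            --with-valgrind=/usr
make -j4 all install
```

Memchecker builds add runtime overhead, so this configuration is best kept as a separate debug installation rather than the production build.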

-Nathan

On Wed, Jan 14, 2015 at 10:17:50AM +, Victor Vysotskiy wrote:
> Hi, 
> 
> Our parallel applications behaves strange when it is compiled with Openmpi 
> v1.8.4 on both Linux and Mac OS X platforms.  The Valgrind reports memory 
> problems in OpenMPI rather than in our code:
> 
> =4440== Invalid read of size 1
> ==4440==at 0xCAD6D37: ompi_osc_rdma_callback (osc_rdma_data_move.c:1650)
> ==4440==by 0xC05E87F: ompi_request_complete (request.h:402)
> ==4440==by 0xC05F1F6: recv_request_pml_complete (pml_ob1_recvreq.h:181)
> ==4440==by 0xC060476: mca_pml_ob1_recv_frag_callback_match 
> (pml_ob1_recvfrag.c:243)
> ==4440==by 0xB9F9D4E: mca_btl_vader_check_fboxes (btl_vader_fbox.h:220)
> ==4440==by 0xB9FC23C: mca_btl_vader_component_progress 
> (btl_vader_component.c:695)
> ==4440==by 0x606C1C7: opal_progress (opal_progress.c:187)
> ==4440==by 0x50E7A22: opal_condition_wait (condition.h:78)
> ==4440==by 0x50E8360: ompi_request_default_wait_all (req_wait.c:281)
> ==4440==by 0xD1578C3: ompi_coll_tuned_sendrecv_zero 
> (coll_tuned_barrier.c:77)
> ==4440==by 0xD157FC1: ompi_coll_tuned_barrier_intra_two_procs 
> (coll_tuned_barrier.c:318)
> ==4440==by 0xD149BCB: ompi_coll_tuned_barrier_intra_dec_fixed 
> (coll_tuned_decision_fixed.c:194)
> ==4440==  Address 0xd6d5d80 is 0 bytes inside a block of size 8,208 alloc'd
> ==4440==at 0x4C2CD7B: malloc (in 
> /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==4440==by 0x60BAD50: opal_malloc (malloc.c:101)
> ==4440==by 0xCAD07DF: component_select (osc_rdma_component.c:462)
> ==4440==by 0x51A2253: ompi_osc_base_select (osc_base_init.c:73)
> ==4440==by 0x50EF1E7: ompi_win_create (win.c:152)
> ==4440==by 0x51625AB: PMPI_Win_create (pwin_create.c:79)
> ==4440==by 0x5B3647: gtsk_setup_ (gtsk_nxtval.c:94)
> 
> ==4440== Invalid read of size 2
> ==4440==at 0xCAD68C4: process_frag (osc_rdma_data_move.c:1554)
> ==4440==by 0xCAD6DBB: ompi_osc_rdma_callback (osc_rdma_data_move.c:1656)
> ==4440==by 0xC05E87F: ompi_request_complete (request.h:402)
> ==4440==by 0xC05F1F6: recv_request_pml_complete (pml_ob1_recvreq.h:181)
> ==4440==by 0xC060476: mca_pml_ob1_recv_frag_callback_match 
> (pml_ob1_recvfrag.c:243)
> ==4440==by 0xB9F9D4E: mca_btl_vader_check_fboxes (btl_vader_fbox.h:220)
> ==4440==by 0xB9FC23C: mca_btl_vader_component_progress 
> (btl_vader_component.c:695)
> ==4440==by 0x606C1C7: opal_progress (opal_progress.c:187)
> ==4440==by 0x50E7A22: opal_condition_wait (condition.h:78)
> ==4440==by 0x50E8360: ompi_request_default_wait_all (req_wait.c:281)
> ==4440==by 0xD1578C3: ompi_coll_tuned_sendrecv_zero 
> (coll_tuned_barrier.c:77)
> ==4440==by 0xD157FC1: ompi_coll_tuned_barrier_intra_two_procs 
> (coll_tuned_barrier.c:318)
> ==4440==  Address 0xd6d5d88 is 8 bytes inside a block of size 8,208 alloc'd
> ==4440==at 0x4C2CD7B: malloc (in 
> /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==4440==by 0x60BAD50: opal_malloc (malloc.c:101)
> ==4440==by 0xCAD07DF: component_select (osc_rdma_component.c:462)
> ==4440==by 0x51A2253: ompi_osc_base_select (osc_base_init.c:73)
> ==4440==by 0x50EF1E7: ompi_win_create (win.c:152)  
> ==4440==by 0x51625AB: PMPI_Win_create (pwin_create.c:79)
> ==4440==by 0x5B3647: gtsk_setup_ (gtsk_nxtval.c:94)
> ...
> 
> Enclosed please find the complete report for the master processes.  Could it 
> be that these invalid memory operations are caused by our code?  The line 94 
> in our code looks like:
> 
> MPI_Win_create(buff,size,sizeof(long int),MPI_INFO_NULL,MPI_COMM_WORLD,&twin);
> 
> /* char *buff;
> MPI_Aint size;
> MPI_Win twin;
> */
> 
> I would greatly appreciate any help you can give me in working this problem.
> 
> With best regards,
> Victor.
> 
> P.s. The output of "ompi_info -- all" is  also attached. 

> ==4440== Memcheck, a memory error detector
> ==4440== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
> ==4440== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
> ==4440== 
> ==4440== Warning: set address range perms: large range [0x1617c028, 
> 0x2617c058) (noaccess)
> ==4440== Invalid read of size 1
> ==4440==at 0xCAD6D37: ompi_osc_rdma_callback (osc_rdma_data_move.c:1650)
> ==4440==by 0xC05E87F: ompi_request_complete (request.h:402)
> ==4440==by 0xC05F1F6: recv_request_pml_complete (pml_ob1_recvreq.h:181)
> ==4440==by 0xC060476: mca_pml_ob1_recv_frag_callback_match 
> (pml_ob1_recvfrag.c:243)
> ==4440==by 0xB9F9D4E: mca_btl_vader_check_fboxes (btl_vader_fbox.h:220)
> ==4440==by 0xB9FC23C: mca_btl_vader_component_progress 
> (btl_vader_component.c:695)
> ==4440==by 0x606C1C7: opal_progress (opal_progress.c:187)
> ==4440==by 0x50E7A22: opal_condition_wait (condition.h:78)
> ==4440==by 0x50E8360: ompi_request_

[OMPI users] Valgrind reports a plenty of Invalid read's in osc_rdma_data_move.c

2015-01-14 Thread Victor Vysotskiy
Hi, 

Our parallel application behaves strangely when compiled with Open MPI 
v1.8.4 on both Linux and Mac OS X platforms.  Valgrind reports memory 
problems in Open MPI rather than in our code:

==4440== Invalid read of size 1
==4440==at 0xCAD6D37: ompi_osc_rdma_callback (osc_rdma_data_move.c:1650)
==4440==by 0xC05E87F: ompi_request_complete (request.h:402)
==4440==by 0xC05F1F6: recv_request_pml_complete (pml_ob1_recvreq.h:181)
==4440==by 0xC060476: mca_pml_ob1_recv_frag_callback_match 
(pml_ob1_recvfrag.c:243)
==4440==by 0xB9F9D4E: mca_btl_vader_check_fboxes (btl_vader_fbox.h:220)
==4440==by 0xB9FC23C: mca_btl_vader_component_progress 
(btl_vader_component.c:695)
==4440==by 0x606C1C7: opal_progress (opal_progress.c:187)
==4440==by 0x50E7A22: opal_condition_wait (condition.h:78)
==4440==by 0x50E8360: ompi_request_default_wait_all (req_wait.c:281)
==4440==by 0xD1578C3: ompi_coll_tuned_sendrecv_zero 
(coll_tuned_barrier.c:77)
==4440==by 0xD157FC1: ompi_coll_tuned_barrier_intra_two_procs 
(coll_tuned_barrier.c:318)
==4440==by 0xD149BCB: ompi_coll_tuned_barrier_intra_dec_fixed 
(coll_tuned_decision_fixed.c:194)
==4440==  Address 0xd6d5d80 is 0 bytes inside a block of size 8,208 alloc'd
==4440==at 0x4C2CD7B: malloc (in 
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==4440==by 0x60BAD50: opal_malloc (malloc.c:101)
==4440==by 0xCAD07DF: component_select (osc_rdma_component.c:462)
==4440==by 0x51A2253: ompi_osc_base_select (osc_base_init.c:73)
==4440==by 0x50EF1E7: ompi_win_create (win.c:152)
==4440==by 0x51625AB: PMPI_Win_create (pwin_create.c:79)
==4440==by 0x5B3647: gtsk_setup_ (gtsk_nxtval.c:94)

==4440== Invalid read of size 2
==4440==at 0xCAD68C4: process_frag (osc_rdma_data_move.c:1554)
==4440==by 0xCAD6DBB: ompi_osc_rdma_callback (osc_rdma_data_move.c:1656)
==4440==by 0xC05E87F: ompi_request_complete (request.h:402)
==4440==by 0xC05F1F6: recv_request_pml_complete (pml_ob1_recvreq.h:181)
==4440==by 0xC060476: mca_pml_ob1_recv_frag_callback_match 
(pml_ob1_recvfrag.c:243)
==4440==by 0xB9F9D4E: mca_btl_vader_check_fboxes (btl_vader_fbox.h:220)
==4440==by 0xB9FC23C: mca_btl_vader_component_progress 
(btl_vader_component.c:695)
==4440==by 0x606C1C7: opal_progress (opal_progress.c:187)
==4440==by 0x50E7A22: opal_condition_wait (condition.h:78)
==4440==by 0x50E8360: ompi_request_default_wait_all (req_wait.c:281)
==4440==by 0xD1578C3: ompi_coll_tuned_sendrecv_zero 
(coll_tuned_barrier.c:77)
==4440==by 0xD157FC1: ompi_coll_tuned_barrier_intra_two_procs 
(coll_tuned_barrier.c:318)
==4440==  Address 0xd6d5d88 is 8 bytes inside a block of size 8,208 alloc'd
==4440==at 0x4C2CD7B: malloc (in 
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==4440==by 0x60BAD50: opal_malloc (malloc.c:101)
==4440==by 0xCAD07DF: component_select (osc_rdma_component.c:462)
==4440==by 0x51A2253: ompi_osc_base_select (osc_base_init.c:73)
==4440==by 0x50EF1E7: ompi_win_create (win.c:152)  
==4440==by 0x51625AB: PMPI_Win_create (pwin_create.c:79)
==4440==by 0x5B3647: gtsk_setup_ (gtsk_nxtval.c:94)
...

Enclosed please find the complete report for the master processes.  Could it be 
that these invalid memory operations are caused by our code?  Line 94 in 
our code looks like:

MPI_Win_create(buff, size, sizeof(long int), MPI_INFO_NULL, MPI_COMM_WORLD, &twin);

/* declarations:
 *   char    *buff;
 *   MPI_Aint size;
 *   MPI_Win  twin;
 */

I would greatly appreciate any help you can give me in working through this problem.

With best regards,
Victor.

P.S. The output of "ompi_info --all" is also attached. 
==4440== Memcheck, a memory error detector
==4440== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==4440== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
==4440== 
==4440== Warning: set address range perms: large range [0x1617c028, 0x2617c058) 
(noaccess)
==4440== Invalid read of size 1
==4440==at 0xCAD6D37: ompi_osc_rdma_callback (osc_rdma_data_move.c:1650)
==4440==by 0xC05E87F: ompi_request_complete (request.h:402)
==4440==by 0xC05F1F6: recv_request_pml_complete (pml_ob1_recvreq.h:181)
==4440==by 0xC060476: mca_pml_ob1_recv_frag_callback_match 
(pml_ob1_recvfrag.c:243)
==4440==by 0xB9F9D4E: mca_btl_vader_check_fboxes (btl_vader_fbox.h:220)
==4440==by 0xB9FC23C: mca_btl_vader_component_progress 
(btl_vader_component.c:695)
==4440==by 0x606C1C7: opal_progress (opal_progress.c:187)
==4440==by 0x50E7A22: opal_condition_wait (condition.h:78)
==4440==by 0x50E8360: ompi_request_default_wait_all (req_wait.c:281)
==4440==by 0xD1578C3: ompi_coll_tuned_sendrecv_zero 
(coll_tuned_barrier.c:77)
==4440==by 0xD157FC1: ompi_coll_tuned_barrier_intra_two_procs 
(coll_tuned_barrier.c:318)
==4440==by 0xD149BCB: ompi_coll_tuned_barrier_intra_dec_fixed 
(coll_tuned_decision_fixed.c:194)
==4440==  Address 0xd6d5d80 is 0 bytes inside a block of 

Re: [OMPI users] Problems compiling OpenMPI 1.8.4 with GCC 4.9.2

2015-01-14 Thread Gilles Gouaillardet
Ryan,

This issue has already been reported.

Please refer to
http://www.open-mpi.org/community/lists/users/2015/01/26134.php for a
workaround.

Cheers,

Gilles




[OMPI users] Problems compiling OpenMPI 1.8.4 with GCC 4.9.2

2015-01-14 Thread Novosielski, Ryan
OpenMPI 1.8.4 does not appear to be buildable with GCC 4.9.2. The output, as 
requested by the Getting Help page, is attached.

I believe I tried GCC 4.9.0 too and it didn't work.

I did successfully build it with Intel's compiler suite v15.0.1, so I do appear 
to know what I'm doing.

Thanks in advance for your help.

--
 *Note: UMDNJ is now Rutgers-Biomedical and Health Sciences*
 || \\UTGERS  |-*O*-
 ||_// Biomedical | Ryan Novosielski - Senior Technologist
 || \\ and Health | novos...@rutgers.edu - 973/972.0922 (2x0922)
 ||  \\  Sciences | OIRT/High Perf & Res Comp - MSB C630, Newark
  `'

ompi-out.tar.bz2
Description: ompi-out.tar.bz2