Bummer -- I thought I had replied to that one (perhaps I'm thinking that multiple people have posted this and I've replied to some but not all of them).

Brock is correct that using "-fpic" to compile your MPI C++ app should solve the problem. This information *used* to be posted on the PGI web site in their support section, but I can't seem to find it any more.
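
For example, with the trivial test case from your mail, something like "mpicxx -fpic -o hello hello.c" should do it; the OMPI wrapper compilers pass flags like this straight through to the underlying PGI compiler ("-fPIC" should also work if your PGI version prefers that spelling).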

As far as I understand it, this is a PGI compiler issue, not an Open MPI issue.



On Jun 13, 2007, at 12:38 AM, Julian Cummings wrote:

Hello,

This is a follow-up to a message originally posted by Andrew J Caird on 2006-08-16. No one ever replied to Andrew's message, and I am experiencing exactly the same problem with a more recent version of Open MPI (1.2.1) and the PGI compiler (7.0). Essentially, the problem is that if you link an MPI application against the mpi_cxx library, at run time each process will fail with the following message:

C++ runtime abort: internal error: static object marked for destruction more than once

If your MPI application does not utilize the MPI C++ bindings, you can link
without this library and the runtime errors will go away.

Since this problem was reported long ago and no one ever replied to the report, I would assume that this is a bug either in the mpi_cxx library or in the way it is built under the PGI compiler. I could not figure out how to submit a bug report to the Open MPI bug tracking system, so I hope that this message to the users list will suffice. I am attaching my "ompi_info --all" output to this message. I am running on a Myrinet-based Linux cluster, but the particulars are not relevant for this problem.

You can replicate the problem with any trivial MPI application code, such as the standard "hello" program using the standard C interface. I am attaching my hello.c source code. Compile with "mpicxx -o hello hello.c" and run with "mpirun -np 1 ./hello". The runtime error disappears if you compile with "mpicc -o hello hello.c" to avoid linking against the mpi_cxx library.

Please let me know if there is any fix available for this problem.

Regards, Julian C.
<ompi_info.txt.gz>
<hello.c>


--
Jeff Squyres
Cisco Systems
