I'm quite sure that we have since fixed the command-line parsing problem, and I *think* we fixed the mmap problem.

Is there any way that you can upgrade to v1.1.3?


On Jan 29, 2007, at 3:24 PM, Avishay Traeger wrote:

Hello,

I have just installed Open MPI 1.1 on a 64-bit FC6 machine using yum.
The packages that were installed are:
openmpi-devel-1.1-7.fc6
openmpi-libs-1.1-7.fc6
openmpi-1.1-7.fc6

I tried running ompi_info, but it results in a segmentation fault.
Running it under strace shows this at the end:

mmap(NULL, 4294967296, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++

The full output of ompi_info is:
# ompi_info
                Open MPI: 1.1
   Open MPI SVN revision: r10477
                Open RTE: 1.1
   Open RTE SVN revision: r10477
                    OPAL: 1.1
       OPAL SVN revision: r10477
                  Prefix: /usr
 Configured architecture: x86_64-redhat-linux-gnu
           Configured by: brewbuilder
           Configured on: Fri Oct 13 14:34:07 EDT 2006
          Configure host: hs20-bc1-7.build.redhat.com
                Built by: brewbuilder
                Built on: Fri Oct 13 14:44:39 EDT 2006
              Built host: hs20-bc1-7.build.redhat.com
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (single underscore)
      Fortran90 bindings: yes
 Fortran90 bindings size: small
              C compiler: gcc
     C compiler absolute: /usr/bin/gcc
            C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
      Fortran77 compiler: gfortran
  Fortran77 compiler abs: /usr/bin/gfortran
      Fortran90 compiler: gfortran
  Fortran90 compiler abs: /usr/bin/gfortran
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: yes
          C++ exceptions: no
          Thread support: posix (mpi: no, progress: no)
  Internal debug support: no
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: yes
Segmentation fault

It seems that at this point the program tries to mmap 4 GB of memory,
which fails with ENOMEM. I'm guessing that the return value of mmap
isn't checked, and the resulting bad pointer is what causes the
segmentation fault.
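
To illustrate my guess (this is just a sketch, not Open MPI's actual
code), the pattern below would match the strace output: mmap() fails
with ENOMEM, the MAP_FAILED return value goes unchecked, and the bad
pointer is then dereferenced:

#include <stddef.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4UL << 30;   /* 4 GB, as in the strace above */

    /* Unchecked mmap: on failure it returns MAP_FAILED, i.e. (void *)-1 */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* Missing: if (buf == MAP_FAILED) { handle the error } */

    buf[0] = 1;               /* dereferencing the bogus pointer -> SIGSEGV */
    return 0;
}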

Also, I tried running "mpirun", and the output was:
# mpirun
*** buffer overflow detected ***: mpirun terminated
======= Backtrace: =========
/lib64/libc.so.6(__chk_fail+0x2f)[0x3f59ce0dff]
/lib64/libc.so.6[0x3f59ce065b]
/lib64/libc.so.6(__snprintf_chk+0x7b)[0x3f59ce052b]
/usr/lib64/openmpi/libopal.so.0(opal_cmd_line_get_usage_msg+0x20a)[0x304901963a]
mpirun[0x403c7c]
mpirun(orterun+0xa4)[0x40260c]
mpirun(main+0x1b)[0x402563]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x3f59c1da44]
mpirun[0x4024b9]

It also included a "Memory map", which I left out.
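
The __snprintf_chk and __chk_fail frames suggest this abort comes from
glibc's _FORTIFY_SOURCE checks (which the Fedora build clearly has
enabled, given __snprintf_chk appears in the backtrace) rather than
from a wild write: a fortified snprintf aborts as soon as it is handed
a size argument larger than the destination buffer the compiler can
see. A minimal sketch (again, not Open MPI's code) that produces the
same "*** buffer overflow detected ***" message when compiled with
gcc -O2 -D_FORTIFY_SOURCE=2:

#include <stdio.h>

int main(void)
{
    char buf[16];

    /* The size argument claims 64 bytes, but buf is only 16 bytes.
     * With -O2 -D_FORTIFY_SOURCE=2 this call becomes __snprintf_chk,
     * which notices the mismatch and calls __chk_fail, aborting with
     * "*** buffer overflow detected ***" and a backtrace. */
    snprintf(buf, 64, "%d", 12345);
    return 0;
}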

Any suggestions?

Thanks in advance,
Avishay



--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
