Hey Gotz,

I have not seen this mpirun error with the OpenMPI version I have built
with Intel 12.1 and the mpicc fix:

openmpi-1.5.5rc1.tar.bz2

and from the looks of things, I question whether your problem is related.  The
solution in the original case was to conditionally dial down optimization
when using the 12.1 compiler, to prevent the compiler itself from crashing
during a compile.  What you present is a failure during execution.  Such
failures can be caused by overzealous optimization, but on the face of it
there seems to be little reason to believe there is a connection between
the former and the latter.

Does this failure occur with every use of 'mpirun', whatever the source?
My 'mpicc' problem did.  If so, and if you believe it is an optimization-level
issue, you could try turning optimization off in the failing routine and see
whether that produces a remedy.  I would also try things with the very latest
release.

Those are my thoughts ... good luck.

rbw

Richard Walsh
Parallel Applications and Systems Manager
CUNY HPC Center, Staten Island, NY
W: 718-982-3319
M: 612-382-4620

Miracles are delivered to order by great intelligence, or when it is
absent, through the passage of time and a series of mere chance
events. -- Max Headroom

________________________________________
From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of Götz 
Waschk [goetz.was...@gmail.com]
Sent: Monday, January 30, 2012 10:48 AM
To: Open MPI Users
Subject: Re: [OMPI users] Latest Intel Compilers (ICS, version 12.1.0.233 Build 
20110811) issues ...

Hi Richard,


On Wed, Jan 4, 2012 at 4:06 PM, Richard Walsh
<richard.wa...@csi.cuny.edu> wrote:
> Moreover, this problem has been addressed in the 1.5.5 OpenMPI release
> with the following workaround in opal/mca/memory/linux/malloc.c:

> #ifdef __INTEL_COMPILER_BUILD_DATE
> #  if __INTEL_COMPILER_BUILD_DATE == 20110811
> #    pragma GCC optimization_level 1
> #  endif
> #endif

I have added this patch to openmpi 1.5.3. Previously, every mpicc invocation
would crash; now mpicc is fine. However, mpirun still crashes like this:
% mpirun -np 8 cpi-openmpi
[pax8e:13662] *** Process received signal ***
[pax8e:13662] Signal: Segmentation fault (11)
[pax8e:13662] Signal code: Address not mapped (1)
[pax8e:13662] Failing at address: 0x10
[pax8e:13662] [ 0] /lib64/libpthread.so.0(+0xf4a0) [0x7f348be7b4a0]
[pax8e:13662] [ 1]
/usr/lib64/openmpi-intel/lib/libmpi.so.1(opal_memory_ptmalloc2_int_malloc+0x4b3)
[0x7f348c817193]
[pax8e:13662] [ 2] /usr/lib64/openmpi-intel/lib/libmpi.so.1(+0xefdd9)
[0x7f348c815dd9]
[pax8e:13662] [ 3]
/usr/lib64/openmpi-intel/lib/libmpi.so.1(opal_class_initialize+0xaa)
[0x7f348c8278aa]
[pax8e:13662] [ 4]
/usr/lib64/openmpi-intel/lib/openmpi/mca_btl_openib.so(+0x1d0af)
[0x7f34874350af]
[pax8e:13662] [ 5] /lib64/libpthread.so.0(+0x77f1) [0x7f348be737f1]
[pax8e:13662] [ 6] /lib64/libc.so.6(clone+0x6d) [0x7f348bbb070d]
[pax8e:13662] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 6 with PID 13662 on node pax8e.ifh.de
exited on signal 11 (Segmentation fault).

I am using RHEL6.1 and the affected Intel 12.1 compiler.

Regards, Götz Waschk

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

