Hi,
The above issue is fixed with this patch I used:
https://raw.github.com/sebhtml/patches/master/openmpi/Raspberry-Pi-openmpi-1.6.2.patch
Is it possible that Open MPI could include this patch in a future release?
Thanks.
On Sun, Jan 20, 2013 at 3:13 AM, Lee Eric wrote:
> Hi,
>
> I just use --disable-mpif77 a
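For reference, applying the patch before building looks roughly like this; the
patch URL is the one above, but the -p strip level and install prefix are
guesses, so adjust them for your tree:

  wget https://raw.github.com/sebhtml/patches/master/openmpi/Raspberry-Pi-openmpi-1.6.2.patch
  tar xjf openmpi-1.6.2.tar.bz2
  cd openmpi-1.6.2
  patch -p1 < ../Raspberry-Pi-openmpi-1.6.2.patch   # strip level may differ
  ./configure --prefix=$HOME/openmpi-arm && make && make install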
Hi,
I just used --disable-mpif77 and --disable-mpif90 to let configure run
successfully. However, I know it's only a rough workaround. After configure
completed, I encountered the following error when running make:
Making all in config
make[1]: Entering directory `/home/huli/Projects/openmpi-1.6.3/config'
make[1]: Nothing
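For reference, the workaround configure line looks roughly like this (the
install prefix is only an example):

  ./configure --prefix=$HOME/openmpi-1.6.3 \
      --disable-mpif77 --disable-mpif90
  make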
Also, what MOFED/OFED version do you have?
MXM is compiled per OFED/MOFED version; does the selected mxm.rpm match the
active OFED?
On Thu, Jan 17, 2013 at 4:09 PM, Francesco Simula <
francesco.sim...@roma1.infn.it> wrote:
> I tried building from OMPI 1.6.3 tarball with the following ./co
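A quick way to compare the two versions is something like the following (the
exact name of the installed mxm package is a guess, so adjust the grep):

  ofed_info -s            # prints the installed OFED/MOFED version
  rpm -qa | grep -i mxm   # shows which mxm rpm (built for which OFED/distro) is installed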
Hi Francesco,
Can you please provide complete output from ibv_devinfo -v command?
Also, it seems that you have CentOS 5.8 with mxm built for CentOS 5.7 installed;
I will check whether there is a distro version incompatibility that may cause
this and update you.
Alina/Josh - please follow.
Regards
M
On Thu, Jan 1
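For completeness, the information requested above can be gathered with
something like this (the output file name is arbitrary):

  ibv_devinfo -v > ibv_devinfo.txt   # full HCA/port capabilities
  cat /etc/redhat-release            # confirms the CentOS release (5.8 here)
  rpm -qa | grep -i mxm              # shows the installed mxm build (centos5.7 here)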
Hi,
I fixed the cross-compile issue. See the following source code:
opal_config_asm.m4:897: [AC_MSG_ERROR([No atomic primitives available
for $host])])
It seems that it checks whether the toolchain's target tuple is one of armv7*,
armv6* or armv5*. I have recompiled my toolchain and no such error occurred.
Howeve
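In other words, a configure invocation whose --host tuple starts with armv6 or
armv7 passes that check. The triplet and compiler name below are only examples;
substitute your own toolchain prefix:

  ./configure --build=x86_64-pc-linux-gnu \
      --host=armv6l-unknown-linux-gnueabihf \
      CC=armv6l-unknown-linux-gnueabihf-gcc \
      --disable-mpif77 --disable-mpif90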
Ah - cool! Thanks!
On Jan 19, 2013, at 7:19 AM, George Bosilca wrote:
> On Jan 19, 2013, at 15:44 , Ralph Castain wrote:
>
>> I used your test code to confirm it also fails on our trunk - it looks like
>> someone got the reference count wrong when creating/destructing groups.
>
> No, the cod
On Jan 19, 2013, at 15:44 , Ralph Castain wrote:
> I used your test code to confirm it also fails on our trunk - it looks like
> someone got the reference count wrong when creating/destructing groups.
No, the code is not MPI compliant.
The culprit is line 254 in the test code where Siegmar man
I'll look into that next week.
Edgar
On 1/19/2013 8:44 AM, Ralph Castain wrote:
> I used your test code to confirm it also fails on our trunk - it looks like
> someone got the reference count wrong when creating/destructing groups.
>
> Afraid I'll have to defer to the authors of that code area...
I used your test code to confirm it also fails on our trunk - it looks like
someone got the reference count wrong when creating/destructing groups.
Afraid I'll have to defer to the authors of that code area...
On Jan 19, 2013, at 1:27 AM, Siegmar Gross
wrote:
> Hi
>
> I have installed openm
Any heads up? Thanks.
On Fri, Jan 18, 2013 at 5:28 AM, Jeff Squyres (jsquyres)
wrote:
> On Jan 16, 2013, at 6:41 AM, Leif Lindholm wrote:
>
>> That isn't, technically speaking, correct for the Raspberry Pi - but it is a
>> workaround if you know you will never actually use the asm implementatio
Hi
I have installed openmpi-1.6.4rc2 and still have a problem with my
rankfile.
linpc1 rankfiles 113 ompi_info | grep "Open MPI:"
Open MPI: 1.6.4rc2r27861
linpc1 rankfiles 114 cat rf_linpc1
rank 0=linpc1 slot=0:0-1,1:0-1
linpc1 rankfiles 115 mpiexec -report-bindings -np 1 \
-
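The command was cut off above; a typical invocation with that rankfile would be
along these lines (-rf is Open MPI's rankfile switch; the program name is a
placeholder):

  mpiexec -report-bindings -np 1 \
      -rf rf_linpc1 ./a.out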
Hi
I have installed openmpi-1.6.4rc2 and have the following problem.
tyr strided_vector 110 ompi_info | grep "Open MPI:"
Open MPI: 1.6.4rc2r27861
tyr strided_vector 111 mpicc -showme
gcc -I/usr/local/openmpi-1.6.4_64_gcc/include -fexceptions -pthread -m64
-L/usr/local/openmpi-1.6
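As an aside, the wrapper flags can also be inspected separately, which is often
handy when chasing build problems like this one:

  mpicc -showme:compile   # compile-time flags only
  mpicc -showme:link      # link-time flags only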