Hi,
I am trying to compile Open MPI, and when I run "make all install" I get an
error that I can't figure out. Any feedback would be appreciated.
Thanks!
ompi-output.tar.bz2
Description: BZip2 compressed data
Note that this mailing list is for Open MPI -- which is a different
implementation of MPI than MPICH.
Ralph's answer might be correct, but we can't say for sure because we can't
answer questions about MPICH here. :-)
On Jun 5, 2013, at 11:03 AM, Florian Beutler wrote:
> Hi
> I just installe
I would guess that adding -lmpi might create an issue, as the "mpif90"
wrapper already includes it. Are you adding it for some reason?
On Jun 5, 2013, at 11:03 AM, Florian Beutler wrote:
> Hi
> I just installed openMPI and the installation works without any trouble. But
> when I want t
Hi
I just installed openMPI and the installation works without any trouble.
But when I want to use the mpif90 compiler, it gives me the following error:
bash-3.2$ mpif90 -lmpi
ld: library not found for -lmpi
I was wondering whether there is a configure flag which I forgot to set? My
configure comm
Sent: Wednesday, May 29, 2013 3:31 PM
To: Open MPI Users
Subject: EXTERNAL: Re: [OMPI users] Problem building OpenMPI 1.6.4 with PGI 13.4
Edwin --
Can you ask PGI support about this? I swear that the PGI compiler suite has
supported offsetof before.
On May 29, 2013, at 5:26 PM, "Blosch, Edwin L" wrote:
Edwin,
I just built Open MPI 1.6.4 with PGI 13.5 today and had no problem. How did you
configure your build?
Matt
On May 29, 2013, at 5:26 PM, "Blosch, Edwin L" wrote:
I’m having trouble building OpenMPI 1.6.4 with PGI 13.4. Suggestions?
checking alignment of
It works with PGI 12.x, and it had better work with newer versions, since
offsetof is ISO C89/ANSI C.
-Nathan
On Wed, May 29, 2013 at 09:31:58PM +0000, Jeff Squyres (jsquyres) wrote:
> Edwin --
>
> Can you ask PGI support about this? I swear that the PGI compiler suite has
> supported offsetof before
Edwin --
Can you ask PGI support about this? I swear that the PGI compiler suite has
supported offsetof before.
On May 29, 2013, at 5:26 PM, "Blosch, Edwin L" wrote:
> I’m having trouble building OpenMPI 1.6.4 with PGI 13.4. Suggestions?
>
> checking alignment of double... 8
> checking ali
I'm having trouble building OpenMPI 1.6.4 with PGI 13.4. Suggestions?
checking alignment of double... 8
checking alignment of long double... 8
checking alignment of float _Complex... 4
checking alignment of double _Complex... 8
checking alignment of long double _Complex... 8
checking alignment of
Hi
I installed openmpi-1.7.2rc3r28550 on "openSUSE Linux 12.1", "Solaris 10
x86_64", and "Solaris 10 sparc" with "Sun C 5.12" in 32- and 64-bit
versions. Unfortunately, rankfiles don't work as expected.
sunpc1 rankfiles 109 more rf_ex_sunpc_linpc
# mpiexec -report-bindings -rf rf_ex_sunpc_linp
Padma Pavani writes:
> Hi Team,
>
> I am facing some problems while running the HPL benchmark.
>
>
>
> I am using Intel mpi -4.0.1 with Qlogic-OFED-1.5.4.1 to run benchmark and
> also tried with openmpi-1.4.0 but getting same error.
>
>
> Error File :
>
> [compute-0-1.local:06936] [[14544,1],25] ORTE
I'm guessing you're the alter ego of
http://www.open-mpi.org/community/lists/devel/2013/04/12309.php? :-)
My first suggestion to you is to upgrade your version of Open MPI -- 1.4.0 is
ancient. Can you upgrade to 1.6.4?
On Apr 25, 2013, at 2:08 PM, Padma Pavani wrote:
> Hi Team,
>
> I am f
Hi Team,
I am facing some problems while running the HPL benchmark.
I am using Intel MPI 4.0.1 with QLogic OFED 1.5.4.1 to run the benchmark, and
also tried with openmpi-1.4.0, but I am getting the same error.
Error File :
[compute-0-1.local:06936] [[14544,1],25] ORTE_ERROR_LOG: A message is
attempting to be
Sorry for the delay in replying.
If you have non-contiguous buffers, you might want to investigate using MPI
datatypes to describe the memory that you want to send/receive. Google around;
you'll find bunches of tutorials on these kinds of things.
However, be aware that for a large 2D array, th
Dear all,
I am new to Open MPI and am writing a parallel program using Open MPI in
C++. I would like to use the function MPI_Allreduce(), but my sendbuf
and recvbuf are 2D pointers/arrays.
Is it possible to pass the pointers/arrays in and out using MPI_Allreduce()
Thank you, Ralph!
Gus Correa
On 03/29/2013 09:33 AM, Ralph Castain wrote:
Just an update: I have this fixed in the OMPI trunk. It didn't make 1.7.0, but
will be in 1.7.1 and beyond.
On Mar 21, 2013, at 2:09 PM, Gus Correa wrote:
Thank you, Ralph.
I will try to use a rankfile.
In any case
Just an update: I have this fixed in the OMPI trunk. It didn't make 1.7.0, but
will be in 1.7.1 and beyond.
On Mar 21, 2013, at 2:09 PM, Gus Correa wrote:
> Thank you, Ralph.
>
> I will try to use a rankfile.
>
> In any case, the --cpus-per-proc option is a very useful feature:
> for hybrid
Thank you, Ralph.
I will try to use a rankfile.
In any case, the --cpus-per-proc option is a very useful feature:
for hybrid MPI+OpenMP programs, for these processors with one FPU
shared by two cores, etc.
If it gets fixed in a later release of OMPI that would be great.
Thank you,
Gus Correa
I've heard this from a couple of other sources - it looks like there is a
problem on the daemons when they compute the location for -cpus-per-proc. I'm
not entirely sure why that would be as the code is supposed to be common with
mpirun, but there are a few differences.
I will take a look at it
On 03/21/2013 03:12 PM, Reuti wrote:
Am 21.03.2013 um 20:01 schrieb Gus Correa:
Dear Open MPI Pros
I am having trouble using mpiexec with --cpus-per-proc
on multiple nodes in OMPI 1.6.4.
I know there is an ongoing thread on similar runtime issues
of OMPI 1.7.
By no means am I trying to hijack
Am 21.03.2013 um 20:01 schrieb Gus Correa:
> Dear Open MPI Pros
>
> I am having trouble using mpiexec with --cpus-per-proc
> on multiple nodes in OMPI 1.6.4.
>
> I know there is an ongoing thread on similar runtime issues
> of OMPI 1.7.
> By no means am I trying to hijack T. Mishima's questions.
Dear Open MPI Pros
I am having trouble using mpiexec with --cpus-per-proc
on multiple nodes in OMPI 1.6.4.
I know there is an ongoing thread on similar runtime issues
of OMPI 1.7.
By no means am I trying to hijack T. Mishima's questions.
My question is genuine, though, and perhaps related to his
Weird. This particular code hasn't changed in a *long* time.
Do you have successful oSUSE 12.1 and Sol x86_64 builds on this platform?
On Jan 30, 2013, at 1:27 PM, Siegmar Gross
wrote:
> Hi
>
> today I tried to install openmpi-1.9a1r2797 on SunOS 10 Sparc,
> SunOS 10 x86_64, and Linux x86_6
Hi
today I tried to install openmpi-1.9a1r2797 on SunOS 10 Sparc,
SunOS 10 x86_64, and Linux x86_64 with Sun C 5.12. I succeeded
with all 64-bit systems and the 32-bit system on Solaris Sparc.
On Linux (openSUSE 12.1) and Solaris x86_64 I got the following
errors.
tyr openmpi-1.9 245 tail
openm
Aha - I'm able to replicate it, will fix.
On Jan 29, 2013, at 11:57 AM, Ralph Castain wrote:
> Using an svn checkout of the current 1.6 branch, it works fine for me:
>
> [rhc@odin ~/v1.6]$ cat rf
> rank 0=odin127 slot=0:0-1,1:0-1
> rank 1=odin128 slot=1
>
> [rhc@odin ~/v1.6]$ mpirun -n 2 -rf .
Using an svn checkout of the current 1.6 branch, it works fine for me:
[rhc@odin ~/v1.6]$ cat rf
rank 0=odin127 slot=0:0-1,1:0-1
rank 1=odin128 slot=1
[rhc@odin ~/v1.6]$ mpirun -n 2 -rf ./rf --report-bindings hostname
[odin127.cs.indiana.edu:12078] MCW rank 0 bound to socket 0[core 0-1] socket
1
Hi
today I have installed openmpi-1.6.4rc3r27923. Unfortunately I
still have a problem with rankfiles, if I start a process on a
remote machine.
tyr rankfiles 114 ssh linpc1 ompi_info | grep "Open MPI:"
Open MPI: 1.6.4rc3r27923
tyr rankfiles 115 cat rf_linpc1
rank 0=linpc1 slot
Found it! A trivial error (missing a break in a switch statement) that only
impacts things if multiple sockets are specified in the slot_list. CMR filed to
include the fix in 1.6.4
Thanks for your patience
Ralph
On Jan 24, 2013, at 7:50 PM, Ralph Castain wrote:
> I built the current 1.6 branc
I built the current 1.6 branch (which hasn't seen any changes that would impact
this function) and was able to execute it just fine on a single socket machine.
I then gave it your slot-list, which of course failed as I don't have two
active sockets (one is empty), but it appeared to parse the li
Hi
> > I used your test code to confirm it also fails on our trunk -
> > it looks like someone got the reference count wrong when
> > creating/destructing groups.
>
> No, the code is not MPI compliant.
>
> The culprit is line 254 in the test code where Siegmar manually
> copied the group_comm_wo
Ah - cool! Thanks!
On Jan 19, 2013, at 7:19 AM, George Bosilca wrote:
> On Jan 19, 2013, at 15:44 , Ralph Castain wrote:
>
>> I used your test code to confirm it also fails on our trunk - it looks like
>> someone got the reference count wrong when creating/destructing groups.
>
> No, the cod
On Jan 19, 2013, at 15:44 , Ralph Castain wrote:
> I used your test code to confirm it also fails on our trunk - it looks like
> someone got the reference count wrong when creating/destructing groups.
No, the code is not MPI compliant.
The culprit is line 254 in the test code where Siegmar man
I'll look into that next week.
Edgar
On 1/19/2013 8:44 AM, Ralph Castain wrote:
> I used your test code to confirm it also fails on our trunk - it looks like
> someone got the reference count wrong when creating/destructing groups.
>
> Afraid I'll have to defer to the authors of that code area..
I used your test code to confirm it also fails on our trunk - it looks like
someone got the reference count wrong when creating/destructing groups.
Afraid I'll have to defer to the authors of that code area...
On Jan 19, 2013, at 1:27 AM, Siegmar Gross
wrote:
> Hi
>
> I have installed openm
Hi
I have installed openmpi-1.6.4rc2 and have still a problem with my
rankfile.
linpc1 rankfiles 113 ompi_info | grep "Open MPI:"
Open MPI: 1.6.4rc2r27861
linpc1 rankfiles 114 cat rf_linpc1
rank 0=linpc1 slot=0:0-1,1:0-1
linpc1 rankfiles 115 mpiexec -report-bindings -np 1 \
-
Hi
I have installed openmpi-1.6.4rc2 and have the following problem.
tyr strided_vector 110 ompi_info | grep "Open MPI:"
Open MPI: 1.6.4rc2r27861
tyr strided_vector 111 mpicc -showme
gcc -I/usr/local/openmpi-1.6.4_64_gcc/include -fexceptions -pthread -m64
-L/usr/local/openmpi-1.6
>>> Were you able to make and run the Java examples in the
>>> MPI_ROOT/examples directory?
>>>
>>> I started with those after similar hiccups trying to get things up and
>>> running.
>>>
>>> Chuck Mosher
>>> JavaSeis.org
>>>
>>> From: Ralph Castain
>>> To: Open MPI Users
>>> Sent: Thursday, January 17, 2013 2:27 PM
>>> Subject: Re: [OMPI users] Problem with mpirun for java codes
>>>
>>> Just as an FYI: we have removed the Java bindings from the 1.7.0 releas
Just as an FYI: we have removed the Java bindings from the 1.7.0 release due to
all the reported errors - looks like that code just isn't ready yet for
release. It remains available on the nightly snapshots of the developer's trunk
while we continue to debug it.
With that said, I tried your exa
Hi,
The version that I am using is
1.7rc6 (pre-release)
Regards,
Karos
On 16 Jan 2013, at 21:07, Ralph Castain wrote:
> Which version of OMPI are you using?
>
>
> On Jan 16, 2013, at 11:43 AM, Karos Lotfifar wrote:
>
>> Hi,
>>
>> I am still struggling with the installation problems! I
Which version of OMPI are you using?
On Jan 16, 2013, at 11:43 AM, Karos Lotfifar wrote:
> Hi,
>
> I am still struggling with the installation problems! I get very strange
> errors. Everything is fine when I run Open MPI for C codes, but when I try to
> run a simple Java code I get very stran
Hi,
I am still struggling with the installation problems! I get very strange
errors. Everything is fine when I run Open MPI for C codes, but when I try
to run a simple Java code I get a very strange error. The code is as simple
as the following, and I cannot get it running:
import mpi.*;
class Java
Hi
I have a problem with groups and communicators in openmpi-1.9a1r27787
with Java. I want to multiply two matrices with any number of
processes. I build a new group if I start more than n processes,
and I use all processes if I start at most n processes.
My program contains the following code.
Hi
do you know when you will have time to solve the problem with a
rankfile? In the past you told me that my rankfile is correct.
linpc1 rankfiles 120 ompi_info | grep "Open MPI:"
Open MPI: 1.6.4a1r27766
linpc1 rankfiles 121 mpiex
What is even stranger is that the error occurs when attempting to launch a
daemon! Does your program do a series of comm_spawns?
Sent from my iPad
On Jan 10, 2013, at 7:28 AM, "Jeff Squyres (jsquyres)"
wrote:
> That's a weird one -- it looks like having too many open files on your system
> i
That's a weird one -- it looks like having too many open files on your system
is causing a cascading set of failures.
Are you saying that your program runs for a while and then on iteration 32, it
fails with errors like this? If so, I'd look for a file descriptor leak in
your program.
On J
On Jan 8, 2013, at 5:49 PM, Crni Gorac
wrote:
> Most MPI implementations (MPICH, Intel MPI) define MPI datatypes
> (MPI_INT, MPI_FLOAT, etc.) as constants; in Open MPI, these are effectively
> pointers to corresponding internal structures (for example, MPI_FLOAT is
> defined as a pointer to
Most MPI implementations (MPICH, Intel MPI) define MPI datatypes
(MPI_INT, MPI_FLOAT, etc.) as constants; in Open MPI, these are effectively
pointers to corresponding internal structures (for example, MPI_FLOAT is
defined as a pointer to the mpi_float structure, etc.). In trying to employ some
C++ te
Hello Siegmar,
thanks for your report! The build issue should be fixed in revision 27770, so
just give it a try.
With regards,
Matthias Jurenz
> From: Siegmar Gross
> Subject: [OMPI users] problem building openmpi-1.9a1r27751 on Solaris 10
> Date: January 6, 2013 11:54:26 PM PST
Hi,
today I tried to build openmpi-1.9a1r27751 on "Solaris 10 Sparc"
and "Solaris 10 x86_64" with "Sun C 5.12" and got the following
errors on both platforms.
...
Making all in vtlib
make[5]: Entering directory `.../ompi/contrib/vt/vt/vtlib'
CC vt_comp_phat.lo
CC vt_execwrap.lo
".
Hello open MPI users:
I was just running a program that usually works well on the cluster, and
suddenly, in the 32nd iteration, I get this strange set of errors. I would
appreciate it if someone could give me some hint about the problem and how
to solve it.
Thanks!
Mariana
/usr/bin/ssh: err
On Dec 13, 2012, at 2:39 AM, Siegmar Gross wrote:
> I found the error with your hint. For Open MPI 1.6.x I must also
> specify "F77" and "FFLAGS" for the Fortran 77 compiler. Otherwise
> it uses "gfortran" from the GNU package. "gfortran" worked for the
> 64 bit version and didn't work for the 32
Disturbing, but I don't know if/when someone will address it. The problem
really is that few, if any, of the developers have access to hetero systems. So
developing and testing hetero support is difficult to impossible.
I'll file a ticket about it and direct it to the attention of the person who
Hi,
some weeks ago I reported a problem with my matrix multiplication
program in a heterogeneous environment (little endian and big endian
machines). The problem occurs in openmpi-1.6.x, openmpi-1.7, and
openmpi-1.9. Now I implemented a small program which only scatters
the columns of an integer m
Hi,
> Can you send the config.log for the platform where it failed?
>
> I'd like to see the specific compiler error that occurred.
I found the error with your hint. For Open MPI 1.6.x I must also
specify "F77" and "FFLAGS" for the Fortran 77 compiler. Otherwise
it uses "gfortran" from the GNU pa
Can you send the config.log for the platform where it failed?
I'd like to see the specific compiler error that occurred.
On Dec 12, 2012, at 10:33 AM, Siegmar Gross wrote:
> Hi,
>
> I tried to build openmpi-1.6.4a1r27643 on several platforms
> (Solaris Sparc, Solaris x86_64, and Linux x86_64)
Hi,
I tried to build openmpi-1.6.4a1r27643 on several platforms
(Solaris Sparc, Solaris x86_64, and Linux x86_64) with Solaris
Studio C (Sun C 5.12) in 32 and 64 bit mode. "configure" broke
on Linux (openSuSE Linux 12.1) for the 32 bit version with the
following error:
...
checking if Fortran 77
That did the trick. Added it to ~/.bashrc and everything is flawless.
Thanks a million, Rafael
Try adding path to openmpi libraries to LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=/home/ras536/lib/openmpi/lib:$LD_LIBRARY_PATH
Regards, Pavel Mezentsev
2012/10/12 Rafael Antonio Soler-Crespo
> Hello everyone,
>
> I'm a new student at my university, and I need to install LAMMPS software
> to pe
Hello everyone,
I'm a new student at my university, and I need to install LAMMPS software to
perform some molecular dynamic simulations for my work. The cluster I am
working on has no root access for me (obviously) and I am installing everything
on my local account. I'm having some difficulty
I filed a bug fix for this one. However, something you should note.
If you fail to provide a "-np N" argument to mpiexec, we assume you want ALL
available slots filled. The rankfile will contain only those procs that you
want specifically bound. The remaining procs will be unbound.
So with
I saw your earlier note about this too. Just a little busy right now, but hope
to look at it soon.
Your rankfile looks fine, so undoubtedly a bug has crept into this rarely-used
code path.
On Oct 3, 2012, at 3:03 AM, Siegmar Gross
wrote:
> Hi,
>
> I want to test process bindings with a ran
Hi,
I want to test process bindings with a rankfile in openmpi-1.6.2. Both
machines are dual-processor dual-core machines running Solaris 10 x86_64.
tyr fd1026 138 cat host_sunpc0_1
sunpc0 slots=4
sunpc1 slots=4
tyr fd1026 139 cat rankfile
rank 0=sunpc0 slot=0:0-1,1:0-1
rank 1=sunpc1 slot=0:0-
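For readers puzzling over the slot syntax in these rankfiles: each entry is socket:core-range, and commas separate multiple allowed locations. An annotated sketch (hostnames taken from this thread; the two-socket, dual-core node layout is an assumption):

```text
# rankfile: one line per MPI rank
rank 0=sunpc0 slot=0:0-1,1:0-1   # rank 0 may use cores 0-1 on socket 0 and cores 0-1 on socket 1
rank 1=sunpc1 slot=1             # rank 1 bound to logical slot 1 on sunpc1
```

Launched with something like: mpiexec -np 2 -rf rankfile -report-bindings ./app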
Hi,
I installed openmpi-1.6.2 on our heterogeneous platform (Solaris 10
Sparc, Solaris 10 x86_84, and Linux x86_64).
tyr small_prog 125 mpiexec -report-bindings -np 4 -host sunpc0,sunpc1 \
-bysocket -bind-to-core date
Mon Oct 1 07:53:15 CEST 2012
[sunpc0:02084] MCW rank 0 bound to socket 0[co
Hi,
> Does the behavior only occur with Java applications, as your subject
> implies? I thought this was a more general behavior based on prior notes?
It is a general problem as you can see in the older email below. I
didn't change the header because I detected this behaviour when I
tried out mpi
Does the behavior only occur with Java applications, as your subject
implies? I thought this was a more general behavior based on prior notes?
As I said back then, I have no earthly idea why your local machine is being
ignored, and I cannot replicate that behavior on any system available to me.
W
Hi,
yesterday I have installed openmpi-1.9a1r27362 and I still have a
problem with "-host". My local machine will not be used, if I try
to start processes on three hosts.
tyr:Solaris 10, Sparc
sunpc4: Solaris 10 , x86_64
linpc4: openSUSE-Linux 12.1, x86_64
tyr mpi_classfiles 175 javac Hello
The mpi4py web site appears to be down right now, so I can't check, but don't
you need to call MPI_Finalize somehow?
Maybe you need to explicitly close the MPI module (which then implicitly calls
MPI_Finalize)? I'm afraid I don't know much about mpi4py, so I can't offer
specific advice.
Tha
Well, not sure what I can advise. Check to ensure that your LD_LIBRARY_PATH
is pointing to the same installation where your mpirun is located. For
whatever reason, the processes think they are singletons - i.e., that they
were not actually started by mpirun.
You might also want to ask the mpi4py f
Yes, I am sure. I read it in an mpi4py guide, and I already checked the
examples; in fact, this is an example extracted from a guide! Moreover, if I
use this example with MPICH2 it runs very nicely, even though for the other
code I need Open MPI working.
Mariana
On Sep 25, 2012, at 8:00 PM, Ralph Castain
I don't think that is true, but I suggest you check the mpi4py examples. I
believe all import does is import function definitions - it doesn't execute
anything.
Sent from my iPad
On Sep 25, 2012, at 2:41 PM, mariana Vargas wrote:
> MPI_Init() is actually called when you import the MPI module from the mpi4py
MPI_Init() is actually called when you import the MPI module from the mpi4py package...
On Sep 25, 2012, at 5:17 PM, Ralph Castain wrote:
You forgot to call MPI_Init at the beginning of your program.
On Sep 25, 2012, at 2:08 PM, Mariana Vargas Magana wrote:
Hi
I think I am not understanding what you sa
You forgot to call MPI_Init at the beginning of your program.
On Sep 25, 2012, at 2:08 PM, Mariana Vargas Magana
wrote:
> Hi
> I think I'am not understanding what you said , here is the hello.py and next
> the command mpirun…
>
> Thanks!
>
> #!/usr/bin/env python
> """
> Parallel Hello World
Hi
I think I am not understanding what you said. Here is the hello.py, and next
the mpirun command…
Thanks!
#!/usr/bin/env python
"""
Parallel Hello World
"""
from mpi4py import MPI
import sys
size = MPI.COMM_WORLD.Get_size()
rank = MPI.COMM_WORLD.Get_rank()
name = MPI.Get_processor_name()
sy
The usual reason for this is that you aren't launching these processes
correctly. How are you starting your job? Are you using mpirun?
On Sep 25, 2012, at 1:43 PM, mariana Vargas wrote:
> Hi
>
> In fact, I found the origin of this problem, and it is because all
> processes have rank 0,
Hi
In fact, I found the origin of this problem: all processes have rank 0.
I tested it, and indeed even the classical hello.py gives the same
result. How can I solve this? Do I reinstall everything again?
Help, please...
Mariana
On Sep 24, 2012, at 9:13 P
On Sep 25, 2012, at 5:59 PM, Siegmar Gross wrote:
> I have had "--enable-orterun-prefix-by-default" in my configure
> command. I removed it and rebuilt the package and now the environment
> is OK. Tomorrow I will run some tests and also try to get the
> information about the topology for our M400
Hi,
the environment is OK now (see below). Thank you very much for your
help.
> >>> I tried mpiJava on a 32-bit installation of openmpi-1.9a1r27361.
> >>> Why doesn't "mpiexec" start a process on my local machine (it
> >>> is not a matter of Java, because I have the same behaviour when
> >>> I us
On Sep 25, 2012, at 6:45 AM, Siegmar Gross
wrote:
> Hi,
>
>>> I tried mpiJava on a 32-bit installation of openmpi-1.9a1r27361.
>>> Why doesn't "mpiexec" start a process on my local machine (it
>>> is not a matter of Java, because I have the same behaviour when
>>> I use "hostname")?
>>>
>>> t
Hi,
> > I tried mpiJava on a 32-bit installation of openmpi-1.9a1r27361.
> > Why doesn't "mpiexec" start a process on my local machine (it
> > is not a matter of Java, because I have the same behaviour when
> > I use "hostname")?
> >
> > tyr java 133 mpiexec -np 3 -host tyr,sunpc4,sunpc1 \
> > j
On Sep 24, 2012, at 6:13 PM, Mariana Vargas Magana
wrote:
>
>
> Yes, you are right, that is what it says, but in fact the weird thing is
> that the error message does not appear every time… I send to 20 nodes and
> only one gives this message. Is this normal…
Yes - that is precisely the behavior y
Yes, you are right, that is what it says, but in fact the weird thing is that
the error message does not appear every time… I send to 20 nodes and only one
gives this message. Is this normal…
On Sep 24, 2012, at 8:00 PM, Ralph Castain wrote:
> Well, as it says, your processes called MPI_Init, but
Well, as it says, your processes called MPI_Init, but at least one of them
exited without calling MPI_Finalize. That violates the MPI rules and we
therefore terminate the remaining processes.
Check your code and see how/why you are doing that - you probably have a code
path whereby a process ex
Hi all
I get this error when I run a parallelized Python code on a cluster.
Could anyone give me an idea of what is happening? I am new to this.
Thanks...
mpirun has exited due to process rank 2 with PID 10259 on
node f01 exiting improperly. There are two reasons this could occur:
1. this
On Sep 24, 2012, at 4:35 AM, Siegmar Gross
wrote:
> Hi,
>
> I tried mpiJava on a 32-bit installation of openmpi-1.9a1r27361.
> Why doesn't "mpiexec" start a process on my local machine (it
> is not a matter of Java, because I have the same behaviour when
> I use "hostname")?
>
> tyr java 133
Hi,
I tried mpiJava on a 32-bit installation of openmpi-1.9a1r27361.
Why doesn't "mpiexec" start a process on my local machine (it
is not a matter of Java, because I have the same behaviour when
I use "hostname")?
tyr java 133 mpiexec -np 3 -host tyr,sunpc4,sunpc1 \
java -cp $HOME/mpi_classfile
Looks to me like we had a bug in the configure code - when we set the path for
the javac/h tests, we put your specified jdk-bindir at the *end* instead of at
the beginning. So if you had javac in your path, we picked it up instead of the
one you specified.
Should now be fixed in r27360.
Thanks
Hi,
I just installed openmpi-1.9a1r27359 in 64-bit mode. I used the following
command to configure the package. Unfortunately it doesn't work as expected,
because it still uses the 32-bit javac from /usr/local/jdk1.7.0_07/bin
so that I get an error when I try to run a Java program.
../openmpi-1.9
Looks like a CMR was missing a couple of changesets - this should be fixed now.
Thanks!
On Sep 14, 2012, at 5:32 AM, Siegmar Gross
wrote:
> Hi,
>
> I just installed openmpi-1.7a1r27338 without errors in my log-files.
>
> tyr small_prog 115 mpicc -showme
> cc -I/usr/local/openmpi-1.7_64_cc/in
Hi,
I just installed openmpi-1.7a1r27338 without errors in my log-files.
tyr small_prog 115 mpicc -showme
cc -I/usr/local/openmpi-1.7_64_cc/include -mt -m64
-L/usr/local/openmpi-1.7_64_cc/lib64 -lmpi -lpicl -lm -lkstat -llgrp
-lsocket -lnsl -lrt -lm
"mpiexec" works without options.
tyr small
We actually include hwloc v1.3.2 in the OMPI v1.6 series.
Can you download and try that on your machines?
http://www.open-mpi.org/software/hwloc/v1.3/
In particular try the hwloc-bind executable (outside of OMPI), and see if
binding works properly on your machines. I typically run a te
Hmmm...well, let's try to isolate this a little. Would you mind installing a
copy of the current trunk on this machine and trying it?
I ask because I'd like to better understand if the problem is in the actual
binding mechanism (i.e., hwloc), or in the code that computes where to bind the
proce
Hi,
> > are the following outputs helpful to find the error with
> > a rankfile on Solaris?
>
> If you can't bind on the new Solaris machine, then the rankfile
> won't do you any good. It looks like we are getting the incorrect
> number of cores on that machine - is it possible that it has
> hard
On Sep 7, 2012, at 5:41 AM, Siegmar Gross
wrote:
> Hi,
>
> are the following outputs helpful to find the error with
> a rankfile on Solaris?
If you can't bind on the new Solaris machine, then the rankfile won't do you
any good. It looks like we are getting the incorrect number of cores on th
Hi,
are the following outputs helpful to find the error with
a rankfile on Solaris? I wrapped long lines so that they
are easier to read. Have you had time to look at the
segmentation fault with a rankfile which I reported in my
last email (see below)?
"tyr" is a two processor single core machine
I couldn't really say for certain - I don't see anything obviously wrong with
your syntax, and the code appears to be working or else it would fail on the
other nodes as well. The fact that it fails solely on that machine seems
suspect.
Set aside the rankfile for the moment and try to just bind
Hi,
I'm new to rankfiles, so I played a little bit with different
options. I thought that the following entry would be similar to an
entry in an appfile and that MPI could place the process with rank 0
on any core of any processor.
rank 0=tyr.informatik.hs-fulda.de
Unfortunately it's not all
Hi,
> Are *all* the machines Sparc? Or just the 3rd one (rs0)?
Yes, both machines are Sparc. I tried first in a homogeneous
environment.
tyr fd1026 106 psrinfo -v
Status of virtual processor 0 as of: 09/04/2012 07:32:14
on-line since 08/31/2012 15:44:42.
The sparcv9 processor operates at 160