Hi Josh!
First of all, thanks a lot for replying. :-)
When executing this checkpoint command, the running application
aborts immediately, even though I did not specify the "--term" option:
--
mpirun noticed that process ran
Hi,
> You must close the file using
> MPI_File_close(MPI_File *fh)
> before calling MPI_Finalize.
Newbie question... newbie problem! Hahaha... Thanks!!!
> By the way, I think you shouldn't do
> strcat(argv[1], ".bz2");
> This would overwrite any following arguments.
I know... I was just t
OK, I looked at the errors closely; it looks like the problem comes from the
"namespace MPI{.." on line 136 of "mpicxx.h" and everywhere this
namespace (MPI) is used. Here are the errors:
In file incl
OK, but the problem is that I have another MPI from Scali, and when I
put "mpicc" and "mpic++" in my makefile, it goes and uses the Scali MPI's
compilers, which have exactly the same names ("mpicc" and "mpic++")... So it did not
give me any error, but I felt that it used the Scali stuff and not
You shouldn't need to add any -I's or -L's or -l's for Open MPI. Just
use mpic++ and mpicc (per my first note, notice that "mpicc" (lower
case) is the C compiler -- mpiCC is a synonym for the C++ compiler --
this could be your problem). Those wrappers add all the compiler /
linker flags t
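As an illustration (the source file names here are hypothetical), the
wrappers are invoked exactly like the underlying compilers, and the
--showme option prints the flags a wrapper would add:

    mpicc -o hello_c hello.c
    mpic++ -o hello_cxx hello.cpp
    mpicc --showme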
Open MPI is installed in the following path: /opt/openmpi/1.2.7, so should I
replace what you told me about /usr/lib with /opt/openmpi/1.2.7?
--- On Wed, 9/17/08, Jeff Squyres wrote:
From: Jeff Squyres
Subject: Re: [OMPI users] errors returned from openmpi-1.2.7 source code
To: "Open MPI Us
The patch is in 1.2.6 and beyond.
It's not really a serialization issue -- it's an "early completion"
optimization, meaning that as soon as the underlying network stack
indicates that the buffer has been copied, OMPI marks the request as
complete and returns. But the data may not actually
Wow. I am indeed on IB.
So a program that calls an MPI_Bcast, then does a bunch of setup work that
should be done in parallel before re-synchronizing, in fact serializes the
setup work? I see it's not quite that bad: if I run my little program on 5
nodes, I get 0 immediately, 1,2 and 4 after 5
I don't quite understand the format of this file, but at first glance,
you shouldn't need the following lines:
export LIBMPI = -lmpi
export MPIDIR=/nfs/sjafer/phd/openMPI/installed
export LDFLAGS +=-L$(MPIDIR)/lib
export INCLUDES_CPP += -I$(MPIDIR)/include
It also doesn't seem like the last 2
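Since the wrappers already pass the right -I/-L/-l flags themselves, a
minimal fragment along these lines should be all the makefile needs (a
sketch, reusing the variable style quoted above):

    export CC = mpicc
    export CXX = mpic++
    # no LIBMPI, LDFLAGS or INCLUDES_CPP additions needed for Open MPI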
I guess this must depend on what BTL you're using. If I run all
processes on the same node, I get the behavior you expect. So, are you
running processes on the same node, or different nodes and, if
different, via TCP or IB?
Gregory D Abram wrote:
I have a little program which initializes,
Additionally, since you technically have a heterogeneous situation
(different OS versions on each node), you might want to:
- compile and install OMPI separately on each node (preferably in the
same filesystem location, though)
- compile and install your MPI app separately on each node (prefe
On Sep 17, 2008, at 9:49 AM, Paul Kapinos wrote:
If we add an "-x OPAL_PREFIX" flag, and thereby force explicit
forwarding of this environment variable, the error does not occur.
So we believe that this variable needs to be exported across *all*
systems in the cluster.
It seems, the v
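For example (the host names and program name below are placeholders),
explicitly forwarding the variables looks like:

    mpirun -x OPAL_PREFIX -x PATH -x LD_LIBRARY_PATH \
        -np 2 --host node1,node2 ./a.out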
Are you using IB, perchance?
We have an "early completion" optimization in the 1.2 series that can
cause this kind of behavior. For apps that dip into the MPI layer
frequently, it doesn't matter. But for those that do not dip into the
MPI layer frequently, it can cause delays like this.
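One generic way to "dip into the MPI layer" during a long compute phase
is to make an occasional cheap MPI call. The sketch below is an
assumption on my part, not a fix prescribed in this thread, and
do_some_work/NCHUNKS are made-up names:

    #include <mpi.h>

    #define NCHUNKS 100

    /* stand-in for a long local computation */
    static void do_some_work(int chunk) { (void)chunk; }

    static void compute_with_progress(void)
    {
        int flag;
        for (int chunk = 0; chunk < NCHUNKS; chunk++) {
            do_some_work(chunk);
            /* a cheap call that lets the MPI progress engine run */
            MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
                       &flag, MPI_STATUS_IGNORE);
        }
    }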
Date: Wed, 17 Sep 2008 16:23:59 +0200
From: "Sofia Aparicio Secanellas"
Subject: Re: [OMPI users] Problem with MPI_Send and MPI_Recv
To: "Open MPI Users"
Hello Terry,
I was trying to do the debug. I was setting all the debugging parameters for
the MPI layer. Only with one parameter do I obtain something different. I enclose
the result of the following command:
mpirun --mca mpi_show_mca_params 1 -np 2 --host
10.4.5.123,edu@10.4.5.126 --prefix /usr/
Hi Rolf,
Rolf vandeVaart wrote:
I don't know -- this sounds like an issue with the Sun CT 8 build
process. It could also be a by-product of using the combined 32/64
feature...? I haven't used that in forever and I don't remember the
restrictions. Terry/Rolf -- can you comment?
I will wr
I have a little program which initializes, calls MPI_Bcast, prints a
message, waits five seconds, and finalizes. I sure thought that each
participating process would print the message immediately, then all would
wait and exit - that's what happens with mvapich 1.0.0. On OpenMPI 1.2.5,
though, I
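The exact source isn't shown in the thread, but a minimal program
matching this description would look like:

    #include <mpi.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("rank %d: past the broadcast\n", rank); /* expected immediately */
        fflush(stdout);
        sleep(5);                                      /* five-second wait */
        MPI_Finalize();
        return 0;
    }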
Hello Terry,
Thank you very much for your help.
Paul Kapinos wrote:
Hi Jeff again!
(update) it works with "truly" Open MPI, but it does *not* work with Sun
Cluster Tools 8.0 (which is also an Open MPI). So it seems to be a Sun
problem and not a general problem of Open MPI. Sorry for wrongly
attributing the problem.
Ah, gotcha. I guess my Sun colleagues on this list will
Thank you very much. That was it. I didn't know that a firewall was
running by default on Yellow Dog Linux installations, since nothing
was asked about this during the installation.
You really saved my day, George.
Regards,
Chris
On Wed, Sep 17, 2008 at 2:24 PM, George Bosilc
It looks like the configure script is picking up the wrong lib
directory (/home/osa/blcr/lib64 instead of /home/osa/blcr/lib):
gcc -o conftest -O3 -DNDEBUG -finline-functions -fno-strict-
aliasing -pthread \
-I/home/osa/blcr/include -L/home/osa/blcr/lib64 \
conftest.c -lcr -lnsl -l
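If the configure script in this version supports a separate libdir
option for BLCR (check ./configure --help), pointing it explicitly at
the 32-bit tree may work around this; the paths are the ones from this
build, the remaining options are elided:

    ./configure --with-blcr=/home/osa/blcr \
        --with-blcr-libdir=/home/osa/blcr/lib ...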
On Sep 17, 2008, at 5:49 AM, Paul Kapinos wrote:
But setting the environment variable OPAL_PREFIX to an
appropriate value (assuming PATH and LD_LIBRARY_PATH are set
too) is not enough to let Open MPI rock & roll from the new
location.
Hmm. It should be.
(update) it works with
Christophe,
Looks like a firewall problem. Please check the mailing list archives
for the proper fix.
Thanks,
george.
On Sep 17, 2008, at 6:53 AM, Christophe Spaggiari wrote:
Hi,
I am new to MPI and am trying to get my Open MPI environment up and
running. I have two machines, Alpha and B
Here is the zipped config.log file.
2008/9/17 Josh Hursey
> Can you send me a zip'ed up version of the config.log from your build? That
> will help in determining what went wrong with configure.
>
> Cheers,
> Josh
>
>
> On Sep 17, 2008, at 6:09 AM, Santolo Felaco wrote:
>
> Hi, I want to install
Can you send me a zip'ed up version of the config.log from your
build? That will help in determining what went wrong with configure.
Cheers,
Josh
On Sep 17, 2008, at 6:09 AM, Santolo Felaco wrote:
Hi, I want to install openmpi-1.3. I have invoked ./configure
--with-ft=cr --enable-ft-thread
On Sep 16, 2008, at 11:18 PM, Matthias Hovestadt wrote:
Hi!
Since I am interested in fault tolerance, checkpointing and
restart of OMPI is an interesting feature for me. So I installed
BLCR 0.7.3 as well as OMPI from SVN (rev. 19553). For OMPI
I followed the instructions in the "Fault Tolerance
Sofia,
I took your program and actually ran it successfully on my systems using
Open MPI r19400. A couple questions:
1. Have you tried to run the program on a single node?
mpirun -np 2 --host 10.4.5.123 --prefix /usr/local
./PruebaSumaParalela.out
2. Can you try and run the code the
Hi,
I am new to MPI and am trying to get my Open MPI environment up and running. I
have two machines, Alpha and Beta, on which I have successfully installed
Open MPI in /usr/local/openmpi. I have set up ssh so that I do not have to
enter a password manually (using RSA keys), and I have modified the .rc file
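For reference, a typical passwordless-ssh setup between two such
machines looks like this (assuming OpenSSH; "user" is a placeholder):

    ssh-keygen -t rsa      # on Alpha; accept the defaults
    ssh-copy-id user@Beta  # appends the public key to Beta's authorized_keys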
Hi, I want to install openmpi-1.3. I have invoked ./configure --with-ft=cr
--enable-ft-thread --enable-mpi-threads --with-blcr=/home/osa/blcr/
--enable-progress-threads
This is the error message shown:
BLCR support requested but not found. Perhaps you need to specify the
location of the BLCR libr
Thanks
2008/9/17 Matthias Hovestadt
> Hi!
>
>
> Hi, I have installed openmpi-1.2.7 with following instructions:
>> ./configure --with-ft=cr --enable-ft-enable-thread --enable-mpi-thread
>> --with-blcr=$HOME/blcr --prefix=$HOME/openmpi
>> make all install
>> In directory bin of directory $HOME/o
Hi!
Hi, I have installed openmpi-1.2.7 with following instructions:
./configure --with-ft=cr --enable-ft-enable-thread --enable-mpi-thread
--with-blcr=$HOME/blcr --prefix=$HOME/openmpi
make all install
In the bin directory under $HOME/openmpi there are no ompi-checkpoint and
ompi-restart.
A
Hi, I have installed openmpi-1.2.7 with following instructions:
./configure --with-ft=cr --enable-ft-enable-thread --enable-mpi-thread
--with-blcr=$HOME/blcr --prefix=$HOME/openmpi
make all install
In the bin directory under $HOME/openmpi there are no ompi-checkpoint and
ompi-restart.
Help me, p
Hello Gus,
Thank you very much for your answer, but I do not think that this is the
problem. I have changed everything in a C program and I obtain the same
result.
Does anyone have any idea about the problem?
Sofia
- Original Message -
From: "Gus Correa"
To: "Open MPI Users"
Sent:
Hi
You must close the file using
MPI_File_close(MPI_File *fh)
before calling MPI_Finalize.
By the way, I think you shouldn't do
strcat(argv[1], ".bz2");
This would overwrite any following arguments.
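A minimal sketch combining both points (the file handling shown is
illustrative, not the actual code from the original mail):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char name[4096];
        MPI_File fh;
        MPI_Init(&argc, &argv);
        if (argc < 2) { MPI_Finalize(); return 1; }
        /* build the name in a private buffer instead of strcat'ing
           onto argv[1], which could clobber the following arguments */
        snprintf(name, sizeof(name), "%s.bz2", argv[1]);
        MPI_File_open(MPI_COMM_WORLD, name,
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        /* ... collective I/O on fh ... */
        MPI_File_close(&fh);  /* must come before MPI_Finalize */
        MPI_Finalize();
        return 0;
    }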
Jody
On Wed, Sep 17, 2008 at 5:13 AM, Davi Vercillo C. Garcia (デビッド)
wrote:
> Hi,
>
> I'm sta