Hi Luis,
Luis Vitorio Cargnini wrote:
Your suggestion is a great and interesting idea. My only fear is that I
would get used to Boost and then never be able to get rid of it, because
one thing is certain: the abstraction Boost adds is impressive; it turn
I should add that I fully understand
Hi Luis,
Luis Vitorio Cargnini wrote:
Thanks, but I really do not want to use Boost.
Is it easier? Certainly. But I want to do it using only MPI itself,
without depending on a library, or on templates like most of
Boost: a huge set of templates and wrappers for different libraries
Hi Ashika,
Ashika Umanga Umagiliya wrote:
In my MPI environment I have 3 Debian machines, all with Open MPI set up in
/usr/local/openMPI,
with PATH and LD_LIBRARY_PATH configured correctly.
I have also configured passwordless SSH login on each node.
But when I execute my application, it gives the follow
as no firewall to begin with and I didn't know about it. Alas,
it's the latter... I'd better look into it, as I was basically oblivious to the
lack of a firewall...
Ray
Bogdan Costescu wrote:
On Wed, 18 Mar 2009, Raymond Wan wrote:
Perhaps it has something to do with RH's defaults for th
Hi Ron,
Ron Babich wrote:
Thanks for your response. I had noticed your thread, which is why I'm
embarrassed (but happy) to say that it looks like my problem was the
same as yours. I mentioned in my original email that there was no
firewall running, which it turns out was a lie. I think th
Hi Ron,
Ron Babich wrote:
Hi Everyone,
I'm having a very basic problem getting an MPI job to run on multiple
nodes. My setup consists of two identically configured nodes, called
node01 and node02, connected via Ethernet and InfiniBand. They are
running CentOS 5.2 and the bundled OMPI, ver
Hi Prentice/Jeff,
Prentice Bisbal wrote:
In an earlier e-mail in this thread, I theorized that this might be a
problem with your name service. This latest information seems to support
that theory.
Thank you very much for the suggestions and help! After discussing with our system administra
pen-mpi.org/community/help/
(including the network information)
Thanks!
On Mar 13, 2009, at 9:12 PM, Raymond Wan wrote:
Hi Jeff,
Jeff Squyres wrote:
> On Mar 13, 2009, at 6:17 AM, Raymond Wan wrote:
>
>> What doesn't work is:
>>
>> [On Y] mpirun --host Y,Z --np
Hi Ben,
ben rodriguez wrote:
I have compiled ompi and another program for use on another rhel5/x86_64
machine. After transferring the binaries and setting up environment variables, is
there anything else I need to do for ompi to run properly? When executing my
prog I get:
Hi Jeff,
Jeff Squyres wrote:
On Mar 13, 2009, at 6:17 AM, Raymond Wan wrote:
What doesn't work is:
[On Y] mpirun --host Y,Z --np 2 uname -a
[On Y] mpirun --host X,Y,Z --np 3 uname -a
...and similarly for machine Z. I can confirm that from any of the 3
Do you see "rsh"
Hi all,
I'm having a problem running mpirun and I was wondering if there are
suggestions on how to find out the cause. I have 3 machines that I can use:
X, Y, and Z. The important thing is that X is different from Y and Z (the
software installed, version of Linux, etc.). Y and Z are identic
Hi Ralph,
Ralph Castain wrote:
...
The man page will describe all the various options. Which one is best
for your app really depends on what the app is doing, the capabilities
and topology of your cluster, etc. A little experimentation can help you
get a feel for when to use which one.
Th
Hi Ralph,
Thank you very much for your explanation!
Ralph Castain wrote:
It is a little bit of both:
* historical, because most MPI's default to mapping by slot, and
* performance, because procs that share a node can communicate via
shared memory, which is faster than sending messages over
Hi all,
According to FAQ 14 (How do I control how my processes are scheduled across nodes?) [http://www.open-mpi.org/faq/?category=running#mpirun-scheduling], it says that the default scheduling policy is by slot and not by node. I'm curious why the default is "by slot" since I am thinking of e
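The by-slot/by-node difference the FAQ describes can be illustrated without MPI at all. The sketch below is plain Python (not Open MPI's actual mapper) showing the placement each policy produces, assuming a hypothetical cluster of node01/node02 with 2 slots each; the hostnames are borrowed from an earlier thread on this list, and `--byslot`/`--bynode` are the corresponding mpirun options.

```python
# Illustration of mpirun's two scheduling policies (not Open MPI's code).
# "By slot" (the default) fills every slot on a node before moving on;
# "by node" (mpirun --bynode) round-robins one rank per node.

def map_by_slot(nodes, slots_per_node, nprocs):
    """Fill all slots on a node before moving to the next (default policy)."""
    placement = []
    for node in nodes:
        for _ in range(slots_per_node):
            if len(placement) == nprocs:
                return placement
            placement.append(node)
    return placement

def map_by_node(nodes, slots_per_node, nprocs):
    """Round-robin ranks across nodes (mpirun --bynode)."""
    placement = []
    i = 0
    while len(placement) < nprocs:
        placement.append(nodes[i % len(nodes)])
        i += 1
    return placement

print(map_by_slot(["node01", "node02"], 2, 4))  # ranks 0,1 on node01; 2,3 on node02
print(map_by_node(["node01", "node02"], 2, 4))  # ranks alternate node01/node02
```

By slot keeps neighbouring ranks on the same node (cheap shared-memory communication); by node spreads them out (more memory bandwidth and NICs per rank), which is the trade-off behind the default.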
Hi Ramya,
Ramya Narasimhan wrote:
Hi,
I have installed openmpi-1.3. When I checked for the example programs,
the output shows only rank 0 of size 1 for 2 processors. When I gave the
command: *mpirun -hostfile node -np 2 hello_c*
the output is
Hello, world, I am 0 of 1
Hello, world, I am 0
Hi Amos,
Amos Leffler wrote:
I want to compile Open MPI using the Intel compilers.
Unfortunately, the Series 10 C compiler (icc) license has expired. I
downloaded and looked at the Series 11 C++ compiler (no C compiler listed)
and would like to know if you can use this together with a
Hi Chong,
Chong Su wrote:
Now MPI can be used normally, but we need to let non-root users run the
MPI program too. How can we do that?
What type of operating system?
Generally, anyone can run mpirun and mpicc/mpic++/etc. Are you unable to
do that? What kind of error message are you g
Hi Heitor,
Heitor Florido wrote:
I have installed Open MPI on both computers and my application works on
both of them, but when I try to communicate between them, the method
MPI_Lookup_name can't resolve the name published by the other machine.
I've tried to run the example from mpi-forum t
Hi Heitor,
Heitor Florido wrote:
Hello,
I have built an application using Open MPI 1.2.8; it is a client/server
application that uses MPI_Publish_name and MPI_Lookup_name to start the
communication.
This application works fine on a single computer.
However, I'd like to run it on 2 PCs using li
-- I just don't think
that user/system time tell you what most people think they're telling
you in a parallel+MPI context.
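The point about user/system time can be demonstrated without MPI. In the sketch below, `time.sleep` stands in for a blocking receive that yields the CPU: wall-clock time advances but almost no user/system CPU time is charged, so the CPU totals say little about how long the job really ran. (Caveat: some MPI implementations busy-poll while waiting, which skews the numbers the opposite way.)

```python
import time

# A "rank" that spends most of its wall-clock time blocked.
# time.sleep stands in for a blocking MPI receive that yields the CPU.
cpu_start = time.process_time()    # user + system CPU time of this process
wall_start = time.perf_counter()   # wall-clock time

time.sleep(0.5)                    # "blocked in MPI_Recv"

cpu_used = time.process_time() - cpu_start
wall_used = time.perf_counter() - wall_start

# Wall clock advanced ~0.5 s, but almost no CPU time was consumed.
print(f"wall: {wall_used:.2f}s, cpu: {cpu_used:.2f}s")
```

This is why the advice in the thread is to report elapsed (real) time, or use MPI_Wtime inside the program, rather than /usr/bin/time's user/system columns.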
On Nov 14, 2008, at 4:32 AM, Raymond Wan wrote:
Hi Fabian,
Thank you for clarifying things and confirming some of the things
that I thought. I guess I
Hi Reuti,
I have to admit that I'm not so familiar with SGE, but I'll take a look at
it so that I'll learn something. In my current situation, I don't
/need/ to report a user time. I was just wondering if it has any
meaning, and what people mean when they show numbers or a graph and just
say "
Hi Fabian,
Thank you for clarifying things and confirming some of the things that I
thought. I guess I have a clearer understanding now.
Fabian Hänsel wrote:
Hmm, I guess user time does not matter, since it is real time that
we are interested in reducing.
Right. Even if we *could*
Hi Fabian,
Fabian Hänsel wrote:
On a separate topic, but related to your post here, how did you do
the timing? [Especially to so many digits of accuracy. :-) ]
two things to consider:
i) What do I actually (want to) measure?
ii) How accurate can I do that?
i)
Option iA) execution ti
Hi Fabian,
On a separate topic, but related to your post here, how did you do the
timing? [Especially to so many digits of accuracy. :-) ]
I will have to time my program and I don't think /usr/bin/time would do
it. Are the numbers it reports accurate [for an MPI program]? I think
the "us
Hi Jeff,
Jeff Squyres wrote:
On Nov 10, 2008, at 6:41 AM, Jed Brown wrote:
With #define's and compiler flags, I think that can be easily done --
was wondering if this is something that developers using MPI do and
whether AC/AM supports it.
AC will allow you to #define whatever you want --
Jed Brown wrote:
On Mon 2008-11-10 12:35, Raymond Wan wrote:
With #define's and compiler flags, I think that can be easily done --
was wondering if this is something that developers using MPI do and
whether AC/AM supports it.
The normal way to do this is by building against a s
Dear Erin,
I'm nowhere near a guru, so I hope you don't mind what I have to say (it
might be wrong...).
But what I did was just put a long loop into the program and while it
was running, I opened another window and looked at the output of "top".
Obviously, without the loop, the program would te
(MPICC)
AC_SUBST(MPICXX)
fi
AM_CONDITIONAL([WE_HAVE_MPI],[test "x$with_mpi" = "xyes"])
(...)
Makefile.am:
(...)
# MPI headers/libraries:
INCLUDES+=$(MPI_CXXFLAGS)
OTHERLIBS+=$(MPI_CXXLIBS)
etc
I would start by improving the mentioned macro with specific support for each
M
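The Makefile.am fragment above elides its configure.ac side. A minimal sketch of what that side might look like is below; the `--with-mpi` flag and variable names are illustrative (only `WE_HAVE_MPI` and `MPICXX` are taken from the quoted fragment), not Open MPI's own macros.

```m4
# configure.ac sketch (illustrative; adapt names to your project):
AC_ARG_WITH([mpi],
  [AS_HELP_STRING([--with-mpi], [build the MPI-enabled binaries])])
AS_IF([test "x$with_mpi" = "xyes"], [
  AC_CHECK_PROG([MPICXX], [mpic++], [mpic++])
  AS_IF([test "x$MPICXX" = "x"],
        [AC_MSG_ERROR([--with-mpi given but no mpic++ wrapper found])])
  AC_SUBST([MPICXX])
])
AM_CONDITIONAL([WE_HAVE_MPI], [test "x$with_mpi" = "xyes"])
```

Relying on the wrapper compiler (mpic++) keeps the check simple, but as noted later in this thread, not every MPI implementation ships wrapper compilers, so a portable check may need per-implementation fallbacks.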
e).
It gets tricky, though, because not all MPI implementations have
wrapper compilers -- so it's up to you to decide how portable you want
to be. The open source MPI's both have wrapper compilers by the same
names (mpicc et al.), but some of the vendor/MPP platform-specific
MPI
Hi all,
I'm not sure if this is relevant to this mailing list, but I'm trying to
get autoconf/automake working with an Open MPI program I am writing (in
C++). Unfortunately, I don't know where to begin. I'm new to both
tools, but have them working well enough for a non-MPI program. When I
Hi all,
Dirk Eddelbuettel wrote:
On 18 October 2008 at 03:30, Terry Frankcombe wrote:
|
| But again, this is a discussion for the Debian list.
In particular, for the 'package Open MPI maintainers' list at
http://lists.alioth.debian.org/mailman/listinfo/pkg-openmpi-maintainers
so
Hi all,
I'm very new to MPI and am trying to install it onto a Debian Etch
system. I did have MPICH installed and I believe that was causing me
problems. I completely uninstalled it and then ran:
update-alternatives --remove-all mpicc
Then, I installed the following packages:
libibverbs1