Hi sebi!
On 10/2/07, Sebastian Schulz wrote:
> Amit Kumar Saha wrote:
> > What I find bizarre is that I used Open MPI 1.2.3 to install on all my
> > 4 machines, whereas 'orted' is installed in /usr/local/bin on all the
> > other 3 machines, the 4th machine which is giving me
Thank you very much for the detailed explanation! I was afraid that the
previous mismatch in MPI_Bcast could be the problem; I had hoped that the
whole data stream goes in the main tag, so I could catch problems easily
just by sending synced data. I assume that Bcast is performed that way to
speed up the
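(For reference, and not part of the original mail: a minimal sketch of the point under discussion. MPI_Bcast must be called by every rank in the communicator with a matching root, count, and datatype, so a mismatch on any one rank can hang or corrupt the collective.)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;   /* the root fills in the data */

    /* Every rank must make this same call (same root, same count,
       same datatype); skipping it or changing the count on one rank
       is exactly the kind of mismatch mentioned above. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d got %d\n", rank, value);
    MPI_Finalize();
    return 0;
}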
Hello Oleg :-)
I am a newbie as far as MPI is concerned. Still I will take a shot:
On 10/2/07, Oleg Morajko wrote:
> Hello,
>
> In the context of my PhD research, I have been developing a run-time
> performance analyzer for MPI-based applications.
> My tool provides a
Hi there,
I have a 2-cpu system (linux/x86-64), running openmpi-1.1. I do not
specify a hostfile.
Lately I'm having performance problems when running my mpi-app this way:
mpiexec -n 2 ./mpi-app config.ini
Both mpi-app processes are running on cpu0, leaving cpu1 idle.
After reading the mpirun
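(For illustration, not from the original thread: one way to confirm where the scheduler is placing the two ranks is to have each rank report the CPU it is currently running on, e.g. with glibc's sched_getcpu(). This is only a hypothetical diagnostic sketch.)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Print which CPU this rank is scheduled on right now; if both
       ranks keep printing 0, they really are sharing cpu0. */
    printf("rank %d is on cpu %d\n", rank, sched_getcpu());

    MPI_Finalize();
    return 0;
}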
Hi Miguel
I don't know if it's a typo - but actually it should be
mpiexec -np 2 ./mpi-app config.ini
and not
> mpiexec -n 2 ./mpi-app config.ini
Jody
Hi,
I found that the problem is the firewall on one of my computers. When I
set the firewall to allow connections to the other computers over TCP on
ports 1024 to 4999, everything was fine and there were no more connection
errors. But I still cannot checkpoint and restart my program.
The error is:
$ mpirun
Hi,
On 10/3/07, jody wrote:
> Hi Miguel
> I don't know if it's a typo - but actually it should be
> mpiexec -np 2 ./mpi-app config.ini
> and not
> > mpiexec -n 2 ./mpi-app config.ini
Thanks for the remark, you're right, but the man page says -n is a
synonym for -np.
Kind
Hi again,
I'm trying to debug the problem I posted
on several times recently; I thought I'd try asking a more focused
question:
I have the following sequence in the client code:
MPI_Status stat;
ret = MPI_Probe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &stat);
assert(ret == MPI_SUCCESS);
ret =
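(For illustration, not from the original message: a typical probe-then-receive sequence looks roughly like the fragment below, assuming the client is receiving an array of ints from rank 0 and that <mpi.h>, <assert.h>, and <stdlib.h> are included.)

MPI_Status stat;
int ret, count;

/* Block until a message from rank 0 is available, without receiving it. */
ret = MPI_Probe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &stat);
assert(ret == MPI_SUCCESS);

/* Ask how many ints the pending message contains. */
ret = MPI_Get_count(&stat, MPI_INT, &count);
assert(ret == MPI_SUCCESS);

/* Receive it, using the source and tag recorded in the status. */
int *buf = malloc(count * sizeof(int));
ret = MPI_Recv(buf, count, MPI_INT, stat.MPI_SOURCE, stat.MPI_TAG,
               MPI_COMM_WORLD, MPI_STATUS_IGNORE);
assert(ret == MPI_SUCCESS);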
-c, -np, --np, -n, --n all do exactly the same thing.
Tim
Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
Hi,
On 10/3/07, jody wrote:
Hi Miguel
I don't know if it's a typo - but actually it should be
mpiexec -np 2 ./mpi-app config.ini
and not
mpiexec -n 2 ./mpi-app
Hi,
Miguel Figueiredo Mascarenhas Sousa Filipe wrote:
Hi there,
I have a 2-cpu system (linux/x86-64), running openmpi-1.1. I do not
specify a hostfile.
Lately I'm having performance problems when running my mpi-app this way:
mpiexec -n 2 ./mpi-app config.ini
Both mpi-app processes are
Unfortunately, I am out of ideas on this one. It is very strange. Maybe
someone else has an idea.
I would recommend trying to install Open MPI again. First be sure to get
rid of all of the old installs of Open MPI from all your nodes, then
reinstall and try again.
Tim
Dino Rossegger wrote:
Thanks for the report!
I have reproduced this bug and have filed a ticket on this
(https://svn.open-mpi.org/trac/ompi/ticket/1157). You should receive
updates as this bug is worked on.
Thanks,
Tim
Chris Johnson wrote:
Hi, I'm trying to run an MPI program of mine under OpenMPI 1.2 using
Marco,
Thanks for the report, and sorry for the delayed response. I can
replicate a problem using your test code, but it does not segfault for
me (although I am using a different version of Open MPI).
I filed a bug on this so (hopefully) our collective gurus will look at
it soon. You will
So you did:
ssh which orted
and it found the orted?
Tim
Amit Kumar Saha wrote:
Hi sebi!
On 10/2/07, Sebastian Schulz wrote:
Amit Kumar Saha wrote:
What I find bizarre is that I used Open MPI 1.2.3 to install on all my
4 machines. whereas, 'orted' is installed in
On 10/4/07, Tim Prins wrote:
> So you did:
> ssh which orted
>
> and it found the orted?
Yes, it reported '/usr/bin/orted'.
Regards,
Amit
--
Amit Kumar Saha
*NetBeans Community Docs Coordinator*
me blogs@ http://amitksaha.blogspot.com
Gil,
Josh created a few HOWTO pages that should help you get
your internal Mellanox MTT server set up:
https://svn.open-mpi.org/trac/mtt/wiki/HTTPServer
https://svn.open-mpi.org/trac/mtt/wiki/ServerMaintenance
https://svn.open-mpi.org/trac/mtt/wiki/Database
Josh, another item Gil and I
On Oct 3, 2007, at 8:50 AM, Ethan Mallove wrote:
Josh, another item Gil and I talked about was moving a
subset of test results from the internal Mellanox database
into the general Open MPI database. Could the scripts in
server/sql/support accomplish this?
This is for a future install/use of