Hmmm...yes, the code does seem to handle that '=' being in there. Forgot it was
there.
Depending on the version you are using, mpirun could just open the display for
you. There is an mpirun option that tells us to please start each app in its
own xterm.
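For example (hedged: the exact option name and syntax depend on your Open MPI
version, so check "mpirun --help" or the man page), something like

    mpirun -np 2 -xterm 0,1 ./my_app

asks mpirun to put the output of ranks 0 and 1 into their own xterm windows.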
You shouldn't need forwarding if you
Hi Xianjun
Suggestions/Questions:
1) Did you check whether malloc returns a non-NULL pointer?
Your program assumes it does, but that may not be true,
and in that case the problem is not with MPI.
You can print a message and call MPI_Abort if it returns NULL
(a minimal sketch follows below).
2) Have you tried MPI_Isend/MPI_Irecv?
Or
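A minimal sketch of the check suggested in 1); the buffer size and names here
are hypothetical, not taken from the original program:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        size_t n = 100000000;                    /* hypothetical element count */
        double *buf = malloc(n * sizeof(double));
        if (buf == NULL) {
            fprintf(stderr, "rank %d: malloc of %zu bytes failed\n",
                    rank, n * sizeof(double));
            MPI_Abort(MPI_COMM_WORLD, 1);        /* fail loudly, not with a segfault later */
        }

        /* ... use buf with MPI_Send/MPI_Recv (or MPI_Isend/MPI_Irecv) ... */

        free(buf);
        MPI_Finalize();
        return 0;
    }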
Without including the -x DISPLAY, glut doesn't know what display to open.
For instance, without the -x DISPLAY parameter glut returns an error from
each process stating that it could not find display "" (empty string). This
strategy is briefly described in the openmpi
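As a hedged example of the kind of launch line meant here (the hostfile and
program names are made up):

    mpirun -np 4 -hostfile ./hosts -x DISPLAY ./glut_renderer

so that glut in each process sees a DISPLAY value instead of the empty string.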
Hi
Are you running on two processes (mpiexec -n 2)?
Yes
Have you tried to print Gsize?
Yes, I had checked my code several times, and I thought the errors came
from OpenMPI. :)
The command line I used:
"mpirun -hostfile ./Serverlist -np 2 ./test". The "Serverlist" file include
several
Guess I'm not entirely sure I understand how this is supposed to work. All the
-x does is tell us to pick up an envar of the given name and forward its value
to the remote apps. You can't set the envar's value on the cmd line. So you
told mpirun to pick up the value of an envar called
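In other words (a hedged illustration, not the poster's actual commands), -x
forwards a value that already exists in mpirun's own environment:

    export DISPLAY=:0
    mpirun -np 2 -x DISPLAY ./my_app

rather than the value being assigned on the mpirun command line itself.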
Hello,
I'm working on an mpi application that opens a glut display on each node of
a small cluster for opengl rendering (each node has its own display). My
current implementation scales great with mpich2, but I'd like to use openmpi
over infiniband, which is giving me trouble.
I've had some success
Hi Xianjun
Are you running on two processes (mpiexec -n 2)?
I think this code will deadlock for more than two processes.
The MPI_Recv won't have a matching send for rank>1.
Also, this is C, not MPI,
but you may be wrapping around into negative numbers.
Have you tried to print Gsize?
It is probably
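For illustration only, a guess at the pattern being described (this is not the
original code): every rank posts a receive, but only rank 1 ever has a matching
send, so with three or more processes the extra ranks block forever.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else {
            /* with -n 3 or more, ranks 2, 3, ... wait here forever */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }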
Hi,
What interconnect and command line do you use? For the InfiniBand openib
component there is a known issue with large transfers (2GB):
https://svn.open-mpi.org/trac/ompi/ticket/2623
try disabling memory pinning:
http://www.open-mpi.org/faq/?category=openfabrics#large-message-leave-pinned
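For example, something like (the MCA parameter is the one that FAQ entry
discusses; double-check it against your Open MPI version):

    mpirun --mca mpi_leave_pinned 0 -hostfile ./Serverlist -np 2 ./test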
regards
On Monday 06 December 2010 15:03:13 Mathieu Gontier wrote:
> Hi,
>
> A small update.
> My colleague made a mistake and there is no arithmetic performance
> issue. Sorry for bothering you.
>
> Nevertheless, one can observe some differences between MPICH and
> OpenMPI from 25% to 100% depending
Hi,
I'm using mkl scalapack in my project. Recently, I was trying to run
my application on a new set of nodes. Unfortunately, when I try to
execute more than about 20 processes, I get a segmentation fault.
[compn7:03552] *** Process received signal ***
[compn7:03552] Signal: Segmentation fault (11)
Hi,
A small update.
My colleague made a mistake and there is no arithmetic performance
issue. Sorry for bothering you.
Nevertheless, one can observe some differences between MPICH and
OpenMPI from 25% to 100% depending on the options we are using in our
software. Tests are run on a
On 12/6/2010 3:16 AM, Hicham Mouline wrote:
Hello,
1. MPI_THREAD_SINGLE: Only one thread will execute.
Does this really mean the process cannot have any other threads at all, even if
they don't deal with MPI?
I'm curious how this case affects the openmpi implementation.
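For reference, a minimal sketch of how a thread level is requested and
reported; these are standard MPI calls, not anything Open MPI specific:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* request MPI_THREAD_SINGLE; 'provided' is what the library actually grants */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_SINGLE, &provided);
        printf("provided thread level = %d\n", provided);
        MPI_Finalize();
        return 0;
    }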
Hi Benjamin
I guess you could compile OpenMPI with standard integer and real sizes.
Then compile your application (DRAGON) with the flags to change to 8-byte
integers and 8-byte reals.
We have some programs here that use real*8 and are compiled this way,
and run without a problem.
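For example (the exact flags are an assumption on my part and depend on your
compiler):

    gfortran -fdefault-integer-8 -fdefault-real-8 -c mymod.f90   # GNU
    ifort -i8 -r8 -c mymod.f90                                   # Intel

while Open MPI itself stays built with default-size INTEGER and REAL.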
I guess this is