Hi List,
I have an MPI program which uses one-sided communication with derived
datatypes (MPI_Type_create_indexed_block). I developed the code with
MPICH2 and unfortunately didn't think to try it out with
OpenMPI. Now that I'm "porting" the application to OpenMPI I'm facing
some problems.
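For context, here is a minimal sketch of the pattern being described: one-sided communication through a derived datatype built with MPI_Type_create_indexed_block. The buffer sizes, displacements, and neighbor choice are illustrative assumptions, not taken from the original program.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Expose a window over 16 ints on every rank. */
    int buf[16];
    for (int i = 0; i < 16; ++i)
        buf[i] = rank * 100 + i;

    MPI_Win win;
    MPI_Win_create(buf, sizeof buf, sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Indexed-block type: two blocks of 2 ints each, at element
       displacements 0 and 8 in the target window. */
    int displs[2] = {0, 8};
    MPI_Datatype blocks;
    MPI_Type_create_indexed_block(2, 2, displs, MPI_INT, &blocks);
    MPI_Type_commit(&blocks);

    int local[4] = {rank, rank, rank, rank};
    int target = (rank + 1) % size;

    MPI_Win_fence(0, win);
    /* Scatter 4 contiguous local ints into the strided layout
       described by the derived datatype on the target. */
    MPI_Put(local, 4, MPI_INT, target, 0, 1, blocks, win);
    MPI_Win_fence(0, win);

    MPI_Type_free(&blocks);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Build with mpicc and run under mpirun; this is exactly the combination (MPI_Put with a committed indexed-block target datatype inside a fence epoch) that historically behaved differently between MPICH2 and Open MPI's pt2pt one-sided component.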
Hi
Biagio Lucini wrote:
Hello,
I am new to this list, where I hope to find a solution for a problem
that I have been having for quite a long time.
I run various versions of openmpi (from 1.1.2 to 1.2.8) on a cluster
with Infiniband interconnects that I use and administer at the same
time. T
Hi List,
the attached test program (bsm-db.cc) always dies in malloc called from
opal_free_list (a backtrace is in error.txt and the valgrind (vg*) output
can be found in the tar file). It seems that there is an invalid write in
ompi_osc_pt2pt_sendreq_send. I checked the derived datatype using the
che
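The quote is cut off, but one generic way to sanity-check a derived datatype (not necessarily the check the poster was describing) is to compare its size against its extent; a mismatch between what you assumed and what the type actually spans is a common source of invalid writes on the target side. The displacements below are made up for illustration.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Three blocks of 2 ints at element displacements 0, 4, 9. */
    int displs[3] = {0, 4, 9};
    MPI_Datatype t;
    MPI_Type_create_indexed_block(3, 2, displs, MPI_INT, &t);
    MPI_Type_commit(&t);

    int size;
    MPI_Aint lb, extent;
    MPI_Type_size(t, &size);              /* bytes of real data: 3*2*4 = 24 */
    MPI_Type_get_extent(t, &lb, &extent); /* span in the target buffer: the
                                             last block ends at element 11,
                                             so the extent is 11*4 = 44 bytes */
    printf("size=%d lb=%ld extent=%ld\n", size, (long)lb, (long)extent);

    MPI_Type_free(&t);
    MPI_Finalize();
    return 0;
}
```

If the window you create is smaller than the extent of the datatype used as the target of an MPI_Put, the one-sided engine can end up writing past the end of the exposed buffer.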
Mahmoud Payami wrote:
Dear OpenMPI Users,
I have two systems, one with an Intel64 processor and one with IA32. The
OS on the first is CentOS x86_64 and on the other CentOS i386. I installed
the Intel Fortran compiler 10.1 on both. On the first I use the fce, and
on the second the fc directories (ifort
Ted Yu wrote:
I'm trying to run a job based on openmpi. For some reason, the program and the
global communicator are not in sync, and it reports that there is only one
processor, whereas there should be 2 or more. Any advice on where to look?
Here is my PBS script. Thanx!
PBS SCRIPT:
#!/bi
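The poster's script is cut off above. Not from the original post, but for context: a common cause of MPI_Comm_size returning 1 under PBS is launching the binary directly, or with an mpirun that doesn't belong to the Open MPI installation the program was compiled against. A typical working script looks roughly like this (node counts, paths, and the program name are illustrative):

```shell
#!/bin/sh
#PBS -l nodes=2:ppn=2
#PBS -l walltime=00:10:00

cd $PBS_O_WORKDIR

# Use the mpirun from the same Open MPI that built the program, so that
# all 4 ranks actually join MPI_COMM_WORLD. Running ./my_mpi_app on its
# own (or via a mismatched mpirun) starts singleton ranks that each see
# a communicator of size 1.
mpirun -np 4 ./my_mpi_app
```

Checking `which mpirun` inside the job, and that it matches the mpicc used at build time, is usually the first thing to verify.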
Vittorio wrote:
Hi!
I'm using OpenMPI 1.3 on two nodes connected with Infiniband; I'm using
Gentoo Linux x86_64.
I've noticed that before any application starts there is a variable amount
of time (around 3.5 seconds) during which the terminal just hangs with no
output, and then the application starts
Mark Allan wrote:
Dear all,
With this simple code I find I am getting a memory leak when I run on 2
processors. Can anyone advise why?
I suspect the prototype of nonBlockingRecv is actually
MPI_Request nonBlockingRecv(int **t, int &size, const int tag, const int
senderRank)
and in thi
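The reply is truncated, but the suspected prototype suggests the likely leak: a function that starts an MPI_Irecv and returns the MPI_Request leaks both the request and the buffer unless every returned request is completed and every buffer freed by the caller. A hypothetical reconstruction of such a wrapper (the body is an assumption based only on the quoted prototype):

```cpp
#include <mpi.h>

// Hypothetical wrapper matching the prototype quoted above: probe for the
// incoming message, allocate a buffer of the right size, and start a
// non-blocking receive into it.
MPI_Request nonBlockingRecv(int **t, int &size, const int tag,
                            const int senderRank)
{
    MPI_Status st;
    MPI_Probe(senderRank, tag, MPI_COMM_WORLD, &st);
    MPI_Get_count(&st, MPI_INT, &size);
    *t = new int[size];
    MPI_Request req;
    MPI_Irecv(*t, size, MPI_INT, senderRank, tag, MPI_COMM_WORLD, &req);
    return req;
}

// Memory grows on every call unless the caller does both of these:
//
//   MPI_Request r = nonBlockingRecv(&buf, n, /*tag=*/0, /*sender=*/1);
//   MPI_Wait(&r, MPI_STATUS_IGNORE); // completes and releases the request
//   delete[] buf;                    // releases the receive buffer
//
// Dropping the request without MPI_Wait (or MPI_Request_free) leaves the
// internal request object, and often the unreceived message, alive inside
// the MPI library, which shows up as a steadily growing heap.
```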
John Wohlbier wrote:
I'm sure that I'm not the first person who wants their MPI program to
compile when MPI is not available. It seems like the simplest solution to
this is to have a header file (with implementation, or header file and .c
file) that implements all of the functions for the case wh