Hi,
I am using Open MPI 1.6 and I tried to send a large array through MPI_BCAST
in Fortran. The count is larger than 3 billion. Although I compiled both
Open MPI and my code with the option to declare Fortran integers as 64-bit
on Linux, I found out that the Fortran wrapper pbcast_f.c will
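For now I work around it by splitting the broadcast into chunks whose counts
stay below INT_MAX. A minimal sketch in C (the element type MPI_DOUBLE and
the helper name bcast_large are only illustrative, not from my program):

  #include <mpi.h>
  #include <stdint.h>

  /* Broadcast a buffer whose element count may exceed INT_MAX by
   * issuing several MPI_Bcast calls, each with a sub-INT_MAX count. */
  static void bcast_large(double *buf, int64_t count, int root, MPI_Comm comm)
  {
      const int64_t chunk = 1 << 30;   /* comfortably below INT_MAX */
      for (int64_t off = 0; off < count; ) {
          int n = (int)(count - off < chunk ? count - off : chunk);
          MPI_Bcast(buf + off, n, MPI_DOUBLE, root, comm);
          off += n;
      }
  }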
Hi,
I encountered a core dump when using ompi-checkpoint --term <pid>.
Here is the trace:
[genova:01808] *** Process received signal ***
[genova:01808] Signal: Segmentation fault (11)
[genova:01808] Signal code: Address not mapped (1)
[genova:01808] Failing at address: 0x90
[genova:01808] [ 0]
Dear all,
I am using Open MPI 1.6 on Linux and I have a question about
MPI_Reduce_scatter. I tried to see how much data I can push through
MPI_Reduce_scatter using the following code.
  size = (long)1024 * 1024 * 1024 * 4;
  for (k = 1; k <= 16; ++k) {
      bufsize = k * size / 16;
      for (i = 0; i
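For reference, here is a complete sketch of the test (the reduction op, the
element type, and the allocation handling are filled in by me, so they may
differ from the original; note that MPI-2 counts are C ints, so the per-rank
count has to stay below 2^31):

  #include <limits.h>
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      int rank, nprocs;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

      long size = (long)1024 * 1024 * 1024 * 4;    /* 4G elements in total */
      int *recvcounts = malloc(nprocs * sizeof(int));

      for (int k = 1; k <= 16; ++k) {
          long bufsize  = (long)k * size / 16;     /* assumes nprocs divides it */
          long per_rank = bufsize / nprocs;
          if (per_rank > INT_MAX) {                /* counts are int in MPI-2 */
              if (rank == 0)
                  printf("k=%d: per-rank count %ld overflows int\n", k, per_rank);
              continue;
          }
          for (int i = 0; i < nprocs; ++i)
              recvcounts[i] = (int)per_rank;
          double *sendbuf = malloc(bufsize  * sizeof(double));
          double *recvbuf = malloc(per_rank * sizeof(double));
          /* ... check the mallocs and fill sendbuf here ... */
          MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, MPI_DOUBLE,
                             MPI_SUM, MPI_COMM_WORLD);
          free(sendbuf);
          free(recvbuf);
      }
      free(recvcounts);
      MPI_Finalize();
      return 0;
  }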
Dear all,
Thank you for all the responses. There is another problem when using
-fdefault-integer-8. I am using 1.6.
For the i8:
configure:44650: checking for the value of MPI_STATUS_SIZE
configure:44674: result: 3 Fortran INTEGERs
configure:44866: checking if Fortran compiler works
configure:44895:
My concern is: how does the C side know that a Fortran INTEGER uses 8 bytes?
My valgrind check shows something like:
==8482== Invalid read of size 8
==8482==    at 0x5F4A50E: ompi_op_base_minloc_2integer (op_base_functions.c:631)
==8482==    by 0xBF70DD1: ompi_coll_tuned_allreduce_intra_recursivedoubling
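To see what the C side actually thinks, one thing I can do is query the size
of MPI_INTEGER from C (MPI exposes the Fortran datatypes to C for exactly
this kind of interoperability; this is a small stand-alone check, not part
of my code):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int sz;
      MPI_Init(&argc, &argv);
      /* If Open MPI was configured for 8-byte Fortran INTEGERs this
       * should print 8; a 4 here would fit the invalid reads above. */
      MPI_Type_size(MPI_INTEGER, &sz);
      printf("MPI_INTEGER size = %d bytes\n", sz);
      MPI_Finalize();
      return 0;
  }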
Hi,
I am trying to compile my Fortran program on Linux with gfortran44 using the
option -fdefault-integer-8, so that all my integers are of kind=8.
My question is: what should I do with Open MPI? I am using 1.6; should I
compile Open MPI with the same option? Will it get the correct size of
MPI_INTEGER and
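What I was planning to try is rebuilding Open MPI with the same flag so that
configure detects 8-byte Fortran INTEGERs, along the lines of (other options
omitted):

  ./configure FFLAGS=-fdefault-integer-8 FCFLAGS=-fdefault-integer-8 ...

If I understand correctly, ompi_info -a should then report the Fortran
integer size the library was built with, which would let me verify the build
before running anything.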
Hi,
When I run multiple processes on a single machine, the programs hang in
mpi_allreduce at different points during different runs. I am using 1.3.4.
When I use different machines to run the processes, it is OK. Also, when
I recompiled Open MPI in debug mode, the problem goes away.
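Since single-node runs normally go through the shared-memory BTL, would
disabling it be a sensible way to narrow this down? For example (just a
guess on my part):

  mpirun --mca btl ^sm -np 4 ./a.out

If the hang disappears without sm, that would point at the shared-memory
transport rather than at my application code.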