On Jan 23, 2009, at 12:30 AM, David Robertson wrote:

I don't know how helpful this code will be unless you happen to have HDF5/NetCDF4 already installed. I looked at the code NetCDF4 uses to test parallel IO, but it is all in C, so it wasn't very helpful. If you have the NetCDF4 source code, the parallel IO tests are in the nc_test4 directory.
Mm, yes, that would be difficult for me to build/run. Can you or your developer trim it down to a small independent example?

I will talk to the developer tomorrow to see if he can come up with an independent example.

Thanks.

FWIW, this is not happening for me -- I can call subroutines or functions with MPI_COMM_WORLD and then use that value (which should be 0, btw) to call an MPI function such as MPI_COMM_DUP. Per your prior comment about the debugger not being able to find MPI_COMM_WORLD -- perhaps the compiler is optimizing it out...? Or perhaps it was transmogrified to lower case (i.e., try seeing if "mpi_comm_world" exists -- I see it in your mpi.mod file)...?
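
For reference, a minimal sketch of the kind of test I mean (not the exact program I ran; assuming the Fortran 90 "use mpi" bindings, and the subroutine name is just illustrative):

    program comm_world_test
      use mpi
      implicit none
      integer :: ierr
      call MPI_Init(ierr)
      call check_comm(MPI_COMM_WORLD)   ! pass the handle into a subroutine
      call MPI_Finalize(ierr)
    contains
      subroutine check_comm(comm)
        integer, intent(in) :: comm
        integer :: dup, ierr2
        print *, 'communicator handle value:', comm   ! prints 0 under Open MPI
        call MPI_Comm_dup(comm, dup, ierr2)           ! the handle works in MPI calls
        call MPI_Comm_free(dup, ierr2)
      end subroutine check_comm
    end program comm_world_test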

I have looked for both MPI_COMM_WORLD and mpi_comm_world, but neither can be found by TotalView (the parallel debugger we use) when I compile with "USE mpi". When I use "include 'mpif.h'", both MPI_COMM_WORLD and mpi_comm_world are zero.
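
A minimal sketch for reproducing that comparison (illustrative only; build it once per variant and inspect MPI_COMM_WORLD in the debugger):

    program show_handle
      use mpi            ! F90 bindings; for the F77 test, remove this line ...
      implicit none
    ! include 'mpif.h'   ! ... and uncomment this one instead
      integer :: ierr
      call MPI_Init(ierr)
      print *, 'MPI_COMM_WORLD =', MPI_COMM_WORLD
      call MPI_Finalize(ierr)
    end program show_handle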

I'm afraid I don't know why that would be.  :-(

MPI_COMM_WORLD is set to a large integer (1140850688) in MPICH2, so I wonder if there is something in HDF5 and/or NetCDF4 that doesn't like 0 for the communicator handle. At any rate, you have given me some ideas of things to check in the debugger tomorrow. Is there a safe way to change what MPI_COMM_WORLD is set to in Open MPI?

No.  Open MPI's Fortran MPI_COMM_WORLD is pretty much hard-wired to 0.
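
If the concern is code that depends on the handle's numeric value, one portable alternative (a sketch; the subroutine name is illustrative) is to compare communicators with MPI_Comm_compare rather than comparing raw integers:

    subroutine is_world(comm, matches)
      use mpi
      implicit none
      integer, intent(in)  :: comm
      logical, intent(out) :: matches
      integer :: relation, ierr
      ! MPI_IDENT means both handles denote the same communicator,
      ! regardless of whether the integer value is 0 or 1140850688
      call MPI_Comm_compare(comm, MPI_COMM_WORLD, relation, ierr)
      matches = (relation == MPI_IDENT)
    end subroutine is_world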

One question: you *are* using different HDF5/NetCDF4 installations for Open MPI and MPICH2, right? I.e., all software that uses MPI needs to be compiled/installed separately against each MPI implementation. Case in point: if HDF5 was compiled against MPICH2, it will not work properly with MPI applications compiled against Open MPI.

--
Jeff Squyres
Cisco Systems
