Rob,

Well, if I understand what you are saying, maybe my concerns about sub-optimality are unfounded. Yes, the 'buffer' array is contiguous in memory for each subdomain, and that is what the successful call to 'mpi_file_read_at_all' sees. The call to 'mpi_type_indexed' creates my 'datatype', and the 'mpi_file_set_view' essentially 'installs' it. I suppose a possible thing to try would be to define an F90 structure for the buffer array, but that would be of dubious benefit. So, thanks again for the help.
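For reference, here is a minimal sketch of the pattern described above. It is illustrative only, assuming a toy decomposition (4 ranks, 4 blocks of 10 reals per rank) and a file name 'test.dat', none of which come from the attached script: the indexed type describes the file layout and is 'installed' with 'mpi_file_set_view', while the read call only ever sees the contiguous memory buffer through the elementary type 'mpi_real'.

  ! Illustrative sketch only: names, sizes, and displacements are invented,
  ! not taken from the attached script.
  program view_sketch
    use mpi
    implicit none
    integer :: ierr, rank, ifh, filetype, nblk, i
    integer, allocatable :: blens(:), disps(:)
    real, allocatable :: buffer(:)
    integer(kind=mpi_offset_kind) :: disp, ioff
    integer :: status(mpi_status_size)

    call mpi_init(ierr)
    call mpi_comm_rank(mpi_comm_world, rank, ierr)

    ! toy decomposition: 4 ranks, each owning 4 blocks of 10 reals,
    ! interleaved by rank within each record of 40 reals
    nblk = 4
    allocate(blens(nblk), disps(nblk))
    blens = 10
    do i = 1, nblk
       disps(i) = (i-1)*40 + rank*10    ! element displacements in the file
    end do

    ! the derived type describes the FILE layout ...
    call mpi_type_indexed(nblk, blens, disps, mpi_real, filetype, ierr)
    call mpi_type_commit(filetype, ierr)

    call mpi_file_open(mpi_comm_world, 'test.dat', mpi_mode_rdonly, &
                       mpi_info_null, ifh, ierr)

    ! ... and mpi_file_set_view 'installs' it
    disp = 0
    call mpi_file_set_view(ifh, disp, mpi_real, filetype, 'native', &
                           mpi_info_null, ierr)

    ! the MEMORY buffer is contiguous, so the elementary type is enough here
    allocate(buffer(nblk*10))
    ioff = 0
    call mpi_file_read_at_all(ifh, ioff, buffer, nblk*10, mpi_real, &
                              status, ierr)

    call mpi_file_close(ifh, ierr)
    call mpi_type_free(filetype, ierr)
    call mpi_finalize(ierr)
  end program view_sketch

The only derived type in the picture is the filetype; because the memory buffer is contiguous, 'mpi_real' plus a count is a complete description of the memory side.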
T. Rosmond

On Mon, 2014-07-21 at 13:37 -0500, Rob Latham wrote:
> 
> On 07/20/2014 04:23 PM, Tom Rosmond wrote:
> > Hello,
> >
> > For several years I have successfully used MPIIO in a Fortran global
> > atmospheric ensemble data assimilation system.  However, I always
> > wondered if I was fully exploiting the power of MPIIO, specifically by
> > using derived data types to better describe memory and file data
> > layouts.  All of my IO has been using elementary data types, e.g.
> > mpi_real, mpi_integer, for file 'datatype's, and numerous references
> > suggest that file datatypes built from derived data types could improve IO
> > performance.
> >
> > Attached is a KSH script with an included Fortran program of a very
> > simple example of what I am now doing successfully, and it poses the
> > question of why my attempt with a derived data type does not work.  The
> > Fortran program is well commented to explain each step.  I run the
> > script on a 4-core Linux workstation, and the example is set up for that
> > system.  On a similar system just 'chmod' the script executable and run
> > it.  The script will compile and execute the program.  It should first
> > show printed output from successful IO using my current approach, and
> > then abort when trying my derived data type test.  What am I doing
> > wrong?  Any advice is appreciated.
> 
> Ah ha.  I spent so much time looking at how ROMIO processed your
> datatypes that I did not at first notice how you were using those
> datatypes.
> 
> This works for you:
> 
> allocate(buffer(ijsiz(ir),numrec))
> ioff = 0
> lenij = ijsiz(ir)*numrec
> call mpi_file_read_at_all(ifh,ioff,buffer,lenij,mpi_real,status,ierr)
> 
> but this does not:
> 
> ioff=0
> lenij=1
> call mpi_file_read_at_all(ifh,ioff,buffer,lenij,datatype,status,ierr)
> 
> The mistake is a natural one to make: the 'buffer, count, datatype'
> tuple passed to the read commands (and passed to many other MPI
> routines) describes the layout of memory -- not the layout of data in
> the file.
> 
> To describe the file layout you set a file view, as you have already
> done a few calls earlier.
> 
> What is your memory buffer?  It is allocated like this:
> 
> allocate(buffer(ijsiz(ir),numrec))
> 
> which (if I am reading Fortran correctly) is a contiguous chunk of memory.
> 
> If instead you had a more elaborate data structure, like a mesh of some
> kind, then passing an indexed type to the read call might make more sense.
> 
> ==rob
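To make Rob's last point concrete, here is a hedged sketch of the opposite situation, where the memory buffer is the non-contiguous part. The halo-padded local array and the file name 'field.dat' are invented for the example; nothing here comes from the original program. A subarray type describing the interior of the array is passed as the datatype argument of the read call, so it describes memory, while the file view (left as plain 'mpi_real' here) continues to describe the file.

  ! Illustrative sketch only: array sizes, halo width, and file name are invented.
  program memtype_sketch
    use mpi
    implicit none
    integer :: ierr, ifh, memtype
    integer :: sizes(2), subsizes(2), starts(2)
    real, allocatable :: field(:,:)
    integer(kind=mpi_offset_kind) :: disp, ioff
    integer :: status(mpi_status_size)

    call mpi_init(ierr)

    ! local field with a 1-point halo on each side; only the 8x8 interior
    ! is to be filled from the file
    allocate(field(0:9,0:9))
    sizes    = (/ 10, 10 /)
    subsizes = (/ 8, 8 /)
    starts   = (/ 1, 1 /)      ! zero-based start of the interior
    call mpi_type_create_subarray(2, sizes, subsizes, starts, &
                                  mpi_order_fortran, mpi_real, memtype, ierr)
    call mpi_type_commit(memtype, ierr)

    call mpi_file_open(mpi_comm_world, 'field.dat', mpi_mode_rdonly, &
                       mpi_info_null, ifh, ierr)
    disp = 0
    call mpi_file_set_view(ifh, disp, mpi_real, mpi_real, 'native', &
                           mpi_info_null, ierr)

    ! one instance of 'memtype' tells MPI where the data lands in memory
    ioff = 0
    call mpi_file_read_at_all(ifh, ioff, field, 1, memtype, status, ierr)

    call mpi_file_close(ifh, ierr)
    call mpi_type_free(memtype, ierr)
    call mpi_finalize(ierr)
  end program memtype_sketch

In a real multi-rank code the file view would also carve out each rank's piece of the file, as in the earlier sketch; here it is left trivial so that only the memory-side use of a derived type changes.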