Hi,

Are there plans to upgrade the MPI function calls in HDF5 from MPI-1 to MPI-2, while still supporting MPI-1 for anyone who needs it?
The reason I am asking: we have an MPI wrapper library that contains definitions of the MPI functions, plus some type checking and other bookkeeping, before the actual MPI calls are made. Based on which MPI the user has chosen at run time (Intel MPI, HP MPI, Open MPI, etc.), the matching wrapper is picked (wrapper/intel/*.so, wrapper/hp/*.so, wrapper/openmpi/*.so), each of which is compiled against the corresponding MPI library. So the flow of any MPI function call is:

    application -> wrapper -> intel/hp/openmpi

I have compiled PHDF5, after some changes, against the wrapper stub (a placeholder needed to compile the application code) instead of any specific MPI. Now, when writing with parallel collective I/O on 2 nodes, it fails, and the stack is:

    MPI_Type_struct in H5S_mpio_hyper_type (H5Smpio.c)
      <- H5S_mpio_space_type (H5Smpio.c)
      <- H5D_inter_collective_io (H5Dmpio.c)
      <- H5D_contig_collective_write (H5Dmpio.c)

The failure seems to be due to MPI_LB and MPI_UB (these are defined in the wrapper as well, but at run time the call resolves these datatype constants against the user-selected MPI library). The MPI-2 standard says these are deprecated and that their use is "awkward and error prone", and I am having a real hard time figuring out how to replace MPI_LB with something appropriate.
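From reading the MPI-2 standard, my understanding is that the MPI_LB/MPI_UB markers are superseded by MPI_Type_create_resized, which sets a type's lower bound and extent explicitly instead of encoding them as pseudo-datatypes inside MPI_Type_struct. Below is a minimal, self-contained sketch of that mapping as I understand it; the layout (4 ints at byte offset 8 in a 32-byte record) is made up purely for illustration:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Illustrative layout (made up for this sketch): 4 ints living at
       byte offset 8 inside a record whose full extent is 32 bytes. */
    MPI_Aint start  = 8;
    MPI_Aint extent = 32;

    /* Deprecated MPI-1 pattern -- MPI_LB/MPI_UB markers inside
       MPI_Type_struct set the type's bounds:
       int          blens[3] = { 1, 4, 1 };
       MPI_Aint     disps[3] = { 0, start, extent };
       MPI_Datatype types[3] = { MPI_LB, MPI_INT, MPI_UB };
       MPI_Type_struct(3, blens, disps, types, &oldstyle);
    */

    /* MPI-2 replacement -- build the type without the markers, then pin
       its lower bound and extent with MPI_Type_create_resized. */
    int          blen = 4;
    MPI_Datatype base = MPI_INT;
    MPI_Datatype placed, newstyle;
    MPI_Type_create_struct(1, &blen, &start, &base, &placed);
    MPI_Type_create_resized(placed, 0 /* lb */, extent, &newstyle);
    MPI_Type_commit(&newstyle);

    /* Check that the bounds match what the LB/UB markers used to give. */
    MPI_Aint lb, ext;
    MPI_Type_get_extent(newstyle, &lb, &ext);
    printf("lb = %ld, extent = %ld\n", (long)lb, (long)ext);

    MPI_Type_free(&placed);
    MPI_Type_free(&newstyle);
    MPI_Finalize();
    return 0;
}

If that is the right direction, the part I am stuck on is mapping it onto the MPI_Type_struct call in H5S_mpio_hyper_type, where (as far as I can tell) the LB/UB displacements come from the hyperslab selection.

Thanks,
Saurabh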
