Jed and Matt and anyone else who understands the HDF5 viewer

       No one has answered this. If I get no response, I am going to assume 
that PETSc requires HDF5 built with MPI and remove the #ifdefs in the code.

  Barry

> On Mar 18, 2016, at 2:50 PM, Barry Smith <[email protected]> wrote:
> 
> 
>  I am confused about the usage of HDF5 from PETSc.
> 
>   In hdf5.py
> 
>  def configureLibrary(self):
>    if self.libraries.check(self.dlib, 'H5Pset_fapl_mpio'):
>      self.addDefine('HAVE_H5PSET_FAPL_MPIO', 1)
>    return
> 
>  So PETSc does not require HDF5 to have been built using MPI (for example, 
> if it was built by someone else without MPI).
> 
>  In PetscErrorCode  PetscViewerFileSetName_HDF5(PetscViewer viewer, const 
> char name[])
> 
> #if defined(PETSC_HAVE_H5PSET_FAPL_MPIO)
>  PetscStackCallHDF5(H5Pset_fapl_mpio,(plist_id,PetscObjectComm((PetscObject)viewer),info));
> #endif
> 
>  so it only sets the MPI-IO file driver, and hence collective I/O, if the 
> symbol was found, i.e. only if HDF5 was built with MPI.
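> 
>  A minimal sketch of what this amounts to when the viewer opens the file 
> (plist_id, name, and comm here are placeholders, not the exact PETSc code): 
> if the define is not set, the file is created with HDF5's default sequential 
> driver, so each rank that reaches this code opens the same file on its own.
> 
>  /* sketch only; `comm` and `name` are assumed to be in scope */
>  hid_t plist_id = H5Pcreate(H5P_FILE_ACCESS);
> #if defined(PETSC_HAVE_H5PSET_FAPL_MPIO)
>  /* MPI-IO driver: all ranks in `comm` share one parallel file */
>  H5Pset_fapl_mpio(plist_id, comm, MPI_INFO_NULL);
> #endif
>  /* otherwise this falls back to the default (sequential) driver */
>  hid_t file_id = H5Fcreate(name, H5F_ACC_TRUNC, H5P_DEFAULT, plist_id);
>  H5Pclose(plist_id);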
> 
>  But in places like  VecView_MPI_HDF5(Vec xin, PetscViewer viewer)
> 
>   it uses MPI as if the I/O were collective, even though it might not be, 
> because HDF5 could have been built without MPI.
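> 
>  If I read the code right, the write path is roughly the following sketch 
> (not the actual VecView_MPI_HDF5 source; file_id is the handle from above, 
> and N, n, rstart, and array stand in for the vector's global size, local 
> size, first owned index, and local values): each rank selects its own 
> hyperslab and, when the define is set, requests a collective transfer.
> 
>  hsize_t gsize = N, lsize = n, offset = rstart;   /* placeholder values */
>  hid_t filespace = H5Screate_simple(1, &gsize, NULL);
>  hid_t memspace  = H5Screate_simple(1, &lsize, NULL);
>  hid_t dset = H5Dcreate2(file_id, "x", H5T_NATIVE_DOUBLE, filespace,
>                          H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
>  H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &offset, NULL, &lsize, NULL);
>  hid_t xfer = H5Pcreate(H5P_DATASET_XFER);
> #if defined(PETSC_HAVE_H5PSET_FAPL_MPIO)
>  H5Pset_dxpl_mpio(xfer, H5FD_MPIO_COLLECTIVE); /* needs the MPI-IO driver */
> #endif
>  H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, xfer, array);
> 
>  With a serial HDF5 every rank would presumably execute this through its own 
> independent handle to the same file.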
> 
>  So if I build PETSc with a non-MPI HDF5 and yet use the HDF5 viewer in 
> parallel, do the generated HDF5 files contain garbage?
> 
>  It seems to me we need to have hdf5.py REQUIRE the existence of 
> H5Pset_fapl_mpio?
> 
>  Barry
> 
