Hi Rob,

On Jan 17, 2011, at 10:24 AM, Rob Latham wrote:

> On Sun, Jan 16, 2011 at 07:39:48PM -0600, Elena Pourmal wrote:
>> Hi Leigh,
>> 
>> It looks like the collective write fails but the independent write succeeds. Which 
>> version did you build? We will try to reproduce it.
> 
> Hi Elena:
> 
> Do you think this could be a regression of a bug I reported against
> 1.8.4 about a year ago?

Sounds very similar, yes.

> I sent a test case to the hdf5 help desk 2
> December 2009 with the subject "bug in HDF5 type handling", though
> that was in the read path.  Sorry, I don't have a ticket id or
> anything like that to help track it down.  
> 
We do have your initial report and the email exchange that followed. The bug you 
reported was fixed around December 11-14, 2009. It should definitely be fixed in 
1.8.6, but I am not sure whether the fix made it into 1.8.5-patch1.
I hope to have more information about this bug later tomorrow, when we are all back 
in the office.

Thanks a lot for bringing this to my attention!

Elena
> ==rob
> 
>> Thank you!
>> 
>> Elena
>> 
>> 
>> On Jan 16, 2011, at 3:47 PM, Leigh Orf wrote:
>> 
>>> I managed to build pHDF5 on blueprint.ncsa.uiuc.edu (IBM AIX Power 6). I 
>>> compiled the hyperslab_by_chunk.f90 test program found at 
>>> http://www.hdfgroup.org/HDF5/Tutor/phypechk.html without error. When I run 
>>> it, however, I get the following output:
>>> 
>>> ATTENTION: 0031-408  4 tasks allocated by LoadLeveler, continuing...
>>> ERROR: 0032-110 Attempt to free a predefined datatype  (2) in 
>>> MPI_Type_free, task 0
>>> ERROR: 0032-110 Attempt to free a predefined datatype  (2) in 
>>> MPI_Type_free, task 1
>>> ERROR: 0032-110 Attempt to free a predefined datatype  (2) in 
>>> MPI_Type_free, task 2
>>> ERROR: 0032-110 Attempt to free a predefined datatype  (2) in 
>>> MPI_Type_free, task 3
>>> HDF5: infinite loop closing library
>>>      
>>> D,S,T,D,S,F,D,G,S,T,F,AC,FD,P,FD,P,FD,P,E,E,SL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL,FL
>>> HDF5: infinite loop closing library
>>> 
>>> The line which causes the grief is:
>>> 
>>>    CALL h5dwrite_f(dset_id, H5T_NATIVE_INTEGER, data, dimsfi, error, &
>>>                    file_space_id = filespace, mem_space_id = memspace, &
>>>                    xfer_prp = plist_id)
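>>> 
>>> (For reference, plist_id above is the dataset transfer property list. I have not 
>>> pasted that part of the program here; the tutorial presumably creates it with the 
>>> standard collective-I/O calls, roughly as in this sketch:
>>> 
>>>    USE HDF5
>>>    INTEGER(HID_T) :: plist_id   ! dataset transfer property list identifier
>>>    INTEGER        :: error
>>> 
>>>    ! create a dataset transfer property list and request collective MPI-IO
>>>    CALL h5pcreate_f(H5P_DATASET_XFER_F, plist_id, error)
>>>    CALL h5pset_dxpl_mpio_f(plist_id, H5FD_MPIO_COLLECTIVE_F, error)
>>> 
>>> so the failing call is the one that actually goes through the collective MPI-IO 
>>> path.)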
>>> 
>>> If I replace that call with the one that is commented out in the program, 
>>> it runs without a problem. That line is:
>>> 
>>> CALL h5dwrite_f(dset_id, H5T_NATIVE_INTEGER, data, dimsfi, error, &
>>>                 file_space_id = filespace, mem_space_id = memspace)
>>> 
>>> Any ideas? I definitely want to take advantage of doing collective I/O if 
>>> possible.
>>> 
>>> Leigh
>>> 
>>> --
>>> Leigh Orf
>>> Associate Professor of Atmospheric Science
>>> Department of Geology and Meteorology
>>> Central Michigan University
>>> Currently on sabbatical at the National Center for Atmospheric Research in 
>>> Boulder, CO
>>> NCAR office phone: (303) 497-8200
>>> 
>> 
> 
> 
> 
> -- 
> Rob Latham
> Mathematics and Computer Science Division
> Argonne National Lab, IL USA
> 


_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
