Hi,

I noticed something strange today. I've been using HDF5 with parallel
processes, and when I increase the dataset size, the library seems to
slow down.
Currently, I have a fixed number of processes writing data from my
simulation. Each process writes 1 MB concurrently to a contiguous but
exclusive location in the 1D dataset. With more than 1 billion
elements (for instance), the library becomes really, really slow
(more than 20 times slower than a plain pwrite would take), whereas
nothing noticeable happens when writing, say, 1 million cells.
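For reference, here is a minimal sketch of the write pattern I'm
describing (the file name, datatype, and per-rank element count are
illustrative, not my actual code): each rank opens the file with the
MPI-IO driver, selects its own exclusive hyperslab, and issues an
independent write.

#include <hdf5.h>
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* ~1 MB of doubles per rank (illustrative size). */
    const hsize_t elems_per_rank = (1 << 20) / sizeof(double);
    hsize_t total = elems_per_rank * (hsize_t)nprocs;

    /* Open the file with the MPI-IO file driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* One contiguous 1D dataset shared by all ranks. */
    hid_t filespace = H5Screate_simple(1, &total, NULL);
    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, filespace,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Each rank selects its own exclusive, contiguous hyperslab. */
    hsize_t offset = (hsize_t)rank * elems_per_rank;
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &offset, NULL,
                        &elems_per_rank, NULL);
    hid_t memspace = H5Screate_simple(1, &elems_per_rank, NULL);

    /* Independent transfer: each rank just writes its own slice. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_INDEPENDENT);

    double *buf = malloc(elems_per_rank * sizeof(double));
    for (hsize_t i = 0; i < elems_per_rank; ++i)
        buf[i] = (double)(offset + i);

    H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

    free(buf);
    H5Pclose(dxpl);
    H5Sclose(memspace);
    H5Sclose(filespace);
    H5Dclose(dset);
    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}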
Is there something fancy going on? I'm trying to use the simplest
part of the library, so that the H5Dwrite call can be more or less
reduced to a pwrite call, but something is wrong. Should I set
something to tell HDF5 that it shouldn't do anything except write
data from the calling process to the filesystem (Lustre in my case,
with the MPI library configured with Lustre support)?

Regards,

Matthieu Brucher
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
