data is
laid out.
==rob
--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA
they think are needed.
==rob
-Mehmet
On May 6, 2013, at 10:47 AM, Rob Latham wrote:
On Fri, Apr 19, 2013 at 12:47:40PM -0400, Mehmet Belgin wrote:
Hello everyone,
We cannot use parallel HDF5 on any of our systems. The processes either
crash or hang (and they work with sequential
to some resources that may help?
I presume you've already found
http://hdf5.net/
The archives have some C# discussions, too, most recently November
2012.
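For the crash-or-hang symptom described above, a minimal stand-alone
test program can help separate MPI trouble from HDF5 trouble: every
rank opens the same file collectively and then closes it. A sketch in
C (the file name is arbitrary):

    #include <hdf5.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        /* every rank passes the same communicator to the MPI-IO driver */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t fid = H5Fcreate("smoke.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        if (fid < 0) MPI_Abort(MPI_COMM_WORLD, 1);
        H5Fclose(fid);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }

If even this hangs, the problem is below HDF5 (MPI or the file
system); if it passes, the application's I/O pattern is the next
suspect.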
==rob
with recent (gpfs-3.4 or gpfs-3.5)
versions of GPFS. I suspect they still work (the GPFS-specific
ioctls, I mean: I'm sure HDF5's implementation of them is fine), but
I would like to hear others' experiences.
==rob
are talking about a parallel file system there!
==rob
of the dataset to write and read from?
If you use MPI-IO, doing things this way gets you a lot of
optimizations. I imagine one day you'll want to scale up beyond 4
processors.
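A sketch of that pattern in C, assuming a 1-D dataset split into one
contiguous slab per rank (the dataset name and sizes are invented):

    #include <hdf5.h>
    #include <mpi.h>
    #define CHUNK 1024   /* elements per rank, for illustration */

    void write_my_slab(hid_t fid, int rank, int nprocs, const double *buf)
    {
        hsize_t dims[1]  = { (hsize_t)nprocs * CHUNK };
        hsize_t count[1] = { CHUNK };
        hsize_t start[1] = { (hsize_t)rank * CHUNK };

        hid_t filespace = H5Screate_simple(1, dims, NULL);
        hid_t dset = H5Dcreate(fid, "data", H5T_NATIVE_DOUBLE, filespace,
                               H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        /* each rank selects only its own piece of the dataset */
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL,
                            count, NULL);
        hid_t memspace = H5Screate_simple(1, count, NULL);

        /* a collective transfer lets the MPI-IO layer optimize */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

        H5Pclose(dxpl); H5Sclose(memspace);
        H5Sclose(filespace); H5Dclose(dset);
    }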
-POSIX?
==rob
I did this a few years back: I read (H5Dread) a value out of a
dataset, then watched the output with strace to get the offset. Not
pretty, but it worked for what I wanted to do.
Quincey said something more hdf5-native would be interesting, pending
funding to work on it.
==rob
to do.
h5perf has a ton of features, though, so you can probably find a
configuration that comes close.
==rob
of consistency semantics you require for a
parallel I/O workload.
==rob
of calls are not at
all equivalent statements.
I think you need the call H5Sselect_none so the do-nothing workers
can still participate in this collective routine, even if they have no
I/O to contribute.
http://www.hdfgroup.org/HDF5/doc/RM/RM_H5S.html#Dataspace-SelectNone
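A sketch of what that looks like, assuming dset, memspace, filespace,
and dxpl are already set up as usual and have_data is a per-rank flag:

    if (have_data) {
        /* ranks with data select their piece of the file */
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL,
                            count, NULL);
    } else {
        /* ranks with nothing to write still make the collective
         * call, but with an empty selection on both sides */
        H5Sselect_none(filespace);
        H5Sselect_none(memspace);
    }
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);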
==rob
process's decomposition as a puzzle piece, all the
puzzle pieces would fit together to be the full 3D array. Then, your
MPI-IO library can work some magic.
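For the curious, the magic looks roughly like this underneath: the
MPI-IO layer describes each rank's puzzle piece of the global array
as a subarray type. A sketch with invented sizes (this is what HDF5
does for you; you would not write it yourself):

    /* rank 0's 64^3 corner of a 128^3 global array */
    int gsizes[3] = {128, 128, 128};  /* the full 3D array      */
    int lsizes[3] = { 64,  64,  64};  /* this rank's piece      */
    int starts[3] = {  0,   0,   0};  /* where the piece begins */
    MPI_Datatype piece;
    MPI_Type_create_subarray(3, gsizes, lsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, &piece);
    MPI_Type_commit(&piece);
    /* MPI_File_set_view() with 'piece' then lets one collective
     * write deliver the whole sub-block */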
==rob
to make
the call.
Sometimes, if the I/O is always on a certain set of processors,
applications make a sub-communicator and pass that into HDF5. You
probably do not need to worry about that, though.
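If you ever do need it, the trick is only a few lines. A sketch,
where do_io is a per-rank flag you would compute yourself:

    MPI_Comm io_comm;
    /* non-writers pass MPI_UNDEFINED and get MPI_COMM_NULL back */
    MPI_Comm_split(MPI_COMM_WORLD, do_io ? 0 : MPI_UNDEFINED,
                   rank, &io_comm);
    if (io_comm != MPI_COMM_NULL) {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, io_comm, MPI_INFO_NULL);
        /* ... open the file with fapl; only io_comm ranks
         * participate in the file's collective operations ... */
        H5Pclose(fapl);
    }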
==rob
already. After exiting NetCDF define mode, the size of the
attributes and objects will be known. NetCDF callers are familiar with
the potential pain of re-entering define mode.
==rob
of the behavior, or you can record every
operation (and potentially perturb the results).
The Argonne 'darshan' project might give enough of a big-picture
summary, but it was designed foremost to be lightweight, not
exhaustive:
http://press.mcs.anl.gov/darshan/
file per core.
Lustre is kind of a pain in the neck with regard to concurrent I/O.
Please let me know the platform and MPI implementation you are using
and I'll tell you what you need to do to get good performance out of
it.
==rob
,
essentially, (rank,call,time,duration).
==rob
give you the offset information. It wraps fseek(3), but HDF5
using MPI-IO is probably going to call lseek(2), lseek64(2), or some
other seek-like system call.
IPM is pretty close, giving the file, size, and a timestamp all tucked
into a file-per-rank.
==rob
/O in HDF5.
If you are on a Cray, be sure to use MPT-3.2 or newer.
If you are on a Linux cluster, use MPICH2-1.3.1 or newer.
==rob
data sieving at the MPI-IO layer and get around this
problem.
Through an HDF5 property list you can set the MPI-IO hints
romio_ds_read and romio_ds_write to "disable".
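A sketch of setting those two hints (the hint names are
ROMIO-specific and are simply ignored by other MPI-IO
implementations):

    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_ds_read",  "disable");
    MPI_Info_set(info, "romio_ds_write", "disable");

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);
    /* ... H5Fcreate or H5Fopen with this fapl ... */
    MPI_Info_free(&info);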
==rob
, at the MPI-IO level, independent I/O. Here again
you do so with MPI-IO tuning parameters. We can go into more detail
later, if it's even needed.
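One common knob, assuming ROMIO underneath: disabling collective
buffering makes collective HDF5 calls execute as independent I/O at
the file-system level, while the application code stays collective.
A sketch:

    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_cb_read",  "disable");
    MPI_Info_set(info, "romio_cb_write", "disable");
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);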
==rob
servers, faced with the HDF5-generated workload, died a fiery
death.
Can you bring this up on the PVFS mailing list? I think we're dealing
with a PVFS issue here, and not an HDF5 defect.
==rob
is protected by #ifdef __USE_GNU.
For the sake of portability, we try to avoid non-standard flags in MPI
land, but this optimization is easy and presumably worthwhile, so I'll
ask our hard-nosed portability guys how we can use this.
==rob
. Then the library,
whenever possible, will basically do the sorts of optimizations you're
thinking about. You do have to, via property lists, explicitly enable
MPI-IO support and collective I/O.
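Concretely, those are the two property-list steps (a sketch; the
file name is arbitrary):

    /* MPI-IO support: set at file-open time via the file access list */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t fid = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* collective I/O: set per transfer via the dataset transfer list */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    /* ... pass dxpl to H5Dwrite / H5Dread ... */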
==rob
as if you asked HDF5 to do the compression
for you: I guess you'd have to find a stream-based compression
algorithm (gzip?) that can work on concatenated blocks, and annotate
the dataset with the compression algorithm you selected.
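For comparison, the usual single-writer route, where HDF5 itself runs
gzip, is just a dataset-creation property on a chunked dataset. A
sketch with invented sizes (fid is an open file):

    hsize_t dims[1]  = {100000};
    hsize_t chunk[1] = {  4096};
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);   /* compression requires chunking */
    H5Pset_deflate(dcpl, 6);        /* gzip, level 6 */
    hid_t dset = H5Dcreate(fid, "compressed", H5T_NATIVE_FLOAT, space,
                           H5P_DEFAULT, dcpl, H5P_DEFAULT);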
==rob
PVFS-specific optimizations.
You could run a one-server PVFS instance on your NFS server.
==rob
parallelism, then maybe one can run it in gdb and collect a backtrace
of all the processes? (mpiexec -np 8 xterm -e gdb ...)
==rob
, back to the
low-level API for me, then.
==rob
hyperslabs in HDF5 (in both sequential and parallel modes).
Oh, I don't have any better suggestions. I started to think of some
changes that would make it more parallel-I/O friendly and then I
quickly ended up right back at the regular HDF5 parallel I/O
interface.
==rob
make: *** [h5_parallel_write.o] Error 1
./configure --enable-fortran
(the --enable-parallel is OK to keep, but it is enabled automatically
when HDF5 sees you're building with an MPI compiler)
==rob
that works with those three versions, you could likely get
away with that.
==rob