Re: [Hdf-forum] Highly optimized and Efficient implementation of unstructured data using HDF5

2013-05-13 Thread Rob Latham
data is laid out. ==rob

Re: [Hdf-forum] ADIOI Lock problems on NFS and Panasas

2013-05-06 Thread Rob Latham
they think are needed. ==rob -Mehmet On May 6, 2013, at 10:47 AM, Rob Latham wrote: On Fri, Apr 19, 2013 at 12:47:40PM -0400, Mehmet Belgin wrote: Hello everyone, We cannot use parallel HDF5 on any of our systems. The processes either crash or hang (and they work with sequential

Re: [Hdf-forum] C# and HDF5

2013-04-08 Thread Rob Latham
to some resources that may help? I presume you've already found http://hdf5.net/ The archives have some C# discussions, too, most recently November 2012 ==rob

[Hdf-forum] HDF5 and GPFS optimizations

2013-03-01 Thread Rob Latham
with recent (gpfs-3.4 or gpfs-3.5) versions of GPFS. I suspect they still work (the gpfs-specific IOCTLs, I mean: I'm sure HDF5's implementation of them is fine), but would like to hear others' experiences. ==rob

Re: [Hdf-forum] RESTful HDF5

2013-01-24 Thread Rob Latham
are talking about a parallel file system there! ==rob

Re: [Hdf-forum] Example using mpiposix

2012-05-25 Thread Rob Latham
of the dataset to write and read from? If you use MPI-IO, doing things this way gets you a lot of optimizations. I imagine one day you'll want to scale up beyond 4 processors.

Re: [Hdf-forum] HDF5 error with parallel IO

2012-04-20 Thread Rob Latham
-POSIX? ==rob

Re: [Hdf-forum] Getting the file byte-offset for selection

2012-02-14 Thread Rob Latham
I did this a few years back: I read (H5Dread) a value out of a dataset, then watched the output with strace to get the offset. Not pretty, but it worked for what I wanted to do. Quincey said something more HDF5-native would be interesting, pending funding to work on it. ==rob
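A minimal sketch of that trick, assuming a 1-D double dataset; the file name, dataset path, and element index are placeholders. Run the program under strace (e.g. strace -e trace=lseek,read ./probe) and the seek preceding the read reveals the element's byte offset:

    #include <hdf5.h>

    /* Read one element so its file offset shows up in the strace output.
     * "data.h5", "/dataset", and index 42 are illustrative placeholders. */
    int main(void)
    {
        hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t dset = H5Dopen2(file, "/dataset", H5P_DEFAULT);

        hid_t filespace = H5Dget_space(dset);
        hsize_t coord = 42;                    /* element of interest */
        H5Sselect_elements(filespace, H5S_SELECT_SET, 1, &coord);

        hsize_t one = 1;
        hid_t memspace = H5Screate_simple(1, &one, NULL);
        double value;
        H5Dread(dset, H5T_NATIVE_DOUBLE, memspace, filespace,
                H5P_DEFAULT, &value);

        H5Sclose(memspace);
        H5Sclose(filespace);
        H5Dclose(dset);
        H5Fclose(file);
        return 0;
    }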

Re: [Hdf-forum] HDF5 benchmarks

2012-01-23 Thread Rob Latham
to do. h5perf has a ton of features, though, so you can probably find a configuration that comes close. ==rob

Re: [Hdf-forum] HDF5 across multiple nodes/disks

2012-01-06 Thread Rob Latham
of consistency semantics you require for a parallel I/O workload. ==rob

Re: [Hdf-forum] rsync with hdf5 files

2011-12-21 Thread Rob Latham

Re: [Hdf-forum] Does PHDF5 force all the processes to write the same quantity of information?

2011-09-12 Thread Rob Latham
of calls are not at all equivalent statements. I think you need the call H5Sselect_none so the do-nothing workers can still participate in this collective routine, even if they have no I/O to contribute. http://www.hdfgroup.org/HDF5/doc/RM/RM_H5S.html#Dataspace-SelectNone ==rob
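A minimal sketch of that pattern, assuming a 1-D double dataset with one slot per contributing rank; the variable names are illustrative, not from the thread. Ranks with nothing to write select the empty set on both dataspaces and still make the collective call:

    #include <hdf5.h>

    /* Every rank calls H5Dwrite collectively; ranks with no data select
     * the empty set so the collective still completes. */
    void write_collectively(hid_t dset, hsize_t my_offset, int have_data,
                            double value)
    {
        hid_t filespace = H5Dget_space(dset);
        hsize_t one = 1;
        hid_t memspace = H5Screate_simple(1, &one, NULL);

        if (have_data) {
            H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &my_offset,
                                NULL, &one, NULL);
        } else {
            H5Sselect_none(filespace);  /* participate with zero elements */
            H5Sselect_none(memspace);
        }

        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, &value);

        H5Pclose(dxpl);
        H5Sclose(memspace);
        H5Sclose(filespace);
    }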

Re: [Hdf-forum] About Parallel HDF5 Reading for a 3D dataset

2011-09-12 Thread Rob Latham
process's decomposition as a puzzle piece: all the puzzle pieces would fit together to form the full 3D array. Then your MPI-IO library can work some magic. ==rob
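A minimal sketch of the puzzle-piece idea, assuming a block decomposition along the slowest-varying dimension with dims[0] divisible by the number of ranks; all names are illustrative:

    #include <hdf5.h>

    /* Each rank selects one contiguous slab of the 3-D dataset; the
     * union of all selections tiles the full array, so MPI-IO can merge
     * the requests into large contiguous accesses. */
    void select_my_piece(hid_t filespace, const hsize_t dims[3],
                         int rank, int nprocs)
    {
        hsize_t start[3] = { (hsize_t)rank * (dims[0] / nprocs), 0, 0 };
        hsize_t count[3] = { dims[0] / nprocs, dims[1], dims[2] };
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL,
                            count, NULL);
    }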

Re: [Hdf-forum] Does PHDF5 force all the processes to write the same quantity of information?

2011-09-12 Thread Rob Latham
to make the call. Sometimes, if the I/O is always on a certain set of processors, applications make a sub-communicator and pass that into HDF5. You probably do not need to worry about that, though. ==rob
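A minimal sketch of the sub-communicator idea, assuming the first n_writers ranks do all the I/O; the function and variable names are hypothetical:

    #include <mpi.h>
    #include <hdf5.h>

    /* Put only the I/O ranks in a sub-communicator and hand that to the
     * file-access property list; the other ranks never open the file. */
    hid_t open_on_writers(const char *path, int n_writers)
    {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Comm io_comm;
        int color = (rank < n_writers) ? 0 : MPI_UNDEFINED;
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &io_comm);
        if (io_comm == MPI_COMM_NULL)
            return -1;                    /* non-writer: skip the file */

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, io_comm, MPI_INFO_NULL);
        hid_t file = H5Fopen(path, H5F_ACC_RDWR, fapl);
        H5Pclose(fapl);
        return file;
    }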

Re: [Hdf-forum] Attributes vs Datasets

2011-05-04 Thread Rob Latham
already. After exiting NetCDF define mode, the size of the attributes and objects will be known. NetCDF callers are familiar with the potential pain of re-entering define mode. ==rob

Re: [Hdf-forum] Tracing pHDF5's MPI-IO calls

2011-04-06 Thread Rob Latham
of the behavior, or you can record every operation (and potentially perturb the results). The Argonne 'darshan' project might give enough of a big-picture summary, but it was designed foremost to be lightweight, not exhaustive: http://press.mcs.anl.gov/darshan/

Re: [Hdf-forum] H5FD_MPIO_INDEPENDENT vs H5FD_MPIO_COLLECTIVE

2011-04-01 Thread Rob Latham
file per core. Lustre is kind of a pain in the neck with regard to concurrent I/O. Please let me know the platform and MPI implementation you are using and I'll tell you what you need to do to get good performance out of it. ==rob

Re: [Hdf-forum] Tracing pHDF5's MPI-IO calls

2011-03-04 Thread Rob Latham
, essentially, (rank,call,time,duration). ==rob

Re: [Hdf-forum] Tracing pHDF5's MPI-IO calls

2011-03-04 Thread Rob Latham
give you the offset information. It wraps fseek(3), but HDF5 using MPI-IO is probably going to call lseek(2), lseek64(2), or some other seek-like system call. IPM is pretty close, giving the file, size, and a timestamp all tucked into a file-per-rank. ==rob

Re: [Hdf-forum] File locking of parallel HDF5 on lustre without file locking support

2011-02-22 Thread Rob Latham
I/O in HDF5. If you are on a Cray, be sure to use MPT-3.2 or newer. If you are on a Linux cluster, use MPICH2-1.3.1 or newer. ==rob

Re: [Hdf-forum] File locking of parallel HDF5 on lustre without file locking support

2011-02-22 Thread Rob Latham
data sieving at the MPI-IO layer and get around this problem. Through an HDF5 property list you can set the MPI-IO hints romio_ds_read to disable and romio_ds_write to disable. ==rob
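A minimal sketch of that suggestion; the hint names come from the message, while the file name and function name are illustrative:

    #include <mpi.h>
    #include <hdf5.h>

    /* Pass the ROMIO hints through the file-access property list to
     * turn off data sieving, which is what triggers the locking. */
    hid_t open_without_sieving(const char *path)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "romio_ds_read", "disable");
        MPI_Info_set(info, "romio_ds_write", "disable");

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);
        hid_t file = H5Fopen(path, H5F_ACC_RDWR, fapl);

        H5Pclose(fapl);
        MPI_Info_free(&info);
        return file;
    }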

Re: [Hdf-forum] Poor write performance with 30,000 MPI ranks (pHDF5)

2011-02-21 Thread Rob Latham
, at the MPI-IO level, independent I/O. Here again you do so with MPI-IO tuning parameters. We can go into more detail later, if it's even needed. ==rob
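The message does not name the tuning parameters; one common way to get independent I/O at the MPI-IO level is to disable ROMIO's collective buffering via hints, sketched here under that assumption:

    #include <mpi.h>
    #include <hdf5.h>

    /* Standard ROMIO hints: romio_cb_write / romio_cb_read control
     * collective buffering (two-phase I/O). Disabling them makes
     * collective calls execute as independent operations. */
    hid_t open_with_independent_mpiio(const char *path)
    {
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "romio_cb_write", "disable");
        MPI_Info_set(info, "romio_cb_read", "disable");

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);
        hid_t file = H5Fopen(path, H5F_ACC_RDWR, fapl);

        H5Pclose(fapl);
        MPI_Info_free(&info);
        return file;
    }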

Re: [Hdf-forum] parallel test failure of HDF5 1.8.6

2011-02-21 Thread Rob Latham
servers, faced with the HDF5-generated workload, died a fiery death. Can you bring this up on the PVFS mailing list? I think we're dealing with a PVFS issue here, and not an HDF5 defect. ==rob

Re: [Hdf-forum] round-robin (not parallel) access to single hdf5 file

2010-12-14 Thread Rob Latham
is protected by #ifdef __USE_GNU. For the sake of portability, we try to avoid non-standard flags in MPI land, but this optimization is easy and presumably worthwhile, so I'll ask our hard-nosed portability guys how we can use this. ==rob

Re: [Hdf-forum] round-robin (not parallel) access to single hdf5 file

2010-12-09 Thread Rob Latham
Then the library, whenever possible, will basically do the sorts of optimizations you're thinking about. You do have to, via property lists, explicitly enable MPI-IO support and collective I/O. ==rob
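A minimal sketch of those two property lists; the file name is a placeholder:

    #include <mpi.h>
    #include <hdf5.h>

    /* One property list opens the file through the MPI-IO driver; a
     * second requests collective transfers on each H5Dwrite/H5Dread. */
    void collective_io_setup(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("output.h5", H5F_ACC_TRUNC,
                               H5P_DEFAULT, fapl);

        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
        /* ... pass dxpl to every H5Dwrite/H5Dread ... */

        H5Pclose(dxpl);
        H5Fclose(file);
        H5Pclose(fapl);
    }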

Re: [Hdf-forum] round-robin (not parallel) access to single hdf5 file

2010-12-09 Thread Rob Latham
as if you asked HDF5 to do the compression for you: I guess you'd have to find a stream-based compression algorithm (gzip?) that can work on concatenated blocks, and annotate the dataset with the compression algorithm you selected. ==rob

Re: [Hdf-forum] Problem with parallel compiling

2010-11-08 Thread Rob Latham

Re: [Hdf-forum] problems with parallel I/O

2010-10-27 Thread Rob Latham
PVFS-specific optimizations. You could run a one-server PVFS instance on your NFS server. ==rob

Re: [Hdf-forum] problems with parallel I/O

2010-10-27 Thread Rob Latham
parallelism, then maybe one can run it in gdb and collect a backtrace of all the processors? (mpiexec -np 8 xterm -e gdb ...) ==rob

[Hdf-forum] hl_region and parallel I/O

2010-09-03 Thread Rob Latham
, back to the low-level API for me, then : ==rob

Re: [Hdf-forum] hl_region and parallel I/O

2010-09-03 Thread Rob Latham
hyperslabs in HDF5 (in both sequential and parallel modes). Oh, I don't have any better suggestions. I started to think of some changes that would make it more parallel-I/O friendly and then I quickly ended up right back at the regular HDF5 parallel I/O interface. ==rob

Re: [Hdf-forum] Compling Flash on hdf5-1.8.4-patch1

2010-06-14 Thread Rob Latham
: *** [h5_parallel_write.o] Error 1

Re: [Hdf-forum] Problem with compiling HDF5 1.6 for parallel I/O

2010-06-07 Thread Rob Latham
./configure --enable-fortran (the --enable-parallel is OK to keep, but it is enabled when HDF5 sees you're building with an MPI compiler) ==rob

Re: [Hdf-forum] Link error building Parallel HDF5 with --enable-shared

2010-02-24 Thread Rob Latham
that works with those three versions, you could likely get away with that. ==rob