Hi Izaak,
I think I misunderstood your question.
The fapl call is just setting properties (comm and info) to use when
accessing the file. And since the communicator and info object are
duplicated internally, you can do whatever you want in your program with
the comm or the info object.
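A minimal sketch of that point (the file name and the dup'ed
communicator are purely illustrative): the comm passed to
H5Pset_fapl_mpio can be freed once the file is open, because HDF5 holds
its own duplicates.

#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Comm comm;
    MPI_Comm_dup(MPI_COMM_WORLD, &comm);

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, comm, MPI_INFO_NULL);

    hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    H5Pclose(fapl);

    /* Safe: HDF5 duplicated comm internally when the fapl was set. */
    MPI_Comm_free(&comm);

    H5Fclose(file);
    MPI_Finalize();
    return 0;
}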
Hi Stephane,
I could not replicate the problem you are seeing. I ran the sample
program that you sent me on an XE6 and it worked fine.
I am not aware of any limit on the number of times you can open/close a
file (at least from the HDF5 side of things). Not sure if the filesystem
you are
I tried this on hopper on /scratch2, which is also Lustre, and it works
fine for me.
I used the Cray-provided HDF5 (1.8.7).
Since I could not replicate the problem, did you try opening/closing the
file using plain MPI-I/O (sketched below):
MPI_File_open()
MPI_File_close()
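Something like this (entirely illustrative: the path, mode flags, and
iteration count are made up) would show whether the limit comes from
HDF5 or from the MPI-I/O layer / file system underneath:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Re-open and close the same shared file many times. */
    for (int i = 0; i < 10000; i++) {
        MPI_File fh;
        int err = MPI_File_open(MPI_COMM_WORLD, "test.dat",
                                MPI_MODE_CREATE | MPI_MODE_RDWR,
                                MPI_INFO_NULL, &fh);
        if (err != MPI_SUCCESS) {
            fprintf(stderr, "open failed at iteration %d\n", i);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        MPI_File_close(&fh);
    }

    MPI_Finalize();
    return 0;
}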
If you get the same problem
On 04/20/2012 08:56 AM, Rob Latham wrote:
On Mon, Apr 16, 2012 at 01:16:10PM +0200, Abul Latif wrote:
Hi Guys,
I have a problem reading HDF5 files with multiple CPUs (parallel I/O). I
use Intel compilers (version 11.0) and Open MPI. Files are read
properly with serial I/O (when they are
Hi Tobias,
On 05/08/2012 03:20 PM, Tobias Mayer wrote:
Dear all!
I encountered an issue with creating an HDF5 file in parallel. On my
local machine it works fine, but on the computation cluster it does not.
Is the file located on a file system accessible by all the nodes on your
cluster?
Hi Kshitij,
On 5/24/2012 4:32 PM, Kshitij Mehta wrote:
Hello,
I am just learning to use HDF5, and I have a question on how to use the
mpiposix driver.
Attached herewith is a simple MPI program where each process does the
following (a sketch of the same pattern appears after the list):
open the HDF5 file;
write data into a 2x2 dataset;
close the file;
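A minimal sketch of that pattern using the 1.8-era mpiposix VFD (file
and dataset names are illustrative; this VFD was removed in later HDF5
releases):

#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpiposix(fapl, MPI_COMM_WORLD, 0 /* no GPFS hints */);

    /* File creation is collective; every process participates. */
    hid_t file = H5Fcreate("posix.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    hsize_t dims[2] = {2, 2};
    hid_t space = H5Screate_simple(2, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "data", H5T_NATIVE_INT, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    int buf[4] = {0, 1, 2, 3};
    H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}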
Rob is definitely correct. The question that I should have asked as well
in the previous email:
Is there a reason why you are using the mpiposix VFD and not the mpiio VFD?
Mohamad
On 5/25/2012 11:09 AM, Rob Latham wrote:
On Fri, May 25, 2012 at 09:41:57AM -0500, Mohamad Chaarawi wrote:
implementation is not doing aggregation. If you are using ROMIO,
two-phase I/O should do this for you; the default number of aggregators
is the number of nodes (not processes). I would also try increasing the
cb_buffer_size (the default is 4 MB).
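A sketch of passing those ROMIO hints through the fapl's MPI Info
object (the hint values here are purely illustrative, not
recommendations):

#include <mpi.h>
#include <hdf5.h>

hid_t fapl_with_cb_hints(void)
{
    MPI_Info info;
    MPI_Info_create(&info);
    /* cb_buffer_size: per-aggregator collective buffer (ROMIO default 4 MB). */
    MPI_Info_set(info, "cb_buffer_size", "16777216");
    /* cb_nodes: number of aggregator processes. */
    MPI_Info_set(info, "cb_nodes", "8");

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);
    MPI_Info_free(&info); /* HDF5 keeps its own duplicate */
    return fapl;
}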
Thanks,
Mohamad
On May 30, 2012 8:19 AM, Mohamad Chaarawi chaar
with the maximum available for the stripe count. You can try and
experiment with the stripe size; maybe 32 MB would be good.
Increasing ROMIO's cb_buffer_size through an MPI Info hint is also worth
trying.
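On the striping side, ROMIO also accepts Lustre layout hints when the
file is created; a sketch (the stripe count and 32 MB stripe size are
illustrative and site-specific):

#include <mpi.h>
#include <hdf5.h>

hid_t fapl_with_lustre_hints(void)
{
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "striping_factor", "16");      /* stripe count */
    MPI_Info_set(info, "striping_unit", "33554432");  /* 32 MB stripe size */

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);
    MPI_Info_free(&info);
    return fapl;
}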
Mohamad
On Wed, May 30, 2012 at 1:32 PM, Mohamad Chaarawi [via hdf-forum]
[hidden
Yes, Mark is correct. Your program is erroneous.
The current interface for reading and writing to datasets (collectively)
requires all processes to make the call for each read/write
operation. You can correct your program by having each process
participate with a NULL selection in the
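A sketch of what that looks like (the dataset, dataspaces, and the
have_data flag are assumed to exist in the caller):

#include <hdf5.h>

void collective_write(hid_t dset, hid_t filespace, hid_t memspace,
                      const hsize_t *start, const hsize_t *count,
                      int have_data, const double *buf)
{
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    if (have_data) {
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL,
                            count, NULL);
    } else {
        /* This rank has nothing to write: join the collective call
         * with an empty selection. */
        H5Sselect_none(filespace);
        H5Sselect_none(memspace);
    }
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);
    H5Pclose(dxpl);
}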
be?
Thanks,
Håkon
On 14. okt. 2012 17:02, Mohamad Chaarawi wrote:
Hi Håkon,
On 10/14/2012 3:54 AM, Håkon Strandenes wrote:
Thanks, I had a suspicion about that. Some more problems have appeared
(H5Dcreate2 freezes/hangs after a few writes), but I will try to debug
some more before I ask you
Hi Peter,
The problem does sound strange.
I do not understand why file locking helped reduce errors. I thought you
said each process writes to its own file, so locking the file or
having one process manage the reads/writes should not matter.
Is it possible you could send me a
As far as the workflow goes, a scheduler provides the basic h5 file with
all the parameters and tells the workers to load this file and then put
their measurements in. So they enlarge the file as time goes by.
Have a nice day, Peter
On 11/19/2012 03:36 PM, Mohamad Chaarawi wrote:
Hi Peter
On Dec 11, 2012, at 9:33 PM, Suman Vajjala suman.g...@gmail.com wrote:
Hi,
Thank you for the replies. The problem is that for creating a dataset
using H5Dcreate, every processor has to know that dataset's size. Is there a
way of doing it without the processors knowing the data
process (H5Dopen is not collective), then access the dataset for each
process independently as you were doing.
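A sketch of that independent access (the file handle, dataspaces, and
dataset name are assumed):

#include <hdf5.h>

void independent_read(hid_t file, hid_t memspace, hid_t filespace,
                      double *buf)
{
    hid_t dset = H5Dopen2(file, "data", H5P_DEFAULT);
    /* Default transfer property list => independent I/O. */
    H5Dread(dset, H5T_NATIVE_DOUBLE, memspace, filespace,
            H5P_DEFAULT, buf);
    H5Dclose(dset);
}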
Mohamad
On Dec 11, 2012, at 10:19 PM, Mohamad Chaarawi chaar...@hdfgroup.org wrote:
On Dec 11, 2012, at 9:33 PM, Suman Vajjala suman.g...@gmail.com wrote:
Hi,
Thank you
On Dec 11, 2012, at 10:26 PM, Mohamad Chaarawi chaar...@hdfgroup.org wrote:
Of course, a much simpler way to do this is to create the file using
MPI_COMM_SELF, create the datasets independently as you did (since
collective on COMM_SELF is, well, independent), and close the file.
Ahh wait
Hi Bin,
The VOL work is not part of any release yet and is still sitting in a
separate branch. We are aiming to release it with 1.10.
You can access the branch from:
http://svn.hdfgroup.uiuc.edu/hdf5/features/vol
I have to warn you that this is not a release branch, but a development
branch, so
Hi Robert,
This is a known limitation in Parallel HDF5. It is really a limitation
inside ROMIO, which should be part of your MPI library: you cannot
write more than 2 GB worth of data in one call. I do not have a
specific timetable for when this will be fixed.
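Until then, one possible workaround (a sketch, assuming a 1-D dataset
of doubles whose file dataspace is passed in) is to split a large write
into slabs that each stay under 2 GB:

#include <hdf5.h>

void write_in_slabs(hid_t dset, hid_t filespace, const double *buf,
                    hsize_t total /* element count */)
{
    /* Keep each H5Dwrite strictly below 2 GB of data. */
    const hsize_t max_elems = ((hsize_t)1 << 31) / sizeof(double) - 1;

    for (hsize_t off = 0; off < total; off += max_elems) {
        hsize_t count = (total - off < max_elems) ? total - off : max_elems;
        hid_t memspace = H5Screate_simple(1, &count, NULL);
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &off, NULL,
                            &count, NULL);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace,
                 H5P_DEFAULT, buf + off);
        H5Sclose(memspace);
    }
}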
more details about
you need (NULL if you don't need
any) in the set_vol call.
Thanks,
Mohamad
Thanks.
Bin
--
365 http://gongyi.163.com/love365
At 2013-01-08 23:24:14,Mohamad Chaarawi chaar...@hdfgroup.org wrote:
Hi Bin,
The VOL work is not part of any release yet and still sitting
On 4/23/2013 11:19 AM, Maxime Boissonneault wrote:
Hi,
I am trying to write a ~24 GB array of floats to a file with
PHDF5. I am running on a Lustre PFS with IB networking. I am running
the software on 128 processes, spread amongst 16 nodes of 8 cores
each. The MPI implementation is
I managed to replicate the issue.
We test only with the default PGI compilers, which work fine.
However when I switch to the Cray compilers, I see the same issue you
guys are seeing.
We'll add it to our bug tracking system. Thanks for reporting!
Mohamad
On 4/26/2013 9:47 AM, Albert
Oh, and please follow Albert's instructions about giving us more
information about your system and the build failure.
More information = Better fix :-)
Thanks,
Mohamad
On 4/26/2013 9:56 AM, Mohamad Chaarawi wrote:
I managed to replicate the issue.
We test only with the default PGI compilers
Hi Roc,
You need to provide a little more information:
* platform you are using
* MPI library and version you are using
* config.log
* configure make output
* make check output
Thanks,
Mohamad
On 5/1/2013 10:54 AM, Roc Wang wrote:
I ran the test program after I installed HDF5. Most of
Albert,
On 5/7/2013 12:50 PM, Albert Cheng wrote:
Since the configure process is executed on the front end, which is a
Linux system, configure naturally thinks it is working on a Linux
system and therefore goes to the Linux configure settings. I think the
right solution is to tell configure
Hi Bin,
This should be fixed in the 1.8.11 release (May 15th).
You can actually get a copy of the unofficial source code now:
http://www.hdfgroup.org/ftp/HDF5/releases/hdf5-1.8.11/src/
Thanks,
Mohamad
On 5/10/2013 10:26 AM, goon83 wrote:
Hi all,
The work I am doing needs to store big
On 5/10/2013 10:39 AM, goon83 wrote:
Hi Mohamad,
Thanks for your information! Does HDF5 1.8.11 support VOL?
Because we need to stay with a version that has VOL support.
Ah, no. The VOL branch is not part of any release.
It is just a branch off the trunk that I maintain. I'm not