rogez wrote:
> 
> Hello,
> 
> I'm new to HDF and have read tutorials and some parts of the user's guide
> and reference manual but I can't find the best practice to implement the
> following structure.
> 
> Basically, I need two datasets:
> 
>     one defining my result data (a one-dimensional array of a complex
> compound datatype);
>     one indexing this data in 3D space (a three-dimensional dataset
> containing an array of indices or references to items of the previous
> dataset).
> 
> These data will approach two or three hundred gigabytes.
> It will be generated on a computing grid with thousands of nodes, each one
> writing to a file.
> I would like to avoid using MPI but it is not mandatory.
> 
> Is there a way to access data through the second dataset (by passing a 3D
> coordinate) and retrieve, in a single call, all the matching data in the
> first dataset, even though it is dispatched across the multiple generated
> files?
> 
> I've seen that I can create a wrapper file that stores external links to
> each individual file and then traverse all the links to browse the whole
> dataset, but I wonder whether a ready-to-use solution for this already
> exists...
> 
> Thanks a lot in advance,
>  
> Yves
> 

I've looked at the multi virtual file driver, but it seems to split a file
by kind of data (metadata, raw data, etc.) rather than splitting the raw
data itself across several files.

Is that right?
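For what it's worth, the coordinate-to-record lookup described in the question can be sketched independently of HDF5. This is a minimal pure-Python sketch, assuming the 1D compound dataset is partitioned into files of a fixed size; RECORDS_PER_FILE, the dict-based 3D index, and the function names are all illustrative stand-ins, not HDF5 API:

```python
# Sketch of the two-dataset scheme: a 3D index maps a coordinate to a
# global record index, which is then decoded into (file number, local
# index) because the 1D compound dataset is split across many files.
# All names here are hypothetical; in HDF5 the "index_3d" dict would be
# a 3D integer dataset and each file pair would sit behind an external
# link in a wrapper file.

RECORDS_PER_FILE = 1000  # assumed fixed partition size per node/file


def decode(global_index):
    """Split a global record index into (file_no, local_index)."""
    return divmod(global_index, RECORDS_PER_FILE)


# Toy stand-in for the 3D index dataset: coordinate -> global index.
index_3d = {(0, 0, 0): 0, (1, 2, 3): 1234, (4, 4, 4): 2500}


def lookup(coord):
    """Resolve a 3D coordinate to the file and offset holding its record."""
    return decode(index_3d[coord])


print(lookup((1, 2, 3)))  # (1, 234): file 1, record 234 within that file
```

With this layout a wrapper file only needs one external link per node file; the lookup itself stays a single indexed read into the 3D dataset followed by one read in the linked file.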

--
View this message in context: 
http://hdf-forum.184993.n3.nabble.com/Access-dataset-divided-into-several-files-tp3369462p3370407.html
Sent from the hdf-forum mailing list archive at Nabble.com.

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
