Hi Elena,
For our direct numerical simulations of compressible and hypersonic
turbulence we take a similar domain decomposition approach to Rob's: the
3-D spatial data array is subdivided over a 2-D domain decomposition across
the computational elements. Each element receives a wall-normal column of
data that is i by j by k in size. If I, J, K are the dimensions of the
entire flow field, then i ~ I/N, j ~ J/M and k = K, where N by M is the
processor grid. Here the K index runs along the wall-normal coordinate.
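For concreteness, the per-rank extents come out of something like the
sketch below. This is only illustrative: the global sizes are made up, the
N by M grid is chosen by MPI_Dims_create, and remainder cells are ignored.

#include <mpi.h>
#include <stdio.h>

/* Hypothetical global grid sizes (streamwise I, spanwise J, wall-normal K). */
#define I_GLOBAL 1024
#define J_GLOBAL 512
#define K_GLOBAL 384

int main(int argc, char **argv)
{
    int nprocs, rank, dims[2] = {0, 0}, periods[2] = {0, 0}, coords[2];
    MPI_Comm cart;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Let MPI pick an N x M processor grid for the 2-D decomposition. */
    MPI_Dims_create(nprocs, 2, dims);
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);
    MPI_Comm_rank(cart, &rank);
    MPI_Cart_coords(cart, rank, 2, coords);

    /* Each rank owns a full wall-normal column: i ~ I/N, j ~ J/M, k = K.
       (Remainder cells are ignored here for brevity.) */
    int i_local = I_GLOBAL / dims[0];
    int j_local = J_GLOBAL / dims[1];
    int k_local = K_GLOBAL;

    printf("rank %d at (%d,%d) owns %d x %d x %d\n",
           rank, coords[0], coords[1], i_local, j_local, k_local);

    MPI_Finalize();
    return 0;
}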

The wall-normal direction is the stiffest direction, and sometimes it is
treated implicitly while the other two directions are treated explicitly.
When a strong radiating shock layer is present, additional radiative
physics must be solved within each of these columns, which can be
implemented in an embarrassingly parallel fashion depending on how the
radiation is modeled.
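Schematically, that per-column work is just an independent double loop over
(i, j), something like the sketch below; solve_column is a placeholder, not
our actual solver.

/* Placeholder for the real per-column work: the implicit wall-normal solve
   and, when present, the radiative update, acting on the (strided) column
   at (i, j) inside the I-J-K ordered array q. */
void solve_column(double *q, int i, int j,
                  int i_local, int j_local, int k_local);

/* Every (i, j) column is independent of the others, so this loop nest is
   embarrassingly parallel, e.g. one column (or block of columns) per
   thread or task. */
void update_all_columns(double *q, int i_local, int j_local, int k_local)
{
    for (int j = 0; j < j_local; ++j) {
        for (int i = 0; i < i_local; ++i) {
            solve_column(q, i, j, i_local, j_local, k_local);
        }
    }
}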

As far as chunks go, right now I am not chunking the data set, which is
probably sub-optimal. It is just stored as one large block of I-J-K
(Fortran/column-major) ordered data. The sims create a TON of data, and I
haven't been able to decide whether I want to optimize the I/O for writing
during the simulation or for the various post-processing tasks. The issue
here is that, in much of the statistical post-processing, the data can be
reduced along the statistically homogeneous directions (time and the J
index) before proceeding further.
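If I did commit to chunks, one obvious choice would be to align them with
the per-rank columns (or sub-blocks of them). Below is a minimal sketch
with the HDF5 C API; the file name, dataset name "/q", and all sizes are
made up, and the dimensions are reversed because HDF5's C interface is
row-major while our data is Fortran/column-major ordered.

#include <hdf5.h>

/* Hypothetical global sizes, C order (reverse of the Fortran I-J-K order). */
#define KG 384
#define JG 512
#define IG 1024

int main(void)
{
    hsize_t dims[3]  = {KG, JG, IG};
    hsize_t chunk[3] = {KG, 64, 64};  /* full wall-normal extent per chunk */

    hid_t file   = H5Fcreate("flow.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t fspace = H5Screate_simple(3, dims, NULL);
    hid_t dcpl   = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 3, chunk);

    hid_t dset = H5Dcreate2(file, "/q", H5T_NATIVE_DOUBLE, fspace,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(fspace);
    H5Fclose(file);
    return 0;
}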

At any rate, parallel compression would force me to commit to a chunking
scheme and would help ease our storage pains, which are numerous.
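A rough sketch of what a collective, compressed write could look like is
below. It is not our actual code: it assumes an HDF5 build that supports
parallel writes to filtered datasets (1.10.2 or later), uses a 1-D
decomposition over J just to keep the hyperslab arithmetic short, and the
names and sizes are placeholders.

#include <hdf5.h>
#include <mpi.h>
#include <stdlib.h>

/* Hypothetical global sizes, C order (reverse of the Fortran I-J-K order). */
#define KG 384
#define JG 512
#define IG 1024

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    hsize_t jl = JG / nprocs;              /* per-rank slab; remainder ignored */
    hsize_t dims[3]  = {KG, JG, IG};
    hsize_t chunk[3] = {KG, jl, IG};       /* one chunk per rank's slab */
    hsize_t start[3] = {0, (hsize_t)rank * jl, 0};
    hsize_t count[3] = {KG, jl, IG};

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("flow.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    hid_t fspace = H5Screate_simple(3, dims, NULL);
    hid_t dcpl   = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 3, chunk);
    H5Pset_deflate(dcpl, 4);               /* gzip, level 4 */

    hid_t dset = H5Dcreate2(file, "/q", H5T_NATIVE_DOUBLE, fspace,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    double *q = calloc((size_t)KG * jl * IG, sizeof *q);  /* this rank's slab */

    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t mspace = H5Screate_simple(3, count, NULL);
    hid_t dxpl   = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);  /* filters need collective I/O */

    H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace, dxpl, q);

    free(q);
    H5Pclose(dxpl);
    H5Sclose(mspace);
    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(fspace);
    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}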

Izaak Beekman
===================================
(301)244-9367
UMD-CP Visiting Graduate Student
Aerospace Engineering
[email protected]
[email protected]