Not as long-term a solution as XDMF will hopefully be, but I have a branch (https://bitbucket.org/tferma/dolfin/commits/branch/vtkfile) that provides some of the functionality you're looking for using VTK files. It adds an interface for writing multiple functions to a single VTK file:

file << [u, p, theta, E, B]

The version you suggested (including the time) is currently disabled because I don't know enough SWIG to write the typemap for std::pair<std::vector, double>, but it is otherwise implemented.

The branch also provides the functionality to output GenericFunctions. Unfortunately, this messes up the nice '<<' interface a bit, as you also have to provide a mesh to output on (switching to C++, as this is untested in Python):

file.write(std::vector<const GenericFunction*>& us, const Mesh& mesh, double time);

Finally, there is a similar interface for output on a particular function space:

file.write(std::vector<const GenericFunction*>& us, const FunctionSpace& functionspace, double time);

Here the function space has to be scalar and supported by VTK (P1, P2, P1DG, P2DG). I find this particularly useful when debugging DG simulations.
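For intuition about why a single file per time step saves so much space: the VTK XML format lets any number of named point-data arrays share one mesh block. The sketch below is not DOLFIN code — it is a minimal, self-contained Python illustration of that file layout (all names and the triangle-mesh layout are illustrative; a real writer would follow the VTK file-format specification more carefully):

```python
# Minimal sketch (not DOLFIN): one ASCII .vtu file carrying several
# scalar fields over a single shared mesh. The mesh (Points + Cells)
# appears exactly once; each field is just one more DataArray.

def write_vtu(filename, points, cells, point_data):
    """Write a 2D triangle mesh plus named scalar fields to one .vtu file."""
    lines = []
    lines.append('<VTKFile type="UnstructuredGrid" version="0.1">')
    lines.append('<UnstructuredGrid>')
    lines.append('<Piece NumberOfPoints="%d" NumberOfCells="%d">'
                 % (len(points), len(cells)))
    # The mesh is written exactly once ...
    lines.append('<Points><DataArray type="Float64" NumberOfComponents="3" format="ascii">')
    lines.append(' '.join('%g %g 0' % (x, y) for x, y in points))
    lines.append('</DataArray></Points>')
    lines.append('<Cells>')
    lines.append('<DataArray type="Int32" Name="connectivity" format="ascii">')
    lines.append(' '.join(str(i) for cell in cells for i in cell))
    lines.append('</DataArray>')
    lines.append('<DataArray type="Int32" Name="offsets" format="ascii">')
    lines.append(' '.join(str(3 * (i + 1)) for i in range(len(cells))))
    lines.append('</DataArray>')
    lines.append('<DataArray type="UInt8" Name="types" format="ascii">')
    lines.append(' '.join('5' for _ in cells))  # 5 = VTK_TRIANGLE
    lines.append('</DataArray>')
    lines.append('</Cells>')
    # ... while any number of fields share it.
    lines.append('<PointData>')
    for name, values in point_data.items():
        lines.append('<DataArray type="Float64" Name="%s" format="ascii">' % name)
        lines.append(' '.join(str(v) for v in values))
        lines.append('</DataArray>')
    lines.append('</PointData>')
    lines.append('</Piece></UnstructuredGrid></VTKFile>')
    with open(filename, 'w') as f:
        f.write('\n'.join(lines))

# Two fields, one mesh, one file:
points = [(0, 0), (1, 0), (0, 1)]
cells = [(0, 1, 2)]
write_vtu('fields.vtu', points, cells,
          {'pressure': [1.0, 2.0, 3.0], 'temperature': [300.0, 301.0, 302.0]})
```

Writing five fields this way costs five extra DataArray blocks rather than five copies of the mesh, which is where the bulk of the savings in the multi-function writer comes from.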

One caveat is that I only just got around to porting this over to git from bzr, so it's not very well tested yet. In particular, I don't use the Python interface myself and haven't tested pvtus yet. Additionally, you'll still have replication of the mesh between successive vtus/timesteps.

I modified the Navier-Stokes demo in that branch to show the basic functionality.

Cheers,
Cian


On 06/03/2013 09:17 AM, Nico Schlömer wrote:
Hi all,

Recently I've been doing time-dependent computations in which Navier-Stokes, the heat equation, and Maxwell's equations are coupled, so the solution is composed of (u, p, theta, E, B), all of which live on the same mesh. When storing the results, I use a separate file for each quantity and eventually get a directory full of

velocity*.vtu
pressure*.vtu
temperature*.vtu
magnetic*.vtu
electric*.vtu

The total amount of data is easily in the range of several GB.
This becomes a bottleneck for storage and post-processing, so I was thinking about ways to reduce it.

One thing that's immediately obvious is that the mesh is stored anew for each variable at each time step. Looking at the files, mesh data accounts for about 70% of the data per file. I now ask myself what a sensible, backwards-compatible API for storing several arrays in one file would look like. On the Python side, I could imagine admitting a list of functions to the writer,

File('myfile.pvd') << ([u, p, theta, E, B], t)

(with an assertion that the functions indeed live on the same mesh). I'm not sure how (or if) this would translate to C++, though.

Has anyone else ever run into similar issues or thought about this?

Cheers,
Nico


_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
