Dear PETSc users,
I'm trying to wrap my head around parallel I/O. If I understand correctly, a
decent way of doing this is to have one rank (say rank 0) write to disk, with
the other ranks communicating their part of the solution to it. Please correct
me if I'm wrong here.
I'm using DMDA to manage my domain decomposition. As a first step, I've been
trying to create an array on rank 0 holding the entire global solution and then
writing this to file by re-using some routines from our serial codes (the
format is Tecplot ASCII). (I realize that neither this approach nor an ASCII
format are good solutions in the end, but I have to start somewhere.) However,
I haven't been able to find any DMDA routines that give me an array holding the
entire global solution on rank 0. Are there any, or is this too much of a
"dirty trick"? (For just 1 process there is no problem, the output files
generated look good.)
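To make it concrete, here is a sketch of what I have pieced together so far from
the manual pages. I'm not at all sure this is the intended approach; in
particular, VecScatterCreateToZero is a guess on my part, and I don't know
whether the natural-ordering step is needed or correct:

      ! Sketch only: da is my DMDA, pGlobal the global Vec for p,
      ! and rank was obtained earlier from MPI_Comm_rank.
      Vec            natural, onZero
      VecScatter     toZero
      PetscScalar, pointer :: parray(:)
      PetscErrorCode ierr

      ! Put the solution into natural (i,j,k) ordering
      call DMDACreateNaturalVector(da,natural,ierr)
      call DMDAGlobalToNaturalBegin(da,pGlobal,INSERT_VALUES,natural,ierr)
      call DMDAGlobalToNaturalEnd(da,pGlobal,INSERT_VALUES,natural,ierr)

      ! Gather the whole vector onto rank 0
      call VecScatterCreateToZero(natural,toZero,onZero,ierr)
      call VecScatterBegin(toZero,natural,onZero,INSERT_VALUES,SCATTER_FORWARD,ierr)
      call VecScatterEnd(toZero,natural,onZero,INSERT_VALUES,SCATTER_FORWARD,ierr)

      if (rank == 0) then
         call VecGetArrayF90(onZero,parray,ierr)
         ! ... hand parray to our existing serial Tecplot writer ...
         call VecRestoreArrayF90(onZero,parray,ierr)
      end if

      call VecScatterDestroy(toZero,ierr)
      call VecDestroy(onZero,ierr)
      call VecDestroy(natural,ierr)

Is something along these lines reasonable, or is there a better route?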
I'm also willing to try the VTK way of doing things, but I hit a problem when I
tried it: even though I include "petscviewer.h" (I also tried adding
"petscviewerdef.h"), when I do
call PetscViewerSetType(viewer,PETSCVIEWERVTK,ierr)
my compiler complains that PETSCVIEWERVTK is undefined (it has no implicit
type). This is from Fortran 90, using the preprocessor to #include the header
files. I tried PETSCVIEWERASCII as well, with the same result. This is with
PETSc 3.4.3. Any hints on this?
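In case it helps, here is a stripped-down version of the code in question (the
include lines are written from memory, so the exact paths may differ slightly
from my actual source):

      program testviewer
      implicit none
#include "finclude/petscsys.h"
#include "finclude/petscviewer.h"
      PetscViewer    viewer
      PetscErrorCode ierr

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      call PetscViewerCreate(PETSC_COMM_WORLD,viewer,ierr)
      ! This is the line the compiler rejects ("PETSCVIEWERVTK has no implicit type"):
      call PetscViewerSetType(viewer,PETSCVIEWERVTK,ierr)
      call PetscViewerDestroy(viewer,ierr)
      call PetscFinalize(ierr)
      end program testviewer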
Also, there are many different examples and mailing list threads about VTK
output. What is the currently recommended way of doing things? I need to output
at least (u,v,w) as the vector components of one field, together with a scalar
field p. These currently live on separate DMs, since I only use PETSc to solve
for p (the pressure).
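To be concrete, the call sequence I (perhaps naively) imagine, once the type
constant issue above is sorted out, is sketched below. Here velGlobal and
pGlobal are the global Vecs from the velocity DMDA (dof=3) and the pressure
DMDA, and the .vts extension is just my guess for a structured grid. I don't
even know whether two Vecs coming from different DMs can be written to the same
file like this, which is really part of my question:

      PetscViewer vtkviewer

      call PetscViewerCreate(PETSC_COMM_WORLD,vtkviewer,ierr)
      call PetscViewerSetType(vtkviewer,PETSCVIEWERVTK,ierr)
      call PetscViewerFileSetMode(vtkviewer,FILE_MODE_WRITE,ierr)
      call PetscViewerFileSetName(vtkviewer,'solution.vts',ierr)
      call VecView(velGlobal,vtkviewer,ierr)   ! (u,v,w) from the velocity DMDA
      call VecView(pGlobal,vtkviewer,ierr)     ! p from its own DMDA
      call PetscViewerDestroy(vtkviewer,ierr)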
Best regards,
Åsmund