Konrad,

I wrote a shared-memory parallel code for a single machine, and I am very happy that I can easily compute 50+ million DOFs in reasonable time for a stationary problem.

Nice!


However, writing such an amount of data is a bottleneck. I know about distributed-memory solutions for writing output efficiently, but is there also a good solution for shared-memory parallelism? I would like to write VTK files.

Maybe I missed it while browsing the documentation. Can anyone please point me to the right spot?

There is nothing there right now, but let's measure first. Have you timed the different parts of your program (like in step-32 or step-40, for example)? As a percentage of the overall run time of your program, how long do the operations
  data_out.build_patches()
and
  data_out.write_vtu (...)
take? These are the two where substantial work happens.

Best
 W.


--
------------------------------------------------------------------------
Wolfgang Bangerth          email:                 [email protected]
                           www: http://www.math.colostate.edu/~bangerth/

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en