On 8/25/22 16:56, Raghunandan Pratoori wrote:
Thank you for your reply, Prof. Bangerth. I believe I am already using distributed computing in my code, but I am not sure that is true for the post-processing part. I use write_vtu and write_pvtu_record to write the output files. Is that the best way to do it in distributed computing? Is there any example where distributed computing is implemented in the post-processing, so that I can compare my code and see if it can be improved?

I see now why you have so many entries in your solution vector :-)

I don't think I have anything to point to, but the general approach needs to be that on every process you allocate memory only for the locally owned DoFs (i.e., the locally owned vector entries), rather than for *every* DoF or vector entry. That is the only way this can scale.
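In deal.II, the idiom looks roughly like the following sketch (untested here; it assumes a Trilinos-backed parallel vector and an MPI communicator called mpi_communicator -- adapt to whichever parallel vector class you actually use):

```cpp
// Sketch: allocate a parallel vector whose storage on this process
// covers only the locally owned DoFs, not all global DoFs.
const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();

TrilinosWrappers::MPI::Vector solution;
solution.reinit(locally_owned_dofs, mpi_communicator);
// 'solution' now stores only this process's owned entries; its memory
// footprint per process shrinks as you add more processes.
```

The same pattern works with PETScWrappers::MPI::Vector or LinearAlgebra::distributed::Vector.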

You can easily query how many locally owned DoFs there are on the current process, using functions in class DoFHandler. You then probably have to translate between a global index j and the position that j occupies within the set of locally owned DoFs. To this end, ask the DoFHandler for the IndexSet corresponding to the locally owned indices; class IndexSet has functions that can translate in both directions.

I hope this helps!
Best
 W.

--
------------------------------------------------------------------------
Wolfgang Bangerth          email:                 [email protected]
                           www: http://www.math.colostate.edu/~bangerth/
