Dear Wolfgang, 

Thank you for the clarifications.

I am now trying to export one file per process (and per frequency) to avoid the 
issue I previously mentioned. However, what I get is a vector with the total 
number of DoFs instead of only the locally owned DoFs.

My solver function is

  PETScWrappers::MPI::Vector
    completely_distributed_solution(locally_owned_dofs, mpi_communicator);
  SolverControl cn;
  PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
  solver.solve(system_matrix, completely_distributed_solution, system_rhs);
  constraints.distribute(completely_distributed_solution);
  locally_relevant_solution = completely_distributed_solution;

while the exporting is the same as mentioned before, but with the label of the 
corresponding process added to each file name:

  testvec = locally_relevant_solution;
  testvec.print(outloop, 9, true, false);

It is clear that the problem I have now is that I am exporting the 
completely_distributed_solution, and that is not what I want. 
Could you please tell me how to obtain only the locally owned part of the 
solution? I cannot find a way to obtain that.

Thank you
Regards

On Friday, August 19, 2022 at 22:21:57 UTC+1, Wolfgang Bangerth 
wrote:

> On 8/19/22 14:25, Uclus Heis wrote:
> > 
> > "That said, from your code, it looks like all processes are opening the 
> same
> > file and writing to it. Nothing good will come of this. There is of 
> course
> > also the issue that importing all vector elements to one process cannot 
> scale
> > to large numbers of processes."
> >
> > What would you suggest to export in a text file the whole domain when 
> running 
> > many processes?
> > A possible solution that I can think of is to export for each frequency 
> (loop 
> > iteration) a file per process. In addition, I would need to export 
> (print) the 
> > locally_owned_dofs (IndexSet) to construct in an external environment 
> the 
> > whole domain solution. How could I solve the issue of importing all 
> vector 
> > elements to one process?
>
> When you combine things into one file, you will always end up with a very 
> large file if you are doing things on 1000 processes. Where and how you do 
> the 
> combining is secondary, the underlying fact of the resulting file is the 
> same. 
> So: if the file is of manageable size, you can do it in a deal.II program 
> as 
> you are already doing right now. If the file is no longer manageable, it 
> doesn't matter whether you try to combine it in a deal.II-based program or 
> later on, it's not manageable one way or the other.
>
> Best
> W.
>
> -- 
> ------------------------------------------------------------------------
> Wolfgang Bangerth email: [email protected]
> www: http://www.math.colostate.edu/~bangerth/
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/56f9be19-90c3-401d-b187-ad3e2043574cn%40googlegroups.com.
