Re: [deal.II] Re: MPI, synchronize processes

2022-08-22 Thread Wolfgang Bangerth

On 8/22/22 09:55, Uclus Heis wrote:
Would it also be a possible solution to export my testvec as it is right 
now (which contains the global solution) but, instead of exporting with 
all the processes, call the print function only on one process?


Yes. But that again runs into the same issue mentioned before: if you 
have a large number of processes (say, 1000), then you have one process 
doing a lot of work (1000x as much as necessary) and 999 doing nothing. 
This is bound to take a long time.


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
                   www:   http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: MPI, synchronize processes

2022-08-22 Thread Uclus Heis
Dear Wolfgang,

Thank you very much for the suggestion.
Would it also be a possible solution to export my testvec as it is right now
(which contains the global solution) but, instead of exporting with all the
processes, call the print function only on one process?

Thank you

On Mon, 22 Aug 2022 at 16:51, Wolfgang Bangerth <
bange...@colostate.edu> wrote:

> On 8/21/22 04:29, Uclus Heis wrote:
> > testvec.print(outloop,9,true,false);
> >
> > It is clear that the problem I have now is that I am exporting the
> > completely_distributed_solution and that is not what I want.
> > Could you please inform me how to obtain the locally owned solution? I
> > cannot find a way of obtaining that.
>
> I don't know what data type you use for testvec, but it seems like this
> vector is not aware of the partitioning and as a consequence it just
> outputs everything it knows. You need to write the loop yourself, as in
> something along the lines of
>for (auto i : locally_owned_dofs)
>  outloop << testvec(i);
> or similar.
>
> Best
>   W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] Re: MPI, synchronize processes

2022-08-22 Thread Wolfgang Bangerth

On 8/21/22 04:29, Uclus Heis wrote:

testvec.print(outloop,9,true,false);

It is clear that the problem I have now is that I am exporting the 
completely_distributed_solution and that is not what I want.
Could you please inform me how to obtain the locally owned solution? I 
cannot find a way of obtaining that.


I don't know what data type you use for testvec, but it seems like this 
vector is not aware of the partitioning and as a consequence it just 
outputs everything it knows. You need to write the loop yourself, as in 
something along the lines of

  for (auto i : locally_owned_dofs)
    outloop << testvec(i);

or similar.
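
To spell that out a bit more: a minimal, self-contained sketch (not from the 
original code; the function name, the per-process file naming, and the 
PETScWrappers::MPI::Vector type are assumptions) of writing one file per 
process with only the locally owned entries could look like

  #include <deal.II/base/index_set.h>
  #include <deal.II/base/mpi.h>
  #include <deal.II/lac/petsc_vector.h>

  #include <fstream>
  #include <iomanip>
  #include <string>

  void write_locally_owned_part(const dealii::IndexSet &locally_owned_dofs,
                                const dealii::PETScWrappers::MPI::Vector &solution,
                                const MPI_Comm mpi_communicator,
                                const unsigned int frequency_index)
  {
    const unsigned int rank =
      dealii::Utilities::MPI::this_mpi_process(mpi_communicator);

    // One file per process and per frequency.
    std::ofstream out("sol_" + std::to_string(frequency_index) + "_rank" +
                      std::to_string(rank) + ".txt");
    out << std::setprecision(9);

    // Each process writes only the entries it owns, together with the global
    // DoF index, so the pieces can be stitched together externally.
    for (const auto dof : locally_owned_dofs)
      out << dof << ' ' << solution(dof) << '\n';
  }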

Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
                   www:   http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: MPI, synchronize processes

2022-08-21 Thread Uclus Heis
Dear Wolfgang, 

Thank you for the clarifications.

I am now trying to export one file per process (and frequency) to avoid the 
issue that I had (previously mentioned). However, what I get is a vector 
with all the DoFs instead of only the locally owned DoFs.

My solver function is

  PETScWrappers::MPI::Vector completely_distributed_solution(locally_owned_dofs, mpi_communicator);
  SolverControl cn;
  PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
  solver.solve(system_matrix, completely_distributed_solution, system_rhs);
  constraints.distribute(completely_distributed_solution);
  locally_relevant_solution = completely_distributed_solution;

while the exporting is the same as mentioned before, but adding the label 
of the corresponding process to each file:

  testvec = locally_relevant_solution;
  testvec.print(outloop, 9, true, false);

It is clear that the problem I have now is that I am exporting the 
completely_distributed_solution and that is not what I want. 
Could you please inform me how to obtain the locally owned solution? I 
cannot find a way of obtaining that.

Thank you
Regards

On Friday, 19 August 2022 at 22:21:57 UTC+1, Wolfgang Bangerth wrote:

> On 8/19/22 14:25, Uclus Heis wrote:
> > "That said, from your code, it looks like all processes are opening the
> > same file and writing to it. Nothing good will come of this. There is of
> > course also the issue that importing all vector elements to one process
> > cannot scale to large numbers of processes."
> >
> > What would you suggest for exporting the whole domain to a text file when
> > running many processes?
> > A possible solution I can think of is to export, for each frequency (loop
> > iteration), one file per process. In addition, I would need to export
> > (print) the locally_owned_dofs (IndexSet) to reconstruct the whole-domain
> > solution in an external environment. How could I solve the issue of
> > importing all vector elements to one process?
>
> When you combine things into one file, you will always end up with a very 
> large file if you are doing things on 1000 processes. Where and how you do 
> the 
> combining is secondary, the underlying fact of the resulting file is the 
> same. 
> So: if the file is of manageable size, you can do it in a deal.II program 
> as 
> you are already doing right now. If the file is no longer manageable, it 
> doesn't matter whether you try to combine it in a deal.II-based program or 
> later on, it's not manageable one way or the other.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Re: MPI, synchronize processes

2022-08-19 Thread Wolfgang Bangerth

On 8/19/22 14:25, Uclus Heis wrote:


"/That said, from your code, it looks like all processes are opening the same/
/file and writing to it. Nothing good will come of this. There is of course
also the issue that importing all vector elements to one process cannot scale
to large numbers of processes."/
/
/
What would you suggest to export in a text file the whole domain when running 
many processes ?
A possible solution that I can think is to export for each frequency (loop 
iteration) a file per process. In addition, I would need to export (print) the 
locally_owned_dofs (IndexSet) to construct in an external environment the 
whole domain solution. How could I solve the issue of //importing all vector 
elements to one process ?


When you combine things into one file, you will always end up with a very 
large file if you are doing things on 1000 processes. Where and how you do the 
combining is secondary; the resulting file is the same either way. 
So: if the file is of manageable size, you can do it in a deal.II program as 
you are already doing right now. If the file is no longer manageable, it 
doesn't matter whether you try to combine it in a deal.II-based program or 
later on: it's not manageable one way or the other.


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
                   www:   http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: MPI, synchronize processes

2022-08-19 Thread Uclus Heis
Dear Wolfgang, 

Thank you very much for your answer. Regarding what you mentioned:

"*That said, from your code, it looks like all processes are opening the 
same*


*file and writing to it. Nothing good will come of this. There is of 
coursealso the issue that importing all vector elements to one process 
cannot scaleto large numbers of processes."*

What would you suggest for exporting the whole domain to a text file when 
running many processes?
A possible solution I can think of is to export, for each frequency (loop 
iteration), one file per process. In addition, I would need to export (print) 
the locally_owned_dofs (IndexSet) to reconstruct the whole-domain solution in 
an external environment. How could I solve the issue of importing all 
vector elements to one process?

Thank you
Regards

On Friday, 19 August 2022 at 18:57:05 UTC+2, Wolfgang Bangerth wrote:

> On 8/19/22 03:25, Uclus Heis wrote:
> > Is the way of extracting and exporting the solution with
> > testvec = locally_relevant_solution bad practice? I am saving the
> > locally relevant solution from many different processes in one single
> > file for a given frequency. I am afraid that there is no synchronization
> > between processes and that the results will be saved without following
> > the right order of DoFs (which I need). Is this correct?
>
> Assuming that testvec is a vector that has all elements stored on the 
> current 
> process, then the assignment
> testvec = locally_relevant_solution;
> synchronizes among all processes.
>
> That said, from your code, it looks like all processes are opening the 
> same 
> file and writing to it. Nothing good will come of this. There is of course 
> also the issue that importing all vector elements to one process cannot 
> scale 
> to large numbers of processes.
>
>
> > Another issue that I found is that this approach increases dramatically 
> the 
> > computational time of the run() function. For a particular case, solving 
> the 
> > domain takes 1h without exporting the domain, while it takes 8h adding 
> the 
> > previous piece of code to export the domain. Is this because the print 
> > function is slow or there is some sync going on when calling 
> > testvec = locally_relevant_solution?
>
> You can not tell which part of a code is expensive unless you actually 
> time 
> it. Take a look at the TimerOutput class used in step-40, for example, and 
> how 
> you can use it to time individual code blocks.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Re: MPI, synchronize processes

2022-08-19 Thread Wolfgang Bangerth

On 8/19/22 03:25, Uclus Heis wrote:

Is the way of extracting and exporting the solution with 
testvec = locally_relevant_solution bad practice? I am saving the 
locally relevant solution from many different processes in one single file for 
a given frequency. I am afraid that there is no synchronization between 
processes and that the results will be saved without following the right order 
of DoFs (which I need). Is this correct?


Assuming that testvec is a vector that has all elements stored on the current 
process, then the assignment

  testvec = locally_relevant_solution;
synchronizes among all processes.
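
(As a concrete illustration, and only as a sketch of one possibility rather 
than of the actual code in question: if deal.II is configured with PETSc, a 
serial dealii::Vector<double> can play the role of such a testvec, because it 
can be constructed from a distributed PETSc vector in a collective operation 
that copies all elements to every process.)

  #include <deal.II/lac/petsc_vector.h>
  #include <deal.II/lac/vector.h>

  // Copy every element of the distributed vector onto the calling process.
  // The conversion communicates, so all processes must execute it at the
  // same time; as noted below, it cannot scale to large process counts.
  dealii::Vector<double>
  gather_on_every_process(const dealii::PETScWrappers::MPI::Vector &distributed)
  {
    return dealii::Vector<double>(distributed);
  }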

That said, from your code, it looks like all processes are opening the same 
file and writing to it. Nothing good will come of this. There is of course 
also the issue that importing all vector elements to one process cannot scale 
to large numbers of processes.



Another issue that I found is that this approach increases dramatically the 
computational time of the run() function. For a particular case, solving the 
domain takes 1h without exporting the domain, while it takes 8h adding the 
previous piece of code to export the domain. Is this because the print 
function is slow or there is some sync going on when calling 
testvec = locally_relevant_solution?


You cannot tell which part of a code is expensive unless you actually time 
it. Take a look at the TimerOutput class used in step-40, for example, and how 
you can use it to time individual code blocks.
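
As a minimal, self-contained sketch of that pattern (the section names and 
the surrounding main() are illustrative, not taken from step-40 or from the 
code in question):

  #include <deal.II/base/conditional_ostream.h>
  #include <deal.II/base/mpi.h>
  #include <deal.II/base/timer.h>

  #include <iostream>

  int main(int argc, char *argv[])
  {
    using namespace dealii;

    Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

    // Only rank 0 prints, so the timing table appears once.
    ConditionalOStream pcout(
      std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0);

    TimerOutput computing_timer(MPI_COMM_WORLD,
                                pcout,
                                TimerOutput::never,
                                TimerOutput::wall_times);

    {
      TimerOutput::Scope t(computing_timer, "solve");
      // ... the call to solve() would go here ...
    }
    {
      TimerOutput::Scope t(computing_timer, "export solution");
      // ... the vector print / file output would go here ...
    }

    // Prints one wall-time table with a row per named section.
    computing_timer.print_summary();
  }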


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
                   www:   http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: MPI, synchronize processes

2022-08-19 Thread Uclus Heis
Dear all, 

After some time I have come back to this problem. I would kindly ask for 
some guidance to see if I can understand and solve the issue.
I am using a parallel::distributed::Triangulation with MPI. I call the 
solve() function in a loop over different frequencies and want to export the 
solution on the whole domain for each frequency.
The code looks like:

  for (int i = f0; i < fend; ++i)
  {
    solve();                             // solve one frequency

    testvec = locally_relevant_solution; // extract the solution

    // DataOut
    DataOut<dim> data_out;
    data_out.attach_dof_handler(dof_handler);

    string f_output("sol_" + std::to_string(i) + ".txt");
    std::ofstream outloop(f_output);
    testvec.print(outloop, 9, true, false); // save a txt file with the
                                            // solution for one frequency
  }

Is the way of extracting and exporting the solution with 
testvec = locally_relevant_solution bad practice? I am saving the 
locally relevant solution from many different processes in one single file 
for a given frequency. I am afraid that there is no synchronization between 
processes and that the results will be saved without following the right 
order of DoFs (which I need). Is this correct?
In that case, what would be a better way to export my domain for each 
frequency?

Another issue I found is that this approach dramatically increases the 
computational time of the run() function. For a particular case, solving 
the domain takes 1h without exporting it, while it takes 8h after adding 
the previous piece of code to export the domain. Is this because the print 
function is slow, or is there some synchronization going on when calling 
testvec = locally_relevant_solution?

I would really appreciate it if you could clarify this and guide me to 
solve the issue.
Thank you very much
Regards

On Thursday, 17 February 2022 at 19:02:07 UTC+1, Wolfgang Bangerth wrote:

> On 2/17/22 09:22, Uclus Heis wrote:
> > 
> > I still had problems, as I first copy the array and then store it in a
> > matrix for different frequencies. The result I got was different when
> > using a few processes compared to using a single process. I added the
> > following code and now it works; is it right?
>
> It copies a vector into a row of a matrix. Whether that's what you want is 
> a 
> different question, so we can't tell you whether it's "right" :-)
>
> You can simplify this by saying
> tmparray = locally_relevant_solution;
>
> Best
> W.
>
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Re: MPI, synchronize processes

2022-02-17 Thread Wolfgang Bangerth

On 2/17/22 09:22, Uclus Heis wrote:


I still had problems, as I first copy the array and then store it in a matrix 
for different frequencies. The result I got was different when using a few 
processes compared to using a single process. I added the following code and 
now it works; is it right?


It copies a vector into a row of a matrix. Whether that's what you want is a 
different question, so we can't tell you whether it's "right" :-)


You can simplify this by saying
  tmparray = locally_relevant_solution;

Best
 W.


--

Wolfgang Bangerth  email: bange...@colostate.edu
                   www:   http://www.math.colostate.edu/~bangerth/
