Dear Wolfgang,

I have figured out what the trouble is. The dimensions stored in "dofs_per_block" are the global numbers of dofs of each block. Hence, the constructor of "BlockDynamicSparsityPattern dsp" allocates, for each block, a std::vector<Line> whose size equals the global number of dofs of that block. Since every MPI process performs this allocation, it can cause an out-of-memory error when the code runs with many MPI processes.
Thank you very much for your time and for your help.

Best,
Matteo

On Fri, Oct 11, 2019 at 10:01 Matteo Frigo <[email protected]> wrote:

> I performed some tests with tutorial step-55. I noted that the same issue
> shows up there, too. In particular, it occurs when I try to solve the 3D
> problem after 5 refinement cycles.
> Changing the following part of the code:
>
>     BlockDynamicSparsityPattern dsp(dofs_per_block, dofs_per_block);
>     DoFTools::make_sparsity_pattern(dof_handler, coupling, dsp,
>                                     constraints, false);
>     SparsityTools::distribute_sparsity_pattern(
>       dsp,
>       dof_handler.compute_locally_owned_dofs_per_processor(),
>       mpi_communicator,
>       locally_relevant_dofs);
>     system_matrix.reinit(owned_partitioning, dsp, mpi_communicator);
>
> to this one:
>
>     TrilinosWrappers::BlockSparsityPattern sp(owned_partitioning,
>                                               MPI_COMM_WORLD);
>     DoFTools::make_sparsity_pattern(
>       dof_handler,
>       coupling,
>       sp,
>       constraints,
>       false,
>       Utilities::MPI::this_mpi_process(mpi_communicator));
>     sp.compress();
>     system_matrix.reinit(sp);
>
> makes everything work fine. Obviously, I made the same change in the part
> that builds the preconditioner sparsity pattern.
> I think this test case can help you figure out what is going on.
>
> Thanks,
>
> Matteo
>
> On Thu, Oct 10, 2019 at 22:41 Wolfgang Bangerth <[email protected]> wrote:
>
>> On 10/9/19 10:26 AM, Matteo Frigo wrote:
>> >
>> > I'm saying that making a DynamicSparsityPattern using the procedure
>> > described above is more expensive (from the memory point of view) than
>> > using TrilinosWrappers::BlockSparsityPattern.
>> > I noticed this problem while trying to run some test cases with a large
>> > number of unknowns (100 million dofs).
>> > In such cases, I get an out-of-memory error if I use
>> > DynamicSparsityPattern, whereas it works fine using
>> > TrilinosWrappers::BlockSparsityPattern.
>> > Investigating smaller cases, I noted that a peak of memory usage occurs
>> > during the call to the function:
>> >
>> >     DoFTools::make_sparsity_pattern(dof_handler, scratch_coupling, dsp,
>> >                                     constraints, false, this_mpi_process);
>> >
>> > This means that the problem remains, even if the program runs to the end.
>> > As far as the analysis of memory consumption is concerned, I used the
>> > Massif tool from Valgrind.
>>
>> Matteo,
>> thanks for clarifying. It would still be really nice if you could create a
>> small program that really just builds a mesh, a DoFHandler, and then the
>> sparsity pattern both ways -- this would make it easier for us to figure
>> out what is going on!
>>
>> Best
>>  W.
>>
>> --
>> ------------------------------------------------------------------------
>> Wolfgang Bangerth          email:            [email protected]
>>                            www: http://www.math.colostate.edu/~bangerth/
>>
>> --
>> The deal.II project is located at http://www.dealii.org/
>> For mailing list/forum options, see
>> https://groups.google.com/d/forum/dealii?hl=en
