Hello, I'm currently upgrading my code to add PETSc as an alternative to Trilinos as the linear algebra package. I'm implementing this option following Tutorial 55. However, I'm running into some issues with massively parallel simulations: in particular, memory consumption is very large during the system setup phase. After some debugging, I was able to figure out that the part of the code responsible for this is the generation of the sparsity pattern, i.e., the following lines:
BlockDynamicSparsityPattern dsp(local_partitioning);
DoFTools::make_sparsity_pattern(dof_handler, scratch_coupling, dsp,
                                constraints, false, this_mpi_process);

I want to point out that this behavior does not depend on PETSc; it is related only to the procedure by which we build the block sparsity pattern (BSP). Indeed, I run into the same issue with Trilinos if the above strategy is selected. In the previous version of the code, I used these lines to generate the BSP:

TrilinosWrappers::BlockSparsityPattern sp(local_partitioning, MPI_COMM_WORLD);
DoFTools::make_sparsity_pattern(dof_handler, matrix_coupling, sp,
                                constraints, false, this_mpi_process);
sp.compress();

In this second case, the amount of memory required to generate the BSP is much smaller than in the first case. Any idea what is going on? Am I doing something wrong? Thank you very much for your support.

Matteo

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see https://groups.google.com/d/forum/dealii?hl=en