Dear community,

If I am not mistaken in my analysis, it turns out that the memory loss is 
caused by this call:

BiCG.solve (this->system_matrix, distributed_incremental_displacement, 
            this->system_rhs, preconditioner);

because if I comment it out, the top command shows no change in RES at all.
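
For completeness, this is how the growth can be quantified per call instead of 
watching top by hand. This is only a minimal sketch, assuming deal.II's 
Utilities::System::get_memory_stats() and the member names from the quoted 
code below; the numbers are per MPI rank, in kB:

#include <deal.II/base/utilities.h>

// ... inside solve(), bracketing the solver call:
dealii::Utilities::System::MemoryStats mem_before, mem_after;
dealii::Utilities::System::get_memory_stats (mem_before);

BiCG.solve (this->system_matrix, distributed_incremental_displacement,
            this->system_rhs, preconditioner);

dealii::Utilities::System::get_memory_stats (mem_after);

// VmRSS is the resident set size, i.e. the RES column reported by top (kB).
this->pcout << " VmRSS growth in this solve: "
            << ((long)mem_after.VmRSS - (long)mem_before.VmRSS)
            << " kB" << std::endl;

If the growth also shows up here, PETSc's PetscMallocGetCurrentUsage() could 
then tell whether that memory is held by PETSc's own allocator or elsewhere.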

Maybe this is of use. A sketch of one further isolation experiment is appended 
after the quoted code below. Thanks in advance.

Alberto

On Friday, July 24, 2020 at 11:32:13 AM UTC+2, Alberto Salvadori wrote:

> Dear community
>
> I have written the simple code below for solving a system using PETSc,
> having defined 
>
> Vector<double> incremental_displacement;
> Vector<double> accumulated_displacement;
>
> in the class LargeStrainMechanicalProblem_OneField<dim>.
>
> It turns out that this code produces a memory leak, quite significant 
> since I am solving my system thousands of times, which eventually causes 
> the run to fail. I am not sure what is causing this issue or how to solve 
> it; maybe more experienced users than me can catch the problem with a snap 
> of their fingers. 
>
> I have verified the issue on my Mac (Catalina) as well as on Linux Ubuntu 
> (4.15.0), using deal.II 9.1.1.
> Apparently the issue shows up only when MPI is invoked with more than one 
> processor, whereas it does not emerge when running in serial or with mpirun 
> -np 1.
>
> Thanks in advance
>
> Alberto
>
> =========
>
>
>
>
> template <int dim>
> unsigned int
> LargeStrainMechanicalProblem_OneField<dim>::solve (const unsigned penaltyAmplification)
> //
> // This simplified version of solve() has been written to track down
> // the source of the memory leak in parallel.
> //
> {
>   // Copy the incremental displacement into a distributed PETSc vector.
>   PETScWrappers::MPI::Vector distributed_incremental_displacement
>     (this->locally_owned_dofs, this->mpi_communicator);
>
>   distributed_incremental_displacement = incremental_displacement;
>
>   const size_t bicgstab_max_iterations = 20000;
>   const double tolerance = 1e-10 * this->system_rhs.l2_norm ();
>
>   unsigned solver_control_last_step;
>
>   SolverControl bicgstab_solver_control (bicgstab_max_iterations, tolerance);
>
>   PETScWrappers::PreconditionJacobi preconditioner (this->system_matrix);
>
>   this->pcout << " Bicgstab " << std::flush;
>
>   PETScWrappers::SolverBicgstab BiCG (bicgstab_solver_control,
>                                       this->mpi_communicator);
>
>   BiCG.solve (this->system_matrix, distributed_incremental_displacement,
>               this->system_rhs, preconditioner);
>
>   solver_control_last_step = bicgstab_solver_control.last_step ();
>
>   // Copy the solution back and accumulate it.
>   incremental_displacement = distributed_incremental_displacement;
>   accumulated_displacement += incremental_displacement;
>   this->hanging_node_constraints.distribute (accumulated_displacement);
>
>   return solver_control_last_step;
> }
>
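
P.S. A further isolation experiment, only as a sketch and under the assumption 
that locally_owned_dofs, system_matrix and mpi_communicator do not change 
between the thousands of solves: construct the PETSc wrapper objects once, as 
class members, and merely reuse them inside solve(), so that the per-call 
construction and destruction of the distributed vector and of the Jacobi 
preconditioner can be ruled out as the source of the growth.

// Hypothetical additional members of LargeStrainMechanicalProblem_OneField<dim>:
PETScWrappers::MPI::Vector        distributed_incremental_displacement;
PETScWrappers::PreconditionJacobi preconditioner;

// Initialized once, after the matrix and the index sets have been set up:
distributed_incremental_displacement.reinit (this->locally_owned_dofs,
                                             this->mpi_communicator);
preconditioner.initialize (this->system_matrix);

// Per call, inside solve(), only the data changes:
distributed_incremental_displacement = incremental_displacement;
BiCG.solve (this->system_matrix, distributed_incremental_displacement,
            this->system_rhs, preconditioner);

If RES stays flat in this variant, the leak would point at the setup and 
teardown of the wrapper objects rather than at the Krylov iteration itself.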
