Dear community,

I have found a way to implement the case of multiple constraints that was
mentioned in my previous discussion, and I am sharing it here in case it is
of use. Basically, it seems to me that all affine constraints should be
merged before constructing the sparsity pattern. Unlike before, I build the
affine constraints object m_interfaces_constraints as

  for (auto tie_table_ii : tie_table)
    {
      // make sure not to impose a tie twice
      if (tie_table_ii.first < tie_table_ii.second)
        {
          m_interfaces_constraints.add_line(tie_table_ii.first);
          m_interfaces_constraints.add_entry(tie_table_ii.first,
                                             tie_table_ii.second, 1);
        }
    }
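
(Here tie_table is the set of dof pairs described in my previous message;
as a sketch of the assumed container type,

  // each pair ties dof .first to dof .second
  std::set<std::pair<types::global_dof_index, types::global_dof_index>>
    tie_table;

and the guard tie_table_ii.first < tie_table_ii.second avoids imposing the
same tie twice when both (a,b) and (b,a) appear in the set.)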

and then I merge it into the hanging_node_constraints


  this->hanging_node_constraints.merge(this->m_interfaces_constraints);
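
For completeness: m_interfaces_constraints is closed before the merge, as
in my previous message, and the merged object must itself be closed before
it is used for the sparsity pattern and the assembly. Schematically (a
sketch; merge() also accepts a MergeConflictBehavior argument in case the
two objects constrain the same dof):

  this->m_interfaces_constraints.close();
  this->hanging_node_constraints.merge(this->m_interfaces_constraints);
  this->hanging_node_constraints.close();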

From that point on, nothing changes:

  this->hanging_node_constraints.distribute_local_to_global(
    cell_matrix, cell_rhs, local_dof_indices,
    this->system_matrix, this->system_rhs);
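
As usual with the Trilinos wrappers, the assembly loop must still be
followed by the compress calls,

  this->system_matrix.compress(VectorOperation::add);
  this->system_rhs.compress(VectorOperation::add);

so that entries destined for other processors are exchanged.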

is all I need. It works, although I have not yet tested it with mesh
refinement.
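
To summarize the order that worked for me (a sketch, using the names above;
the key point is that the merged object, not m_interfaces_constraints
alone, is what enters both the sparsity pattern and the assembly):

  // build and close the interface ties
  for (auto tie_table_ii : tie_table)
    if (tie_table_ii.first < tie_table_ii.second)
      {
        m_interfaces_constraints.add_line(tie_table_ii.first);
        m_interfaces_constraints.add_entry(tie_table_ii.first,
                                           tie_table_ii.second, 1);
      }
  m_interfaces_constraints.close();

  // merge into the hanging node constraints and close the result
  hanging_node_constraints.merge(m_interfaces_constraints);
  hanging_node_constraints.close();

  // only now build the sparsity pattern with the merged constraints;
  // the manual dsp->add calls of my previous message then become
  // unnecessary, if I understand the mechanism correctly
  DoFTools::make_flux_sparsity_pattern(dof_handler, *dsp,
                                       hanging_node_constraints,
                                       /*keep constrained dofs*/ false);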
Thanks!

While developing these ideas, I found the print() method of
AffineConstraints particularly useful.
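
For instance,

  this->hanging_node_constraints.print(std::cout);

lists every constraint line and its entries, which makes it easy to verify
that the merge produced the expected ties.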

Alberto

On Tuesday, August 10, 2021 at 06:52:40 UTC+2, Alberto Salvadori wrote:

> Dear community
>
> I am asking for your help today about imposing affine constraints. I
> understood example 6 well, but when it comes to applying it to sparse MPI
> matrices I get lost. I am not sure whether example 27 is of help in my
> case.
>
> What would I like to simulate? Take two separate domains with conforming
> meshes. I want to impose that two sets of nodes share the same solution,
> i.e. that the solution field is continuous. To this aim, the dofs to be
> tied have been collected in a set of pairs (which has been verified to be
> correct). Hence, if the set is named tie_table, the dof tie_table.first
> shall be equal to tie_table.second.
>
> So far so good, but from here on things become blurry to me. I figured
> out that I should build the affine constraints object (say it is named
> m_interfaces_constraints) and update the sparsity pattern (say it is
> named dsp).
>
> The dsp has been created after the hanging node constraints:
>
>   DoFTools::make_flux_sparsity_pattern(this->dof_handler,
>                                        *dsp,
>                                        this->hanging_node_constraints,
>                                        /*keep constrained dofs*/ false);
>
> and it has been updated afterwards like this:
>
>   for (auto tie_table_ii : tie_table)
>     {
>       // we shall make sure not to impose a tie twice ...
>       if (tie_table_ii.first < tie_table_ii.second)
>         {
>           dsp->add(tie_table_ii.first, tie_table_ii.second);
>           dsp->add(tie_table_ii.second, tie_table_ii.first);
>           m_interfaces_constraints.add_line(tie_table_ii.first);
>           m_interfaces_constraints.add_entry(tie_table_ii.first,
>                                              tie_table_ii.second, 1);
>         }
>     }
>
>   m_interfaces_constraints.close();
>
>
> Is the above a reasonable way to build m_interfaces_constraints? Am I
> missing something? At this point, how shall I proceed to distribute the
> cell matrix into the system matrix during the assembly phase? The
> following code does not seem to work.
>
>   ... make entries in cell_matrix, cell_rhs ...
>   cell->get_dof_indices(local_dof_indices);
>
>   this->hanging_node_constraints.distribute_local_to_global(
>     cell_matrix, cell_rhs, local_dof_indices,
>     this->system_matrix, this->system_rhs);
>
>   this->m_interfaces_constraints.distribute_local_to_global(
>     cell_matrix, cell_rhs, local_dof_indices,
>     this->system_matrix, this->system_rhs);
>
> Specifically, the call to
> this->m_interfaces_constraints.distribute_local_to_global generates the
> following error:
>
>
> --------------------------------------------------------
> An error occurred in line <1683> of file 
> </var/folders/py/_sybqp9s7pn6y633bs_bqyp00000gn/T/albertosalvadori/spack-stage/spack-stage-dealii-9.2.0-ujawazz34bb5ybgxwfovovdrwaem4ymj/spack-src/source/lac/trilinos_sparse_matrix.cc>
>  
> in function
>     void dealii::TrilinosWrappers::SparseMatrix::add(const 
> dealii::TrilinosWrappers::SparseMatrix::size_type, const 
> dealii::TrilinosWrappers::SparseMatrix::size_type, const 
> dealii::TrilinosWrappers::SparseMatrix::size_type *, const 
> dealii::TrilinosScalar *, const bool, const bool)
> The violated condition was: 
>     ierr == 0
> Additional information: 
>     An error with error number 2 occurred while calling a Trilinos function
> --------------------------------------------------------
>
> Finally, shall I call this function? What does it do?
>
>   m_interfaces_constraints.condense(*dsp);
>
> I appreciate your help very much.
>
> Alberto
>
