Dear Prof. Bangerth,

Thank you for your explanation. I have converted the matrices and vectors 
into non-blocked objects and changed them to PETScWrappers::MPI data 
types.

I would like to use SparseDirectMUMPS to solve the linear system 
<https://github.com/dealii/code-gallery/blob/master/Quasi_static_Finite_strain_Compressible_Elasticity/cook_membrane.cc#L2171>.
 
The documentation says that I need an mpi_communicator object to use 
this solver. Since my code runs on a single machine in serial (only 
the assembly is multi-threaded), how should I initialize the 
mpi_communicator? Do I also need to pass this mpi_communicator object 
when I set up the sparsity pattern, tangent matrix, etc. in the 
system_setup() function? I could not work out what other changes I need 
to make in my code so that I can use the SparseDirectMUMPS solver. 
Ultimately, my plan is to use the SLEPc eigenvalue solver after the 
solution converges at every load/displacement step.
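To make my question concrete, here is a sketch of what I imagine the changes would look like, based only on my reading of the documentation — the use of MPI_COMM_SELF, the reinit() signatures, and the names locally_owned_dofs, sparsity_pattern, and solution are my assumptions, not something I have verified:

```cpp
// Sketch only -- assumes deal.II 9.2 built with PETSc/MUMPS support.
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/petsc_solver.h>

// For a purely serial run, MPI_COMM_SELF (this process only) seems
// sufficient; MPI_COMM_WORLD should behave the same with one rank.
MPI_Comm mpi_communicator = MPI_COMM_SELF;

PETScWrappers::MPI::SparseMatrix tangent_matrix;
PETScWrappers::MPI::Vector       system_rhs;

// In system_setup(): pass the communicator when sizing the objects.
// With one process, 'locally_owned_dofs' would be the complete index set.
tangent_matrix.reinit(locally_owned_dofs, locally_owned_dofs,
                      sparsity_pattern, mpi_communicator);
system_rhs.reinit(locally_owned_dofs, mpi_communicator);

// In solve(): my understanding is that the direct solver also takes
// the communicator in its constructor.
SolverControl solver_control;
PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator);
solver.solve(tangent_matrix, solution, system_rhs);
```

Is this roughly the right direction, or am I misreading the interfaces?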

Thanks a lot!

Animesh

On Friday, January 15, 2021 at 10:53:17 PM UTC-6 Wolfgang Bangerth wrote:

>
> > Thank you for your reply. I tried as you suggested. However, I am 
> > getting the following error -
> >
> > /home/animesh/Documents/dealii/dealii-9.2.0/examples/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis.cc:873:20:
> > error: ‘BlockSparseMatrix’ in namespace ‘dealii::PETScWrappers’ does 
> > not name a type
> >      PETScWrappers::BlockSparseMatrix         tangent_matrix;
>
> I now remember that we removed the non-MPI class at some point.
>
> > So I used as suggested in the documentation and am getting the 
> > following error related to that.
> >
> > /home/animesh/Documents/dealii/dealii-9.2.0/examples/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis/Quasi_Static_Finite_Strain_Beam_Buckling_Analysis.cc:1288:37:
> > error: no matching function for call to 
> > ‘dealii::PETScWrappers::MPI::BlockVector::reinit(std::vector<unsigned int>&)’
> >      system_rhs.reinit(dofs_per_block);
>
> Look up the documentation of that class and what arguments the reinit() 
> and constructor functions take:
>
> https://www.dealii.org/current/doxygen/deal.II/classPETScWrappers_1_1MPI_1_1BlockVector.html
>
> > Also, I wish to use a Direct solver by PETSc and could not find any 
> > related to the PETSc block vector. In my current code, I am using 
> > SparseDirectUMFPACK. Do I need to convert my entire code to be using 
> > non-blocked vectors and matrices to use the Direct Solver by PETSc? 
> > (Although the system is "blocked", it only has one block that contains 
> > the displacement degree of freedom.)
>
> Correct. PETSc does not know the concept of "block" vectors and 
> matrices. So if you want to use a direct solver via PETSc, then you 
> have to put everything into non-blocked objects.
>
> The point of blocks is to facilitate block-based solvers such as Schur 
> complements etc. If you don't intend to build solvers this way, there 
> is no benefit to using block matrices and vectors. Just because you 
> have multiple solution components doesn't mean that you *have* to split 
> your linear algebra objects into blocks.
>
> Best
> W.
>
> -- 
> ------------------------------------------------------------------------
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en