Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-25 Thread Anna Avdeeva
Dear Timo, Sorry, I did not mean to write to your personal e-mail; I pressed the wrong reply button by mistake. Thank you very much for your reply. I will now check how I set mpi_communicator and hopefully find the problem. I changed step-40 myself a couple of days ago to
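For context, a minimal sketch of how the communicator is usually set up in step-40-style programs, assuming the class stores it as a member as the tutorial does (the member and variable names are illustrative, not taken from the thread):

    #include <deal.II/base/mpi.h>

    // Inside the solver class (step-40 style): store the communicator once
    // and hand the same object to every PETSc matrix, vector, and solver.
    //   MPI_Comm mpi_communicator;
    // initialized in the constructor initializer list as
    //   mpi_communicator(MPI_COMM_WORLD)

    int main(int argc, char *argv[])
    {
      // PETSc (and therefore MUMPS) requires MPI to be initialized first.
      dealii::Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
      // ... set up and run the parallel program ...
    }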

Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-25 Thread Timo Heister
and from your email I got off-list (please try to use the mailing list): > [0]PETSC ERROR: #1 PetscCommDuplicate() line 137 in > /home/anna/petsc-3.6.4/src/sys/objects/tagm.c > An error occurred in line <724> of file > in function > void dealii::PETScWrappers::SparseDirectMUMPS::solve(const >
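The PetscCommDuplicate error usually points at the MPI communicator handed to the solver. A minimal sketch of constructing SparseDirectMUMPS so it shares the communicator of the matrix it factorizes; the names mpi_communicator, system_matrix, solution, and right_hand_side are placeholders, not objects from the original post:

    #include <deal.II/lac/petsc_solver.h>
    #include <deal.II/lac/solver_control.h>

    // The solver, the matrix, and the vectors should all live on the same
    // communicator; a solver built on an uninitialized or different
    // communicator can trigger PETSc errors like the one quoted above.
    SolverControl solver_control;
    PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator);
    solver.set_symmetric_mode(true);
    solver.solve(system_matrix, solution, right_hand_side);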

Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-20 Thread Anna Avdeeva
Dear Timo, The main reason I do this is that I do not understand how to reuse this decomposition in deal.II. I am relatively new to deal.II and C++, and I have never used MUMPS before. The way I set it up with SparseDirectUMFPACK was to use an InnerPreconditioner structure: template struct
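In the serial code the decomposition is reused because SparseDirectUMFPACK is initialized once and then applied repeatedly through vmult(). A minimal sketch of that pattern (matrix and vector names are placeholders, not from the original code):

    #include <deal.II/lac/sparse_direct.h>
    #include <deal.II/lac/sparse_matrix.h>
    #include <deal.II/lac/vector.h>

    // Factorize once ...
    SparseDirectUMFPACK A_direct;
    A_direct.initialize(system_matrix);

    // ... then every vmult() reuses the stored LU decomposition,
    // e.g. when the object is used as a preconditioner or inner solver.
    A_direct.vmult(dst, src);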

Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-19 Thread Anna Avdeeva
Dear Timo, Because I do not understand how to reuse this decomposition in deal.II. I am relatively new to deal.II and C++, and I have never used MUMPS before. I would appreciate any advice on this. Maybe there is some example in deal.II that uses MUMPS to construct a preconditioner. Thank you

Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-19 Thread Timo Heister
> template > void Pa_Preconditioner:: > vmult(LA::MPI::BlockVector &dst, const LA::MPI::BlockVector &src) const > { > SolverControl cn; > PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator); > solver.set_symmetric_mode(true); > solver.solve(B, dst.block(0), src.block(0)); >
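A self-contained sketch of what such a wrapper might look like once the mangled quoting is undone. The names Pa_Preconditioner, B, and mpi_communicator follow the quoted fragment; everything else, including constructing the solver once in the constructor instead of inside vmult(), is an assumption and not code from the thread. Whether MUMPS reuses its factorization between applications depends on the deal.II version in use.

    #include <deal.II/lac/generic_linear_algebra.h>
    #include <deal.II/lac/petsc_solver.h>
    #include <deal.II/lac/solver_control.h>

    namespace LA = dealii::LinearAlgebraPETSc;
    using namespace dealii;

    // Preconditioner wrapper whose vmult() applies a MUMPS direct solve
    // to the (0,0) block of a block vector.
    class Pa_Preconditioner
    {
    public:
      Pa_Preconditioner(const LA::MPI::SparseMatrix &B,
                        const MPI_Comm               mpi_communicator)
        : B(B)
        , solver(control, mpi_communicator)
      {
        solver.set_symmetric_mode(true);
      }

      void vmult(LA::MPI::BlockVector &dst,
                 const LA::MPI::BlockVector &src) const
      {
        solver.solve(B, dst.block(0), src.block(0));
      }

    private:
      const LA::MPI::SparseMatrix &B;
      SolverControl                control;
      // solve() is non-const, so the solver is mutable to allow a const vmult().
      mutable PETScWrappers::SparseDirectMUMPS solver;
    };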

Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-19 Thread Anna Avdeeva
Dear Timo, Thank you for your reply. I am still having trouble implementing my code with MUMPS. I will briefly describe the problem: I am solving two systems of Maxwell's equations, Ax1=b1 and Ax2=b2, where A is a sparse symmetric block matrix (C -M; -M
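Since both systems share the matrix A, a natural pattern is to set up one direct solver and call it for both right-hand sides. A minimal sketch (the vector and matrix names are placeholders; whether MUMPS reuses the factorization between the two solve() calls depends on the deal.II version):

    SolverControl control;
    PETScWrappers::SparseDirectMUMPS direct_solver(control, mpi_communicator);
    direct_solver.set_symmetric_mode(true);

    // Same matrix A, two different right-hand sides.
    direct_solver.solve(A, x1, b1);
    direct_solver.solve(A, x2, b2);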

Re: [deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-13 Thread Timo Heister
Anna, > Now with parallel implementation I would like to use > PETScWrappers::SparseDirectMUMPS instead of SparseDirectUMFPACK. > > However I am getting the following error: > > error: ‘SparseDirectMUMPS’ in namespace ‘LA’ does not name a type > typedef LA::SparseDirectMUMPS type; You have to
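The quoted error arises because SparseDirectMUMPS is declared in the dealii::PETScWrappers namespace and is not re-exported through the LA namespace alias that step-40 sets up, which is why LA::SparseDirectMUMPS does not name a type. A sketch of the typedef spelled out with the full namespace; placing it inside an InnerPreconditioner specialization mirrors the pattern from the original post and is an assumption about where the typedef sits:

    #include <deal.II/lac/petsc_solver.h>

    // SparseDirectMUMPS lives in dealii::PETScWrappers, not in the LA
    // alias from step-40, so refer to it by its full name. Assumes the
    // primary template InnerPreconditioner from the serial code.
    template <>
    struct InnerPreconditioner<0>
    {
      typedef dealii::PETScWrappers::SparseDirectMUMPS type;
    };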

[deal.II] PETScWrappers::SparseDirectMUMPS use in parallel version of step-22

2017-09-13 Thread Anna Avdeeva
Dear All, I have modified step-22 to solve a system of Maxwell's equations, and then used step-40 (and step-55) to parallelize the code. In the serial version I used template struct InnerPreconditioner; template<> struct InnerPreconditioner<0> { typedef SparseDirectUMFPACK type; }; template<> struct
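For readers unfamiliar with the pattern, step-22 selects the inner preconditioner type through specializations of a small traits struct, and the fragment quoted above follows the same idea. A minimal sketch of the serial version, assuming the template parameter is a plain integer selector as the <0> specialization suggests; the parameter name and the second specialization are guesses, not from the post:

    #include <deal.II/lac/sparse_direct.h>
    #include <deal.II/lac/sparse_ilu.h>

    // Primary template: never defined, only specialized.
    template <int selector>
    struct InnerPreconditioner;

    // Selector 0: exact inner solve via a sparse direct factorization.
    template <>
    struct InnerPreconditioner<0>
    {
      typedef dealii::SparseDirectUMFPACK type;
    };

    // Another selector could map to a cheaper approximate preconditioner,
    // as step-22 does with SparseILU in 3d (hypothetical here).
    template <>
    struct InnerPreconditioner<1>
    {
      typedef dealii::SparseILU<double> type;
    };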