Dear all,
I am experiencing strange behaviour in what I am trying to do and I would
like to ask for a bit of support.
I have a running solver done with a distributed MPI implementation. The
solver is similar to step-29, where real and imaginary parts are used and a
2N system is solved.
Now I want to apply reduced basis methods to it, and to do that I will solve
a FullMatrix system Lx=b via x = L^{-1} b.
As a sanity check, for now I convert the operators
(PETScWrappers::MPI::SparseMatrix) to FullMatrix after assemble_system()
and solve the system by inverting the operator and calling vmult(). However,
the results differ between the sparse-matrix solve and the FullMatrix solve.
The solver code is below, where *locally_relevant_solution* is the
solution of the distributed system and *solution_rb* is the solution of
the FullMatrix system.
__________________________________________________________________________
template <int dim>
void Problem<dim>::solve()
{
  // Distributed solver
  PETScWrappers::MPI::Vector completely_distributed_solution(
    locally_owned_dofs, mpi_communicator);
  SolverControl cn;
  PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
  solver.solve(system_matrix, completely_distributed_solution, system_rhs);
  constraints.distribute(completely_distributed_solution);
  locally_relevant_solution = completely_distributed_solution;

  // FullMatrix solver
  FullMatrix<double> Linv(dof_handler.n_dofs(), dof_handler.n_dofs());
  Vector<double>     solution_rb(dof_handler.n_dofs());
  Linv.invert(L);
  Linv.vmult(solution_rb, b);
}
__________________________________________________________________________
I checked that in both cases the operators/system_matrix have the same
values right after assembly in assemble_system(). I was wondering whether
something internally changes the system_matrix afterwards. After the point
where I check the values of the operators, the following lines are
executed, and I am not sure whether something happens there:
_________________
  ...
      constraints.distribute_local_to_global(cell_matrix,
                                             cell_rhs,
                                             local_dof_indices,
                                             system_matrix,
                                             system_rhs);
    }
  system_matrix.compress(VectorOperation::add);
  system_rhs.compress(VectorOperation::add);
_________________
An example of the mismatch is shown in the following figure, where the
dotted line corresponds to the FullMatrix solution and the solid line to
the distributed solution.
[image: result.png]
I should add that the distributed solver is validated, so I trust its
solutions. It looks like there is a multiplication factor somewhere that I
am missing when using the FullMatrix.
Thank you very much for your help. Regards,
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/dealii/03437f5b-5d49-4b84-8367-7e9d34f382e8n%40googlegroups.com.