Hello everyone,

I am having a problem applying the static condensation explained in step-44 in a parallel setting. The objective is to employ a parallel solver. Any guidance on the issue would be much appreciated.

At the moment the code works when run on a single MPI process, but fails when np > 1. The error originates from the following part of the code, where it tries to read elements of the tangent_matrix, which is a PETScWrappers::MPI::BlockSparseMatrix. The problem is the access to DoFs at the partition boundary that are not owned by the current MPI process.
//-----------------------------------------------------------------------------
// Copy the cell-local block of the global tangent matrix into k_orig;
// this is the call that fails for DoF indices not owned by this process:
for (unsigned int i = 0; i < dofs_per_cell; ++i)
  for (unsigned int j = 0; j < dofs_per_cell; ++j)
    data.k_orig(i, j) = tangent_matrix.el(data.local_dof_indices[i],
                                          data.local_dof_indices[j]);
//-----------------------------------------------------------------------------
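For what it is worth, I could guard the access so that only locally owned rows are read, roughly as sketched below (using the owned IndexSet from the DoFHandler). That would avoid touching non-owned rows, but the entries of k_orig that belong to ghost DoFs then simply remain unset, which defeats the purpose of the condensation:
//-----------------------------------------------------------------------------
// Sketch only: restrict the read-back to rows owned by this process.
// The rows of k_orig corresponding to ghost DoFs stay unfilled.
const IndexSet &owned_dofs = dof_handler.locally_owned_dofs();
for (unsigned int i = 0; i < dofs_per_cell; ++i)
  if (owned_dofs.is_element(data.local_dof_indices[i]))
    for (unsigned int j = 0; j < dofs_per_cell; ++j)
      data.k_orig(i, j) = tangent_matrix.el(data.local_dof_indices[i],
                                            data.local_dof_indices[j]);
//-----------------------------------------------------------------------------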
In the case of a PETScWrappers::MPI::BlockVector I can circumvent this problem by defining a ghosted (locally relevant) BlockVector as shown below. I am able to print all the elements this way (though I am not sure this is the most efficient approach!).
//-----------------------------------------------------------------------------
// Access ghost elements of system_rhs through a ghosted (read-only) copy
LA::MPI::BlockVector system_rhs_local(locally_owned_partitioning,
                                      locally_relevant_partitioning,
                                      mpi_communicator);
system_rhs_local = system_rhs;

// Print the system_rhs values for all DoFs of the current cell
for (unsigned int i = 0; i < dofs_per_cell; ++i)
  if (this_mpi_process == 1)
    std::cout << "\n Cell: " << cell
              << "; system_rhs(" << i << "): "
              << system_rhs_local(data.local_dof_indices[i])
              << "; this_mpi_process: " << this_mpi_process
              << std::flush;
//-----------------------------------------------------------------------------
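As far as I understand, this works because the assignment system_rhs_local = system_rhs imports the values of the ghost (locally relevant) entries, so they can afterwards be read on every process. I have not found an analogous ghosted copy for a PETScWrappers::MPI::BlockSparseMatrix, which is essentially where I am stuck.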
Below is the initialization of the tangent_matrix BlockSparseMatrix for
reference.
//-----------------------------------------------------------------------------
BlockDynamicSparsityPattern dsp(locally_relevant_partitioning);
DoFTools::make_sparsity_pattern(dof_handler, coupling, dsp, constraints, false);
SparsityTools::distribute_sparsity_pattern(dsp,
                                           dof_handler.locally_owned_dofs(),
                                           mpi_communicator,
                                           locally_relevant_dofs);
tangent_matrix.reinit(locally_owned_partitioning, dsp, mpi_communicator);
//-----------------------------------------------------------------------------
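For completeness, the owned/relevant partitionings used above are built block-wise from the DoFHandler's IndexSets, essentially as in the sketch below (dofs_per_block is just a placeholder for my actual block sizes, i.e. the step-44 fields):
//-----------------------------------------------------------------------------
// Sketch of the partitioning setup; dofs_per_block is a placeholder,
// e.g. obtained from DoFTools::count_dofs_per_fe_block(dof_handler).
const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
IndexSet       locally_relevant_dofs;
DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

std::vector<IndexSet> locally_owned_partitioning;
std::vector<IndexSet> locally_relevant_partitioning;
types::global_dof_index block_begin = 0;
for (const auto n_block_dofs : dofs_per_block)
  {
    locally_owned_partitioning.push_back(
      locally_owned_dofs.get_view(block_begin, block_begin + n_block_dofs));
    locally_relevant_partitioning.push_back(
      locally_relevant_dofs.get_view(block_begin, block_begin + n_block_dofs));
    block_begin += n_block_dofs;
  }
//-----------------------------------------------------------------------------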
Please let me know if any more details are needed.
Looking forward to your input.
Thank you and Regards,
Rahul G R
Post Doc - MPI PKS, Dresden