Hi all,
I am re-posting this here and have deleted the previous thread, since the
images in my original post did not load properly. So here it is!
I am seeing rather strange behavior from AffineConstraints when dealing
with inhomogeneous Dirichlet-type conditions in an MPI context. To explain
this behavior, I set up a small problem that involves solving a system of
linear equations. The coefficient matrix is the 8x8 identity matrix (i.e.,
all diagonal entries are 1 and all off-diagonal entries are zero). The
solution to the system is subject to the following constraints:
1. *u_0 = u_4*
2. *u_1 = u_5*
3. *u_2 = 0*
4. *u_3 = 0*
5. *u_4 = 0.05*
6. *u_6 = 0*
7. *u_7 = 0*
Clearly, DOF 5 is the only true unknown, while the rest are known. All
components of the force vector for this system are zero except for
- *F_5 = 1*
Note that the force vector entries corresponding to all constrained DOFs
are zero (since AffineConstraints will take care of imposing the correct
values later); the only non-zero entry corresponds to the true unknown,
DOF 5.
This system is distributed between 2 processes such that
- Process 0: handles DOFs 0 to 3
- Process 1: handles DOFs 4 to 7
Due to this partitioning, the AffineConstraints object set up by process 0
takes care of constraints 1 to 4 (given above), while that of process 1
takes care of constraints 5 to 7.
*My expectation*
I expected that after solving the system and calling the *distribute*
method of *AffineConstraints*, the solution would come out as
- u_0 = 0.05
- u_1 = 1
- u_2 = 0
- u_3 = 0
- u_4 = 0.05
- u_5 = 1
- u_6 = 0
- u_7 = 0
*My observation*
Instead, after calling the distribute method, the solution is
- u_0 = 0
- u_1 = 1
- u_2 = 0
- u_3 = 0
- u_4 = 0.05
- u_5 = 1
- u_6 = 0
- u_7 = 0
That is, the AffineConstraints object correctly imposed all constraints
except for constraint 1 on DOF 0. However, I noticed that if I invoke the
*distribute* method twice, I do get the correct result.
*My question:*
Is this behavior expected? Am I doing something wrong, or is my
understanding of this subject flawed?
I have attached a small code that demonstrates this behavior. Thanks in
advance for your help, and sorry for the long description of the problem.
Regards,
Ahmad
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
#include <deal.II/base/mpi.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/exceptions.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/petsc_precondition.h>
#include <deal.II/lac/vector.h>
#include <set>
#include <fstream>
int main(int argc, char** argv) {
dealii::Utilities::MPI::MPI_InitFinalize mpiContext(argc, argv, 1);
auto idProcess = dealii::Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);
auto nProcesses = dealii::Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
AssertThrow(nProcesses == 2u, dealii::ExcMessage("Please run this code with 2 processes"));
/* ****************************************************************************************
*
* Set locally owned DOF indices
*
* Process 0 is in charge of dofs 0 to 3
* Process 1 is in charge of dofs 4 to 7
*
* Both processes declare dofs 0 to 7 as locally relevant
*****************************************************************************************/
dealii::IndexSet locallyOwnedIndices(8), locallyRelevantIndices(8);
locallyRelevantIndices.add_range(0, 8);
if (idProcess == 0) {
locallyOwnedIndices.add_range(0, 4);
} else {
locallyOwnedIndices.add_range(4, 8);
}
locallyRelevantIndices.compress();
locallyOwnedIndices.compress();
/* ****************************************************************************************
*
* Set up the constraints matrix
*
* Process 0 sets DOFs 2 and 3 to zero. Moreover, it sets DOFs 0 and 1 equal to
* DOFs 4 and 5, respectively.
*
* Process 1 sets DOF indices 6 and 7 to zero. Moreover, it sets DOF 4 to 0.05
*
*****************************************************************************************/
dealii::AffineConstraints<double> constraints(locallyRelevantIndices);
if (idProcess == 0) {
constraints.add_lines(std::set<dealii::types::global_dof_index>({0, 1, 2, 3}));
constraints.add_entry(0, 4, 1.0);
constraints.add_entry(1, 5, 1.0);
} else {
constraints.add_lines(std::set<dealii::types::global_dof_index>({4, 6, 7}));
constraints.set_inhomogeneity(4, 0.05);
}
constraints.close();
/* ****************************************************************************************
*
* Set up the stiffness matrix and force and solution vectors
*
*****************************************************************************************/
dealii::PETScWrappers::MPI::Vector vectorForce(locallyOwnedIndices, MPI_COMM_WORLD);
dealii::PETScWrappers::MPI::Vector vectorSolution(locallyOwnedIndices, MPI_COMM_WORLD);
// Set up the sparsity pattern for a diagonal matrix
dealii::DynamicSparsityPattern dsp(locallyRelevantIndices);
for (const auto &locallyOwnedIndex : locallyOwnedIndices) {
dsp.add(locallyOwnedIndex, locallyOwnedIndex);
}
dsp.compress();
dealii::SparsityTools::distribute_sparsity_pattern(
dsp,
{4, 4} /*rows per process*/,
MPI_COMM_WORLD,
locallyOwnedIndices);
dealii::PETScWrappers::MPI::SparseMatrix matrixStiffness(
MPI_COMM_WORLD,
dsp,
{4, 4} /*rows per process*/,
{4, 4} /*columns per process*/,
idProcess);
/* ****************************************************************************************
*
* Assign 1.0 to the diagonal of the stiffness matrix
*
*****************************************************************************************/
for (const auto &locallyOwnedIndex : locallyOwnedIndices) {
matrixStiffness.set(locallyOwnedIndex, locallyOwnedIndex, 1.0);
}
matrixStiffness.compress(dealii::VectorOperation::insert);
/* ****************************************************************************************
*
* Assign a unit force to index 5 of force vector
*
*****************************************************************************************/
if (idProcess == 1) {
vectorForce.set({5}, {1});
}
vectorForce.compress(dealii::VectorOperation::insert);
/* ****************************************************************************************
*
* Finally solve the system of equations
*
*****************************************************************************************/
dealii::SolverControl solverControl;
dealii::PETScWrappers::SolverCG solverCg(solverControl, MPI_COMM_WORLD);
dealii::PETScWrappers::PreconditionJacobi precond(matrixStiffness);
vectorSolution = 0;
solverCg.solve(matrixStiffness, vectorSolution, vectorForce, precond);
if (idProcess == 0) {
std::cout << "Solver succeeded? "
<< (solverControl.last_check() == solverControl.success)
<< std::endl;
}
/* ****************************************************************************************
*
* Print the solution after the FIRST distribute operation
*
*****************************************************************************************/
constraints.distribute(vectorSolution);
dealii::Vector<double> vectorSolutionGlobal(8);
vectorSolutionGlobal = vectorSolution;
if (idProcess == 0) {
std::cout << "Solution after the first distribute operation:" << std::endl;
vectorSolutionGlobal.print(std::cout);
std::cout << std::endl;
}
/* ****************************************************************************************
*
* Print the solution after the SECOND distribute operation
*
*****************************************************************************************/
constraints.distribute(vectorSolution);
vectorSolutionGlobal = vectorSolution;
if (idProcess == 0) {
std::cout << "Solution after the second distribute operation:" << std::endl;
vectorSolutionGlobal.print(std::cout);
std::cout << std::endl;
}
MPI_Barrier(MPI_COMM_WORLD);
}