Hi all,

I am seeing rather strange behavior from AffineConstraints when dealing 
with inhomogeneous Dirichlet-type conditions in an MPI context. To explain 
this behavior, I set up a small problem that involves solving a system of 
linear equations. The coefficient matrix is the 8x8 identity matrix (all 
diagonal entries are 1 and all off-diagonal entries are zero). The 
solution of the system is subject to the following constraints

   1. u_0 = u_4
   2. u_1 = u_5
   3. u_2 = 0
   4. u_3 = 0
   5. u_4 = 0.05
   6. u_6 = 0
   7. u_7 = 0


Clearly only DOF 5 is a true unknown; the other DOFs are determined by the 
constraints, either directly or through DOF 5. The force vector 
corresponding to this system is given by

   f = (0, 0, 0, 0, 0, 1, 0, 0)

Note that the force-vector entries corresponding to all constrained DOFs 
are zero (since AffineConstraints will take care of imposing the correct 
values later) and the only non-zero entry corresponds to the true unknown, 
DOF 5.

This system is distributed between 2 processes such that

   - Process 0: handles DOFs 0 to 3
   - Process 1: handles DOFs 4 to 7

Due to this partitioning, the AffineConstraints object set up by process 0 
takes care of constraints 1 to 4, while process 1 takes care of 
constraints 5 to 7.

*My expectation*
I expected that when I solve the system and then call the *distribute* 
method of *AffineConstraints*, the solution would come out as

   u = (0.05, 1, 0, 0, 0.05, 1, 0, 0)

*My observation*
I observe that after calling the distribute method, the solution is

   u = (0, 1, 0, 0, 0.05, 1, 0, 0)

That is, the AffineConstraints object correctly imposed all constraints 
except constraint 1: DOF 0 still holds the pre-distribute value of DOF 4 
rather than 0.05. If I invoke the *distribute* method a second time, I get 
the correct result

   u = (0.05, 1, 0, 0, 0.05, 1, 0, 0)

My question:
Is this behavior expected? Am I doing something wrong, or is my 
understanding of this subject flawed?

I prepared a small code example (attached below) to demonstrate this 
behavior. Thanks in advance for your help, and sorry for the long 
description of the problem.

Regards,
Ahmad







 

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/82ed5d28-11ed-4f8c-970d-9de98b5424f9%40googlegroups.com.
#include <deal.II/base/mpi.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/exceptions.h>

#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/petsc_precondition.h>
#include <deal.II/lac/vector.h>  // for dealii::Vector<double> used below

#include <set>
#include <fstream>

int main(int argc, char** argv) {


    dealii::Utilities::MPI::MPI_InitFinalize mpiContext(argc, argv, 1);
    auto idProcess = dealii::Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);
    auto nProcesses = dealii::Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);

    AssertThrow(nProcesses == 2u, dealii::ExcMessage("Please run this code with 2 processes"));

    /* ****************************************************************************************
     *
     * Set locally owned DOF indices
     *
     *      Process 0 is in charge of dofs 0 to 3
     *      Process 1 is in charge of dofs 4 to 7
     *
     * Both processes declare dofs 0 to 7 as locally relevant
     *****************************************************************************************/
    dealii::IndexSet locallyOwnedIndices(8), locallyRelevantIndices(8);

    locallyRelevantIndices.add_range(0, 8);

    if (idProcess == 0) {

        locallyOwnedIndices.add_range(0, 4);

    } else  {

        locallyOwnedIndices.add_range(4, 8);
    }


    locallyRelevantIndices.compress();
    locallyOwnedIndices.compress();


    /* ****************************************************************************************
     *
     * Set up the constraints matrix
     *
     *      Process 0 sets DOF indices 2 and 3 to zero. Moreover, it sets DOF 0 and 1 equal to
     *      DOFs 4 and 5, respectively.
     *
     *      Process 1 sets DOF indices 6 and 7 to zero. Moreover, it sets DOF 4 to 0.05
     *
     *****************************************************************************************/
    dealii::AffineConstraints<double> constraints(locallyRelevantIndices);


    if (idProcess == 0) {

        constraints.add_lines(std::set<unsigned>({0, 1, 2, 3}));
        constraints.add_entry(0, 4, 1.0);
        constraints.add_entry(1, 5, 1.0);

    } else {

        constraints.add_lines(std::set<unsigned>({4, 6, 7}));
        constraints.set_inhomogeneity(4, 0.05);

    }

    constraints.close();

    /* ****************************************************************************************
     *
     * Set up the stiffness matrix and force and solution vectors
     *
     *****************************************************************************************/
    dealii::PETScWrappers::MPI::Vector vectorForce(locallyOwnedIndices, MPI_COMM_WORLD);
    dealii::PETScWrappers::MPI::Vector vectorSolution(locallyOwnedIndices, MPI_COMM_WORLD);

    // Set up the sparsity pattern for a diagonal matrix
    dealii::DynamicSparsityPattern dsp(locallyRelevantIndices);
    for (const auto &locallyOwnedIndex : locallyOwnedIndices) {
        dsp.add(locallyOwnedIndex, locallyOwnedIndex);
    }
    dsp.compress();

    dealii::SparsityTools::distribute_sparsity_pattern(
            dsp,
            {4, 4} /*rows per process*/,
            MPI_COMM_WORLD,
            locallyOwnedIndices);

    dealii::PETScWrappers::MPI::SparseMatrix matrixStiffness(
            MPI_COMM_WORLD,
            dsp,
            {4, 4} /*rows per process*/,
            {4, 4} /*columns per process*/,
            idProcess);


    /* ****************************************************************************************
     *
     * Assign 1.0 to the diagonal of the stiffness matrix
     *
     *****************************************************************************************/

    for (const auto &locallyOwnedIndex : locallyOwnedIndices) {
        matrixStiffness.set(locallyOwnedIndex, locallyOwnedIndex, 1.0);
    }

    matrixStiffness.compress(dealii::VectorOperation::insert);



    /* ****************************************************************************************
     *
     * Assign a unit force to index 5 of force vector
     *
     *****************************************************************************************/

    if (idProcess == 1) {
        vectorForce.set({5}, {1});
    }

    vectorForce.compress(dealii::VectorOperation::insert);



    /* ****************************************************************************************
     *
     * Finally solve the system of equations
     *
     *****************************************************************************************/
    dealii::SolverControl solverControl;
    dealii::PETScWrappers::SolverCG solverCg(solverControl, MPI_COMM_WORLD);
    dealii::PETScWrappers::PreconditionJacobi precond(matrixStiffness);

    vectorSolution = 0;
    solverCg.solve(matrixStiffness, vectorSolution, vectorForce, precond);

    if (idProcess == 0) {

        std::cout << "Solver succeeded? "
                  << (solverControl.last_check() == solverControl.success)
                  << std::endl;

    }
    /* ****************************************************************************************
     *
     * Print the solution after the FIRST distribute operation
     *
     *****************************************************************************************/
    constraints.distribute(vectorSolution);

    dealii::Vector<double> vectorSolutionGlobal(8);
    vectorSolutionGlobal = vectorSolution;

    if (idProcess == 0) {
        std::cout << "Solution after the first distribute operations:" << std::endl;
        vectorSolutionGlobal.print(std::cout);
        std::cout << std::endl;
    }



    /* ****************************************************************************************
     *
     * Print the solution after the SECOND distribute operation
     *
     *****************************************************************************************/
    constraints.distribute(vectorSolution);

    vectorSolutionGlobal = vectorSolution;

    if (idProcess == 0) {
        std::cout << "Solution after the second distribute operations:" << std::endl;
        vectorSolutionGlobal.print(std::cout);
        std::cout << std::endl;
    }


    MPI_Barrier(MPI_COMM_WORLD);


}

