I modified step-75 into a matrix-based version, basically by combining it with the
hpbox repo on GitHub. I came across an error while building:

/home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:710: error: ‘class dealii::hp::FEValues<2, 2>’ has no member named ‘unsubscribe’
In file included from /home/ubuntu/deal.II/installed/include/deal.II/distributed/tria.h:22,
                 from /home/ubuntu/deal.II/installed/include/deal.II/distributed/grid_refinement.h:24,
                 from /home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:34:
/home/ubuntu/deal.II/installed/include/deal.II/base/smartpointer.h: In instantiation of ‘dealii::SmartPointer<T, P>::~SmartPointer() [with T = dealii::hp::FEValues<2, 2>; P = void]’:
/home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:220:9:   required from ‘void std::default_delete<_Tp>::operator()(_Tp*) const [with _Tp = Step75::MatrixBasedOperator<2, double>]’
/usr/include/c++/9/bits/unique_ptr.h:292:17:   required from ‘std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = Step75::MatrixBasedOperator<2, double>; _Dp = std::default_delete<Step75::MatrixBasedOperator<2, double> >]’
/home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:710:9:   required from here
/home/ubuntu/deal.II/installed/include/deal.II/base/smartpointer.h:277:8: error: ‘class dealii::hp::FEValues<2, 2>’ has no member named ‘unsubscribe’
  277 |     t->unsubscribe(&pointed_to_object_is_alive, id);
      |     ~~~^~~~~~~~~~~

I cannot figure out why this fails, since it is the same as in hpbox. Could this
be due to a different deal.II version (mine is 9.3.0)?
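
For reference, the member the error points at is a SmartPointer to the hp::FEValues
object in my operator class. Below is a minimal, self-contained sketch (not my actual
program; the class name MatrixBasedOperatorSketch is made up for illustration) of the
workaround I am considering, assuming the problem is that hp::FEValues does not derive
from Subscriptor in 9.3.0, so SmartPointer cannot subscribe/unsubscribe to it. The idea
is to store a plain pointer instead and keep ownership in the problem class:

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/subscriptor.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/mapping_q1.h>
#include <deal.II/hp/fe_collection.h>
#include <deal.II/hp/fe_values.h>
#include <deal.II/hp/mapping_collection.h>
#include <deal.II/hp/q_collection.h>

using namespace dealii;

// Sketch of the operator class: only the way hp::FEValues is stored changes.
template <int dim>
class MatrixBasedOperatorSketch : public Subscriptor
{
public:
  MatrixBasedOperatorSketch(hp::FEValues<dim> &fe_values)
    : fe_values_collection(&fe_values) // store the address only
  {}

private:
  // A plain pointer instead of SmartPointer<hp::FEValues<dim>>, so no
  // Subscriptor interface is required from hp::FEValues.
  hp::FEValues<dim> *fe_values_collection;
};

int main()
{
  const MappingQ1<2> mapping_q1;
  const FE_Q<2>      fe(1);
  const QGauss<2>    qgauss(2);

  const hp::MappingCollection<2> mapping(mapping_q1);
  const hp::FECollection<2>      fes(fe);
  const hp::QCollection<2>       quads(qgauss);

  hp::FEValues<2> fe_values(mapping, fes, quads, update_values);

  MatrixBasedOperatorSketch<2> op(fe_values);
  (void)op;
  return 0;
}
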
On Friday, October 7, 2022 at 13:45:41 UTC+8, <peterr...@gmail.com> wrote:

> Yes.
>
> Peter
> On 07.10.22 07:44, 'yy.wayne' via deal.II User Group wrote:
>
> I see. So that's why we create the new dof_handlers in the solve_with_gmg() 
> function instead of continuing to use dof_handler as in other tutorials. If I 
> want to build mg_matrices and their SparsityPattern, they should be based on 
> dof_handlers as well, is that correct? 
>
> Thank you
>
> On Friday, October 7, 2022 at 13:35:16 UTC+8, <peterr...@gmail.com> wrote:
>
>> Hej, 
>>
>> > One thing I'm confused about is why we use 
>> "make_hanging_node_constraints" and "interpolate_boundary_values" for new 
>> constraints in solve_with_gmg in step-75, instead of the 
>> boundary_constraints as done in previous MG tutorials.
>>
>> Your observation is correct. It is related to the fact that p-multigrid 
>> in deal.II uses the new global-coarsening infrastructure (it was introduced 
>> in release 9.3). In the other tutorials, the local-smoothing infrastructure 
>> is used. 
>>
>> In a nutshell, the difference is that local smoothing works on 
>> refinement levels, while global coarsening works on the complete active 
>> levels of multiple Triangulation/DoFHandler objects. In the case of 
>> h-multigrid, one works on multiple meshes, which are created by globally 
>> coarsening the finest mesh (this is the reason for the name "global 
>> coarsening"); in the case of p-multigrid, one would work with the same 
>> Triangulation but would distribute different FEs on different DoFHandler 
>> objects. Since you work on the active level, you can simply use the 
>> functions you would use if you were not working on refinement levels.
>>
>> For more details see our publication: https://arxiv.org/abs/2203.12292
>>
>> Hope this helps,
>> Peter
>>
>> On Wednesday, 5 October 2022 at 14:17:03 UTC+2 yy.wayne wrote:
>>
>>> Hello sir, 
>>>
>>> One thing I'm confused about is why we use 
>>> "make_hanging_node_constraints" and "interpolate_boundary_values" for new 
>>> constraints in solve_with_gmg in step-75, instead of the 
>>> boundary_constraints as done in previous MG tutorials.
>>>
>>> On Tuesday, October 4, 2022 at 00:11:49 UTC+8, <mafe...@gmail.com> wrote:
>>>
>>>> Hi!
>>>>
>>>> The "Possibilities for extensions" section of step-75 gives you a few 
>>>> hints on how to achieve p-multigrid preconditioning with matrix-based 
>>>> methods:
>>>>
>>>> https://www.dealii.org/current/doxygen/deal.II/step_75.html#Solvewithmatrixbasedmethods
>>>>
>>>> If you look for an example implementation, the "hpbox" tool 
>>>> demonstrates both matrix-based and matrix-free implementations of the 
>>>> p-multigrid method.
>>>> https://zenodo.org/record/6425947
>>>> https://github.com/marcfehling/hpbox/
>>>>
>>>> Best,
>>>> Marc
>>>>
>>>> On Monday, October 3, 2022 at 3:39:51 AM UTC-6 yy.wayne wrote:
>>>>
>>>>> Hello, 
>>>>>
>>>>> I plan to test the convergence of polynomial multigrid preconditioning on 
>>>>> a problem, within a matrix-based framework. However, step-75 is a bit 
>>>>> confusing, since I cannot clearly separate the classes contributed by 
>>>>> matrix-free from those for p-multigrid. 
>>>>>
>>>>> Specifically, in step-16 (with MeshWorker) and step-56 (assembling the 
>>>>> cell_matrix by hand) there are assemble_system() and assemble_multigrid(), 
>>>>> while in step-75 there is solve_system(), which calls solve_with_gmg and 
>>>>> mg_solve. Is this due to matrix-free or to p-multigrid? By the way, is 
>>>>> there a code gallery for matrix-based polynomial multigrid? 
>>>>>
>>>>> Thanks
>>>>>

/* ---------------------------------------------------------------------
 *
 * Copyright (C) 2021 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE.md at
 * the top level directory of deal.II.
 *
 * ---------------------------------------------------------------------

 *
 * Author: Marc Fehling, Colorado State University, 2021
 *         Peter Munch, Technical University of Munich and Helmholtz-Zentrum
 *                      hereon, 2021
 *         Wolfgang Bangerth, Colorado State University, 2021
 */


// @sect3{Include files}
//
// The following include files have been used and discussed in previous tutorial
// programs, especially in step-27 and step-40.
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/timer.h>

#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/tria.h>

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/grid/grid_generator.h>

#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_series.h>

#include <deal.II/hp/fe_collection.h>
#include <deal.II/hp/refinement.h>

#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/trilinos_precondition.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_solver.h>
#include <deal.II/lac/vector.h>

#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/numerics/smoothness_estimator.h>
#include <deal.II/numerics/vector_tools.h>

#include <algorithm>
#include <fstream>
#include <iostream>

// For load balancing we will assign individual weights on cells, and for that
// we will use the class parallel::CellWeights.
#include <deal.II/distributed/cell_weights.h>

// The solution function requires a transformation from Cartesian to polar
// coordinates. The GeometricUtilities::Coordinates namespace provides the
// necessary tools.
#include <deal.II/base/function.h>
#include <deal.II/base/geometric_utilities.h>

// The following include files will enable the MatrixFree functionality.
#include <deal.II/matrix_free/matrix_free.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/matrix_free/tools.h>

// We will use LinearAlgebra::distributed::Vector for linear algebra operations.
#include <deal.II/lac/la_parallel_vector.h>

// We are left to include the files needed by the multigrid solver.
#include <deal.II/multigrid/mg_coarse.h>
#include <deal.II/multigrid/mg_constrained_dofs.h>
#include <deal.II/multigrid/mg_matrix.h>
#include <deal.II/multigrid/mg_smoother.h>
#include <deal.II/multigrid/mg_tools.h>
#include <deal.II/multigrid/mg_transfer_global_coarsening.h>
#include <deal.II/multigrid/multigrid.h>

namespace Step75
{
  using namespace dealii;

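  // The following two helper functions return a single ConditionalOStream and
  // a single TimerOutput object, so that the free functions and the operator
  // class below can share them.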
  ConditionalOStream &
  getPCOut()
  {
    static ConditionalOStream pcout(
      std::cout, (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0));
    return pcout;
  }

  TimerOutput &
  getTimer()
  {
    static TimerOutput computing_timer(MPI_COMM_WORLD,
                                       getPCOut(),
                                       TimerOutput::never,
                                       TimerOutput::wall_times);
    return computing_timer;
  }

  // @sect3{The <code>Solution</code> class template}

  // We have an analytic solution to the scenario at our disposal. We will use
  // this solution to impose boundary conditions for the numerical solution of
  // the problem. The formulation of the solution requires a transformation to
  // polar coordinates. To transform from Cartesian to spherical coordinates, we
  // will use a helper function from the GeometricUtilities::Coordinates
  // namespace. The first two coordinates of this transformation correspond to
  // polar coordinates in the x-y-plane.
  template <int dim>
  class Solution : public Function<dim>
  {
  public:
    Solution()
      : Function<dim>()
    {}

    virtual double value(const Point<dim> &p,
                         const unsigned int /*component*/) const override
    {
      const std::array<double, dim> p_sphere =
        GeometricUtilities::Coordinates::to_spherical(p);

      constexpr const double alpha = 2. / 3.;
      return std::pow(p_sphere[0], alpha) * std::sin(alpha * p_sphere[1]);
    }
  };



  // @sect3{Parameters}

  // For this tutorial, we will use a simplified set of parameters. It is also
  // possible to use a ParameterHandler class here, but to keep this tutorial
  // short we decided on using simple structs. The actual intention of all these
  // parameters will be described in the upcoming classes at their respective
  // location where they are used.
  //
  // The following parameter set controls the coarse-grid solver, the smoothers,
  // and the inter-grid transfer scheme of the multigrid mechanism.
  // We populate it with default parameters.
  struct MultigridParameters
  {
    struct
    {
      std::string  type            = "cg_with_amg";
      unsigned int maxiter         = 10000;
      double       abstol          = 1e-20;
      double       reltol          = 1e-4;
      unsigned int smoother_sweeps = 1;
      unsigned int n_cycles        = 1;
      std::string  smoother_type   = "ILU";
    } coarse_solver;

    struct
    {
      std::string  type                = "chebyshev";
      double       smoothing_range     = 20;
      unsigned int degree              = 5;
      unsigned int eig_cg_n_iterations = 20;
    } smoother;

    struct
    {
      MGTransferGlobalCoarseningTools::PolynomialCoarseningSequenceType
        p_sequence = MGTransferGlobalCoarseningTools::
          PolynomialCoarseningSequenceType::decrease_by_one;
      bool perform_h_transfer = true;
    } transfer;
  };



  // This is the general parameter struct for the problem class. You will find
  // this struct divided into several categories, including general runtime
  // parameters, level limits, refine and coarsen fractions, as well as
  // parameters for cell weighting. It also contains an instance of the above
  // struct for multigrid parameters which will be passed to the multigrid
  // algorithm.
  struct Parameters
  {
    unsigned int n_cycles         = 8;
    double       tolerance_factor = 1e-12;

    MultigridParameters mg_data;

    unsigned int min_h_level            = 5;
    unsigned int max_h_level            = 12;
    unsigned int min_p_degree           = 2;
    unsigned int max_p_degree           = 6;
    unsigned int max_p_level_difference = 1;

    double refine_fraction    = 0.3;
    double coarsen_fraction   = 0.03;
    double p_refine_fraction  = 0.9;
    double p_coarsen_fraction = 0.9;

    double weighting_factor   = 1e6;
    double weighting_exponent = 1.;
  };

  // Matrix-based operator ==================================================
  // This operator assembles and stores a Trilinos sparse matrix. (A Trilinos
  // direct solver is also available in lac/trilinos_solver.h if needed.)
  template <int dim, typename number>
  class MatrixBasedOperator : public Subscriptor // no multigrid operator base class needed
  {
  public:
    using VectorType = LinearAlgebra::distributed::Vector<number>;
//    using VectorType = TrilinosWrappers::MPI::Vector;
    using MatrixType = TrilinosWrappers::SparseMatrix;

    MatrixBasedOperator() = default;
    MatrixBasedOperator(const hp::MappingCollection<dim> &mapping,
                        const hp::QCollection<dim> & quad,
                        hp::FEValues<dim> & fe_values_collection); //

    void reinit(const DoFHandler<dim> &dof_handler,
                const AffineConstraints<number> & constraints,
                VectorType                      & system_rhs);

    types::global_dof_index m() const;

    number el(unsigned int, unsigned int) const;

    void initialize_dof_vector(VectorType &vec) const;

    void vmult(VectorType &dst, const VectorType &src) const;

    void Tvmult(VectorType &dst, const VectorType &src) const;

    const MatrixType & get_system_matrix() const;

    void compute_inverse_diagonal(VectorType &diagonal) const;

  private:
    SmartPointer<const hp::MappingCollection<dim>> mapping_collection;
    SmartPointer<const hp::QCollection<dim>> quadrature_collection;
    SmartPointer<hp::FEValues<dim>> fe_values_collection;
//    std::unique_ptr<hp::FEValues<dim>> fe_values_collcetion;
    MatrixType system_matrix;
    std::shared_ptr<const dealii::Utilities::MPI::Partitioner> partitioner;
  };

  template <int dim, typename number>
  MatrixBasedOperator<dim, number>::MatrixBasedOperator(
                            const hp::MappingCollection<dim> &mapping,
                            const hp::QCollection<dim> & quad,
                            hp::FEValues<dim> & fe_values_collection)
    : mapping_collection(&mapping),
      quadrature_collection(&quad),
      fe_values_collection(&fe_values_collection)
  {
    // All expensive setup happens later in reinit(); this differs from the
    // hpbox implementation.
  }

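  // Set up and assemble the operator for the given DoFHandler: create the
  // partitioner and the Trilinos sparsity pattern, assemble the Laplace
  // stiffness matrix cell by cell with hp::FEValues, and let the constraints
  // write the local contributions (including those arising from inhomogeneous
  // constraints) into the matrix and the right-hand-side vector.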
  template <int dim, typename number>
  void MatrixBasedOperator<dim, number>::reinit(
                            const DoFHandler<dim> &dof_handler,
                            const AffineConstraints<number> & constraints,
                            VectorType                      & system_rhs)
  {
    {
      TimerOutput::Scope t(getTimer(), "setup_system");

      const MPI_Comm mpi_communicator = dof_handler.get_communicator();

      IndexSet locally_relevant_dofs;
      DoFTools::extract_locally_relevant_dofs(dof_handler,
                                              locally_relevant_dofs);

      this->partitioner = std::make_shared<const Utilities::MPI::Partitioner>(
            dof_handler.locally_owned_dofs(),
            locally_relevant_dofs,
            mpi_communicator);

      {
        TimerOutput::Scope t(getTimer(), "reinit_matrix");

        TrilinosWrappers::SparsityPattern dsp(
          dof_handler.locally_owned_dofs(), mpi_communicator);

        {
          TimerOutput::Scope t(getTimer(), "make_sparsity_pattern");

          DoFTools::make_sparsity_pattern(dof_handler,
                                          dsp,
                                          constraints,
                                          false);
        }

        dsp.compress();

        system_matrix.reinit(dsp);
      }


      {
        TimerOutput::Scope t(getTimer(), "reinit_vectors");

        system_rhs.reinit(partitioner->locally_owned_range(),
                          partitioner->get_mpi_communicator());
      }
    }

      {
        TimerOutput::Scope t(getTimer(), "assemble_system");

        FullMatrix<double>                   cell_matrix;
        Vector<double>                       cell_rhs;
        std::vector<types::global_dof_index> local_dof_indices;
        for (const auto &cell : dof_handler.active_cell_iterators())
          {
            if (cell->is_locally_owned() == false)
              continue;

            const unsigned int dofs_per_cell = cell->get_fe().dofs_per_cell;
            cell_matrix.reinit(dofs_per_cell, dofs_per_cell);
            cell_matrix = 0;
            cell_rhs.reinit(dofs_per_cell);
            cell_rhs = 0;

            fe_values_collection->reinit(cell);
            const FEValues<dim> &fe_values =
              fe_values_collection->get_present_fe_values();

            for (unsigned int q_point = 0;
                 q_point < fe_values.n_quadrature_points;
                 ++q_point)
              for (unsigned int i = 0; i < dofs_per_cell; ++i)
                {
                  for (unsigned int j = 0; j < dofs_per_cell; ++j)
                    cell_matrix(i, j) +=
                      (fe_values.shape_grad(i, q_point) * // grad phi_i(x_q)
                       fe_values.shape_grad(j, q_point) * // grad phi_j(x_q)
                       fe_values.JxW(q_point));           // dx
                }
            local_dof_indices.resize(dofs_per_cell);
            cell->get_dof_indices(local_dof_indices);

            constraints.distribute_local_to_global(cell_matrix,
                                                   cell_rhs,
                                                   local_dof_indices,
                                                   system_matrix,
                                                   system_rhs);
          }

        system_rhs.compress(VectorOperation::values::add);
        system_matrix.compress(VectorOperation::values::add);
      }
    }

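  // The following two functions provide the interface expected from a level
  // operator by the multigrid infrastructure: m() reports the size of the
  // matrix, while el() is never called in this program and therefore only
  // triggers an assertion.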
  template <int dim, typename number>
  types::global_dof_index MatrixBasedOperator<dim, number>::m() const
  {
    return system_matrix.m();
  }

  template <int dim, typename number>
  number MatrixBasedOperator<dim, number>::el(unsigned int, unsigned int) const
  {
    Assert(false, ExcNotImplemented());
    return 0;
  }

  template <int dim, typename number>
  void MatrixBasedOperator<dim,number>::initialize_dof_vector(VectorType &vec) const
  {
    //TrilinosWrappers::MPI::Vector
//    if constexpr (std::is_same<
//    typename LinearAlgebra::Vector,
//    dealii::LinearAlgebra::distributed::Vector<double>>::value)
    vec.reinit(partitioner);
//    else
//    vec.reinit(partitioner->locally_owned_range(),
//               partitioner->ghost_indices(),
//               partitioner->get_mpi_communicator());
    //
  }

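  // Matrix-vector products are forwarded to the assembled Trilinos matrix.
  // The Laplace matrix is symmetric, so Tvmult() simply calls vmult().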
  template <int dim, typename number>
  void MatrixBasedOperator<dim,number>::vmult(VectorType &dst,
                                              const VectorType &src) const
  {
    system_matrix.vmult(dst, src);
  }

  template <int dim, typename number>
  void MatrixBasedOperator<dim,number>::Tvmult(VectorType &dst,
                                              const VectorType &src) const
  {
    this->vmult(dst,src);
  }

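  // Extract the matrix diagonal and store its element-wise inverse. This
  // vector serves as the preconditioner of the Chebyshev smoother set up in
  // mg_solve().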
  template <int dim, typename number>
  void MatrixBasedOperator<dim,number>::compute_inverse_diagonal(
                                              VectorType &diagonal) const
  {
    this->initialize_dof_vector(diagonal);

    for (auto entry : system_matrix)
      if (entry.row() == entry.column())
        diagonal[entry.row()] = 1.0 / entry.value();
  }

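  // Grant access to the assembled matrix. mg_solve() uses it to initialize
  // the AMG preconditioner for the coarse-grid solver.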
  template <int dim, typename number>
  const TrilinosWrappers::SparseMatrix &
  MatrixBasedOperator<dim,number>::get_system_matrix() const
  {
    return system_matrix;
  }
  // End of matrix-based operator ===========================================







  // @sect3{Solver and preconditioner}

  // @sect4{Conjugate-gradient solver with multigrid preconditioner}

  // This function solves the equation system with a sequence of provided
  // multigrid objects. It is meant to be treated as general as possible, hence
  // the multitude of template parameters.
  template <typename VectorType,
            int dim,
            typename SystemMatrixType,
            typename LevelMatrixType,
            typename MGTransferType>
  static void
  mg_solve(SolverControl &            solver_control,
           VectorType &               dst,
           const VectorType &         src,
           const MultigridParameters &mg_data,
           const DoFHandler<dim> &    dof,
           const SystemMatrixType &   fine_matrix,
           const MGLevelObject<std::unique_ptr<LevelMatrixType>> &mg_matrices,
           const MGTransferType &                                 mg_transfer)
  {
    AssertThrow(mg_data.coarse_solver.type == "cg_with_amg",
                ExcNotImplemented());
    AssertThrow(mg_data.smoother.type == "chebyshev", ExcNotImplemented());

    const unsigned int min_level = mg_matrices.min_level();
    const unsigned int max_level = mg_matrices.max_level();

    using SmootherPreconditionerType = DiagonalMatrix<VectorType>;
    using SmootherType               = PreconditionChebyshev<LevelMatrixType,
                                               VectorType,
                                               SmootherPreconditionerType>;
    using PreconditionerType = PreconditionMG<dim, VectorType, MGTransferType>;

    // We initialize level operators and Chebyshev smoothers here.
    mg::Matrix<VectorType> mg_matrix(mg_matrices);

    MGLevelObject<typename SmootherType::AdditionalData> smoother_data(
      min_level, max_level);

    for (unsigned int level = min_level; level <= max_level; level++)
      {
        smoother_data[level].preconditioner =
          std::make_shared<SmootherPreconditionerType>();
        mg_matrices[level]->compute_inverse_diagonal(
          smoother_data[level].preconditioner->get_vector());
        smoother_data[level].smoothing_range = mg_data.smoother.smoothing_range;
        smoother_data[level].degree          = mg_data.smoother.degree;
        smoother_data[level].eig_cg_n_iterations =
          mg_data.smoother.eig_cg_n_iterations;
      }

    MGSmootherPrecondition<LevelMatrixType, SmootherType, VectorType>
      mg_smoother;
    mg_smoother.initialize(mg_matrices, smoother_data);

    // Next, we initialize the coarse-grid solver. We use the conjugate-gradient
    // method with AMG as preconditioner.
    ReductionControl coarse_grid_solver_control(mg_data.coarse_solver.maxiter,
                                                mg_data.coarse_solver.abstol,
                                                mg_data.coarse_solver.reltol,
                                                false,
                                                false);
    SolverCG<VectorType> coarse_grid_solver(coarse_grid_solver_control);

    std::unique_ptr<MGCoarseGridBase<VectorType>> mg_coarse;

    TrilinosWrappers::PreconditionAMG                 precondition_amg;
    TrilinosWrappers::PreconditionAMG::AdditionalData amg_data;
    amg_data.smoother_sweeps = mg_data.coarse_solver.smoother_sweeps;
    amg_data.n_cycles        = mg_data.coarse_solver.n_cycles;
    amg_data.smoother_type   = mg_data.coarse_solver.smoother_type.c_str();

    precondition_amg.initialize(mg_matrices[min_level]->get_system_matrix(),
                                amg_data);

    mg_coarse =
      std::make_unique<MGCoarseGridIterativeSolver<VectorType,
                                                   SolverCG<VectorType>,
                                                   LevelMatrixType,
                                                   decltype(precondition_amg)>>(
        coarse_grid_solver, *mg_matrices[min_level], precondition_amg);

    // Finally, we create the Multigrid object, convert it to a preconditioner,
    // and use it inside of a conjugate-gradient solver to solve the linear
    // system of equations.
    Multigrid<VectorType> mg(
      mg_matrix, *mg_coarse, mg_transfer, mg_smoother, mg_smoother);

    PreconditionerType preconditioner(dof, mg, mg_transfer);

    SolverCG<VectorType>(solver_control)
      .solve(fine_matrix, dst, src, preconditioner);
  }


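  // This function builds the complete multigrid hierarchy for the
  // matrix-based operator: it creates the sequence of coarser triangulations
  // and DoFHandler objects, sets up constraints and one MatrixBasedOperator
  // per level, initializes the two-level transfer operators, and finally
  // hands everything over to mg_solve().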
  template <typename VectorType, typename OperatorType, int dim>
  void solve_with_gmg(SolverControl &                  solver_control,
                      const OperatorType &             system_matrix,
                      VectorType &                     dst,
                      const VectorType &               src,
                      const MultigridParameters &      mg_data,
                      const hp::MappingCollection<dim> mapping_collection,
                      const DoFHandler<dim> &          dof_handler,
                      const hp::QCollection<dim> &     quadrature_collection,
                      hp::FEValues<dim> &        fe_values_collection)
  {
    MGLevelObject<DoFHandler<dim>>                     dof_handlers;
    MGLevelObject<std::unique_ptr<OperatorType>>       operators;
    MGLevelObject<MGTwoLevelTransfer<dim, VectorType>> transfers;

    std::vector<std::shared_ptr<const Triangulation<dim>>>
      coarse_grid_triangulations;
    if (mg_data.transfer.perform_h_transfer)
      coarse_grid_triangulations =
        MGTransferGlobalCoarseningTools::create_geometric_coarsening_sequence(
          dof_handler.get_triangulation());
    else
      coarse_grid_triangulations.emplace_back(
        const_cast<Triangulation<dim> *>(&(dof_handler.get_triangulation())),
        [](auto &) {});

    // Determine the total number of levels for the multigrid operation and
    // allocate sufficient memory for all levels.
    const unsigned int n_h_levels = coarse_grid_triangulations.size() - 1;

    const auto get_max_active_fe_degree = [&](const auto &dof_handler) {
      unsigned int max = 0;

      for (auto &cell : dof_handler.active_cell_iterators())
        if (cell->is_locally_owned())
          max =
            std::max(max, dof_handler.get_fe(cell->active_fe_index()).degree);

      return Utilities::MPI::max(max, MPI_COMM_WORLD);
    };

    const unsigned int n_p_levels =
      MGTransferGlobalCoarseningTools::create_polynomial_coarsening_sequence(
        get_max_active_fe_degree(dof_handler), mg_data.transfer.p_sequence)
        .size();

    std::map<unsigned int, unsigned int> fe_index_for_degree;
    for (unsigned int i = 0; i < dof_handler.get_fe_collection().size(); ++i)
      {
        const unsigned int degree = dof_handler.get_fe(i).degree;
        Assert(fe_index_for_degree.find(degree) == fe_index_for_degree.end(),
               ExcMessage("FECollection does not contain unique degrees."));
        fe_index_for_degree[degree] = i;
      }

    unsigned int minlevel   = 0;
    unsigned int minlevel_p = n_h_levels;
    unsigned int maxlevel   = n_h_levels + n_p_levels - 1;

    dof_handlers.resize(minlevel, maxlevel);
    operators.resize(minlevel, maxlevel);
    transfers.resize(minlevel, maxlevel);

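    // The h-levels are built on the globally coarsened meshes, while the
    // p-levels reuse the finest triangulation and assign successively lower
    // polynomial degrees; the finest p-level copies the active FE indices
    // from the original DoFHandler.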
    for (unsigned int l = 0; l < n_h_levels; ++l)
      {
        dof_handlers[l].reinit(*coarse_grid_triangulations[l]);
        dof_handlers[l].distribute_dofs(dof_handler.get_fe_collection());
      }

    for (unsigned int i = 0, l = maxlevel; i < n_p_levels; ++i, --l)
      {
        dof_handlers[l].reinit(dof_handler.get_triangulation());

        if (l == maxlevel) // finest level
          {
            auto &dof_handler_mg = dof_handlers[l];

            auto cell_other = dof_handler.begin_active();
            for (auto &cell : dof_handler_mg.active_cell_iterators())
              {
                if (cell->is_locally_owned())
                  cell->set_active_fe_index(cell_other->active_fe_index());
                cell_other++;
              }
          }
        else // coarse level
          {
            auto &dof_handler_fine   = dof_handlers[l + 1];
            auto &dof_handler_coarse = dof_handlers[l + 0];

            auto cell_other = dof_handler_fine.begin_active();
            for (auto &cell : dof_handler_coarse.active_cell_iterators())
              {
                if (cell->is_locally_owned())
                  {
                    const unsigned int next_degree =
                      MGTransferGlobalCoarseningTools::
                        create_next_polynomial_coarsening_degree(
                          cell_other->get_fe().degree,
                          mg_data.transfer.p_sequence);
                    Assert(fe_index_for_degree.find(next_degree) !=
                             fe_index_for_degree.end(),
                           ExcMessage("Next polynomial degree in sequence "
                                      "does not exist in FECollection."));

                    cell->set_active_fe_index(fe_index_for_degree[next_degree]);
                  }
                cell_other++;
              }
          }

        dof_handlers[l].distribute_dofs(dof_handler.get_fe_collection());
      }

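    // On every level we build hanging-node constraints together with
    // homogeneous Dirichlet boundary constraints and then create and
    // assemble the matrix-based level operator.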
    MGLevelObject<AffineConstraints<typename VectorType::value_type>>
      constraints(minlevel, maxlevel);

    for (unsigned int level = minlevel; level <= maxlevel; ++level)
      {
        const auto &dof_handler = dof_handlers[level];
        auto &      constraint  = constraints[level];

        IndexSet locally_relevant_dofs;
        DoFTools::extract_locally_relevant_dofs(dof_handler,
                                                locally_relevant_dofs);
        constraint.reinit(locally_relevant_dofs);

        DoFTools::make_hanging_node_constraints(dof_handler, constraint);
        VectorTools::interpolate_boundary_values(mapping_collection,
                                                 dof_handler,
                                                 0,
                                                 Functions::ZeroFunction<dim>(),
                                                 constraint);
        constraint.close();

        VectorType dummy;

//        operators[level] = std::make_unique<OperatorType>(mapping_collection,
//                                                          dof_handler,
//                                                          quadrature_collection,
//                                                          constraint,
//                                                          dummy);
        operators[level] = std::make_unique<OperatorType>(mapping_collection,
                                                          quadrature_collection,
                                                          fe_values_collection);
        operators[level]->reinit(dof_handler, constraint, dummy);
      }

    // Set up intergrid operators and collect transfer operators within a single
    // operator as needed by the Multigrid solver class.
    for (unsigned int level = minlevel; level < minlevel_p; ++level)
      transfers[level + 1].reinit_geometric_transfer(dof_handlers[level + 1],
                                                     dof_handlers[level],
                                                     constraints[level + 1],
                                                     constraints[level]);

    for (unsigned int level = minlevel_p; level < maxlevel; ++level)
      transfers[level + 1].reinit_polynomial_transfer(dof_handlers[level + 1],
                                                      dof_handlers[level],
                                                      constraints[level + 1],
                                                      constraints[level]);

    MGTransferGlobalCoarsening<dim, VectorType> transfer(
      transfers, [&](const auto l, auto &vec) {
        operators[l]->initialize_dof_vector(vec);
      });

    // Finally, proceed to solve the problem with multigrid.
    mg_solve(solver_control,
             dst,
             src,
             mg_data,
             dof_handler,
             system_matrix,
             operators,
             transfer);
  }


  template <int dim>
  class LaplaceProblem
  {
  public:
    LaplaceProblem(const Parameters &parameters);

    void run();

  private:
    void initialize_grid();
    void setup_system();
    void print_diagnostics();
    void solve_system();
    void compute_indicators();
    void adapt_resolution();
    void output_results(const unsigned int cycle);

    MPI_Comm mpi_communicator;

    const Parameters prm;

    parallel::distributed::Triangulation<dim> triangulation;
    DoFHandler<dim>                           dof_handler;

    hp::MappingCollection<dim> mapping_collection;
    hp::FECollection<dim>      fe_collection;
    hp::QCollection<dim>       quadrature_collection;
    hp::QCollection<dim - 1>   face_quadrature_collection;
    //std::unique_ptr<hp::FEValues<dim>> fe_values_collcetion;

    IndexSet locally_owned_dofs;
    IndexSet locally_relevant_dofs;

    AffineConstraints<double> constraints;

//    LaplaceOperator<dim, double>               laplace_operator;
    LinearAlgebra::distributed::Vector<double> locally_relevant_solution;
    LinearAlgebra::distributed::Vector<double> system_rhs;
//    TrilinosWrappers::MPI::Vector locally_relevant_solution;
//    TrilinosWrappers::MPI::Vector system_rhs;

    // Members for the matrix-based operator: the operator itself and the
    // hp::FEValues object used for assembly.
    std::unique_ptr<MatrixBasedOperator<dim, double>> operator_matrixbased;
//    hp::FEValues<dim>                          fe_values_collection;
    std::unique_ptr<hp::FEValues<dim>>         fe_values_collection;

    std::unique_ptr<FESeries::Legendre<dim>>    legendre;
    std::unique_ptr<parallel::CellWeights<dim>> cell_weights;

    Vector<float> estimated_error_per_cell;
    Vector<float> hp_decision_indicators;

    ConditionalOStream pcout;
    TimerOutput        computing_timer;
  };


  template <int dim>
  LaplaceProblem<dim>::LaplaceProblem(const Parameters &parameters)
    : mpi_communicator(MPI_COMM_WORLD)
    , prm(parameters)
    , triangulation(mpi_communicator)
    , dof_handler(triangulation)
    , pcout(std::cout,
            (Utilities::MPI::this_mpi_process(mpi_communicator) == 0))
    , computing_timer(mpi_communicator,
                      pcout,
                      TimerOutput::summary,
                      TimerOutput::wall_times)
  {
    Assert(prm.min_h_level <= prm.max_h_level,
           ExcMessage(
             "Triangulation level limits have been incorrectly set up."));
    Assert(prm.min_p_degree <= prm.max_p_degree,
           ExcMessage("FECollection degrees have been incorrectly set up."));

    // prepare collections
    mapping_collection.push_back(MappingQ1<dim>());

    for (unsigned int degree = 1; degree <= prm.max_p_degree; ++degree)
      {
        fe_collection.push_back(FE_Q<dim>(degree));
        quadrature_collection.push_back(QGauss<dim>(degree + 1));
        face_quadrature_collection.push_back(QGauss<dim - 1>(degree + 1));
      }

    const unsigned int min_fe_index = prm.min_p_degree - 1;
    fe_collection.set_hierarchy(
      /*next_index=*/
      [](const typename hp::FECollection<dim> &fe_collection,
         const unsigned int                    fe_index) -> unsigned int {
        return ((fe_index + 1) < fe_collection.size()) ? fe_index + 1 :
                                                         fe_index;
      },
      /*previous_index=*/
      [min_fe_index](const typename hp::FECollection<dim> &,
                     const unsigned int fe_index) -> unsigned int {
        Assert(fe_index >= min_fe_index,
               ExcMessage("Finite element is not part of hierarchy!"));
        return (fe_index > min_fe_index) ? fe_index - 1 : fe_index;
      });

    // For the matrix-based operator, create the hp::FEValues object once and
    // precalculate its FEValues objects.
    fe_values_collection = std::make_unique<hp::FEValues<dim>>(
                mapping_collection,
                fe_collection,
                quadrature_collection,
                update_values | update_gradients | update_quadrature_points |
                  update_JxW_values);
    fe_values_collection->precalculate_fe_values();

    operator_matrixbased =
        std::make_unique<MatrixBasedOperator<dim, double>>(
          mapping_collection, quadrature_collection,
          *fe_values_collection);

    // We initialize the FESeries::Legendre object in the default configuration
    // for smoothness estimation.
    legendre = std::make_unique<FESeries::Legendre<dim>>(
      SmoothnessEstimator::Legendre::default_fe_series(fe_collection));

    cell_weights = std::make_unique<parallel::CellWeights<dim>>(
      dof_handler,
      parallel::CellWeights<dim>::ndofs_weighting(
        {prm.weighting_factor, prm.weighting_exponent}));

    triangulation.signals.post_p4est_refinement.connect(
      [&, min_fe_index]() {
        const parallel::distributed::TemporarilyMatchRefineFlags<dim>
          refine_modifier(triangulation);
        hp::Refinement::limit_p_level_difference(dof_handler,
                                                 prm.max_p_level_difference,
                                                 /*contains=*/min_fe_index);
      },
      boost::signals2::at_front);
  }

  template <int dim>
  void LaplaceProblem<dim>::initialize_grid()
  {
    TimerOutput::Scope t(computing_timer, "initialize grid");

    std::vector<unsigned int> repetitions(dim);
    Point<dim>                bottom_left, top_right;
    for (unsigned int d = 0; d < dim; ++d)
      if (d < 2)
        {
          repetitions[d] = 2;
          bottom_left[d] = -1.;
          top_right[d]   = 1.;
        }
      else
        {
          repetitions[d] = 1;
          bottom_left[d] = 0.;
          top_right[d]   = 1.;
        }

    std::vector<int> cells_to_remove(dim, 1);
    cells_to_remove[0] = -1;

    GridGenerator::subdivided_hyper_L(
      triangulation, repetitions, bottom_left, top_right, cells_to_remove);

    triangulation.refine_global(prm.min_h_level);

    const unsigned int min_fe_index = prm.min_p_degree - 1;
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        cell->set_active_fe_index(min_fe_index);
  }


  template <int dim>
  void LaplaceProblem<dim>::setup_system()
  {
    TimerOutput::Scope t(computing_timer, "setup system");

    dof_handler.distribute_dofs(fe_collection);

    locally_owned_dofs = dof_handler.locally_owned_dofs();
    DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

    locally_relevant_solution.reinit(locally_owned_dofs,
                                     locally_relevant_dofs,
                                     mpi_communicator);
    system_rhs.reinit(locally_owned_dofs, mpi_communicator);

    constraints.clear();
    constraints.reinit(locally_relevant_dofs);
    DoFTools::make_hanging_node_constraints(dof_handler, constraints);
    VectorTools::interpolate_boundary_values(
      mapping_collection, dof_handler, 0, Solution<dim>(), constraints);
    constraints.close();
  }



  template <int dim>
  void LaplaceProblem<dim>::print_diagnostics()
  {
    const unsigned int first_n_processes =
      std::min<unsigned int>(8,
                             Utilities::MPI::n_mpi_processes(mpi_communicator));
    const bool output_cropped =
      first_n_processes < Utilities::MPI::n_mpi_processes(mpi_communicator);

    {
      pcout << "   Number of active cells:       "
            << triangulation.n_global_active_cells() << std::endl
            << "     by partition:              ";

      std::vector<unsigned int> n_active_cells_per_subdomain =
        Utilities::MPI::gather(mpi_communicator,
                               triangulation.n_locally_owned_active_cells());
      for (unsigned int i = 0; i < first_n_processes; ++i)
        pcout << ' ' << n_active_cells_per_subdomain[i];
      if (output_cropped)
        pcout << " ...";
      pcout << std::endl;
    }

    {
      pcout << "   Number of degrees of freedom: " << dof_handler.n_dofs()
            << std::endl
            << "     by partition:              ";

      std::vector<types::global_dof_index> n_dofs_per_subdomain =
        Utilities::MPI::gather(mpi_communicator,
                               dof_handler.n_locally_owned_dofs());
      for (unsigned int i = 0; i < first_n_processes; ++i)
        pcout << ' ' << n_dofs_per_subdomain[i];
      if (output_cropped)
        pcout << " ...";
      pcout << std::endl;
    }

    {
      std::vector<types::global_dof_index> n_constraints_per_subdomain =
        Utilities::MPI::gather(mpi_communicator, constraints.n_constraints());

      pcout << "   Number of constraints:        "
            << std::accumulate(n_constraints_per_subdomain.begin(),
                               n_constraints_per_subdomain.end(),
                               0)
            << std::endl
            << "     by partition:              ";
      for (unsigned int i = 0; i < first_n_processes; ++i)
        pcout << ' ' << n_constraints_per_subdomain[i];
      if (output_cropped)
        pcout << " ...";
      pcout << std::endl;
    }

    {
      std::vector<unsigned int> n_fe_indices(fe_collection.size(), 0);
      for (const auto &cell : dof_handler.active_cell_iterators())
        if (cell->is_locally_owned())
          n_fe_indices[cell->active_fe_index()]++;

      Utilities::MPI::sum(n_fe_indices, mpi_communicator, n_fe_indices);

      pcout << "   Frequencies of poly. degrees:";
      for (unsigned int i = 0; i < fe_collection.size(); ++i)
        if (n_fe_indices[i] > 0)
          pcout << ' ' << fe_collection[i].degree << ":" << n_fe_indices[i];
      pcout << std::endl;
    }
  }



  // @sect4{LaplaceProblem::solve_system}

  // The scaffold around the solution is similar to the one of step-40. We
  // prepare a fully distributed solution vector, solve the equation system with
  // the matrix-based operator via the function introduced earlier, and collect
  // the locally relevant degrees of freedom after we have solved the equation
  // system.
  template <int dim>
  void LaplaceProblem<dim>::solve_system()
  {
    TimerOutput::Scope t(computing_timer, "solve system");

    LinearAlgebra::distributed::Vector<double> completely_distributed_solution;
    LinearAlgebra::distributed::Vector<double> completely_distributed_system_rhs;
//    TrilinosWrappers::MPI::Vector completely_distributed_solution;
//    TrilinosWrappers::MPI::Vector completely_distributed_system_rhs; //?

    completely_distributed_solution.reinit(locally_owned_dofs, mpi_communicator);
    completely_distributed_system_rhs = system_rhs;

    SolverControl solver_control(completely_distributed_system_rhs.size(),
                                 prm.tolerance_factor *
                                  completely_distributed_system_rhs.l2_norm());

    solve_with_gmg(solver_control,
                   *operator_matrixbased,
                   completely_distributed_solution,
                   completely_distributed_system_rhs,
                   prm.mg_data,
                   mapping_collection,
                   dof_handler,
                   quadrature_collection,
                   *fe_values_collection);

    pcout << "   Solved in " << solver_control.last_step() << " iterations."
          << std::endl;

    constraints.distribute(completely_distributed_solution);

    locally_relevant_solution = completely_distributed_solution;
    locally_relevant_solution.update_ghost_values();
  }



  // @sect4{LaplaceProblem::compute_indicators}

  // This function contains only a part of the typical <code>refine_grid</code>
  // function from other tutorials and is new in that sense. Here, we will only
  // calculate all indicators for adaptation without actually refining the grid.
  // We do this for the purpose of writing all indicators to the file system, so
  // we store them for later.
  //
  // Since we are dealing with an elliptic problem, we will make use of the
  // KellyErrorEstimator again, but with a slight difference. Modifying the
  // scaling factor of the underlying face integrals to be dependent on the
  // actual polynomial degree of the neighboring elements is favorable in
  // hp-adaptive applications @cite davydov2017hp. We can do this by specifying
  // the very last parameter from the additional ones you notice. The others
  // are actually just the defaults.
  //
  // For the purpose of hp-adaptation, we will calculate smoothness estimates
  // with the strategy presented in the tutorial introduction and use the
  // implementation in SmoothnessEstimator::Legendre. In the Parameters struct,
  // we set the minimal polynomial degree to 2 as it seems that the smoothness
  // estimation algorithms have trouble with linear elements.
  template <int dim>
  void LaplaceProblem<dim>::compute_indicators()
  {
    TimerOutput::Scope t(computing_timer, "compute indicators");

    estimated_error_per_cell.grow_or_shrink(triangulation.n_active_cells());
    KellyErrorEstimator<dim>::estimate(
      dof_handler,
      face_quadrature_collection,
      std::map<types::boundary_id, const Function<dim> *>(),
      locally_relevant_solution,
      estimated_error_per_cell,
      /*component_mask=*/ComponentMask(),
      /*coefficients=*/nullptr,
      /*n_threads=*/numbers::invalid_unsigned_int,
      /*subdomain_id=*/numbers::invalid_subdomain_id,
      /*material_id=*/numbers::invalid_material_id,
      /*strategy=*/
      KellyErrorEstimator<dim>::Strategy::face_diameter_over_twice_max_degree);

    hp_decision_indicators.grow_or_shrink(triangulation.n_active_cells());
    SmoothnessEstimator::Legendre::coefficient_decay(*legendre,
                                                     dof_handler,
                                                     locally_relevant_solution,
                                                     hp_decision_indicators);
  }


  template <int dim>
  void LaplaceProblem<dim>::adapt_resolution()
  {
    TimerOutput::Scope t(computing_timer, "adapt resolution");

    parallel::distributed::GridRefinement::refine_and_coarsen_fixed_number(
      triangulation,
      estimated_error_per_cell,
      prm.refine_fraction,
      prm.coarsen_fraction);

    hp::Refinement::p_adaptivity_fixed_number(dof_handler,
                                              hp_decision_indicators,
                                              prm.p_refine_fraction,
                                              prm.p_coarsen_fraction);

    hp::Refinement::choose_p_over_h(dof_handler);

    Assert(triangulation.n_levels() >= prm.min_h_level + 1 &&
             triangulation.n_levels() <= prm.max_h_level + 1,
           ExcInternalError());

    if (triangulation.n_levels() > prm.max_h_level)
      for (const auto &cell :
           triangulation.active_cell_iterators_on_level(prm.max_h_level))
        cell->clear_refine_flag();

    for (const auto &cell :
         triangulation.active_cell_iterators_on_level(prm.min_h_level))
      cell->clear_coarsen_flag();

    triangulation.execute_coarsening_and_refinement();
  }




  template <int dim>
  void LaplaceProblem<dim>::output_results(const unsigned int cycle)
  {
    TimerOutput::Scope t(computing_timer, "output results");

    Vector<float> fe_degrees(triangulation.n_active_cells());
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        fe_degrees(cell->active_cell_index()) = cell->get_fe().degree;

    Vector<float> subdomain(triangulation.n_active_cells());
    for (auto &subd : subdomain)
      subd = triangulation.locally_owned_subdomain();

    DataOut<dim> data_out;
    data_out.attach_dof_handler(dof_handler);
    data_out.add_data_vector(locally_relevant_solution, "solution");
    data_out.add_data_vector(fe_degrees, "fe_degree");
    data_out.add_data_vector(subdomain, "subdomain");
    data_out.add_data_vector(estimated_error_per_cell, "error");
    data_out.add_data_vector(hp_decision_indicators, "hp_indicator");
    data_out.build_patches(mapping_collection);

    data_out.write_vtu_with_pvtu_record(
      "./", "solution", cycle, mpi_communicator, 2, 1);
  }




  template <int dim>
  void LaplaceProblem<dim>::run()
  {
    pcout << "Running with Trilinos on "
          << Utilities::MPI::n_mpi_processes(mpi_communicator)
          << " MPI rank(s)..." << std::endl;

    {
      pcout << "Calculating transformation matrices..." << std::endl;
      TimerOutput::Scope t(computing_timer, "calculate transformation");
      legendre->precalculate_all_transformation_matrices();
    }

    for (unsigned int cycle = 0; cycle < prm.n_cycles; ++cycle)
      {
        pcout << "Cycle " << cycle << ':' << std::endl;

        if (cycle == 0)
          initialize_grid();
        else
          adapt_resolution();

        setup_system();

        {
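          // Assemble the system matrix and right-hand side for the current
          // mesh, finite element distribution, and constraints.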
          operator_matrixbased->reinit(dof_handler,
                                       constraints,
                                       system_rhs);
          print_diagnostics();

          solve_system();
//          solve(*operator_matrixbased,
//                              locally_relevant_solution,
//                              system_rhs);
        }

        compute_indicators();

        if (Utilities::MPI::n_mpi_processes(mpi_communicator) <= 32)
          output_results(cycle);

        computing_timer.print_summary();
        computing_timer.reset();

        pcout << std::endl;
      }
  }
} // namespace Step75




int main(int argc, char *argv[])
{
  try
    {
      using namespace dealii;
      using namespace Step75;

      Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

      Parameters        prm;
      LaplaceProblem<2> laplace_problem(prm);
      laplace_problem.run();
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Exception on processing: " << std::endl
                << exc.what() << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;

      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }

  return 0;
}
