Dear all,

I am currently working on a parallel adaptive algorithm. To this end, I 
use p4est and Trilinos via a parallel::distributed::Triangulation.
A DWR error estimator and global refinement already produce reasonable results.

I think that I found a wrong assertion and a bug, but maybe I am mistaken 
and did not use the given methods correctly. I would really appreciate it 
if someone could look into this and either confirm my suspicion or prove me 
wrong.

1) Maybe I found a wrong assertion in FETools.
    I have trouble with FETools while working in debug mode. I cannot 
use the provided FETools::interpolate for the higher order 
reconstruction that is typically used in the context of the DWR method. If 
I am not mistaken, we have to use a ghosted vector for the InputVector and 
a non-ghosted one for the OutputVector. But I cannot access the 
locally_owned_elements of the ghosted vector, which I created based on 
locally_relevant_dofs.
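   Condensed to its core, the failing combination looks like this (a 
sketch of mini example no. 1, using the variable names from my code):

```cpp
// Ghosted vector (read access), built from the overlapping
// locally_relevant_dofs index set:
TrilinosWrappers::MPI::Vector solution_rel;
solution_rel.reinit (locally_relevant_dofs, mpi_communicator);

// Non-ghosted vector (write access), built from locally_owned_dofs_ho:
TrilinosWrappers::MPI::Vector solution_loc_ho;
solution_loc_ho.reinit (locally_owned_dofs_ho, mpi_communicator);

// In debug mode this aborts, because the assertion calls
// locally_owned_elements() on the ghosted input vector:
FETools::interpolate (dof_handler, solution_rel,
                      dof_handler_ho, solution_loc_ho);
```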
   This is a part of the error which I receive:

       An error occurred in line <1099> of file
       </home/krosin/SWD/deal_ii/dealii-8.5.1/include/deal.II/lac/trilinos_vector_base.h>
       in function
           dealii::IndexSet dealii::TrilinosWrappers::VectorBase::locally_owned_elements() const
       The violated condition was:
           owned_elements.size()==size()
       Additional information:
           The locally owned elements have not been properly initialized!
           This happens for example if this object has been initialized with
           exactly one overlapping IndexSet.
    
   In release mode (and after deleting lines 92, 94-97 in the file 
dealii-8.5.1/source/fe/fe_tools_interpolate.cc) everything runs nicely.
   The content of those lines is:
        (92)   const IndexSet &dof1_local_dofs = dof1.locally_owned_dofs();
        (93)   const IndexSet &dof2_local_dofs = dof2.locally_owned_dofs();
        (94)   const IndexSet u1_elements = u1.locally_owned_elements();
        (95)   const IndexSet u2_elements = u2.locally_owned_elements();
        (96)   Assert(u1_elements == dof1_local_dofs,
                      ExcMessage("The provided vector and DoF handler should have the same"
                                 " index sets."));
        (97)   Assert(u2_elements == dof2_local_dofs,
                      ExcMessage("The provided vector and DoF handler should have the same"
                                 " index sets."));
   
   Hence I suspect that the assertion in fe_tools_interpolate.cc, which is 
also used in, e.g., interpolation_difference, is simply not correct for the 
parallel::distributed case.
   I have also created a mini example (no. 1). Therein, I included a 
suggestion for a different assertion.
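   For reference, the alternative assertion suggested in the mini example 
reads:

```cpp
// Only check that the read-only input vector is usable, i.e. that it
// either matches the DoFHandler or carries ghost elements:
Assert (u1.size () == dof1.n_dofs () || u1.has_ghost_elements (),
        ExcMessage ("The read only vector u1 must have ghosted elements, "
                    "when you use distributed Trilinos vectors."));
```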


2) I also have a much worse problem right now, for which I have not found a 
satisfying solution yet.
   It seems to me that I cannot use the MeshSmoothing methods with the 
parallel::distributed::Triangulation. In particular, the flag patch_level_1 
causes trouble.
   See the mini example (no. 2).

   This is the conclusion of my colleague Dustin Kumor and me:
   The patch_level_1 property is only satisfied on the locally owned 
cells, which is all I need for my computations, but the mesh smoothing 
presumes that this property is satisfied on the complete triangulation (i.e., 
on each processor including the artificial cells).
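   Roughly, the check we have in mind is the following (a sketch of our 
reasoning, not library code; it assumes patch level 1 means that the 
children of every refined cell are either all active or all refined 
further):

```cpp
// Returns false if a locally owned patch is "broken", i.e. some but not
// all of the children of a refined cell are refined further.
template <int dim>
bool
locally_owned_part_has_patch_level_1 (const Triangulation<dim> &tria)
{
  typename Triangulation<dim>::cell_iterator cell = tria.begin ();
  for (; cell != tria.end (); ++cell)
    if (cell->has_children ())
      {
        bool owned      = false; // patch contains an owned, active child
        bool all_active = true;  // all children of this cell are active
        for (unsigned int c = 0; c < cell->n_children (); ++c)
          {
            if (cell->child (c)->active ())
              owned |= cell->child (c)->is_locally_owned ();
            else
              all_active = false;
          }
        if (owned && !all_active)
          return false;
      }
  return true;
}
```

On ghost and artificial cells this condition may be violated, which seems 
to be exactly what confuses the smoothing.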

   However, I do have a rather dirty hack, where I completely ignore 
everything related to coarsening flags in STEP 5 (ensure patch level 1) of 
prepare_coarsening_and_refinement() in tria.cc.
   Dustin and I believe that a small change to the following lines in 
tria.cc (version 8.5.1) solves the problem with the refinement flags:
        (146)       if (cell->child(i)->active())
      (12997)       if (cell->child(0)->has_children() == true)
   to
        (146)       if (cell->child(i)->active() && cell->child(i)->is_locally_owned())
      (12997)       if (cell->child(0)->has_children() == true || !cell->child(0)->is_locally_owned())


   By the way: Is it possible to consider only the refinement flags and to 
ignore all coarsening flags?
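   What I would like to be able to write, in user code, is something like 
this hypothetical sketch (clearing all coarsen flags so that only 
refinement remains):

```cpp
// Hypothetical sketch: keep the refinement flags, drop every coarsen
// flag, and only then let the triangulation change the mesh.
typename Triangulation<dim>::active_cell_iterator cell =
    triangulation.begin_active ();
for (; cell != triangulation.end (); ++cell)
  if (cell->is_locally_owned () && cell->coarsen_flag_set ())
    cell->clear_coarsen_flag ();
triangulation.execute_coarsening_and_refinement ();
```

though I am not sure this interacts correctly with the flags that the 
smoothing step itself sets afterwards.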


I am sorry for the lengthy text. I hope that I have provided all relevant 
information, and I am looking forward to your input.


Best,
Korinna


PS: 
Used versions are:

gcc        4.8.8
openmpi    1.10
trilinos   12.4
p4est      1.1
deal.II    8.5.1 and 9.0.0-pre

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
//============================================================================
// Name        : fictitious_contact.cpp
// Author      : Korinna Rosin
// Version     : 0.1
// Created on  : 14.06.2017
// Copyright   : see header file
// Description : see header file
//============================================================================

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/exceptions.h>
#include <deal.II/base/function.h>
#include <deal.II/base/geometry_info.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/logstream.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/types.h>
#include <deal.II/base/utilities.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_tools.h>
#include <deal.II/fe/fe_values_extractors.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/trilinos_vector_base.h>
#include <deal.II/lac/trilinos_precondition.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/data_component_interpretation.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/vector_tools.h>
#include <mpi.h>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <map>
#include <memory>
#include <sstream>
#include <string>
#include <vector>

using namespace dealii;
typedef TrilinosWrappers::MPI::Vector VectorType;

template <int dim>
  class MeshSmoothingTest
  {
    public:
      MeshSmoothingTest (std::string out_file);

      ~MeshSmoothingTest ()
      {
      }

      void
      run ();

    private:
      void
      make_grid ();
      void
      dirichlet_constraints ();
      void
      setup_system ();

      void
      output_results (const std::string &title) const;

      /**
       * PRIVATE VARIABLE
       */

      // MPI
      MPI_Comm mpi_communicator;
      parallel::distributed::Triangulation<dim> triangulation;

      FE_Q<dim> u;
      FE_Q<dim> u_higher_order;

      DoFHandler<dim> dof_handler;
      DoFHandler<dim> dof_handler_ho;

      IndexSet locally_owned_dofs;
      IndexSet locally_relevant_dofs;

      IndexSet locally_owned_dofs_ho;
      IndexSet locally_relevant_dofs_ho;

      // Ofstream setting to a file
      std::ofstream out;
      ConditionalOStream pcout;

      VectorType solution_rel; // PRIMAL SOLUTION
      VectorType solution_rel_ho;

      VectorType solution_loc;
      VectorType solution_loc_ho;

      bool graphical_output = true;

      void
      interpolate (const DoFHandler<dim> &dof1,
                   const TrilinosWrappers::MPI::Vector &u1,
                   const DoFHandler<dim> &dof2,
                   const ConstraintMatrix &constraints,
                   TrilinosWrappers::MPI::Vector &u2)
      {
        typedef typename TrilinosWrappers::MPI::Vector::value_type number;

        Assert(&dof1.get_triangulation()==&dof2.get_triangulation(),
            ExcInternalError());

        Assert(u1.size()==dof1.n_dofs(),
            ExcDimensionMismatch(u1.size(), dof1.n_dofs()));
        Assert(u2.size()==dof2.n_dofs(),
            ExcDimensionMismatch(u2.size(), dof2.n_dofs()));

        const IndexSet u2_elements = u2.locally_owned_elements ();
#ifdef DEBUG
        const IndexSet &dof2_local_dofs = dof2.locally_owned_dofs();
        Assert(u1.size() == dof1.n_dofs() || u1.has_ghost_elements(),
            ExcMessage("The read only vector u1 must have ghosted elements, when you use distributed Trilinos vectors."));
        Assert(u2_elements == dof2_local_dofs,
            ExcMessage("The provided vector and DoF handler should have the same"
                " index sets."));
#endif

        // allocate vectors at maximal
        // size. will be reinited in inner
        // cell, but Vector makes sure that
        // this does not lead to
        // reallocation of memory
        Vector<number> u1_local (DoFTools::max_dofs_per_cell (dof1));
        Vector<number> u2_local (DoFTools::max_dofs_per_cell (dof2));

        // have a map for interpolation
        // matrices. shared_ptr make sure
        // that memory is released again
        std::map<const FiniteElement<dim> *,
            std::map<const FiniteElement<dim> *,
                std::shared_ptr<FullMatrix<double> > > > interpolation_matrices;

        typename DoFHandler<dim>::active_cell_iterator cell1 =
            dof1.begin_active (), endc1 = dof1.end ();
        typename DoFHandler<dim>::active_cell_iterator cell2 =
            dof2.begin_active (), endc2 = dof2.end ();
        (void) endc2;

        std::vector<types::global_dof_index> dofs;
        dofs.reserve (DoFTools::max_dofs_per_cell (dof2));

        u2 = 0;
        TrilinosWrappers::MPI::Vector touch_count (u2);
        touch_count = 0;

        // for distributed triangulations,
        // we can only interpolate u1 on
        // a cell, which this processor owns,
        for (; cell1 != endc1; ++cell1, ++cell2)
          if (cell1->is_locally_owned ())
            {
              Assert(
                  cell1->get_fe().n_components() == cell2->get_fe().n_components(),
                  ExcDimensionMismatch (cell1->get_fe().n_components(), cell2->get_fe().n_components()));

              // for continuous elements on
              // grids with hanging nodes we
              // need hanging node
              // constraints. Consequently,
              // if there are no constraints
              // then hanging nodes are not
              // allowed.
              const bool hanging_nodes_not_allowed =
                  ((cell2->get_fe ().dofs_per_vertex != 0) && (constraints.n_constraints ()
                      == 0));

              if (hanging_nodes_not_allowed)
                for (unsigned int face = 0;
                    face < GeometryInfo<dim>::faces_per_cell; ++face)
                  Assert(
                      cell1->at_boundary(face) || cell1->neighbor(face)->level() == cell1->level(),
                      ExcInvalidState());

              const unsigned int dofs_per_cell1 = cell1->get_fe ().dofs_per_cell;
              const unsigned int dofs_per_cell2 = cell2->get_fe ().dofs_per_cell;
              u1_local.reinit (dofs_per_cell1);
              u2_local.reinit (dofs_per_cell2);

              // check if interpolation
              // matrix for this particular
              // pair of elements is already
              // there
              if (interpolation_matrices[&cell1->get_fe ()][&cell2->get_fe ()].get () == 0)
                {
                  std::shared_ptr<FullMatrix<double> > interpolation_matrix (
                      new FullMatrix<double> (dofs_per_cell2, dofs_per_cell1));
                  interpolation_matrices[&cell1->get_fe ()][&cell2->get_fe ()] =
                      interpolation_matrix;

                  FETools::get_interpolation_matrix (cell1->get_fe (),
                      cell2->get_fe (), *interpolation_matrix);
                }

              cell1->get_dof_values (u1, u1_local);
              interpolation_matrices[&cell1->get_fe ()][&cell2->get_fe ()]->vmult (
                  u2_local, u1_local);

              dofs.resize (dofs_per_cell2);
              cell2->get_dof_indices (dofs);

              for (unsigned int i = 0; i < dofs_per_cell2; ++i)
                {
                  // if dof is locally_owned
                  const types::global_dof_index gdi = dofs[i];
                  if (u2_elements.is_element (gdi))
                    {
                      u2 (dofs[i]) += u2_local (i);
                      touch_count (dofs[i]) += 1;
                    }
                }
            }
        // cell1 is at the end, so should
        // be cell2
        Assert(cell2 == endc2, ExcInternalError());

        u2.compress (VectorOperation::add);
        touch_count.compress (VectorOperation::add);

        // if we work on parallel distributed
        // vectors, we have to ensure, that we only
        // work on dofs this processor owns.
        IndexSet locally_owned_dofs = dof2.locally_owned_dofs ();

        // when a discontinuous element is
        // interpolated to a continuous
        // one, we take the mean values.
        // for parallel vectors check,
        // if this component is owned by
        // this processor.
        for (types::global_dof_index i = 0; i < dof2.n_dofs (); ++i)
          if (locally_owned_dofs.is_element (i))
            {
              Assert(static_cast<double>(touch_count(i)) != 0.0,
                  ExcInternalError());
              u2 (i) /= touch_count (i);
            }

        // finish the work on parallel vectors
        u2.compress (VectorOperation::insert);
        // Apply hanging node constraints.
        constraints.distribute (u2);
      }

  };

/**
 * Create the MeshSmoothingTest object and initialize its variables.
 */
template <int dim>
  MeshSmoothingTest<dim>::MeshSmoothingTest (std::string out_file)
      : mpi_communicator (MPI_COMM_WORLD),
        triangulation (mpi_communicator),
        u (1),
        u_higher_order (2),
        dof_handler (triangulation),
        dof_handler_ho (triangulation),
        // defining out and pcout
        out (out_file, std::ofstream::out),
        pcout (out, (Utilities::MPI::this_mpi_process (mpi_communicator) == 0))
  {
    deallog.attach (out);
  }

template <int dim>
  void
  MeshSmoothingTest<dim>::run ()
  {
    pcout << "Start running." << std::endl;
    make_grid ();
    setup_system ();
    if (graphical_output)
      {
        pcout << "  Writing graphical output. " << std::endl;
        output_results ("solution");
      }
  }

template <int dim>
  void
  MeshSmoothingTest<dim>::make_grid ()
  {
    pcout << "Making the grid... ";
    // create a small mesh with 2^dim cells
    GridGenerator::subdivided_hyper_cube<dim> (triangulation, 2, 0.0, 1.0);
    // make sure all dirichlet boundary ids are set
    typename dealii::Triangulation<dim>::active_cell_iterator cell =
        triangulation.begin_active ();
    typename dealii::Triangulation<dim>::active_cell_iterator endc =
        triangulation.end ();
    for (; cell != endc; ++cell)
      {
        // Dirichlet boundary
        for (unsigned int face = 0; face < GeometryInfo<dim>::faces_per_cell;
            ++face)
          if (cell->face (face)->at_boundary ())
            {
              cell->face (face)->set_boundary_id (0);
            }
      }
    triangulation.refine_global (1);
    pcout << "done" << std::endl;
  }

/**
 * Setup the degrees of freedom and constraints.
 */
template <int dim>
  void
  MeshSmoothingTest<dim>::setup_system ()
  {
    pcout << "Setting up the system: ";
    // setup dofs
      {
        // setup for lower order element
        dof_handler.distribute_dofs (u);

        locally_owned_dofs = dof_handler.locally_owned_dofs ();
        locally_relevant_dofs.clear ();
        DoFTools::extract_locally_relevant_dofs (dof_handler,
            locally_relevant_dofs);

        unsigned int nodofs = dof_handler.n_dofs ();
        pcout << "\tNumber of active cells: "
        << triangulation.n_global_active_cells ()
        << "\tNumber of degrees of freedom: " << nodofs << std::endl;


        // setup for higher order element
        dof_handler_ho.distribute_dofs (u_higher_order);
        locally_owned_dofs_ho = dof_handler_ho.locally_owned_dofs ();
        locally_relevant_dofs_ho.clear ();
        DoFTools::extract_locally_relevant_dofs (dof_handler_ho,
            locally_relevant_dofs_ho);
      }

    // initialization
      {
        solution_rel.reinit (locally_relevant_dofs, mpi_communicator);
        solution_loc.reinit (locally_owned_dofs, mpi_communicator);

        solution_rel_ho.reinit (locally_relevant_dofs_ho, mpi_communicator);
        solution_loc_ho.reinit (locally_owned_dofs_ho, mpi_communicator);
      }

    pcout << "\tSystem is successfully set up." << std::endl;

    typename DoFHandler<dim>::active_cell_iterator cell =
        dof_handler.begin_active ();
    const typename DoFHandler<dim>::cell_iterator endc = dof_handler.end ();
    const unsigned int n_points = GeometryInfo<dim>::vertices_per_cell;
    std::vector<types::global_dof_index> global_indices (u.dofs_per_cell, 0);
    Point<dim> p_vertex;
    const double c = pow (2.0, triangulation.n_levels ()) + 1.0;
    // Fill the solution with some values
    for (; cell != endc; ++cell)
      {
        if (cell->is_locally_owned ())
          {
            cell->get_dof_indices (global_indices);
            for (unsigned int vertex_no = 0; vertex_no < n_points; ++vertex_no)
              {
                p_vertex = cell->vertex (vertex_no);
                solution_loc (global_indices[vertex_no]) =
                    sqrt (p_vertex (0) + c * p_vertex (1)) / c;
              }
          }
      }
    solution_rel = solution_loc;
    FETools::interpolate (dof_handler, solution_rel, dof_handler_ho,
        solution_loc_ho); // correct version
    ConstraintMatrix dummy;
    dummy.close();
    interpolate (dof_handler, solution_rel, dof_handler_ho, dummy,
            solution_loc_ho); // also correct
//    FETools::interpolate (dof_handler, solution_loc, dof_handler_ho, solution_loc_ho); // incorrect version
//    FETools::interpolate (dof_handler, solution_rel, dof_handler_ho, solution_rel_ho); // incorrect version
//    FETools::interpolate (dof_handler, solution_loc, dof_handler_ho, solution_rel_ho); // incorrect version

    solution_rel_ho = solution_loc_ho;
  }

/**
 * Print the higher order result into a vtu-file whose name starts with title.
 *
 * Currently the output consists of:
 *  the interpolated higher order solution
 *  the subdomain (owning processor) of each cell
 */
template <int dim>
  void
  MeshSmoothingTest<dim>::output_results (const std::string &title) const
  {
    std::string solution_name ("solution");

    DataOut<dim> data_out;
    data_out.attach_dof_handler (dof_handler_ho);
    data_out.add_data_vector (solution_rel_ho, solution_name,
        DataOut<dim>::type_dof_data);

    Vector<float> subdomain (triangulation.n_active_cells ());
    subdomain.add (triangulation.locally_owned_subdomain ());

    data_out.add_data_vector (subdomain, "subdomain");
    data_out.build_patches (u_higher_order.degree);

    const std::string filename =
        (title + "."
         + Utilities::int_to_string (triangulation.locally_owned_subdomain (),
             4));

    std::ofstream output_vtu ((filename + ".vtu").c_str ());
    data_out.write_vtu (output_vtu);

    if (Utilities::MPI::this_mpi_process (mpi_communicator) == 0)
      {
        std::vector<std::string> filenames;
        for (unsigned int i = 0;
            i < Utilities::MPI::n_mpi_processes (mpi_communicator); ++i)
          filenames.push_back (
              title + "." + Utilities::int_to_string (i, 4) + ".vtu");

        std::ofstream master_output ((title + ".pvtu").c_str ());
        data_out.write_pvtu_record (master_output, filenames);
      }
  } // output results

int
main (int argc,
      char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization (argc, argv);
  std::string out_file = argc > 1 ? argv[argc - 1] : "output.txt";
  std::cout << "The textual output is written to " << out_file << std::endl;
    {
      MeshSmoothingTest<2> mini_example (out_file);
      mini_example.run ();
    }
  std::cout << "Finished." << std::endl;

  return EXIT_SUCCESS;
}
//============================================================================
// Name        : fictitious_contact.cpp
// Author      : Korinna Rosin
// Version     : 0.1
// Created on  : 14.06.2017
// Copyright   : see header file
// Description : see header file
//============================================================================

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/logstream.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/point.h>
#include <deal.II/base/utilities.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <mpi.h>
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

#include <deal.II/lac/trilinos_vector_base.h>
#include <deal.II/lac/trilinos_precondition.h>


using namespace dealii;

template <int dim>
  class MeshSmoothingTest
  {
    public:
      MeshSmoothingTest (std::string out_file);

      ~MeshSmoothingTest ()
      {
      }

      void
      run ();
    private:
      void
      make_grid ();

      void
      refine_grid ();

      void
      output_results (const std::string &title,
                      const unsigned int &iter_step) const;

      /**
       * PRIVATE VARIABLE
       */

      // MPI
      MPI_Comm mpi_communicator;
      parallel::distributed::Triangulation<dim> triangulation;

      // Ofstream setting to a file
      std::ofstream out;
      ConditionalOStream pcout;

      bool graphical_output = true;

  };

/**
 * Create the MeshSmoothingTest object and initialize its variables.
 */
template <int dim>
  MeshSmoothingTest<dim>::MeshSmoothingTest (std::string out_file)
      : mpi_communicator (MPI_COMM_WORLD),
        triangulation (mpi_communicator,
            typename parallel::distributed::Triangulation<dim>::MeshSmoothing (
                parallel::distributed::Triangulation<dim>::patch_level_1),
            parallel::distributed::Triangulation<dim>::Settings::no_automatic_repartitioning),
        // defining out and pcout
        out (out_file, std::ofstream::out),
        pcout (out, (Utilities::MPI::this_mpi_process (mpi_communicator) == 0))
  {
    deallog.attach (out);
  }

template <int dim>
  void
  MeshSmoothingTest<dim>::make_grid ()
  {
    pcout << "Making the grid... ";
    // create a small mesh
    const std::vector<unsigned int> repetitions =
      { 3, 2 };
    Point<dim> p_1;
    Point<dim> p_2;
    p_2 (0) = 1.5;
    p_2 (1) = 1.0;
    if (dim == 3)
      p_2 (dim - 1) = 1.0;
    GridGenerator::subdivided_hyper_rectangle (triangulation, repetitions, p_1,
        p_2);
    // make guaranteed patch structure for the beginning
    triangulation.refine_global (1);
    pcout << "done" << std::endl;
  }

template <int dim>
  void
  MeshSmoothingTest<dim>::refine_grid ()
  {
    for (typename Triangulation<dim>::active_cell_iterator cell =
        triangulation.begin_active (); cell != triangulation.end (); ++cell)
      if (cell->is_locally_owned () && cell->center () (0) < 1.0)
        cell->set_refine_flag ();
    triangulation.execute_coarsening_and_refinement ();
  }

template <int dim>
  void
  MeshSmoothingTest<dim>::output_results (const std::string &title,
                                          const unsigned int &iter_step) const
  {
    const std::string filename = (title + "-"
                                  + Utilities::int_to_string (iter_step, 5));
    GridOut grid_out;
    grid_out.write_mesh_per_processor_as_vtu (triangulation, filename, true,
        true);
  } // output results

template <int dim>
  void
  MeshSmoothingTest<dim>::run ()
  {
    pcout << "Start running." << std::endl;
    make_grid ();
    for (unsigned int i = 0; i < 3; ++i)
      {
        if (i != 0)
          refine_grid ();
        if (graphical_output)
          {
            pcout << "  Writing graphical output. " << std::endl;
            output_results ("solution", i);
          }
      }
  }

int
main (int argc,
      char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization (argc, argv);
  std::string out_file = argc > 1 ? argv[argc - 1] : "output.txt";
  std::cout << "The textual output is written to " << out_file << std::endl;
    {
      MeshSmoothingTest<2> mini_example (out_file);
      mini_example.run ();
    }
  std::cout << "Finished." << std::endl;

  return EXIT_SUCCESS;
}
