Dear Prof. Bangerth,
Regarding the locally_owned_dofs_per_component() function:
Thank you for the detailed explanation; this clarifies the problem. I now
have a better understanding of how a sequential triangulation is handled
when it is partitioned and used for problems run on multiple processors.
My confusion came from the fact that printing the elements of
locally_owned_dofs did output only the "locally owned" DoFs, while
locally_owned_dofs_per_component() output "all" DoFs.
Regarding the PETScWrappers::MPI::BlockSparseMatrix:
I had in fact set up the matrix following step-32, but found that the
"partitions are not contiguous" error is also caused by the sequential
triangulation: as you described, its DoFHandler considers every cell
locally owned. After switching to parallel::shared::Triangulation, I no
longer get the error message. See the attached example, which now works
for me.
Thank you again, Timo Heister and Prof. Bangerth, for your time and help.
Best regards,
Gabriel
On Wednesday, September 30, 2020 at 00:12:09 UTC+2, Wolfgang Bangerth wrote:
>
> Gabriel,
>
> > 1. Regarding the initialization of PETSc::MPI::BlockSparseMatrix:
> >
> > I have used the IndexSet::split_by_block() function and this indeed
> > works well. Thanks for the suggestion!
> >
> > Unfortunately, I have encountered another issue. The
> > PETSc::MPI::BlockSparseMatrix must be partitioned into contiguous
> > chunks; for that reason, renumbering the DoFs by component in order to
> > define blocks for displacement DoFs and electric potential DoFs fails,
> > giving the error: "PETSc only supports contiguous row/column ranges".
> > I know that Trilinos allows non-contiguous partitioning (according to
> > what is written in step-32), but I need PETSc to run the eigenvalue
> > problem using SLEPc. Do you have any ideas how this issue could be
> > handled?
>
> We are all using PETSc all the time, so that can't be true.
>
> In the end, you need a block matrix each block of which is partitioned
> between processors. There are numerous tutorial programs that do exactly
> this. Why don't you copy the way step-32, for example, enumerates and
> partitions degrees of freedom? There, all that is done is to call
>
> stokes_dof_handler.distribute_dofs(stokes_fe);
>
> std::vector<unsigned int> stokes_sub_blocks(dim + 1, 0);
> stokes_sub_blocks[dim] = 1;
> DoFRenumbering::component_wise(stokes_dof_handler, stokes_sub_blocks);
>
> and that is good enough to have things partitioned into blocks, and have
> everything within each block be contiguous.
>
>
> > 2. Clarification of locally_owned_dofs_per_component(), which I
> > mentioned in my previous post:
> >
> > I created a small working example to demonstrate what I described in
> > the previous post (see attached). I run the file using "mpirun -np 2
> > test_dofs" in debug mode. Please check the attached image for the
> > output.
> >
> > This shows that the function locally_owned_dofs_per_component() divides
> > the DoFs correctly per component but not per processor. According to
> > what is written in the documentation, the union of locally owned DoFs
> > over all components should correspond to the locally owned DoFs, which
> > is not the case in this example.
>
> The problem here is that you are using a sequential triangulation. On
> every processor, the DoFHandler owns all DoFs, and that's what you see
> when you output the DoFs per component: every DoF is owned by every
> process, because they simply don't know about each other. (The fact that
> you partition, i.e. set subdomain_ids, on the triangulation doesn't
> matter: you are using the sequential triangulation class, and it knows
> nothing of other processors.)
>
> On the other hand, you call DoFTools::locally_owned_dofs_per_subdomain,
> which simply counts DoFs per subdomain -- but for a sequential
> triangulation, the subdomain_id of every cell is just a number without
> any special meaning.
>
> Best
> W.
>
> --
> ------------------------------------------------------------------------
> Wolfgang Bangerth email: [email protected]
> www: http://www.math.colostate.edu/~bangerth/
>
>
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
To view this discussion on the web visit
https://groups.google.com/d/msgid/dealii/c53d38a2-e755-4ec5-8954-402313ecf00bn%40googlegroups.com.
##
# CMake script
##
# Set the name of the project and target:
SET(TARGET "petscmatrix")
# Declare all source files the target consists of. Here, this is only
# the one step-X.cc file, but as you expand your project you may wish
# to add other source files as well. If your project becomes much larger,
# you may want to either replace the following statement by something like
# FILE(GLOB_RECURSE TARGET_SRC "source/*.cc")
# FILE(GLOB_RECURSE TARGET_INC "include/*.h")
# SET(TARGET_SRC ${TARGET_SRC} ${TARGET_INC})
# or switch altogether to the large project CMakeLists.txt file discussed
# in the "CMake in user projects" page accessible from the "User info"
# page of the documentation.
INCLUDE_DIRECTORIES(${CMAKE_SOURCE_DIR})
# Note: the recursive "*.cc" glob already picks up main.cc, so listing it
# explicitly would add it to TARGET_SRC twice.
FILE(GLOB_RECURSE TARGET_SRC "source/*.cc" "*.cc")
FILE(GLOB_RECURSE TARGET_INC "include/*.h")
SET(TARGET_SRC ${TARGET_SRC} ${TARGET_INC})
# Usually, you will not need to modify anything beyond this point...
CMAKE_MINIMUM_REQUIRED(VERSION 2.8.12)
FIND_PACKAGE(deal.II 9.1.1 QUIET
HINTS ${deal.II_DIR} ${DEAL_II_DIR} ../ ../../ $ENV{DEAL_II_DIR}
)
IF(NOT ${deal.II_FOUND})
  MESSAGE(FATAL_ERROR "\n"
    "*** Could not locate a (sufficiently recent) version of deal.II. ***\n\n"
    "You may want to either pass a flag -DDEAL_II_DIR=/path/to/deal.II to cmake\n"
    "or set an environment variable \"DEAL_II_DIR\" that contains this path."
    )
ENDIF()
#
# Are all dependencies fulfilled?
#
IF (NOT (DEAL_II_WITH_PETSC OR DEAL_II_WITH_TRILINOS) OR NOT DEAL_II_WITH_P4EST
OR DEAL_II_PETSC_WITH_COMPLEX) # keep in one line
MESSAGE(FATAL_ERROR "
Error! This tutorial requires a deal.II library that was configured with the
following options:
DEAL_II_WITH_PETSC = ON
DEAL_II_PETSC_WITH_COMPLEX = OFF
DEAL_II_WITH_P4EST = ON
or
DEAL_II_WITH_TRILINOS = ON
DEAL_II_WITH_P4EST = ON
However, the deal.II library found at ${DEAL_II_PATH} was configured with these
options
DEAL_II_WITH_PETSC = ${DEAL_II_WITH_PETSC}
DEAL_II_PETSC_WITH_COMPLEX = ${DEAL_II_PETSC_WITH_COMPLEX}
DEAL_II_WITH_P4EST = ${DEAL_II_WITH_P4EST}
DEAL_II_WITH_TRILINOS = ${DEAL_II_WITH_TRILINOS}
which conflict with the requirements.
One or both of the aforementioned combinations of prerequisites are not met by
your installation, but at least one is required for this tutorial step."
)
ENDIF ()
DEAL_II_INITIALIZE_CACHED_VARIABLES()
PROJECT(${TARGET})
DEAL_II_INVOKE_AUTOPILOT()

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_refinement.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/lac/petsc_block_sparse_matrix.h>
#include <deal.II/hp/dof_handler.h>
#include <deal.II/hp/fe_collection.h>
#include <deal.II/hp/fe_values.h>
#include <deal.II/fe/fe_nothing.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/shared_tria.h>
#include <deal.II/lac/sparsity_tools.h>
using namespace dealii;

template <int dim>
class TestProblem
{
public:
  TestProblem();
  ~TestProblem();
  void run();

private:
  void make_grid();
  void set_active_fe_indices();
  void setup_system();

  MPI_Comm           mpi_communicator;
  const unsigned int n_mpi_processes;
  const unsigned int this_mpi_process;

  parallel::shared::Triangulation<dim> tria;

  FESystem<dim>         fe_elast;
  FESystem<dim>         fe_piezo;
  hp::FECollection<dim> fe_collection;
  hp::DoFHandler<dim>   dof_handler;

  AffineConstraints<double> constraints;

  PETScWrappers::MPI::BlockSparseMatrix block_stiffness_matrix;
};
template <int dim>
TestProblem<dim>::TestProblem()
  : mpi_communicator(MPI_COMM_WORLD)
  , n_mpi_processes(Utilities::MPI::n_mpi_processes(mpi_communicator))
  , this_mpi_process(Utilities::MPI::this_mpi_process(mpi_communicator))
  , tria(mpi_communicator,
         typename Triangulation<dim>::MeshSmoothing(
           Triangulation<dim>::smoothing_on_refinement |
           Triangulation<dim>::smoothing_on_coarsening))
  , fe_elast(FE_Q<dim>(1), dim,    // displacement
             FE_Nothing<dim>(), 1) // potential (disabled)
  , fe_piezo(FE_Q<dim>(1), dim,    // displacement
             FE_Q<dim>(1), 1)      // potential
  , dof_handler(tria)
{
  fe_collection.push_back(fe_elast);
  fe_collection.push_back(fe_piezo);
}

template <int dim>
TestProblem<dim>::~TestProblem()
{
  dof_handler.clear();
}
template <>
void TestProblem<3>::make_grid()
{
  const Point<3> p1(-0.80, -0.31, -0.05);
  const Point<3> p2(0.82, 0.31, 0.05);
  const std::vector<unsigned int> subdivisions = {5, 4, 5};
  GridGenerator::subdivided_hyper_rectangle(tria, subdivisions, p1, p2, true);
}

template <>
void TestProblem<2>::make_grid()
{
  const Point<2> p1(-0.80, -0.31);
  const Point<2> p2(0.82, 0.31);
  const std::vector<unsigned int> subdivisions = {10, 4};
  GridGenerator::subdivided_hyper_rectangle(tria, subdivisions, p1, p2, true);
}
template <int dim>
void TestProblem<dim>::set_active_fe_indices()
{
  // Select the FE on each cell based on its center's coordinate in the
  // thickness direction (the last one); a hard-coded index 2 would be
  // out of range in 2d.
  const unsigned int comp = dim - 1;
  for (auto &cell : dof_handler.active_cell_iterators())
    {
      if (!cell->is_locally_owned())
        continue;
      if (cell->center()[comp] < 0.05 && cell->center()[comp] > 0.03)
        cell->set_active_fe_index(1); // piezoelectric
      else
        cell->set_active_fe_index(0); // elastic
    }
}
template <int dim>
void TestProblem<dim>::setup_system()
{
  dof_handler.distribute_dofs(fe_collection);

  // Renumber so that all displacement DoFs come before all potential
  // DoFs; within each block, each processor's DoFs are then contiguous.
  std::vector<unsigned int> block_component(dim + 1, 0);
  block_component[dim] = 1;
  DoFRenumbering::component_wise(dof_handler, block_component);

  const std::vector<types::global_dof_index> dofs_per_block =
    DoFTools::count_dofs_per_fe_block(dof_handler, block_component);

  const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
  const std::vector<IndexSet> locally_owned_dofs_per_proc =
    dof_handler.locally_owned_dofs_per_processor();
  const std::vector<IndexSet> locally_owned_dofs_per_block =
    locally_owned_dofs.split_by_block(dofs_per_block);

  IndexSet locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
  const std::vector<IndexSet> locally_relevant_dofs_per_block =
    locally_relevant_dofs.split_by_block(dofs_per_block);

  // No constraints in this example, but the object must be closed before
  // it is used in make_sparsity_pattern().
  constraints.clear();
  constraints.close();

  BlockDynamicSparsityPattern bdsp(locally_relevant_dofs_per_block);
  DoFTools::make_sparsity_pattern(dof_handler, bdsp, constraints, false);
  SparsityTools::distribute_sparsity_pattern(bdsp,
                                             locally_owned_dofs_per_proc,
                                             mpi_communicator,
                                             locally_relevant_dofs);
  bdsp.compress();

  block_stiffness_matrix.reinit(locally_owned_dofs_per_block,
                                bdsp,
                                mpi_communicator);
}
template <int dim>
void TestProblem<dim>::run()
{
  make_grid();
  set_active_fe_indices();
  setup_system();
}
int main(int argc, char *argv[])
{
  try
    {
      using namespace dealii;
      Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
      TestProblem<3> test_problem;
      test_problem.run();
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl
                << "Exception on processing: " << std::endl
                << exc.what() << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl
                << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }
  return 0;
}