Dear Timo, Dear Wolfgang,

Thank you both for your input!

1. Regarding the initialization of PETScWrappers::MPI::BlockSparseMatrix:

I have used the IndexSet::split_by_block() function and this indeed works 
well. Thanks for the suggestion!
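
For reference, a minimal sketch of how I set this up now (block_component 
and the DoFTools::count_dofs_per_block() call below are only meant to 
illustrate my setup: dim displacement components plus one electric 
potential component, grouped into two blocks):

    // Group the dim displacement components into block 0 and the electric
    // potential into block 1, then split the owned index set accordingly.
    std::vector<unsigned int> block_component(dim + 1, 0);
    block_component[dim] = 1;

    std::vector<types::global_dof_index> dofs_per_block(2);
    DoFTools::count_dofs_per_block(dof_handler, dofs_per_block, block_component);

    const std::vector<IndexSet> owned_partitioning =
      dof_handler.locally_owned_dofs().split_by_block(dofs_per_block);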

Unfortunately, I have encountered another issue. The rows of a 
PETScWrappers::MPI::BlockSparseMatrix must be partitioned into contiguous 
chunks per processor. For that reason, renumbering the DoFs by component in 
order to define one block for the displacement DoFs and one for the 
electric potential DoFs fails with the error "PETSc only supports 
contiguous row/column ranges". I know that Trilinos allows for 
non-contiguous partitioning (according to what is written in step-32), but 
I need PETSc to run the eigenvalue problem using SLEPc. Do you have any 
ideas how this issue could be handled?
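
To illustrate where it breaks for me, this is the kind of diagnostic I run 
after the component-wise renumbering (owned_partitioning is the vector of 
per-block IndexSets from the sketch above; is_contiguous() and 
n_intervals() are only used for printing):

    // After DoFRenumbering::component_wise(), check whether each block's
    // owned rows still form a single contiguous interval, which is what the
    // PETSc matrix partitioning requires.
    for (unsigned int b = 0; b < owned_partitioning.size(); ++b)
      std::cout << "Rank " << this_mpi_process << ", block " << b
                << ": contiguous = " << owned_partitioning[b].is_contiguous()
                << ", n_intervals = " << owned_partitioning[b].n_intervals()
                << std::endl;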

2. Clarification regarding locally_owned_dofs_per_component(), which I 
mentioned in my previous post:

I created a small working example to demonstrate what I described in the 
previous post (see attached). I run the executable with "mpirun -np 2 
test_dofs" in debug mode. Please check the attached image for the output.

The output shows that locally_owned_dofs_per_component() divides the DoFs 
correctly per component but not per processor. According to the 
documentation, the union of the locally owned DoFs over all components 
should equal the locally owned DoFs of the processor, which is not the case 
in this example.
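
To make the check concrete, this is what I mean (a sketch that can be 
dropped into setup_system() of the attached file, using the names defined 
there):

    // Build the union of the per-component index sets on this processor and
    // compare it with the locally owned index set; according to the
    // documentation I would expect the two to be equal.
    IndexSet union_of_components(dof_handler.n_dofs());
    for (const IndexSet &s : locally_owned_dofs_per_comp)
      union_of_components.add_indices(s);

    std::cout << "Union == locally owned DoFs on processor "
              << this_mpi_process << ": "
              << (union_of_components == locally_owned_dofs) << std::endl;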

Best regards,
Gabriel


On Thursday, 17 September 2020 at 20:19:40 UTC+2, Wolfgang Bangerth wrote:

> On 9/17/20 9:07 AM, Gabriel Stankiewicz wrote:
> > I created IndexSets by using the function
> > DoFTools::locally_owned_dofs_per_component() and then gathering all
> > indices corresponding to displacement DoFs (three instances of IndexSet)
> > into one IndexSet using IndexSet::add_indices(); the fourth instance
> > corresponded to electric potential DoFs. However, the function
> > DoFTools::locally_owned_dofs_per_component() wasn't returning only
> > locally owned DoFs but also the locally not owned ones.
>
> No, that isn't right. This is a function that is so widely used that it is
> hard to believe that it would be wrong. If you think that it returns
> something wrong, I believe that your *understanding* of what this function
> returns is wrong, and that might lead to further confusion downstream when
> you are thinking how to initialize matrices.
>
> There are quite a number of tutorial programs that build parallel block
> matrices. I know that step-32 does this for Trilinos, but I think that the
> parallel Navier-Stokes solver does it for PETSc. It would be useful to
> just compare your code with how these programs do it.
>
> Best
> W.
>
> -- 
> ------------------------------------------------------------------------
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>

##
#  CMake script
##

# Set the name of the project and target:
SET(TARGET "test_dofs")

# Declare all source files the target consists of. Here, this is only
# the one step-X.cc file, but as you expand your project you may wish
# to add other source files as well. If your project becomes much larger,
# you may want to either replace the following statement by something like
#  FILE(GLOB_RECURSE TARGET_SRC  "source/*.cc")
#  FILE(GLOB_RECURSE TARGET_INC  "include/*.h")
#  SET(TARGET_SRC ${TARGET_SRC}  ${TARGET_INC})
# or switch altogether to the large project CMakeLists.txt file discussed
# in the "CMake in user projects" page accessible from the "User info"
# page of the documentation.

INCLUDE_DIRECTORIES(
        ${CMAKE_SOURCE_DIR}

)

FILE(GLOB_RECURSE TARGET_SRC "source/*.cc" "*.cc" "${CMAKE_SOURCE_DIR}/main.cc")
FILE(GLOB_RECURSE TARGET_INC "include/*.h")
SET(TARGET_SRC ${TARGET_SRC} ${TARGET_INC})

# Usually, you will not need to modify anything beyond this point...

CMAKE_MINIMUM_REQUIRED(VERSION 2.8.12)

FIND_PACKAGE(deal.II 9.1.1 QUIET
        HINTS ${deal.II_DIR} ${DEAL_II_DIR} ../ ../../ $ENV{DEAL_II_DIR}
        )
IF (NOT ${deal.II_FOUND})
    MESSAGE(FATAL_ERROR "\n"
            "*** Could not locate a (sufficiently recent) version of deal.II. ***\n\n"
            "You may want to either pass a flag -DDEAL_II_DIR=/path/to/deal.II to cmake\n"
            "or set an environment variable \"DEAL_II_DIR\" that contains this path."
            )
ENDIF ()

#
# Are all dependencies fulfilled?
#
IF (NOT (DEAL_II_WITH_PETSC OR DEAL_II_WITH_TRILINOS) OR NOT DEAL_II_WITH_P4EST OR DEAL_II_PETSC_WITH_COMPLEX) # keep in one line
    MESSAGE(FATAL_ERROR "
Error! This tutorial requires a deal.II library that was configured with the following options:
    DEAL_II_WITH_PETSC = ON
    DEAL_II_PETSC_WITH_COMPLEX = OFF
    DEAL_II_WITH_P4EST = ON
or
    DEAL_II_WITH_TRILINOS = ON
    DEAL_II_WITH_P4EST = ON
However, the deal.II library found at ${DEAL_II_PATH} was configured with these options
    DEAL_II_WITH_PETSC = ${DEAL_II_WITH_PETSC}
    DEAL_II_PETSC_WITH_COMPLEX = ${DEAL_II_PETSC_WITH_COMPLEX}
    DEAL_II_WITH_P4EST = ${DEAL_II_WITH_P4EST}
    DEAL_II_WITH_TRILINOS = ${DEAL_II_WITH_TRILINOS}
which conflict with the requirements.
One or both of the aforementioned combinations of prerequisites are not met by your installation, but at least one is required for this tutorial step."
            )
ENDIF ()

DEAL_II_INITIALIZE_CACHED_VARIABLES()

PROJECT(${TARGET})

DEAL_II_INVOKE_AUTOPILOT()
#include <deal.II/base/mpi.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_refinement.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>

#include <iostream>
#include <vector>


using namespace dealii;

template<int dim>
class TestProblem {
public:
	TestProblem();
	~TestProblem();
	void run();
private:
	void make_grid();
	void setup_system();

	MPI_Comm mpi_communicator;
	const unsigned int n_mpi_processes;
	const unsigned int this_mpi_process;

    Triangulation<dim> tria;

	DoFHandler<dim> dof_handler;

	FESystem<dim> fe;
};


template<int dim>
TestProblem<dim>::TestProblem() :
		mpi_communicator(MPI_COMM_WORLD),
		n_mpi_processes(Utilities::MPI::n_mpi_processes(mpi_communicator)),
		this_mpi_process(Utilities::MPI::this_mpi_process(mpi_communicator)),
		dof_handler(tria),
		fe(FE_Q<dim>(1), dim) {}

template<int dim>
TestProblem<dim>::~TestProblem() {
	dof_handler.clear();
}

template<>
void TestProblem<3>::make_grid() {
    Point<3> p1(-0.80,-0.31,-0.05);
    Point<3> p2(0.82,0.31,0.05);
    std::vector<unsigned int> subdivisions;
    subdivisions.push_back(2);
    subdivisions.push_back(2);
    subdivisions.push_back(1);

    GridGenerator::subdivided_hyper_rectangle(tria, subdivisions, p1, p2, true);
    //tria.refine_global(0);
}


template<>
void TestProblem<2>::make_grid() {
    Point<2> p1(-0.80,-0.31);
    Point<2> p2(0.82,0.31);
    std::vector<unsigned int> subdivisions;
    subdivisions.push_back(10);
    subdivisions.push_back(4);

    GridGenerator::subdivided_hyper_rectangle(tria, subdivisions, p1, p2, true);
    tria.refine_global(0);
}


template<int dim>
void TestProblem<dim>::setup_system() {
    GridTools::partition_triangulation(n_mpi_processes, tria);
    dof_handler.distribute_dofs(fe);

    DoFRenumbering::component_wise(dof_handler);
    //DoFRenumbering::subdomain_wise(dof_handler);

    const std::vector<IndexSet> locally_owned_dofs_per_proc = DoFTools::locally_owned_dofs_per_subdomain(dof_handler);
    const IndexSet locally_owned_dofs = locally_owned_dofs_per_proc[this_mpi_process];

    const std::vector<IndexSet> locally_owned_dofs_per_comp = DoFTools::locally_owned_dofs_per_component(dof_handler);
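    // According to the documentation, the per-processor union of these
    // per-component index sets should equal locally_owned_dofs above; the
    // printout below makes it easy to compare the two by eye.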

    std::cout << std::endl;

    std::cout << "Locally owned DoFs for processor " << this_mpi_process << ":";
    for (auto id : locally_owned_dofs) std::cout << " " << id;
    std::cout << std::endl;

    for (unsigned int i = 0; i < dim; ++i) {
      std::cout << "Locally owned DoFs per component " << i << " for processor " << this_mpi_process << ":";
      for (auto id : locally_owned_dofs_per_comp[i]) std::cout << " " << id;
      std::cout << std::endl;
    }

    std::cout << std::endl;
}


template<int dim>
void TestProblem<dim>::run() {
    make_grid();
	setup_system();
}


int main(int argc, char *argv[]) {
	try {
		using namespace dealii;
		Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
        TestProblem<3> test_problem;
		test_problem.run();
	} catch (std::exception &exc) {
		std::cerr << std::endl << std::endl
				<< "----------------------------------------------------"
				<< std::endl;
		std::cerr << "Exception on processing: " << std::endl << exc.what()
				<< std::endl << "Aborting!" << std::endl
				<< "----------------------------------------------------"
				<< std::endl;
		return 1;
	} catch (...) {
		std::cerr << std::endl << std::endl
				<< "----------------------------------------------------"
				<< std::endl;
		std::cerr << "Unknown exception!" << std::endl << "Aborting!"
				<< std::endl
				<< "----------------------------------------------------"
				<< std::endl;
		return 1;
	}
	return 0;

}
