Dear deal.II users and developers,

I am currently implementing a code that minimizes an energy functional 
depending on a traceless symmetric tensor (a 5-dimensional solution space) 
for liquid crystal physics. The physics details do not matter here; I will 
just mention that I am using a combination of trust-region Newton 
iterations with a Truncated Conjugate Gradient (TCG) solver and a 
matrix-free approach with multigrid preconditioning. This works exceedingly 
well for simple problems with Dirichlet boundary conditions, and I must say 
I am very impressed by the quality and efficiency of the deal.II library 
and its excellent documentation, so a big thanks to all the developers! 

However, I run into a problem when I combine periodic boundary conditions 
with matrix-free multigrid preconditioning (everything works fine with 
matrix-free but no preconditioner). Specifically, the problem arises when I 
call the method MGTransferMatrixFree::interpolate_to_mg, which allows me to 
transfer the solution of my problem to each level of the triangulation 
(where it is then used to assemble the system matrix at that level; in my 
case this is the Hessian of the free energy functional, but I believe that 
is irrelevant to the problem at hand).

Basically, this method tries to access an element of a distributed vector 
that is not stored on the current MPI process, so LinearAlgebra::distributed::Vector 
raises an exception. If I use Dirichlet BCs instead of periodic ones, the 
problem goes away, so it seems to be an issue with the way the vectors of 
my problem are partitioned when I include periodic BCs. To illustrate, I am 
attaching an MWE that shows how I set up the matrix-free and multigrid 
objects with periodic BCs and raises the same error as my main code. I am 
using deal.II v9.3 on an Arch Linux distribution.

I have looked at all the tutorials related to matrix-free and multigrid 
objects (this is how I wrote the base program in the first place), but was 
not able to find a solution on my own. I have tried various ways of 
specifying the AffineConstraints objects for the level matrix-free objects, 
but none worked. I also tried empty ("dummy") AffineConstraints as in 
step-59, which uses periodic faces, but that does not work either, so I 
guess that approach is specific to FE_DGQ (whereas I am using FE_Q spaces). 
I must be doing something wrong, because everything works so well without 
periodic BCs, so if anyone has an idea that would be great :) I am happy to 
provide any additional details you deem important.

Best regards,

Guilhem

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/timer.h>
#include <deal.II/base/utilities.h>

#include <deal.II/distributed/tria.h>

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/la_parallel_vector.h>

#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/mapping_q.h>

#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/grid_generator.h>

#include <deal.II/multigrid/mg_constrained_dofs.h>
#include <deal.II/multigrid/multigrid.h>
#include <deal.II/multigrid/mg_transfer_matrix_free.h>

#include <deal.II/matrix_free/matrix_free.h>

#include <iostream>
#include <memory>
#include <string>

using namespace dealii;

int main(int argc, char *argv[]) {
	try {
		Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
		ConditionalOStream pcout(std::cout, Utilities::MPI::this_mpi_process(MPI_COMM_WORLD)==0);

		const unsigned int dim = 3;
		const unsigned int fe_degree = 5;

		pcout << "Building triangulation" << std::endl;

		parallel::distributed::Triangulation<dim> tria(
			MPI_COMM_WORLD, Triangulation<dim>::limit_level_difference_at_vertices,
			parallel::distributed::Triangulation<dim>::construct_multigrid_hierarchy);

		GridGenerator::hyper_cube(tria, 0., 1., true);

		std::vector<GridTools::PeriodicFacePair<TriaIterator<CellAccessor<dim,dim> > > >
			tria_matched_pairs;
		GridTools::collect_periodic_faces(tria, 0, 1, 0, tria_matched_pairs);
		tria.add_periodicity(tria_matched_pairs);

		tria.refine_global(2);


		pcout << "Distributing DOFs" << std::endl;

		const dealii::FESystem<dim> fe(FE_Q<dim>(fe_degree), 5);
		DoFHandler<dim> dof_handler(tria);
		dof_handler.distribute_dofs(fe);
		dof_handler.distribute_mg_dofs();


		pcout << "Setting-up matrix-free data" << std::endl;

		IndexSet locally_relevant_dofs;
		DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
	
		AffineConstraints<double> constraints;
		constraints.reinit(locally_relevant_dofs);
		DoFTools::make_hanging_node_constraints(dof_handler, constraints);

		std::vector<GridTools::PeriodicFacePair<typename DoFHandler<dim>::cell_iterator > >
			dof_matched_pairs;
		GridTools::collect_periodic_faces(
			dof_handler, 0, 1, 0, dof_matched_pairs);
		DoFTools::make_periodicity_constraints<dim,dim>(dof_matched_pairs, constraints);
		constraints.close();
		
		typename MatrixFree<dim, double>::AdditionalData additional_data;
		additional_data.tasks_parallel_scheme = MatrixFree<dim, double>::AdditionalData::none;
		additional_data.mapping_update_flags = 
			(update_gradients | update_JxW_values | update_quadrature_points);

		auto mf_data = std::make_shared<MatrixFree<dim,double> >();
		mf_data->reinit(
			MappingQ<dim>(fe_degree+1), dof_handler, constraints,
			QGaussLobatto<1>(fe_degree+1), additional_data);

		LinearAlgebra::distributed::Vector<double> qsol, qsol_update;
		mf_data->initialize_dof_vector(qsol);
		mf_data->initialize_dof_vector(qsol_update); // not used here; just a reminder that my code uses Newton iterations

		pcout << "Setting-up multigrid objects" << std::endl;

		MGLevelObject<LinearAlgebra::distributed::Vector<double> > mg_qsol;
		MGConstrainedDoFs mg_constrained_dofs;
		MGTransferMatrixFree<dim,double> mg_transfer;

		unsigned int n_tria_levels = tria.n_global_levels();
		mg_qsol.resize(0, n_tria_levels - 1);

		mg_constrained_dofs.initialize(dof_handler);
		mg_transfer.initialize_constraints(mg_constrained_dofs);
		mg_transfer.build(dof_handler);

		std::vector<std::shared_ptr<MatrixFree<dim,double> > > mg_mf_data;
		for(unsigned int level=0; level<n_tria_levels; ++level) {
			typename MatrixFree<dim,double>::AdditionalData mg_additional_data;
			mg_additional_data.tasks_parallel_scheme = MatrixFree<dim,double>::AdditionalData::none;
			mg_additional_data.mapping_update_flags = 
				(update_gradients | update_JxW_values | update_quadrature_points);
			mg_additional_data.mg_level = level;
	
			auto& level_constraints = mg_constrained_dofs.get_level_constraints(level);
	
			mg_mf_data.push_back(std::make_shared<MatrixFree<dim,double> >());
			mg_mf_data.back()->reinit(
				MappingQ<dim>(fe_degree+1), dof_handler, level_constraints,
				QGaussLobatto<1>(fe_degree+1), mg_additional_data);
			mg_mf_data.back()->initialize_dof_vector(mg_qsol[level]);
		}

		pcout << "Multigrid interpolation of qsol" << std::endl;
		mg_transfer.interpolate_to_mg(dof_handler, mg_qsol, qsol);
	}
	catch (std::exception &exc) {
		std::cerr << std::endl
			<< std::endl
			<< "----------------------------------------------------"
			<< std::endl;
		std::cerr << "Exception on processing: " << std::endl
			<< exc.what() << std::endl
			<< "Aborting!" << std::endl
			<< "----------------------------------------------------"
			<< std::endl;
		return 1;
	}
	catch (...) {
		std::cerr << std::endl
			<< std::endl
			<< "----------------------------------------------------"
			<< std::endl;
		std::cerr << "Unknown exception!" << std::endl
			<< "Aborting!" << std::endl
			<< "----------------------------------------------------"
			<< std::endl;
		return 1;
	}

	return 0;
}
