Dear Prof. Bangerth,

I have now reproduced the above issue in the attached minimal example.

Below is the algorithm of the minimal example:

1) Create a hypercube (-20,20)^3 with the origin at the center

2) Set periodic boundary conditions on all faces of the hypercube

3) Refine the mesh: first do one global refinement to get 8 cells, then 
refine the cell containing the corner (-20,-20,-20) twice more. This 
yields 71 cells (see attached image) with hanging nodes on three faces.

4) Create a ConstraintMatrix with both hanging node and periodic 
constraints, and call close().

5) Print the constraint equation pairs (j, a_ij) for global dof id 52 on 
all processors for which that dof is relevant, when run on two MPI 
tasks:

   $ mpirun -n 2 ./minimalExample

   number of elements: 71
   taskId: 1, globalDofId-i: 52, coordinates-i: -20 20 -10, globalDofId-j: 16, coordinates-j: -20 -20 -10, scalarCoeff-aij: 1
   taskId: 0, globalDofId-i: 52, coordinates-i: -20 20 -10, globalDofId-j: 32, coordinates-j: 20 -20 -10, scalarCoeff-aij: 1
  
   Clearly, "j" in the constraint equation differs across processors 
for the same constrained global dof id.

Thank you,
Sambit

On Friday, August 10, 2018 at 9:39:58 AM UTC-5, Sambit Das wrote:
>
> Dear Prof. Bangerth,
>
>> Yes, they should really be the same. Or, more correctly, if two 
>> processors both store the constraints for a node, they better be the 
>> same. On the other hand, of course not every processor will know every 
>> constraint. 
>>
>
> Thank you for clarifying this.
>
>  Can you try to construct a minimal testcase for what you observe? 
>
>
> Yes, I am going to construct a minimal test case.
>
> Best,
> Sambit 
>
> On Thursday, August 9, 2018 at 11:33:13 PM UTC-5, Wolfgang Bangerth wrote:
>>
>>
>> > I created a ConstraintMatrix with both periodic and hanging node 
>> > constraints, and called close(). 
>> > 
>> > Then I picked a constrained degree of freedom, let's say with global 
>> > dof id = “i”, and printed the constraint equation pairs (j, a_ij) 
>> > corresponding to “i” on the processor for which “i” is locally owned, 
>> > as well as on the processors for which “i” is a ghost. I expected the 
>> > constraint equation to be the same for the owning processor and the 
>> > ghost processors; however, we have encountered a case where printing 
>> > the constraint equation entries shows a different “j” for the owning 
>> > and ghost processors, although the a_ij are the same. When we printed 
>> > the coordinates of the differing “j” nodes, we found them to be 
>> > periodic images of each other. 
>> > 
>> > Should I expect the constraint equation to be the same for the owning 
>> > processor and the ghost processors? 
>>
>> Yes, they should really be the same. Or, more correctly, if two 
>> processors 
>> both store the constraints for a node, they better be the same. On the 
>> other 
>> hand, of course not every processor will know every constraint. 
>>
>> Can you try to construct a minimal testcase for what you observe? 
>>
>> Best 
>>   Wolfgang 
>>
>>
>> -- 
>> ------------------------------------------------------------------------ 
>> Wolfgang Bangerth          email:                 [email protected] 
>>                             www: http://www.math.colostate.edu/~bangerth/ 
>>
>>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
//Include all required deal.II header files
#include <deal.II/base/quadrature.h>
#include <deal.II/base/function.h>
#include <deal.II/base/logstream.h>
#include <deal.II/base/timer.h>
#include <deal.II/base/table.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/mapping_q1.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/base/tensor_function.h>
#include <deal.II/base/point.h>
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/utilities.h>
#include <deal.II/lac/lapack_full_matrix.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/base/parameter_handler.h>
#include <deal.II/grid/grid_refinement.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_gmres.h>
#include <deal.II/lac/solver_bicgstab.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/lac/parallel_vector.h>
#include <deal.II/matrix_free/matrix_free.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/lac/slepc_solver.h>
#include <deal.II/base/config.h>
#include <deal.II/base/smartpointer.h>
//Include generic C++ headers
#include <fstream>
#include <iostream>


using namespace dealii;
int main (int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);

  const double L=20;
  dealii::parallel::distributed::Triangulation<3> triangulation(MPI_COMM_WORLD);
  GridGenerator::hyper_cube (triangulation, -L, L);

  //mark faces
  typename dealii::parallel::distributed::Triangulation<3,3>::active_cell_iterator cell = triangulation.begin_active(), endc = triangulation.end();
  for(; cell!=endc; ++cell)
      for(unsigned int f=0; f < GeometryInfo<3>::faces_per_cell; ++f)
	{
	  const Point<3> face_center = cell->face(f)->center();
	  if(cell->face(f)->at_boundary())
	    {
	      if (std::abs(face_center[0]-L)<1.0e-5)
		cell->face(f)->set_boundary_id(1);
	      else if (std::abs(face_center[0]+L)<1.0e-5)
		cell->face(f)->set_boundary_id(2);
	      else if (std::abs(face_center[1]-L)<1.0e-5)
		cell->face(f)->set_boundary_id(3);
	      else if (std::abs(face_center[1]+L)<1.0e-5)
		cell->face(f)->set_boundary_id(4);
	      else if (std::abs(face_center[2]-L)<1.0e-5)
		cell->face(f)->set_boundary_id(5);
	      else if (std::abs(face_center[2]+L)<1.0e-5)
		cell->face(f)->set_boundary_id(6);
	    }
	}

  std::vector<GridTools::PeriodicFacePair<typename Triangulation<3,3>::cell_iterator> > periodicity_vector;
  for (int i = 0; i < 3; ++i)
  {
      GridTools::collect_periodic_faces(triangulation, /*b_id1*/ 2*i+1, /*b_id2*/ 2*i+2,/*direction*/ i, periodicity_vector);
  }
  triangulation.add_periodicity(periodicity_vector);

  //refine mesh
  triangulation.refine_global(1);

  Point<3> corner;
  corner[0]=-L;
  corner[1]=-L;
  corner[2]=-L;
  MappingQ1<3,3> mapping;

  const unsigned int numRefinementLevels=2;
  for (unsigned int ilevel=0; ilevel<numRefinementLevels; ++ilevel)
  {
     //pick the corner cell and flag it for refinement
     cell = triangulation.begin_active();
     endc =  triangulation.end();
     for (; cell!=endc; ++cell)
     {
	  try
	    {
	      Point<3> p_cell = mapping.transform_real_to_unit_cell(cell,corner);
	      double dist = GeometryInfo<3>::distance_to_unit_cell(p_cell);

	      if(dist < 1e-08)
		cell->set_refine_flag();

	    }
	  catch(MappingQ1<3>::ExcTransformationFailed &)
	    {
	      //the corner lies outside this cell; nothing to do
	    }
     }
     triangulation.execute_coarsening_and_refinement();
  }



  if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD)==0)
      std::cout << "number of elements: "
 	<< triangulation.n_global_active_cells()
	<< std::endl;

  //create dofHandler
  FESystem<3> FE(FE_Q<3>(QGaussLobatto<1>(2)), 1);
  DoFHandler<3> dofHandler (triangulation);
  dofHandler.distribute_dofs(FE);

  //write mesh for visualization
  DataOut<3> data_out;
  data_out.attach_dof_handler (dofHandler);
  data_out.build_patches ();
  data_out.write_vtu_in_parallel(std::string("mesh.vtu").c_str(),MPI_COMM_WORLD);

  IndexSet   locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dofHandler, locally_relevant_dofs);

  std::map<types::global_dof_index, Point<3> >supportPoints;
  DoFTools::map_dofs_to_support_points(MappingQ1<3,3>(), dofHandler, supportPoints);

  //create a combined hanging node and periodic constraint matrix
  ConstraintMatrix constraints;
  constraints.clear();
  constraints.reinit(locally_relevant_dofs);
  DoFTools::make_hanging_node_constraints(dofHandler, constraints);
  std::vector<GridTools::PeriodicFacePair<typename DoFHandler<3>::cell_iterator> > periodicity_vectorDof;
  for (int i = 0; i < 3; ++i)
    {
      GridTools::collect_periodic_faces(dofHandler, /*b_id1*/ 2*i+1, /*b_id2*/ 2*i+2,/*direction*/ i, periodicity_vectorDof);
    }
  DoFTools::make_periodicity_constraints<DoFHandler<3> >(periodicity_vectorDof, constraints);
  constraints.close();

  for(dealii::types::global_dof_index i = 0; i < dofHandler.n_dofs(); ++i)
     if(locally_relevant_dofs.is_element(i) && constraints.is_constrained(i))
	{
	    Point<3> coordinates_i = supportPoints[i];
	    const std::vector<std::pair<dealii::types::global_dof_index, double > > * rowData=constraints.get_constraint_entries(i);

	    for(unsigned int j = 0; j < rowData->size();++j)
	      {
		double scalarCoeff = (*rowData)[j].second;
		Point<3> coordinates_j = supportPoints[(*rowData)[j].first];

	        if (i==52)
                  std::cout<<"taskId: "<<Utilities::MPI::this_mpi_process(MPI_COMM_WORLD)<<", globalDofId-i: "<<i<< ", coordinates-i: "<<coordinates_i<<  ", globalDofId-j: "<<(*rowData)[j].first<<", coordinates-j: "<<coordinates_j<< ", scalarCoeff-aij: "<< scalarCoeff<<std::endl;
	      }
	}
}
