Prof. Bangerth,
             Thank you very much for your valuable suggestion; it helped
me debug the code. I had indeed mixed up the usage of the
locally_relevant vector and the locally_owned vector. I have attached a
video clip of the solution and the code for your reference. I apologize
for the delayed reply; it took me some time to apply your suggestion.

Best Regards,
Syed Ansari S.
 Heat_Equation_Adaptive_Mesh_Parallel.mpg
<https://drive.google.com/file/d/1Lrdsl7gmXw82oB8VfrYbDhlNh6h39dCD/view?usp=drive_web>


On Tue, Feb 15, 2022 at 9:07 AM Wolfgang Bangerth <[email protected]>
wrote:

>
> Syed,
> the error message is not particularly good, but I think what is happening
> is that you are passing in a fully distributed vector to a function that
> expects a vector that has ghost elements for locally relevant but not
> locally owned elements.
>
> Best
>   W.
>
> On 2/11/22 07:40, syed ansari wrote:
> > Dear All,
> >              I am trying to run step-26 (the heat equation) in parallel
> > with adaptive mesh refinement. The code runs correctly with one
> > processor under mpirun, but with more than one processor it aborts
> > with the following error message:
> > --------------------------------------------------------
> > An error occurred in line <1250> of file
> > </home/syed/dealii-candi/tmp/unpack/deal.II-v9.3.2/include/deal.II/lac/petsc_vector_base.h>
> > in function
> >     void dealii::PETScWrappers::VectorBase::extract_subvector_to(ForwardIterator, ForwardIterator, OutputIterator) const [with ForwardIterator = const unsigned int*; OutputIterator = double*]
> > The violated condition was:
> >     index >= static_cast<unsigned int>(begin) && index < static_cast<unsigned int>(end)
> > Additional information:
> >     This exception -- which is used in many places in the library --
> >     usually indicates that some condition which the author of the code
> >     thought must be satisfied at a certain point in an algorithm, is not
> >     fulfilled. An example would be that the first part of an algorithm
> >     sorts elements of an array in ascending order, and a second part of
> >     the algorithm later encounters an element that is not larger than the
> >     previous one.
> >
> >     There is usually not very much you can do if you encounter such an
> >     exception since it indicates an error in deal.II, not in your own
> >     program. Try to come up with the smallest possible program that still
> >     demonstrates the error and contact the deal.II mailing lists with it
> >     to obtain help.
> >
> > Stacktrace:
> > -----------
> > #0  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: void dealii::PETScWrappers::VectorBase::extract_subvector_to<unsigned int const*, double*>(unsigned int const*, unsigned int const*, double*) const
> > #1  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: void dealii::internal::DoFAccessorImplementation::Implementation::extract_subvector_to<dealii::PETScWrappers::MPI::Vector, double*>(dealii::PETScWrappers::MPI::Vector const&, unsigned int const*, unsigned int const*, double*)
> > #2  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: void dealii::DoFCellAccessor<2, 2, false>::get_dof_values<dealii::PETScWrappers::MPI::Vector, double*>(dealii::PETScWrappers::MPI::Vector const&, double*, double*) const
> > #3  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: void dealii::DoFCellAccessor<2, 2, false>::get_dof_values<dealii::PETScWrappers::MPI::Vector, double>(dealii::PETScWrappers::MPI::Vector const&, dealii::Vector<double>&) const
> > #4  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: void dealii::DoFCellAccessor<2, 2, false>::get_interpolated_dof_values<dealii::PETScWrappers::MPI::Vector, double>(dealii::PETScWrappers::MPI::Vector const&, dealii::Vector<double>&, unsigned int) const
> > #5  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: dealii::parallel::distributed::SolutionTransfer<2, dealii::PETScWrappers::MPI::Vector, dealii::DoFHandler<2, 2> >::pack_callback(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)
> > #6  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: dealii::parallel::distributed::SolutionTransfer<2, dealii::PETScWrappers::MPI::Vector, dealii::DoFHandler<2, 2> >::register_data_attach()::{lambda(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)#1}::operator()(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
> > #7  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: std::vector<char, std::allocator<char> > std::__invoke_impl<std::vector<char, std::allocator<char> >, dealii::parallel::distributed::SolutionTransfer<2, dealii::PETScWrappers::MPI::Vector, dealii::DoFHandler<2, 2> >::register_data_attach()::{lambda(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)#1}&, dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus>(std::__invoke_other, dealii::parallel::distributed::SolutionTransfer<2, dealii::PETScWrappers::MPI::Vector, dealii::DoFHandler<2, 2> >::register_data_attach()::{lambda(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)#1}&, dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus&&)
> > #8  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: std::enable_if<std::__and_<std::__not_<std::is_void<std::vector<char, std::allocator<char> > > >, std::is_convertible<std::__invoke_result<dealii::parallel::distributed::SolutionTransfer<2, dealii::PETScWrappers::MPI::Vector, dealii::DoFHandler<2, 2> >::register_data_attach()::{lambda(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)#1}&, dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus>::type, std::vector<char, std::allocator<char> > > >::value, std::vector<char, std::allocator<char> > >::type std::__invoke_r<std::vector<char, std::allocator<char> >, dealii::parallel::distributed::SolutionTransfer<2, dealii::PETScWrappers::MPI::Vector, dealii::DoFHandler<2, 2> >::register_data_attach()::{lambda(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)#1}&, dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus>(dealii::parallel::distributed::SolutionTransfer<2, dealii::PETScWrappers::MPI::Vector, dealii::DoFHandler<2, 2> >::register_data_attach()::{lambda(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)#1}&, dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus&&)
> > #9  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: std::_Function_handler<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus), dealii::parallel::distributed::SolutionTransfer<2, dealii::PETScWrappers::MPI::Vector, dealii::DoFHandler<2, 2> >::register_data_attach()::{lambda(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)#1}>::_M_invoke(std::_Any_data const&, dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus&&)
> > #10  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)>::operator()(dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus) const
> > #11  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: std::vector<char, std::allocator<char> > std::__invoke_impl<std::vector<char, std::allocator<char> >, std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)>&, dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus>(std::__invoke_other, std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)>&, dealii::TriaIterator<dealii::CellAccessor<2, 2> >&&, dealii::Triangulation<2, 2>::CellStatus&&)
> > #12  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: std::enable_if<std::__and_<std::__not_<std::is_void<std::vector<char, std::allocator<char> > > >, std::is_convertible<std::__invoke_result<std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)>&, dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus>::type, std::vector<char, std::allocator<char> > > >::value, std::vector<char, std::allocator<char> > >::type std::__invoke_r<std::vector<char, std::allocator<char> >, std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)>&, dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus>(std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)>&, dealii::TriaIterator<dealii::CellAccessor<2, 2> >&&, dealii::Triangulation<2, 2>::CellStatus&&)
> > #13  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: std::_Function_handler<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus), std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> > const&, dealii::Triangulation<2, 2>::CellStatus)> >::_M_invoke(std::_Any_data const&, dealii::TriaIterator<dealii::CellAccessor<2, 2> >&&, dealii::Triangulation<2, 2>::CellStatus&&)
> > #14  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus)>::operator()(dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus) const
> > #15  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: dealii::parallel::DistributedTriangulationBase<2, 2>::DataTransfer::pack_data(std::vector<std::pair<dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus>, std::allocator<std::pair<dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus> > > const&, std::vector<std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus)>, std::allocator<std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus)> > > const&, std::vector<std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus)>, std::allocator<std::function<std::vector<char, std::allocator<char> > (dealii::TriaIterator<dealii::CellAccessor<2, 2> >, dealii::Triangulation<2, 2>::CellStatus)> > > const&)
> > #16  /home/syed/dealii-candi/deal.II-v9.3.2/lib/libdeal_II.g.so.9.3.2: dealii::parallel::distributed::Triangulation<2, 2>::execute_coarsening_and_refinement()
> > #17  ./step-26_Adpmesh_pv: Step26::HeatEquation<2>::refine_mesh(unsigned int, unsigned int)
> > #18  ./step-26_Adpmesh_pv: Step26::HeatEquation<2>::run()
> > #19  ./step-26_Adpmesh_pv: main
> > --------------------------------------------------------
> >
> > Calling MPI_Abort now.
> > To break execution in a GDB session, execute 'break MPI_Abort' before running.
> > You can also put the following into your ~/.gdbinit:
> >    set breakpoint pending on
> >    break MPI_Abort
> >    set breakpoint pending auto
> > --------------------------------------------------------------------------
> > MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
> > with errorcode 255.
> >
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > You may or may not see output from other processes, depending on
> > exactly when Open MPI kills them.
> > --------------------------------------------------------------------------
> > [syed-HP-Pavilion-Laptop-15-eg0xxx:14633] 1 more process has sent help message help-mpi-api.txt / mpi-abort
> >
> > I have attached the .cc file for your reference. Kindly help me to
> > resolve this issue.
> >
> > Thanks in advance.
> >
> > Best Regards,
> > Syed Ansari S.
> >
> > --
> > The deal.II project is located at http://www.dealii.org/
> > For mailing list/forum options, see
> > https://groups.google.com/d/forum/dealii?hl=en
> > ---
> > You received this message because you are subscribed to the Google
> Groups
> > "deal.II User Group" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> > an email to [email protected].
> > To view this discussion on the web visit
> > https://groups.google.com/d/msgid/dealii/CANaEQuaiJpwMy_kO-Cz7Gm96Zp25Qvx_z4mNaeaOuJJ9%3Dod_ew%40mail.gmail.com.
>
>
> --
> ------------------------------------------------------------------------
> Wolfgang Bangerth          email:                 [email protected]
>                             www: http://www.math.colostate.edu/~bangerth/
>
>

/* ---------------------------------------------------------------------
 *
 * Copyright (C) 2013 - 2021 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE.md at
 * the top level directory of deal.II.
 *
 * ---------------------------------------------------------------------

 *
 * Author: Wolfgang Bangerth, Texas A&M University, 2013
 */

#include <deal.II/base/utilities.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/logstream.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/distributed/tria.h> // replaces <deal.II/grid/tria.h> of the serial version
#include <deal.II/grid/grid_generator.h>
//#include <deal.II/grid/grid_refinement.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>

// Header files needed for the parallel version:
#include <deal.II/base/index_set.h>
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/timer.h>
#include <deal.II/lac/generic_linear_algebra.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/lac/affine_constraints.h>

// Headers for adaptive grid refinement and solution transfer:
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/distributed/solution_transfer.h>

#include <fstream>
#include <iostream>

namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !defined(DEAL_II_PETSC_WITH_COMPLEX) && \
  !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
  using namespace dealii::LinearAlgebraPETSc;
#  define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
  using namespace dealii::LinearAlgebraTrilinos;
#else
#  error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
} // namespace LA


namespace Step26
{
  using namespace dealii;


  template <int dim>
  class HeatEquation
  {
  public:
    HeatEquation();
    void run();

    double       time;
    double       time_step;
    unsigned int timestep_number;

    const double theta;

  private:
    void setup_system();

    void solve_time_step();
    void output_results() const;
    void refine_mesh();
    void refine_mesh(const unsigned int min_grid_level,
                     const unsigned int max_grid_level);

    MPI_Comm mpi_communicator;
    parallel::distributed::Triangulation<dim> triangulation;
    FE_Q<dim>          fe;
    DoFHandler<dim>    dof_handler;

    IndexSet locally_owned_dofs;
    IndexSet locally_relevant_dofs;

    AffineConstraints<double> constraints;

    LA::MPI::SparseMatrix mass_matrix;
    LA::MPI::SparseMatrix laplace_matrix;
    LA::MPI::SparseMatrix system_matrix;

    // Fully distributed vectors (locally owned entries only), used for
    // assembly and as solver input/output:
    LA::MPI::Vector locally_owned_solution;
    LA::MPI::Vector locally_owned_old_solution;
    LA::MPI::Vector system_rhs;

    // Ghosted vector (locally owned plus locally relevant entries), used for
    // output, error estimation, and solution transfer:
    LA::MPI::Vector locally_relevant_solution;

    ConditionalOStream pcout;
    TimerOutput        computing_timer;
  };




  template <int dim>
  class RightHandSide : public Function<dim>
  {
  public:
    RightHandSide()
      : Function<dim>()
      , period(0.2)
    {}

    virtual double value(const Point<dim> & p,
                         const unsigned int component = 0) const override;

  private:
    const double period;
  };



  template <int dim>
  double RightHandSide<dim>::value(const Point<dim> & p,
                                   const unsigned int component) const
  {
    (void)component;
    AssertIndexRange(component, 1);
    Assert(dim == 2, ExcNotImplemented());

    const double time = this->get_time();
    const double point_within_period =
      (time / period - std::floor(time / period));

    if ((point_within_period >= 0.0) && (point_within_period <= 0.2))
      {
        if ((p[0] > 0.5) && (p[1] > -0.5))
          return 1;
        else
          return 0;
      }
    else if ((point_within_period >= 0.5) && (point_within_period <= 0.7))
      {
        if ((p[0] > -0.5) && (p[1] > 0.5))
          return 1;
        else
          return 0;
      }
    else
      return 0;
  }



  template <int dim>
  class BoundaryValues : public Function<dim>
  {
  public:
    virtual double value(const Point<dim> & p,
                         const unsigned int component = 0) const override;
  };



  template <int dim>
  double BoundaryValues<dim>::value(const Point<dim> & /*p*/,
                                    const unsigned int component) const
  {
    (void)component;
    Assert(component == 0, ExcIndexRange(component, 0, 1));
    return 0;
  }

  template <int dim>
  HeatEquation<dim>::HeatEquation()
    // Initializers are listed in declaration order, which is the order in
    // which members are actually initialized:
    : time_step(1. / 500)
    , theta(0.5)
    , mpi_communicator(MPI_COMM_WORLD)
    , triangulation(mpi_communicator,
                    typename Triangulation<dim>::MeshSmoothing(
                      Triangulation<dim>::smoothing_on_refinement |
                      Triangulation<dim>::smoothing_on_coarsening))
    , fe(1)
    , dof_handler(triangulation)
    , pcout(std::cout,
            (Utilities::MPI::this_mpi_process(mpi_communicator) == 0))
    , computing_timer(mpi_communicator,
                      pcout,
                      TimerOutput::never,
                      TimerOutput::wall_times)
  {}


  template <int dim>
  void HeatEquation<dim>::setup_system()
  {
    TimerOutput::Scope t(computing_timer, "setup_system()");

    dof_handler.distribute_dofs(fe);

    pcout << std::endl
          << "===========================================" << std::endl
          << "Number of degrees of freedom: " << dof_handler.n_dofs()
          << std::endl;

    locally_owned_dofs = dof_handler.locally_owned_dofs();
    DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

    constraints.clear();
    constraints.reinit(locally_relevant_dofs);
    DoFTools::make_hanging_node_constraints(dof_handler, constraints);

    VectorTools::interpolate_boundary_values(dof_handler,
                                               0,
                                               Functions::ZeroFunction<dim>(),
                                               constraints);

    constraints.close();

    DynamicSparsityPattern dsp(locally_relevant_dofs);
    DoFTools::make_sparsity_pattern(dof_handler,
                                    dsp,
                                    constraints,
                                    /*keep_constrained_dofs =*/ true);

    SparsityTools::distribute_sparsity_pattern(dsp,
                                               dof_handler.locally_owned_dofs(),
                                               mpi_communicator,
                                               locally_relevant_dofs);

    system_matrix.reinit(locally_owned_dofs, locally_owned_dofs, dsp, mpi_communicator);
    mass_matrix.reinit(locally_owned_dofs, locally_owned_dofs, dsp, mpi_communicator);
    laplace_matrix.reinit(locally_owned_dofs, locally_owned_dofs, dsp, mpi_communicator);

    // The solution is written to output and used for error estimation, so it
    // needs ghost entries for all locally relevant DoFs:
    locally_relevant_solution.reinit(locally_owned_dofs,
                                     locally_relevant_dofs,
                                     mpi_communicator);

    // These vectors are only written to during assembly and by the solver, so
    // they hold locally owned entries only:
    locally_owned_old_solution.reinit(locally_owned_dofs, mpi_communicator);
    locally_owned_solution.reinit(locally_owned_dofs, mpi_communicator);
    system_rhs.reinit(locally_owned_dofs, mpi_communicator);

    QGauss<dim>  quadrature_formula(2);

    FEValues<dim> fe_values (fe, quadrature_formula,
                             update_values   | update_gradients |
                             update_quadrature_points | update_JxW_values);

    const unsigned int   dofs_per_cell = fe.dofs_per_cell;
    const unsigned int   n_q_points    = quadrature_formula.size();


    FullMatrix<double>   local_mass_matrix    (dofs_per_cell, dofs_per_cell);

    FullMatrix<double>   local_stiffness_matrix (dofs_per_cell, dofs_per_cell);

    std::vector<types::global_dof_index> local_dof_indices (dofs_per_cell);

    mass_matrix = 0.0;
    laplace_matrix = 0.0;

    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        {
          fe_values.reinit(cell);
          local_mass_matrix      = 0.0;
          local_stiffness_matrix = 0.0;

          for (unsigned int q = 0; q < n_q_points; ++q)
            for (unsigned int i = 0; i < dofs_per_cell; ++i)
              {
                const Tensor<1, dim> grad_phi_i = fe_values.shape_grad(i, q);
                const double         phi_i      = fe_values.shape_value(i, q);

                for (unsigned int j = 0; j < dofs_per_cell; ++j)
                  {
                    const Tensor<1, dim> grad_phi_j = fe_values.shape_grad(j, q);
                    const double         phi_j      = fe_values.shape_value(j, q);

                    local_mass_matrix(i, j) += phi_i * phi_j * fe_values.JxW(q);
                    local_stiffness_matrix(i, j) +=
                      grad_phi_i * grad_phi_j * fe_values.JxW(q);
                  }
              }

          cell->get_dof_indices(local_dof_indices);

          constraints.distribute_local_to_global(local_mass_matrix,
                                                 local_dof_indices,
                                                 mass_matrix);
          constraints.distribute_local_to_global(local_stiffness_matrix,
                                                 local_dof_indices,
                                                 laplace_matrix);
        }

    mass_matrix.compress(VectorOperation::add);
    laplace_matrix.compress(VectorOperation::add);

  }


  template <int dim>
  void HeatEquation<dim>::solve_time_step()
  {
    TimerOutput::Scope t(computing_timer, "solve");

    LA::MPI::Vector completely_distributed_solution(locally_owned_dofs,
                                                    mpi_communicator);

    SolverControl solver_control(1000, 1e-8 * system_rhs.l2_norm());

#ifdef USE_PETSC_LA
    LA::SolverCG solver(solver_control, mpi_communicator);
#else
    LA::SolverCG solver(solver_control);
#endif

    LA::MPI::PreconditionAMG                 preconditioner;
    LA::MPI::PreconditionAMG::AdditionalData data;

#ifdef USE_PETSC_LA
    data.symmetric_operator = true;
#else
    /* Trilinos defaults are good */
#endif

    preconditioner.initialize(system_matrix, data);

    solver.solve(system_matrix,
                 completely_distributed_solution,
                 system_rhs,
                 preconditioner);

    pcout << "   Solved in " << solver_control.last_step() << " iterations."
          << std::endl;

    // Apply constraints to the fully distributed solution, then copy it into
    // the ghosted vector so that ghost entries are updated as well:
    constraints.distribute(completely_distributed_solution);
    locally_relevant_solution = completely_distributed_solution;
  }



  template <int dim>
  void HeatEquation<dim>::output_results() const
  {
    DataOut<dim> data_out;

    data_out.attach_dof_handler(dof_handler);
    data_out.add_data_vector(locally_relevant_solution, "U");

    Vector<float> subdomain(triangulation.n_active_cells());
    for (unsigned int i = 0; i < subdomain.size(); i++)
         subdomain(i) = triangulation.locally_owned_subdomain();
    data_out.add_data_vector(subdomain, "subdomain");

    data_out.build_patches();

    data_out.set_flags(DataOutBase::VtkFlags(time, timestep_number));

    data_out.write_vtu_with_pvtu_record(
      "./", "solution", timestep_number, mpi_communicator, 2, 8);
  }


  template <int dim>
  void HeatEquation<dim>::refine_mesh(const unsigned int min_grid_level,
                                      const unsigned int max_grid_level)
  {
    Vector<float> estimated_error_per_cell(triangulation.n_active_cells());

    KellyErrorEstimator<dim>::estimate(
      dof_handler,
      QGauss<dim - 1>(fe.degree + 1),
      std::map<types::boundary_id, const Function<dim> *>(),
      locally_relevant_solution,
      estimated_error_per_cell);

    parallel::distributed::GridRefinement::refine_and_coarsen_fixed_fraction(
      triangulation, estimated_error_per_cell, 0.6, 0.4);

    if (triangulation.n_levels() > max_grid_level)
      for (const auto &cell :
           triangulation.active_cell_iterators_on_level(max_grid_level))
        cell->clear_refine_flag();
    for (const auto &cell :
         triangulation.active_cell_iterators_on_level(min_grid_level))
      cell->clear_coarsen_flag();

    parallel::distributed::SolutionTransfer<dim, LA::MPI::Vector> solution_trans(dof_handler);


    triangulation.prepare_coarsening_and_refinement();
    solution_trans.prepare_for_coarsening_and_refinement(locally_relevant_solution);

    triangulation.execute_coarsening_and_refinement();
    setup_system();

    // SolutionTransfer::interpolate() needs a fully distributed vector; the
    // ghosted locally_relevant_solution is then updated by assignment.
    LA::MPI::Vector previous_solution;
    previous_solution.reinit(locally_owned_dofs, mpi_communicator);
    solution_trans.interpolate(previous_solution);
    constraints.distribute(previous_solution);

    locally_relevant_solution = previous_solution;
  }



  template <int dim>
  void HeatEquation<dim>::run()
  {
    pcout << "Running with "
#ifdef USE_PETSC_LA
          << "PETSc"
#else
          << "Trilinos"
#endif
          << " on " << Utilities::MPI::n_mpi_processes(mpi_communicator)
          << " MPI rank(s)..." << std::endl;

    const unsigned int initial_global_refinement       = 2;
    const unsigned int n_adaptive_pre_refinement_steps = 4;

    GridGenerator::hyper_L(triangulation);
    triangulation.refine_global(initial_global_refinement);

    setup_system();

    unsigned int pre_refinement_step = 0;

    LA::MPI::Vector tmp;
    LA::MPI::Vector forcing_terms;

 start_time_iteration:

    time            = 0.0;
    timestep_number = 0;

    tmp.reinit(locally_owned_solution);
    forcing_terms.reinit(locally_owned_solution);

    VectorTools::interpolate(dof_handler,
                             Functions::ZeroFunction<dim>(),
                             locally_owned_old_solution);
    locally_relevant_solution = locally_owned_old_solution;

    output_results();

    while (time <= 0.5)
      {
        time += time_step;
        ++timestep_number;

        pcout << "Time step " << timestep_number << " at t=" << time
              << " time_step = " << time_step << std::endl;

        // Right-hand side of the theta scheme:
        //   M U^{n-1} - (1 - theta) k A U^{n-1}
        mass_matrix.vmult(system_rhs, locally_owned_old_solution);
        laplace_matrix.vmult(tmp, locally_owned_old_solution);
        system_rhs.add(-(1 - theta) * time_step, tmp);

        // Assemble the forcing term
        //   k [ theta F^n + (1 - theta) F^{n-1} ]
        // by evaluating the right-hand-side function at the current and the
        // previous time level.
        QGauss<dim> quadrature_formula(2);

        RightHandSide<dim> rhs_func_T_1;

        FEValues<dim> fe_values(fe,
                                quadrature_formula,
                                update_values | update_quadrature_points |
                                  update_JxW_values);

        const unsigned int dofs_per_cell = fe.dofs_per_cell;
        const unsigned int n_q_points    = quadrature_formula.size();

        Vector<double> local_rhs_vector_T(dofs_per_cell);

        std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);

        forcing_terms = 0.0;

        for (const auto &cell : dof_handler.active_cell_iterators())
          if (cell->is_locally_owned())
            {
              fe_values.reinit(cell);
              local_rhs_vector_T = 0;

              for (unsigned int q = 0; q < n_q_points; ++q)
                {
                  rhs_func_T_1.set_time(time);
                  for (unsigned int i = 0; i < dofs_per_cell; ++i)
                    local_rhs_vector_T(i) +=
                      time_step * theta * fe_values.shape_value(i, q) *
                      rhs_func_T_1.value(fe_values.quadrature_point(q)) *
                      fe_values.JxW(q);

                  rhs_func_T_1.set_time(time - time_step);
                  for (unsigned int i = 0; i < dofs_per_cell; ++i)
                    local_rhs_vector_T(i) +=
                      time_step * (1.0 - theta) * fe_values.shape_value(i, q) *
                      rhs_func_T_1.value(fe_values.quadrature_point(q)) *
                      fe_values.JxW(q);
                }

              cell->get_dof_indices(local_dof_indices);

              constraints.distribute_local_to_global(local_rhs_vector_T,
                                                     local_dof_indices,
                                                     forcing_terms);
            }

        forcing_terms.compress(VectorOperation::add);

        system_rhs.add(1, forcing_terms);

        // System matrix of the theta scheme: M + theta k A. No compress() is
        // needed here: vector add() and matrix copy_from()/add() leave their
        // objects in an assembled state.
        system_matrix.copy_from(mass_matrix);
        system_matrix.add(theta * time_step, laplace_matrix);


        // Boundary values are imposed through the AffineConstraints object
        // built in setup_system(), so MatrixTools::apply_boundary_values()
        // is no longer needed here.


        solve_time_step();

        if (Utilities::MPI::n_mpi_processes(mpi_communicator) <= 32)
          {
            TimerOutput::Scope t(computing_timer, "output");
            output_results();
          }

        computing_timer.print_summary();
        computing_timer.reset();

        pcout << std::endl;

        if ((timestep_number == 1) &&
            (pre_refinement_step < n_adaptive_pre_refinement_steps))
          {
            refine_mesh(initial_global_refinement,
                        initial_global_refinement +
                          n_adaptive_pre_refinement_steps);
            ++pre_refinement_step;

            tmp.reinit(locally_owned_solution);
            forcing_terms.reinit(locally_owned_solution);

            pcout << std::endl;

            goto start_time_iteration;
          }
        else if ((timestep_number > 0) && (timestep_number % 5 == 0))
          {
            refine_mesh(initial_global_refinement,
                        initial_global_refinement +
                          n_adaptive_pre_refinement_steps);
            tmp.reinit(locally_owned_solution);
            forcing_terms.reinit(locally_owned_solution);
          }

        locally_owned_solution     = locally_relevant_solution;
        locally_owned_old_solution = locally_relevant_solution;
      }
  }
  } // namespace Step26



int main(int argc, char *argv[])
{
  try
    {
      using namespace dealii;
      using namespace Step26;

      Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

      HeatEquation<2> heat_equation_solver;
      heat_equation_solver.run();
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Exception on processing: " << std::endl
                << exc.what() << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;

      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }

  return 0;
}
