Wow, I did not expect to get such a quick answer over the weekend. Thank 
you both for taking the time to reply.

The key thing here really was to use set_future_fe_index() instead of 
set_active_fe_index(). Would it make sense to discontinue one of them in 
the future? Otherwise, I would suggest adding the first paragraph you typed 
up to the documentation of the hp finite element module. The test 
tests/mpi/p_coarsening_and_refinement.cc 
<https://github.com/dealii/dealii/blob/master/tests/mpi/p_refinement_and_coarsening.cc> 
was indeed the perfect example to showcase the p-refinement.
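
For reference, here is roughly what the relevant part of my code looks like 
now. This is only a sketch: `fe_collection`, `dof_handler` and 
`triangulation` stand in for my actual objects, and the particular choice 
of future index is just for illustration.

  // Request the p-refinement *before* the mesh is refined, so that the
  // future_fe_indices travel along when cell ownership changes.
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      cell->set_future_fe_index(cell->active_fe_index() + 1); // any valid index of fe_collection

  // The future_fe_indices only take effect here.
  triangulation.execute_coarsening_and_refinement();
  dof_handler.distribute_dofs(fe_collection);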

Regarding the Vector constructor: I had reproduced the bug even with a 
single core, so any issue related to the missing parallel distribution did 
not show up. In my actual code, I do pass an MPI communicator.
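
Concretely, the vectors are set up along these lines (again only a sketch; 
`mpi_communicator` is the communicator the triangulation was built with):

  IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
  IndexSet locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

  // Constructor that takes the parallel distribution and the communicator,
  // rather than the serial one that only takes a global size.
  LinearAlgebra::distributed::Vector<double> solution(locally_owned_dofs,
                                                      locally_relevant_dofs,
                                                      mpi_communicator);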

I have modified the example to make it work. Note that in my original test, 
I was replicating a setup where both DG and CG elements are refined 
simultaneously for two different solutions. Since set_future_fe_index() now 
takes effect on grid refinement instead of at the exact moment of the call, 
as set_active_fe_index() did, I wouldn't expect to do the refinement for 
both DoF handlers at the same time.

I'd be happy to add this test to the test suite if that's something you are 
interested in. I just have to take the time to read up on how to do that.

Anyway, it all works as expected now. Thank you again.

Doug

On Sunday, August 18, 2019 at 9:44:50 PM UTC-4, Marc Fehling wrote:
>
> Hi Doug,
>
> when dealing with distributed meshes, ownership of cells changes, and we may 
> not know which finite element lives on the cells that a process has recently 
> been assigned. Thus, we need to transfer each cell's `active_fe_index`, which 
> we do automatically during coarsening and refinement. However, you set 
> `active_fe_indices` after refinement happened, which works in the serial 
> case, but no longer in the parallel one. Before executing refinement, you 
> need to set `future_fe_indices` that describe which finite element your 
> cell will be assigned to, and you need to do that before refinement 
> happens! This should resolve both issues.
>
> Further, you initialize `LinearAlgebra::distributed::Vector` objects 
> without any parallel distribution by using this constructor 
> <https://www.dealii.org/current/doxygen/deal.II/classLinearAlgebra_1_1distributed_1_1Vector.html#a3be6c4ce529bb9b6c13eb831d0a86f55>. 
> Try using a different one.
>
> Please see `tests/mpi/solution_transfer_04.cc 
> <https://github.com/dealii/dealii/blob/master/tests/mpi/solution_transfer_04.cc>` 
> and `tests/mpi/p_coarsening_and_refinement.cc 
> <https://github.com/dealii/dealii/blob/master/tests/mpi/p_refinement_and_coarsening.cc>` 
> for working examples (I guess we should provide one using an actual 
> `SolutionTransfer` object as well), which should hopefully be applicable to 
> your problem. This is a recently added feature: if you have any suggestions 
> for improvement or encounter more problems, feel free to message us!
>
> Marc
>
