[deal.II] Parallel distributed hp solution transfer with FE_nothing

2020-12-07 Thread Kaushik Das
Hi all: I modified the test tests/mpi/solution_transfer_05.cc to add an FE_Nothing element to the FECollection. I also changed the other elements to FE_Q. When I run the test, it aborts in the solution transfer. Are there any limitations on using FE_Nothing with parallel solution transfer?
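For context, a minimal sketch of the kind of mixed collection being described; the dimension and polynomial degrees here are placeholders, not necessarily those of the modified test:

#include <deal.II/fe/fe_nothing.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/hp/fe_collection.h>

using namespace dealii;

int main()
{
  // A collection mixing FE_Q elements with an FE_Nothing, as described
  // above; solution transfer must then cope with cells switching to or
  // from the element that has no degrees of freedom.
  hp::FECollection<2> fe_collection;
  fe_collection.push_back(FE_Q<2>(1));
  fe_collection.push_back(FE_Q<2>(2));
  fe_collection.push_back(FE_Nothing<2>());
}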

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-31 Thread Doug
On Tuesday, August 27, 2019 at 3:51:20 AM UTC-4, Marc Fehling wrote: > The patch has been merged. Let me know if this fix does the trick for you. It does! Thank you again for the quick support. Doug

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-27 Thread Marc Fehling
Hi Doug! On Tuesday, August 27, 2019 at 3:41:11 AM UTC+2, Doug wrote: > Thank you very much for the quick fix! Looking forward to pulling this once it goes through all the checks. The patch has been merged. Let me know if this fix does the trick for you. We introduced a test named

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-26 Thread Doug
Thank you very much for the quick fix! Looking forward to pulling this once it goes through all the checks. Doug On Sunday, August 25, 2019 at 9:44:29 AM UTC-4, Marc Fehling wrote: > I came up with a fix for this issue in the following PR #8637

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-25 Thread Marc Fehling
Hi Doug! Hi Wolfgang! On Sunday, August 25, 2019 at 3:25:06 AM UTC+2, Wolfgang Bangerth wrote: > On 8/23/19 6:32 PM, Marc Fehling wrote: >> Your scenario indeed revealed a bug: Currently, we set and send `active_fe_indices` based on the refinement flags on the Triangulation

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-24 Thread Wolfgang Bangerth
On 8/23/19 6:32 PM, Marc Fehling wrote: > Your scenario indeed revealed a bug: Currently, we set and send `active_fe_indices` based on the refinement flags on the Triangulation object. However, p4est has the last word on deciding which cells will be refined. That's ultimately true, but

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-23 Thread Marc Fehling
Hi Doug! Your scenario indeed revealed a bug: Currently, we set and send `active_fe_indices` based on the refinement flags on the Triangulation object. However, p4est has the last word on deciding which cells will be refined -- and in your specific scenario p4est makes use of it. I came up

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-21 Thread Doug
I am using the master version from 2 days ago, commit c4c4e5209a. Actually, it also fails in 2D, but in a different scenario. I run this with mpirun=1 and mpirun=4. It fails in 2D for mpirun=1.debug, with the same error. It fails in 3D for mpirun=4.debug and mpirun=1.release. And fully

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-21 Thread Marc Fehling
Hi Doug! On Wednesday, August 21, 2019 at 4:00:49 AM UTC+2, Doug wrote: > 8134: void dealii::parallel::distributed::SolutionTransfer<dim, VectorType, DoFHandlerType>::unpack_callback(const typename dealii::parallel::distributed::Triangulation<dim, DoFHandlerType::space_dimension>::cell_iterator&, typename

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-20 Thread Doug
Hello Marc, I am trying to get the test working and unfortunately it is going a lot less smoothly than expected. Please find attached the test as solution_transfer_05.cc. Only h-refinement occurs. I have attached the before and after grids, which look OK. It still fails with that error 8134:

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-19 Thread Wolfgang Bangerth
On 8/19/19 11:40 AM, Doug wrote: > Yes, will do. I have already fixed that example into working code. I just need to clean it up a little and I'll submit a pull request. Also, I'm encountering this issue where tests fail because a zero error ends up being around 1e-13 due to round-off

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-19 Thread Doug
Yes, will do. I have already fixed that example into working code. I just need to clean it up a little and I'll submit a pull request. Also, I'm encountering this issue where tests fail because a zero error ends up being around 1e-13 due to round-off errors. This not only happens for my
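For later readers: one common way to make such comparisons robust (a sketch of the general idea, not necessarily what the deal.II test suite settled on) is to clamp values below a round-off threshold to zero before writing them to the test output:

#include <cmath>
#include <iostream>

// Clamp a computed error to zero when it is below round-off level so
// that the test output compares cleanly; the threshold is a placeholder.
double filtered(const double error, const double tolerance = 1e-10)
{
  return (std::abs(error) < tolerance) ? 0.0 : error;
}

int main()
{
  std::cout << filtered(3.2e-13) << std::endl; // prints 0
}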

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-19 Thread Wolfgang Bangerth
On 8/18/19 10:44 PM, Doug wrote: > I'd be happy to add this test to the list if that's something you are interested in. I just have to take the time to read through how to. I think this would be fantastic -- this area is new, so there aren't that many tests yet. Anything that does what it

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-18 Thread Doug
Wow, I did not expect to get such a quick fix of an answer over the weekend. Thank you both for taking the time to answer. The key thing here really was to use set_future_fe_index() instead of set_active_fe_index(). Would it make sense to discontinue one of them in the future? Otherwise, I
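For reference, a hedged sketch of the pattern that resolved the issue; the function and variable names are assumptions for illustration, not taken from the thread, and the flagging criterion is a placeholder:

#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/hp/dof_handler.h>
#include <deal.II/lac/la_parallel_vector.h>

using namespace dealii;

// Flag p-refinement via the *future* FE index so the change is applied
// together with mesh adaptation and transferred across processes.
template <int dim>
void flag_and_adapt(parallel::distributed::Triangulation<dim> &triangulation,
                    hp::DoFHandler<dim>                       &dof_handler,
                    const LinearAlgebra::distributed::Vector<double> &old_solution)
{
  parallel::distributed::SolutionTransfer<
    dim, LinearAlgebra::distributed::Vector<double>, hp::DoFHandler<dim>>
    soltrans(dof_handler);

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      cell->set_future_fe_index(1); // not cell->set_active_fe_index(1)

  soltrans.prepare_for_coarsening_and_refinement(old_solution);
  triangulation.execute_coarsening_and_refinement();
  // The active FE indices are updated (and sent along) here.
}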

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-18 Thread Marc Fehling
Hi Doug! When dealing with distributed meshes, ownership of cells changes, and we may not know which finite element lives on cells that the process has recently been assigned. Thus, we need to transfer each cell's `active_fe_index`, which we do automatically during coarsening and refinement.

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-17 Thread Wolfgang Bangerth
Doug, > I am trying to use the parallel::distributed::SolutionTransfer<dim, LinearAlgebra::distributed::Vector<double>, hp::DoFHandler<dim>> class and I can't seem to use it similarly to the serial version. > I looked through the tests/distributed_grids/solution_transfer_0*.cc tests and none of them

[deal.II] Parallel distributed hp solution transfer

2019-08-17 Thread Doug
Hello, I am trying to use the parallel::distributed::SolutionTransfer<dim, LinearAlgebra::distributed::Vector<double>, hp::DoFHandler<dim>> class and I can't seem to use it similarly to the serial version. I looked through the tests/distributed_grids/solution_transfer_0*.cc tests and none of them seem to be testing hp refinement. The
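For reference, a sketch of the full transfer cycle being asked about, written by analogy with the serial class; the setup (collection, ghosting, communicator) is assumed and the names are placeholders, not code from the original post:

#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/hp/dof_handler.h>
#include <deal.II/hp/fe_collection.h>
#include <deal.II/lac/la_parallel_vector.h>

using namespace dealii;
using VectorType = LinearAlgebra::distributed::Vector<double>;

// One coarsen/refine-and-transfer cycle. `solution` is assumed to be
// ghosted so that values on cells about to change can be packed.
template <int dim>
VectorType
refine_and_transfer(parallel::distributed::Triangulation<dim> &tria,
                    hp::DoFHandler<dim>         &dof_handler,
                    const hp::FECollection<dim> &fe_collection,
                    const VectorType            &solution,
                    const MPI_Comm               mpi_communicator)
{
  parallel::distributed::SolutionTransfer<dim, VectorType,
                                          hp::DoFHandler<dim>>
    soltrans(dof_handler);

  soltrans.prepare_for_coarsening_and_refinement(solution);
  tria.execute_coarsening_and_refinement();
  dof_handler.distribute_dofs(fe_collection);

  // Unlike the serial class, interpolate() here takes only the output
  // vector, sized for the new DoF distribution.
  VectorType new_solution(dof_handler.locally_owned_dofs(),
                          mpi_communicator);
  soltrans.interpolate(new_solution);
  return new_solution; // caller adds ghost entries as needed
}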