Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-09 Thread Wolfgang Bangerth
Thanks to all of you, I was able to write a small test that seems to do what I want, following Marc's suggestion. But I have a few questions (the test code is attached): cell->set_dof_values(local_data, m_completely_distributed_solution); Does set_dof_values set entries in

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-09 Thread Kaushik Das
Hello Prof. Bangerth and others, Thanks to all of you, I was able to write a small test that seems to do what I want, following Marc's suggestion. But I have a few questions (the test code is attached): cell->set_dof_values(local_data, m_completely_distributed_solution); Does set_dof_values
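
A minimal sketch of the pattern this question refers to (not from the original mail): set_dof_values() writes one cell's local values into a global vector, and a subsequent compress() reconciles DoFs that were written from more than one cell. The vector setup and the names locally_owned_dofs, locally_relevant_dofs, mpi_communicator, and dof_handler are illustrative assumptions, roughly matching the deal.II 9.3 development version used later in this thread.

    #include <deal.II/lac/la_parallel_vector.h>
    #include <deal.II/lac/vector.h>

    // A parallel vector that allows write access to ghost entries, so that
    // set_dof_values() can also touch DoFs owned by a neighboring process.
    dealii::LinearAlgebra::distributed::Vector<double>
      m_completely_distributed_solution(locally_owned_dofs,
                                        locally_relevant_dofs,
                                        mpi_communicator);

    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned() && cell->get_fe().dofs_per_cell > 0)
        {
          dealii::Vector<double> local_data(cell->get_fe().dofs_per_cell);
          // ... fill local_data, e.g. from the transferred cell data ...
          cell->set_dof_values(local_data, m_completely_distributed_solution);
        }

    // DoFs shared between cells may have been written several times; resolve
    // them once, after the loop. The thread settles on compress(min) rather
    // than insert so that a DoF written from several cells (possibly on
    // different processes) ends up with a well-defined value.
    m_completely_distributed_solution.compress(dealii::VectorOperation::min);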

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-04 Thread Wolfgang Bangerth
Kaushik Marc and others have already answered the technical details, so just one overall comment: Let me explain what I am trying to do and why. I want to solve a transient heat transfer problem of the additive manufacturing (AM) process. In AM processes, metal powder is deposited in

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-04 Thread Bruno Turcksin
Kaushik, Oh wow, this is a small world :D Unfortunately, the PETSc solvers require a PETSc vector, but I think it should be straightforward to add compress(min) to the PETSc vector. So that's a possibility if copying the solution takes too much time. Best, Bruno On Sun, Jan 3, 2021 at 9:42 PM,

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-03 Thread Kaushik Das
Hi Bruno, Thanks for the help. I just saw in your online profile that you were at Texas A&M; I was a post-doc in Aerospace at TAMU 2009-12, small world! Can I use a dealii::LinearAlgebra::distributed::Vector with the PETSc solver? If not, then I think I have to copy the solution from a PETSc vector to
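
Not part of the original question, but for reference: if one does end up copying between the two vector types, a simple (though not necessarily the fastest) way is an entry-wise copy over the locally owned range, followed by compress. The names locally_owned_dofs and mpi_communicator are placeholders.

    #include <deal.II/lac/la_parallel_vector.h>
    #include <deal.II/lac/petsc_vector.h>

    // Source: the vector type that supports compress(min) for the transfer.
    dealii::LinearAlgebra::distributed::Vector<double>
      transfer_solution(locally_owned_dofs, mpi_communicator);

    // Destination: the vector type the deal.II PETSc solver wrappers expect.
    dealii::PETScWrappers::MPI::Vector
      petsc_solution(locally_owned_dofs, mpi_communicator);

    // Copy the locally owned entries one by one, then finalize the copy.
    for (const auto i : locally_owned_dofs)
      petsc_solution(i) = transfer_solution(i);
    petsc_solution.compress(dealii::VectorOperation::insert);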

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-03 Thread Bruno Turcksin
Kaushik, I am working on the exact same problem for the same application :-) PETSc vectors do not support compress(min); you need to use a dealii::LinearAlgebra::distributed::Vector instead. Best, Bruno On Sat, Jan 2, 2021 at 6:38 PM, Kaushik Das wrote: > > Hi Marc, > I tried using cell data

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-02 Thread Kaushik Das
Hi Marc, I tried using cell data transfer as you suggested, but I am having trouble calling compress after copying the transferred data into a PETSc vector. My test code is attached. My confusion is mainly about when to call compress after the cell data transfer.

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2021-01-02 Thread Kaushik Das
Thank you. I will try the CellDataTransfer method as you suggested. The test code that I attached earlier has a mistake: cell->set_active_fe_index(0) should be protected by an if (cell->is_locally_owned()) check. I have attached a corrected test here. Thanks, Kaushik On Thu, Dec 31, 2020 at 8:05 PM Marc
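
The guard mentioned here looks roughly like this (a sketch, not taken from the attached test; the DoFHandler name is a placeholder):

    // Only locally owned cells may have their active FE index changed on a
    // parallel::distributed::Triangulation; ghost and artificial cells get
    // that information through the usual exchange during DoF enumeration.
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        cell->set_active_fe_index(0);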

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-31 Thread Marc Fehling
Kaushik, in addition to what I just wrote, your example from above has revealed a bug in the `p::d::SolutionTransfer` class that Wolfgang and I were discussing over the course of this thread. Thank you very much for this! We are working on a solution for this issue. I would encourage you to

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-31 Thread Marc Fehling
Hi Kaushik, Yes, this is possible by changing a cell from FE_Nothing to FE_Q using p-refinement. You can do this with the method described in #11132: imitate what p::d::SolutionTransfer is doing with the more versatile tool

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-30 Thread Kaushik Das
Hi all, Thank you for your reply. Let me explain what I am trying to do and why. I want to solve a transient heat transfer problem of the additive manufacturing (AM) process. In AM processes, metal powder is deposited in layers, and then a laser source scans each layer and melts and bonds the

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-29 Thread Wolfgang Bangerth
On 12/28/20 5:11 PM, Marc Fehling wrote: In case a FE_Nothing has been configured to dominate, the solution should be continuous on the interface if I understood correctly, i.e., zero on the face. I will write a few tests to see if this is actually automatically the case in user

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-28 Thread Marc Fehling
The FiniteElementDomination logic in the codim=0 case would indeed make for a cheap a priori check in this context. In case a FE_Nothing has been configured to dominate, the solution should be continuous on the interface if I understood correctly, i.e., zero on the face. I will write a few
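
For context (not in the original mail): "configured to dominate" refers to FE_Nothing's second constructor argument. A collection set up this way might look as follows, assuming a deal.II 9.2+ signature and with dim a compile-time dimension:

    #include <deal.II/fe/fe_nothing.h>
    #include <deal.II/fe/fe_q.h>
    #include <deal.II/hp/fe_collection.h>

    // Index 0: a standard Q1 element. Index 1: an FE_Nothing that dominates
    // its neighbors, so DoFs on faces shared with FE_Q cells are constrained
    // to zero and the solution is continuous (zero) across that interface.
    dealii::hp::FECollection<dim> fe_collection;
    fe_collection.push_back(dealii::FE_Q<dim>(1));
    fe_collection.push_back(dealii::FE_Nothing<dim>(/*n_components=*/1,
                                                    /*dominate=*/true));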

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-28 Thread Wolfgang Bangerth
The problem here is that the solution is not continuous across the face of a FE_Q and a FE_Nothing element. If a FE_Nothing is turned into a FE_Q element, the solution is suddenly expected to be continuous, and we have no rule in deal.II yet for how to proceed in this situation. In my opinion,

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-28 Thread Marc Fehling
Hi Wolfgang, your explanation indeed makes more sense in the context of piecewise polynomials :) The problem here is that the solution is not continuous across the face of a FE_Q and a FE_Nothing element. If a FE_Nothing is turned into a FE_Q element, the solution is suddenly expected to be

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-28 Thread Wolfgang Bangerth
On 12/27/20 8:48 PM, Marc Fehling wrote: 2) I did not know you were trying to interpolate a FENothing element into a FEQ element. This should not be possible, as you cannot interpolate information from simply 'nothing', and some assertion should be triggered while trying to do so. The other

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-27 Thread Marc Fehling
Hi Kaushik, 1) Yes, this is possible, but tricky: `SolutionTransfer` is not capable of this feature, and you need to do it manually with the more versatile class `CellDataTransfer`. A way to do it has been discussed in #11132. 2) I did not know
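
A rough outline of the kind of manual transfer meant here, loosely following the discussion in #11132 (this sketch is not taken from the issue itself). It assumes the deal.II 9.3 development version used in this thread, a parallel::distributed::Triangulation named triangulation, an hp-enabled DoFHandler named dof_handler, a ghosted old_solution, and a writable new_solution set up as in the compress(min) sketch above; error handling and the exact place where the FE indices change are left out.

    #include <deal.II/distributed/cell_data_transfer.templates.h>
    #include <deal.II/lac/la_parallel_vector.h>
    #include <deal.II/lac/vector.h>

    #include <vector>

    // 1. Store the DoF values of every locally owned cell as plain cell data;
    //    FE_Nothing cells simply contribute an empty entry.
    std::vector<std::vector<double>> data_to_transfer(
      triangulation.n_active_cells());
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned() && cell->get_fe().dofs_per_cell > 0)
        {
          dealii::Vector<double> values(cell->get_fe().dofs_per_cell);
          cell->get_dof_values(old_solution, values);
          data_to_transfer[cell->active_cell_index()].assign(values.begin(),
                                                             values.end());
        }

    // 2. Register the data for transfer (variable size, since the number of
    //    DoFs per cell differs between FE_Q and FE_Nothing), then adapt.
    dealii::parallel::distributed::
      CellDataTransfer<dim, dim, std::vector<std::vector<double>>>
        cell_data_transfer(triangulation,
                           /*transfer_variable_size_data=*/true);
    cell_data_transfer.prepare_for_coarsening_and_refinement(data_to_transfer);

    triangulation.execute_coarsening_and_refinement();
    dof_handler.distribute_dofs(fe_collection);

    // 3. Unpack on the new mesh and write the values back cell by cell.
    std::vector<std::vector<double>> transferred_data(
      triangulation.n_active_cells());
    cell_data_transfer.unpack(transferred_data);

    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        {
          const auto &data = transferred_data[cell->active_cell_index()];
          if (data.size() > 0 && data.size() == cell->get_fe().dofs_per_cell)
            {
              const dealii::Vector<double> values(data.begin(), data.end());
              cell->set_dof_values(values, new_solution);
            }
        }
    new_solution.compress(dealii::VectorOperation::min);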

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-23 Thread Marc Fehling
Hi Kaushik, Be careful about what you are doing here: You prepare your solution to be transferred on refinement, but at the point where you wish to interpolate your solution, the mesh differs from the one your SolutionTransfer object expects to encounter. That is because you changed the assigned
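
For readers, the ordering Marc alludes to (not from his mail): with p::d::SolutionTransfer, refinement flags and future FE indices have to be in place before prepare_for_coarsening_and_refinement, and the mesh must not change again until interpolate is called. A sketch for plain p-adaptation only (the FE_Nothing-to-FE_Q activation discussed in this thread needs the CellDataTransfer approach instead); the names and the 9.3-style template signature are assumptions.

    #include <deal.II/distributed/solution_transfer.h>
    #include <deal.II/lac/la_parallel_vector.h>

    using VectorType = dealii::LinearAlgebra::distributed::Vector<double>;

    // 1. Decide on the new per-cell finite elements (and refinement flags)
    //    first, on locally owned cells only.
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        cell->set_future_fe_index(1);

    // 2. Only then register the old (ghosted) solution for transfer ...
    dealii::parallel::distributed::SolutionTransfer<dim, VectorType>
      solution_transfer(dof_handler);
    solution_transfer.prepare_for_coarsening_and_refinement(old_solution);

    // 3. ... carry out the changes and re-enumerate the DoFs ...
    triangulation.execute_coarsening_and_refinement();
    dof_handler.distribute_dofs(fe_collection);

    // 4. ... and finally interpolate onto the new FE space.
    VectorType new_solution(dof_handler.locally_owned_dofs(), mpi_communicator);
    solution_transfer.interpolate(new_solution);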

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-23 Thread Kaushik Das
Hi Marc: Thank you again for your help. I have another problem; a small test code is attached. I have one cell with an FE_Q element. I refine it into four cells and then assign FE_Q to two of the child cells and FE_Nothing to the other two. Then, when I try to transfer the solution, the code aborts.

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Kaushik Das
Thank you, Marc. I just built deal.II from source (deal.II-9.3.0-pre), and my little test is passing now. Thanks for the help. -Kaushik On Wed, Dec 9, 2020 at 4:36 PM Kaushik Das wrote: > Thank you, Marc. > I am using the deal.II lib that I got from apt-get from > deal.ii-9.2.0-backports. > I

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Kaushik Das
Thank you, Marc. I am using the deal.II lib that I got via apt-get from deal.ii-9.2.0-backports. I used PETSc, and the abort happened even on 1 CPU; I tried 2, 3, and 6 CPUs and all aborted similarly. I will get the latest master branch and build that. Thanks, Kaushik On Wed, Dec 9, 2020 at 4:23 PM Marc

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Marc Fehling
From your stack trace I can see you are using PETSc and deal.II 9.2.0, which already incorporates the specified patch. Would you try to build the current master branch anyway? On Wednesday, December 9, 2020 at 2:11:59 PM UTC-7 Marc Fehling wrote: > Hi Kaushik, > > I am unable to reproduce your

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Marc Fehling
Hi Kaushik, I am unable to reproduce your problem with the code you provided on the latest build of deal.II and Trilinos. - On how many processes did you run your program? - Did you use PETSc or Trilinos? - Could you try to build deal.II on the latest master branch? There is a

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-09 Thread Kaushik Das
Hi Marc and Bruno, I was able to reproduce this abort on an even simpler test. Please see the attached file. Initial grid:
/*
 * ---------
 * | 0 | 0 |
 * ---------
 * | 1 | 1 |    0 - FEQ, 1 - FE_Nothing
 * ---------
 */
/* Set refine flags:
 * ---------
 * | R | R |    FEQ
 * ---------
 *

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-08 Thread Marc Fehling
Hi Kaushik, the `p::d::SolutionTransfer` class should be able to deal with `FENothing` elements in your example. The tricky cases are when you're coarsening a `FENothing` element with others as Bruno already pointed out (h-coarsening), or if you change a `FENothing` element to a different

Re: [deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-08 Thread Kaushik Das
Hi Bruno: Thanks for pointing that out. I tried not to refine FE_Nothing cells by modifying the refine loop (the modified test is attached): for (auto &cell : dgq_dof_handler.active_cell_iterators()) if (cell->is_locally_owned() && cell->active_fe_index() != 0) { if (counter

[deal.II] Re: Parallel distributed hp solution transfer with FE_nothing

2020-12-08 Thread Bruno Turcksin
Hi, Are you sure that your test makes sense? You randomly assign FE indices to cells, and then you refine and coarsen cells. But what does it mean to coarsen 4 cells together when one of them is FE_Nothing? What would you expect to happen? Best, Bruno On Monday, December 7, 2020 at 5:54:10 PM