Re: [deal.II] Re: Tips on writing "versatile" assembly function

2021-01-05 Thread Doug Shi-Dong
That's interesting. Seems like it was more about inlining than branch 
prediction. Surprising how much difference it made.

On Tuesday, January 5, 2021 at 6:18:38 PM UTC-5 Timo Heister wrote:

> What I forgot to say:
> We used to have something like
>
> if (use_anisotropic_viscosity==true)
> cell(i,j) += viscosity_tensor * 
> else
> cell(i,j) += viscosity_constant * 
>
> and improved the performance by making two separate assemblers (note
> that there is no function call/vtable lookup here, just an "if"). I
> don't remember how big the difference was, so I went back and found
> the PR:
> https://github.com/geodynamics/aspect/pull/2139
> Rene claims 25% fewer instructions.
>
>
>
> On Tue, Jan 5, 2021 at 4:48 PM blais...@gmail.com  
> wrote:
> >
> > Hi Timo,
> >
> > I understand. It makes a lot of sense.
> > Thanks!
> > Bruno
> >
> > On Tuesday, January 5, 2021 at 4:34:19 p.m. UTC-5 Timo Heister wrote:
> >>
> >> Hi Bruno,
> >>
> >> We mitigate the performance problem by making the decision per cell in 
> ASPECT:
> >> We have a set of "assemblers" that are executed one after each other
> >> per cell. This means the vtable access cost is small compared to the
> >> actual work.
> >> See
> >> 
> https://github.com/geodynamics/aspect/blob/b9add5f53172aac18bdbb19d14ca266e88c491dd/include/aspect/simulator/assemblers/interface.h#L493-L515
> >>
> >> On Tue, Jan 5, 2021 at 4:28 PM blais...@gmail.com  
> wrote:
> >> >
> >> > Bruno,
> >> >
> >> > Thanks, you are right. As always, measure first and then optimize 
> after. No point in optimising stuff that costs nothing...
> >> >
> >> >
> >> > On Tuesday, January 5, 2021 at 3:15:06 p.m. UTC-5 
> bruno.t...@gmail.com wrote:
> >> >>
> >> >> Bruno,
> >> >>
> >> >> If you are worried about the cost of looking up through the vtable, I 
> think that you are stuck using templates. So either use 2 or 3 and CRTP. But 
> first of all, I think that you should profile your code and verify that 
> this is a problem. There is no point in spending time refactoring your 
> code if you are going to gain less than 1%...
> >> >>
> >> >> Best,
> >> >>
> >> >> Bruno
> >> >>
> >> >> On Monday, January 4, 2021 at 3:31:45 PM UTC-5 blais...@gmail.com 
> wrote:
> >> >>>
> >> >>> Dear all,
> >> >>> I wish you all a happy new year!
> >> >>>
> >> >>> One problem we always end up facing with FEM problems is that, as 
> programs grow, more and more features are added to the equations. This leads 
> to multiple variations of the same equations (for example, Navier-Stokes 
> with Newtonian and non-Newtonian viscosity, etc.). In turn, this leads to 
> assembly functions for the non-linear systems of equations which branch out 
> to multiple possibilities.
> >> >>>
> >> >>> I was wondering what is the best approach to deal with multiple 
> "switches" in an assembly function. The naïve approach would be to use if 
> conditions, but I have a feeling that if they appear down in the assembly 
> (for example the loop on dofs), this would lead to significantly higher 
> assembly cost because the loops would spend time just checking if.
> >> >>>
> >> >>> From experience, I have seen the following approaches:
> >> >>> 1- If or switches in the assembly routine. Simplest/most versatile 
> way, but adds significant overhead
> >> >>> 2- Template the assembly function with the parameters. I think this 
> would lead to zero additional cost, but as the number of parameters grows, 
> this can become more and more complex since the various possibilities have 
> to be known at compile time.
> >> >>> 3- Use generic interface objects for common elements (for example a 
> viscosity class to calculate viscosity, etc.) and use inheritance to 
> specialise the model. I think this could be inefficient because of the need 
> to extensively look up through the vtable.
> >> >>>
> >> >>> What is the general consensus in the deal.II community on this 
> problem? What's the best way to deal with multiple possibilities in the 
> same assembly function? I would be very interested in hearing the 
> perspective of the ASPECT author since it has such a large range of sub 
> models.
> >> >>>
> >> >>>
> >>
> >>
> >>
> >> --
> >> Timo Heister
> >> http://www.math.clemson.edu/~heister/
> >

Re: [deal.II] Re: Tips on writing "versatile" assembly function

2021-01-05 Thread Doug Shi-Dong
Hello Prof. Blais,

Optimizing code is always fun, so I've had this discussion multiple times 
with a colleague. Dr. Turcksin's comment was also our conclusion. Templating 
seems to be the way to go, but only for options/variations where mispredicted 
branches actually slow down your performance, since the branch is then 
resolved at compile time. However, from my understanding, current processors 
are pretty good at branch prediction for if-statements: if you turn on either 
Newtonian or non-Newtonian viscosity and you don't flip-flop between them, 
the cost is relatively low.
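
To make the compile-time switch concrete, here is a rough sketch (the names 
are made up for illustration and are not taken from any existing code); each 
template instantiation contains straight-line code, so there is no per-entry 
branch left for the CPU to predict:

#include <vector>

// Compile-time switch: the "if constexpr" is resolved when the template is
// instantiated, so assemble_cell<true> and assemble_cell<false> each compile
// down to a branch-free inner loop.
template <bool use_anisotropic_viscosity>
void assemble_cell(std::vector<double>       &cell_matrix,   // dofs x dofs, row-major
                   const std::vector<double> &coupling,      // precomputed grad-grad terms
                   const unsigned int         dofs_per_cell,
                   const double               viscosity_constant,
                   const double               anisotropic_factor)
{
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
    for (unsigned int j = 0; j < dofs_per_cell; ++j)
      {
        double contribution = coupling[i * dofs_per_cell + j];
        if constexpr (use_anisotropic_viscosity)
          contribution *= anisotropic_factor;
        else
          contribution *= viscosity_constant;
        cell_matrix[i * dofs_per_cell + j] += contribution;
      }
}

// Dispatch once, outside the hot loops:
//   if (params.anisotropic) assemble_cell<true>(...);
//   else                    assemble_cell<false>(...);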

Another technique you might find interesting is branchless programming. It 
basically computes both sides of the "if-statement" and discards the result 
that is not needed. This can be useful when your if-statement is deep inside 
your loops and the computed operation is inexpensive. It's also easy to 
implement with very little refactoring.
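
As a small illustration (again with made-up names), a branchless blend of two 
viscosity models could look like this; both values are computed and a 0/1 mask 
selects the one that is kept:

// Branchless selection: no jump instruction inside the hot loop, just a
// multiply-add. Only worthwhile when both candidate values are cheap.
inline double blended_viscosity(const bool   use_non_newtonian,
                                const double newtonian_value,
                                const double non_newtonian_value)
{
  const double mask = static_cast<double>(use_non_newtonian); // 0.0 or 1.0
  return mask * non_newtonian_value + (1.0 - mask) * newtonian_value;
}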

Lastly, this might not be as helpful, but C++20 has the [[likely]] and 
[[unlikely]] attributes that can hint the compiler to favour a branch.
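
A minimal C++20 sketch (the helper and the constant are hypothetical):

// The attribute only hints the optimizer about the expected path;
// correctness is unaffected if the hint turns out to be wrong.
double power_law_viscosity(double shear_rate);  // hypothetical helper
constexpr double constant_viscosity = 1.0e-3;   // hypothetical constant

double viscosity(const double shear_rate, const bool non_newtonian)
{
  if (non_newtonian) [[unlikely]]
    return power_law_viscosity(shear_rate);
  return constant_viscosity;
}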

Best regards,

Doug
On Tuesday, January 5, 2021 at 4:48:00 PM UTC-5 blais...@gmail.com wrote:

> Hi Timo,
>
> I understand. It makes a lot of sense.
> Thanks!
> Bruno
>
> On Tuesday, January 5, 2021 at 4:34:19 p.m. UTC-5 Timo Heister wrote:
>
>> Hi Bruno, 
>>
>> We mitigate the performance problem by making the decision per cell in 
>> ASPECT: 
>> We have a set of "assemblers" that are executed one after each other 
>> per cell. This means the vtable access cost is small compared to the 
>> actual work. 
>> See 
>>
>> https://github.com/geodynamics/aspect/blob/b9add5f53172aac18bdbb19d14ca266e88c491dd/include/aspect/simulator/assemblers/interface.h#L493-L515
>>  
>>
>> On Tue, Jan 5, 2021 at 4:28 PM blais...@gmail.com  
>> wrote: 
>> > 
>> > Bruno, 
>> > 
>> > Thanks, you are right. As always, measure first and then optimize 
>> after. No point in optimising stuff that costs nothing... 
>> > 
>> > 
>> > On Tuesday, January 5, 2021 at 3:15:06 p.m. UTC-5 bruno.t...@gmail.com 
>> wrote: 
>> >> 
>> >> Bruno, 
>> >> 
>> >> If you are worried about the cost of looking up through the vtable, I 
>> think that you are stuck using templates. So either use 2 or 3 and CRTP. But 
>> first of all, I think that you should profile your code and verify that 
>> this is a problem. There is no point in spending time refactoring your 
>> code if you are going to gain less than 1%... 
>> >> 
>> >> Best, 
>> >> 
>> >> Bruno 
>> >> 
>> >> On Monday, January 4, 2021 at 3:31:45 PM UTC-5 blais...@gmail.com 
>> wrote: 
>> >>> 
>> >>> Dear all, 
>> >>> I wish you all a happy new year! 
>> >>> 
>> >>> One problem we always end up facing with FEM problems is that, as 
>> programs grow, more and more features are added to the equations. This leads 
>> to multiple variations of the same equations (for example, Navier-Stokes 
>> with Newtonian and non-Newtonian viscosity, etc.). In turn, this leads to 
>> assembly functions for the non-linear systems of equations which branch out 
>> to multiple possibilities. 
>> >>> 
>> >>> I was wondering what is the best approach to deal with multiple 
>> "switches" in an assembly function. The naïve approach would be to use if 
>> conditions, but I have a feeling that if they appear down in the assembly 
>> (for example the loop on dofs), this would lead to significantly higher 
>> assembly cost because the loops would spend time just checking if. 
>> >>> 
>> >>> From experience, I have seen the following approaches: 
>> >>> 1- If or switches in the assembly routine. Simplest/most versatile 
>> way, but adds significant overhead 
>> >>> 2- Template the assembly function with the parameters. I think this 
>> would lead to zero additional cost, but as the number of parameters grows, 
>> this can become more and more complex since the various possibilities have 
>> to be known at compile time. 
>> >>> 3- Use generic interface objects for common elements (for example a 
>> viscosity class to calculate viscosity, etc.) and use inheritance to 
>> specialise the model. I think this could be inefficient because of the need 
>> to extensively look up through the vtable. 
>> >>> 
>> >>> What is the general consensus in the deal.II community on this 
>> problem? What's the best way to deal with multiple

Re: [deal.II] Re: Finding vertices on boundaries

2020-10-22 Thread Doug
Thanks for the clarification! I initially thought ghost neighbours would
only include face neighbours. That solves the problem.
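
For reference, a minimal sketch of the approach from this thread (loop over 
locally owned and ghost cells and collect the vertices of boundary faces into 
a std::set; deal.II calls written from memory, so treat the details as an 
assumption):

#include <deal.II/base/geometry_info.h>
#include <deal.II/grid/tria.h>

#include <set>

// Collect the indices of all vertices that sit on a boundary face of any
// locally owned or ghost cell; the ghost layer contains all vertex neighbors
// of owned cells, so the diagonal corner case above is caught as well.
template <int dim>
std::set<unsigned int>
collect_boundary_vertex_indices(const dealii::Triangulation<dim> &triangulation)
{
  std::set<unsigned int> boundary_vertices;
  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned() || cell->is_ghost())
      for (unsigned int f = 0; f < dealii::GeometryInfo<dim>::faces_per_cell; ++f)
        if (cell->face(f)->at_boundary())
          for (unsigned int v = 0; v < dealii::GeometryInfo<dim>::vertices_per_face; ++v)
            boundary_vertices.insert(cell->face(f)->vertex_index(v));
  return boundary_vertices;
}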


On Thu, Oct 22, 2020, 09:33 Wolfgang Bangerth 
wrote:

> On 10/21/20 7:46 PM, Doug Shi-Dong wrote:
> > Thanks, yes, that should work well for 2D. I actually just thought of a
> corner
> > case (pun intended) in 3D. It is possible for a cell's vertex to be on a
> > boundary, without any of its neighbours actually having a boundary face.
>
> Yes, precisely: None of the face neighbors have to be on the boundary. But
> at least one of the vertex neighbors has to be -- which is why the ghost
> layer consists of all of the vertex neighbors of locally owned cells.
>
> You can create the same situation in 2d, by the way: Start with a single
> vertex and put 5 or more quadrilateral cells around it. Then put another
> layer
> of cells around those. Then remove one of the cells that are adjacent to
> the
> original vertex and look at the cell opposite from it.
>
>
> > Any advice on how to approach this since the current processor would not
> find
> > it within its ghost cells?
>
> Yes, it will -- see above.
> W.
>
> --
> 
> Wolfgang Bangerth  email: bange...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Re: Finding vertices on boundaries

2020-10-21 Thread Doug Shi-Dong
Thanks, yes, that should work well for 2D. I actually just thought of a 
corner case (pun intended) in 3D. It is possible for a cell's vertex to be 
on a boundary without any of its neighbours actually having a boundary face.

Imagine a 3D cube with a hollow inner cube. The cells directly diagonal to 
the corner cells will have a vertex on the boundary, but none of their 
neighbours actually lie on the boundary.

Any advice on how to approach this since the current processor would not 
find it within its ghost cells?

Doug
On Wednesday, October 21, 2020 at 9:24:35 PM UTC-4 Wolfgang Bangerth wrote:

> On 10/21/20 1:05 PM, Doug Shi-Dong wrote:
> > 
> > It seems like I might have to loop over the neighbors of every cell to 
> make 
> > sure that account for all the corners/edges DoFs that might be on a 
> boundary, 
> > but that seems inefficient.
> > 
> > Anyone knows of a better way within deal.II to deal with this?
>
> Each vertex of the locally owned part of the domain is surrounded by 
> either 
> locally owned cells or ghost cells. As a consequence, if you loop over all 
> locally owned cells and all ghost cells, you will find that vertex as one 
> vertex on a boundary face.
>
> You are right that it is difficult to do this by starting at one vertex. 
> The 
> easier approach is to first loop over all locally-owned + ghost cells, 
> loop 
> over their faces, see whether a face is on the boundary, and then collect 
> all 
> vertices that you are interested in in a std::set or similar. Then, later 
> on 
> when you are curious about one specific vertex, you only need to look 
> whether 
> it's in the std::set.
>
> Best
> W.
>
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Re: Finding vertices on boundaries

2020-10-21 Thread Doug Shi-Dong
Hello everyone,

Sorry to revive this thread, but I have a similar question that follows up 
on the original one. In the case of a distributed grid, there might be 
vertices/DoFs on a corner (or even an edge in 3D) of a cell. 

For example, see the attached figure (at the trailing edge of the airfoil, 
in the yellow square). Domain 0 is not aware that one of its corners is on a 
boundary. To aggravate the issue, the DoFHandler with FE_Q distributes the 
DoFs such that Domain 0 owns that corner DoF.

It seems like I might have to loop over the neighbors of every cell to make 
sure that I account for all the corner/edge DoFs that might be on a 
boundary, but that seems inefficient.

Does anyone know of a better way within deal.II to deal with this?

Best regards,

Doug

On Friday, May 1, 2020 at 7:21:42 PM UTC-4 Wolfgang Bangerth wrote:

> On 5/1/20 5:07 PM, Shahab Golshan wrote:
> > I have also another question on the function 
> */find_cells_adjacent_to_vertex/*.
> > As input argument, it needs the vertex index, not the vertex as a point. 
> How 
> > can I find the index of a vertex (to use in this function) from the 
> vertex 
> > location (point).
>
> cell->vertex_index(v)
>
> is your friend, where v=0...2^d :-)
>
> Best
> W.
>
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Nested reverse and forward-mode Sacado memory leak

2020-08-11 Thread Doug Shi-Dong

Thanks for the answer! I am currently using the master branch of Trilinos 
to make sure that a newer version did not fix this.

I had some second-guessing about how I had configured Sacado itself, since 
there seem to be some options that change how the reverse mode reuses the 
memory through RAD_REINIT 
<https://github.com/trilinos/Trilinos/blob/17cdb33f3664f4901ab9d0935af609f516031a86/packages/sacado/src/README_RAD#L228-L247>.
However, I've been unable to compile with this flag's options. Hopefully 
they can shed some light on the issue.

Sounds good. I'll create a pull-request to update the documentation in  
https://github.com/dealii/dealii/blob/master/include/deal.II/differentiation/ad/ad_number_types.h#L87
and in 
https://github.com/dealii/dealii/blob/master/doc/doxygen/headers/automatic_and_symbolic_differentiation.h#L303
referring to the AD number type.

I actually implemented sacado_fad_fad first because I understood it 
theoretically. It turns out it was a bottleneck in my code, so I need 
sacado_rad_fad. I might just switch the AD package.

Thanks again; I'm following up with a documentation PR: 
https://github.com/dealii/dealii/pull/10818

Doug

On Tuesday, August 11, 2020 at 4:33:37 PM UTC-4 Jean-Paul Pelteret wrote:

> Hi Doug,
>
> Yes, sadly I have experienced the same but wasn’t able to allot the time 
> to figure out if it was indeed a bug in Sacado (and maybe something that’s 
> already been fixed upstream in a newer version of Trilinos than I was 
> using) or something that we did. I’m really glad that you managed to supply 
> them with a MWE that definitely reproduces the issue, since this 
> disambiguates their code from our drivers. Thank you for that. For your 
> interest, I’ve attached the MWE that I made at the time. I can only suppose 
> that back then I wasn’t confident that I’d recreated the conditions for the 
> leak, because the valgrind logs that I still have didn’t report any memory 
> as being “definitely lost”. I guess the trick would have been to have stuck 
> all of this in a loop, which I clearly didn’t think to do :-/
>
> Since you’ve confirmed that you’ve experienced the same, I think that it 
> would be worthwhile to add a note to this effect in our documentation. If 
> you'd like to do so yourself, then feel free to make a suggestion where you 
> think it would be appropriate. My first inclination would have been 
> where the enumerated type sacado_rad_fad is documented, i.e.
>
> https://github.com/dealii/dealii/blob/master/include/deal.II/differentiation/ad/ad_number_types.h#L87
> but of course that’s completely up for discussion. Either way, I think 
> that this should be documented because this issue may persist for some 
> time, and it would be good to have a warning in place until we know which 
> version of Trilinos has a fix in place. Perhaps we should also add a link 
> there to your issue on their GitHub page so that the reader can easily find 
> where to check the status of the issue?
>
> I hope that you’ve spotted the sacado_fad_fad alternative that should 
> work without issue. In many cases I would expect it to be less performant 
> than the (buggy) Rad-Fad implementation but they both still have, at the 
> very least, some use in debugging or prototyping.
>
> Best,
> Jean-Paul
>
>



[deal.II] Nested reverse and forward-mode Sacado memory leak

2020-08-10 Thread Doug Shi-Dong
Hello everyone,

I found that Sacado's nested reverse and forward mode AD leads to a memory 
leak when used multiple times. I have opened an issue on their end:
https://github.com/trilinos/Trilinos/issues/7741
and the steps to reproduce are pretty simple.

However, I've found that the community here is pretty good at responding. I 
was wondering if anyone here has previously encountered/fixed this. I know 
that J-P Pelteret's code-gallery example 
Quasi_static_Finite_strain_Compressible_Elasticity with this AD type turned 
on by using
set Automatic differentiation order = 2 
<https://github.com/dealii/code-gallery/blob/815a7adee42bcfb4a2d8c3dd2f7de580f67e38f6/Quasi_static_Finite_strain_Compressible_Elasticity/parameters.prm#L8>
does lead to a memory leak.

Basically, for every cell's computation, a little bit of memory leaks, 
which quickly blows up for large problems. 

Please let me know what you think. It's likely completely in the hands of 
the Sacado group unless I start digging into their code. However, if it is 
something that will likely not get fixed, we might want to put a warning on 
the "Supported automatic differentiation libraries" page.

Best regards,

Doug



Re: [deal.II] How to transform points on faces from unit cells to real cells?

2020-06-22 Thread Doug Shi-Dong
Yes, I can confirm: I am using QProjector with a dummy face Quadrature right 
now and it works as intended.
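
For anyone searching later, here is a rough sketch of that trick (written 
from memory; the QProjector interface has changed between deal.II versions, 
so treat the exact calls as an assumption):

#include <deal.II/base/point.h>
#include <deal.II/base/qprojector.h>
#include <deal.II/base/quadrature.h>
#include <deal.II/fe/fe.h>
#include <deal.II/fe/mapping.h>
#include <deal.II/grid/tria.h>

#include <vector>

// Map the unit face support points of a finite element to real space on a
// given face of a given cell.
template <int dim>
std::vector<dealii::Point<dim>>
face_support_points_in_real_space(
  const dealii::Mapping<dim>                               &mapping,
  const dealii::FiniteElement<dim>                         &fe,
  const typename dealii::Triangulation<dim>::cell_iterator &cell,
  const unsigned int                                        face_no)
{
  // Wrap the Point<dim-1> face support points into a dummy (weightless)
  // face quadrature.
  const dealii::Quadrature<dim - 1> dummy_face_quadrature(
    fe.get_unit_face_support_points());

  // Project them onto the chosen face of the unit cell, which turns them
  // into Point<dim> objects...
  const dealii::Quadrature<dim> unit_cell_points =
    dealii::QProjector<dim>::project_to_face(dummy_face_quadrature, face_no);

  // ...that the mapping can push forward to the real cell.
  std::vector<dealii::Point<dim>> real_points;
  for (unsigned int q = 0; q < unit_cell_points.size(); ++q)
    real_points.push_back(
      mapping.transform_unit_to_real_cell(cell, unit_cell_points.point(q)));
  return real_points;
}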

Doug



Re: [deal.II] How to transform points on faces from unit cells to real cells?

2020-06-22 Thread Doug Shi-Dong
Agreed, sorry for the confusion. I looked back at my code and it's probably 
the most straightforward way.

For some other unrelated reason I didn't want to use FEValues and instead 
dealt directly with the Mapping class.

Doug

On Monday, June 22, 2020 at 5:05:57 PM UTC-4, Simon Sticko wrote:
>
> Hi,
>
> Lex, given your previous question:
>
> https://groups.google.com/forum/#!topic/dealii/xghVE7Km78o
>
> If you are already using an FEFaceValues object with a dummy-quadrature 
> (created from FiniteElement::get_unit_face_support_points())
> then you can use one of the following functions on FEFaceValues to get the 
> location of the support points in real space:
>
> quadrature_point()
> get_quadrature_points()
>
> Best,
> Simon
>
> On Monday, June 22, 2020 at 10:53:10 PM UTC+2, Lex Lee wrote:
>>
>> Hello Doug,
>>
>>
>> Thanks for your help.
>>
>> However, I need to say, I just had done exactly what you described before 
>> I posted this question.
>>
>> "FiniteElement::get_unit_face_support_points()” returns Points;
>>
>> “transform_unit_to_real_cell()” requires Point as input.
>>
>> The dimensions of points in cells is different from that on faces. That’s 
>> one of the reasons why I posted this question.
>>
>>
>> If I misunderstand you, please kindly let me know.
>>
>>
>>
>> Regards,
>> Lex 
>>
>>
>> On Jun 21, 2020, at 7:05 PM, Doug Shi-Dong  wrote:
>>
>> Hello Lex,
>>
>> transform_unit_to_real_cell() takes a point as unit cell's point an input 
>> and returns the physical point location. Therefore, you can give the unit 
>> face support points from the FiniteElement::get_unit_face_support_points() 
>> (assuming your FiniteElement has support points). And pass it to 
>> transform_unit_to_real_cell().
>>
>> Doug
>>
>> On Sunday, June 21, 2020 at 8:10:30 PM UTC-4, Lex Lee wrote:
>>>
>>> Hello Doug,
>>>
>>> Thanks for your kind help.
>>>
>>> I am trying to understand your suggestion in the original email.
>>>
>>> Are you suggesting me using the member function 
>>> "transform_unit_to_real_cell()" to get all support points in a unit cell? 
>>> In the step, sort out the face points I need? Am I correct?
>>>
>>>
>>> Regards,
>>>
>>> Lex
>>>
>>> On Sunday, June 21, 2020 at 5:00:35 PM UTC-7, Doug Shi-Dong wrote:
>>>>
>>>> Hello Lex,
>>>>
>>>> You were close.
>>>>
>>>>
>>>> https://www.dealii.org/current/doxygen/deal.II/classMapping.html#ae5df63553eb8ed170c3b90524853dd48
>>>>
>>>> The project_real_point_to_unit_point_on_face() is useful is you take a 
>>>> "volume" point within the cell. Since you are mapping from unit to real, 
>>>> you would always know whether your point is on the face or not.
>>>>
>>>> Therefore, transform_unit_to_real_cell() should do the trick. Unless 
>>>> you somehow actually need to project a "volume" point, but projecting 
>>>> within unit cell is easy, which you would then follow with 
>>>> transform_unit_to_real_cell().
>>>>
>>>> Best regards,
>>>>
>>>> Doug
>>>>
>>>> On Saturday, June 20, 2020 at 8:16:22 PM UTC-4, Lex Lee wrote:
>>>>>
>>>>> Hello Deal.II Users,
>>>>>
>>>>>
>>>>> I want to get the physical (geometry) coordinates for support points 
>>>>> on cell faces. Also I know that there are such member functions that can 
>>>>> me 
>>>>> map points between reference and real cells in this link: 
>>>>> https://www.dealii.org/current/doxygen/deal.II/classMapping.html . 
>>>>>
>>>>> However, via the link mentioned above, I only find a member function 
>>>>> called project_real_point_to_unit_point_on_face 
>>>>> <https://www.dealii.org/current/doxygen/deal.II/classMapping.html#a078f4e617fdb287e1dc7a5efa227b0ae>
>>>>>  (real 
>>>>> to unit) , no functions like project_unit_point_to_real_point_on_face 
>>>>> (unit to real).  
>>>>>
>>>>> Could someone kindly tell me how I can solve this problem?  
>>>>>
>>>>>
>>>>> Regards,
>>>>>
>>>>> Lex
>>>>>
>>>>

[deal.II] Re: How to transform points on faces from unit cells to real cells?

2020-06-21 Thread Doug Shi-Dong
Hello Lex,

transform_unit_to_real_cell() takes a unit cell point as input 
and returns the corresponding physical point location. Therefore, you can 
take the unit face support points from FiniteElement::get_unit_face_support_points() 
(assuming your FiniteElement has support points) and pass them to 
transform_unit_to_real_cell().

Doug

On Sunday, June 21, 2020 at 8:10:30 PM UTC-4, Lex Lee wrote:
>
> Hello Doug,
>
> Thanks for your kind help.
>
> I am trying to understand your suggestion in the original email.
>
> Are you suggesting me using the member function 
> "transform_unit_to_real_cell()" to get all support points in a unit cell? 
> In the step, sort out the face points I need? Am I correct?
>
>
> Regards,
>
> Lex
>
> On Sunday, June 21, 2020 at 5:00:35 PM UTC-7, Doug Shi-Dong wrote:
>>
>> Hello Lex,
>>
>> You were close.
>>
>>
>> https://www.dealii.org/current/doxygen/deal.II/classMapping.html#ae5df63553eb8ed170c3b90524853dd48
>>
>> The project_real_point_to_unit_point_on_face() is useful is you take a 
>> "volume" point within the cell. Since you are mapping from unit to real, 
>> you would always know whether your point is on the face or not.
>>
>> Therefore, transform_unit_to_real_cell() should do the trick. Unless you 
>> somehow actually need to project a "volume" point, but projecting within 
>> unit cell is easy, which you would then follow with 
>> transform_unit_to_real_cell().
>>
>> Best regards,
>>
>> Doug
>>
>> On Saturday, June 20, 2020 at 8:16:22 PM UTC-4, Lex Lee wrote:
>>>
>>> Hello Deal.II Users,
>>>
>>>
>>> I want to get the physical (geometry) coordinates for support points on 
>>> cell faces. Also I know that there are such member functions that can me 
>>> map points between reference and real cells in this link: 
>>> https://www.dealii.org/current/doxygen/deal.II/classMapping.html . 
>>>
>>> However, via the link mentioned above, I only find a member function 
>>> called project_real_point_to_unit_point_on_face 
>>> <https://www.dealii.org/current/doxygen/deal.II/classMapping.html#a078f4e617fdb287e1dc7a5efa227b0ae>
>>>  (real 
>>> to unit) , no functions like project_unit_point_to_real_point_on_face 
>>> (unit to real).  
>>>
>>> Could someone kindly tell me how I can solve this problem?  
>>>
>>>
>>> Regards,
>>>
>>> Lex
>>>
>>



[deal.II] Re: How to transform points on faces from unit cells to real cells?

2020-06-21 Thread Doug Shi-Dong
Hello Lex,

You were close.

https://www.dealii.org/current/doxygen/deal.II/classMapping.html#ae5df63553eb8ed170c3b90524853dd48

The project_real_point_to_unit_point_on_face() function is useful if you take a 
"volume" point within the cell. Since you are mapping from unit to real, 
you always know whether your point is on the face or not.

Therefore, transform_unit_to_real_cell() should do the trick, unless you 
somehow actually need to project a "volume" point; but projecting within the 
unit cell is easy, and you would then follow it with 
transform_unit_to_real_cell().

Best regards,

Doug

On Saturday, June 20, 2020 at 8:16:22 PM UTC-4, Lex Lee wrote:
>
> Hello Deal.II Users,
>
>
> I want to get the physical (geometry) coordinates for support points on 
> cell faces. Also I know that there are such member functions that can help me 
> map points between reference and real cells in this link: 
> https://www.dealii.org/current/doxygen/deal.II/classMapping.html . 
>
> However, via the link mentioned above, I only find a member function 
> called project_real_point_to_unit_point_on_face 
> <https://www.dealii.org/current/doxygen/deal.II/classMapping.html#a078f4e617fdb287e1dc7a5efa227b0ae>
>  (real 
> to unit) , no functions like project_unit_point_to_real_point_on_face 
> (unit to real).  
>
> Could someone kindly tell me how I can solve this problem?  
>
>
> Regards,
>
> Lex
>



[deal.II] Re: Strategy to snap the boundary of a triangulation to a manifold

2020-06-09 Thread Doug Shi-Dong
Prof. Blais,

Bonjour fellow Montreal-er (McGill), land of the bagel and poutine.

I have also given some thoughts to robustly generate our meshes with GMSH.

I think it's great to see that we've arrived at similar steps to the ones you 
described:

   1. Provide GMSH some CAD/NURBS parametrization from which it can generate a 
   hex mesh (or tets which you manually subdivide).
   2. Read the mesh with linear elements into the Triangulation, as well as 
   the NURBS parametrization as the Manifold.
   3. Represent the entire mesh with a high-order polynomial basis by 
   associating the Triangulation with a MappingFEField built from an 
   FESystem(FE_Q(order)), where the DoFs represent vertex locations. 
   (Optional, I guess.)
   4. Use Manifold::project_to_manifold(), as referred to by 
   Prof. Bangerth, to obtain displacements of the surface mesh. Surface points 
   already on the Manifold will have zero displacement.
   5. Use elasticity equations just like in doi = {10.2514/6.2009-949} to 
   robustly displace the volume mesh, or use the optimization-based method 
   from the GMSH group itself, doi = {10.1016/j.jcp.2012.08.051}.

A student in our group is tackling some of those steps later on. 

From your previous post, it seems that you have completed steps 1 and 2. 
Step 4 really seems to be the missing ingredient.

I currently have steps 3 and 5 for shape optimization, where the initial 
mesh is generated within deal.II and the displacements in step 4 are given by 
an optimizer. 
(ugly code: 
https://github.com/dougshidong/PHiLiP/tree/full_space_optimization/src/mesh)

I'd love to hear an update when you succeed!

Doug

On Monday, June 8, 2020 at 12:22:08 PM UTC-4, Bruno Blais wrote:
>
> Dear all,
> I hope you are doing well.
>
> In my endless quest for robust mesh generation of hex meshes using GMSH, I 
> have managed to come up with a very robust strategy to generate hex-only 
> meshes
> My only issue (which is a major one) is that this implies that my 
> decomposition from tet to hex adds nodes that are not "snapped" to the 
> boundary, but that are only linear interpolation of the other node on the 
> triangular faces.
> Consequently, my quest remains unfulfilled.
>
> Meshing through high-order and snapping the additional node to a 
> high-order mesh from within GMSH is very troublesome and not very robust 
> (and also very time consuming). However, an idea came to mind.
> I was wondering if there could be an easy way to "snap" my faces to the 
> manifold to which they belong.
>
> My problem is thus the following:
> - Given a triangulation and a manifold
> - Some nodes are exactly on the manifolds (the original nodes of the tets) 
> and some are not (the added nodes in the subdivision)
> - What would be the best way to deform mesh so that the non-conforming 
> node get deformed to the position which would be implied by the manifold? I 
> think I could also make the process more robust by solving an additional 
> elasticity equation during the deformation to deform the entire mesh 
> instead of just the nodes close to the manifold.
>
>
> Would any of you have a suggestion on how best to achieve the deformation 
> of the nodes to match the manifold?
>
>
>



[deal.II] Re: LinearOperator MPI Parallel Vector

2020-04-22 Thread Doug Shi-Dong
Moreover, I just found that the AffineConstraints functions have not been 
instantiated for a mix of TrilinosWrappers::SparseMatrix and 
LA::dist::Vector, hence this combination has likely not been tried out/tested.

It seems like LA::dist::Vector is just not meant to be used for anything 
other than matrix-free?



[deal.II] LinearOperator MPI Parallel Vector

2020-04-22 Thread Doug Shi-Dong
Hello,

I have been using the LinearAlgebra::distributed::Vector class for MPI 
parallelization since the way it works is closer to what I have worked with 
before and it seemed more flexible.

However, for parallelization, I have to use either a Trilinos or a PETSc 
matrix, since the native deal.II SparseMatrix is only serial (correct me if 
I'm wrong). It seems like I can do matrix-vector multiplications just fine 
between LA::dist::Vector and the wrapped matrices. However, when it gets to 
LinearOperator, it looks like a TrilinosWrappers::SparseMatrix wrapped 
within a LinearOperator only works with a TrilinosWrappers::MPI::Vector, 
and the same holds for PETSc.

I am wondering what the community is using as their go-to parallel matrices 
and vectors, and whether you've been mixing them, e.g. matrix-free with 
Trilinos/PETSc vectors, or PETSc matrices with LA::dist::Vector. From what 
I've seen in some tutorials, there is a way to code it up such that either 
the Trilinos or the PETSc wrappers are used interchangeably, but 
LA::dist::Vector does not seem to be nicely interchangeable with the 
Trilinos/PETSc ones. 

I was kind of hoping to be able to use LA::dist::Vector for everything; am 
I expecting too much from it? Maybe I just need to fix the LinearOperator 
implementation to mix and match the data structures? If I do commit to 
Trilinos matrices/vectors, will I have trouble doing some matrix-free or 
GPU stuff in the far future?

Best regards,

Doug



Re: [deal.II] Constrained linear operator on distributed data structures

2019-12-31 Thread Doug Shi-Dong
Hello Dr. Wells,

Funnily enough, I've got most of it working and it seems to work great in 
parallel, with Dirichlet boundary conditions (inhomogeneous constraints) 
and hanging nodes (the linear operator part of the constraints). In serial, 
it works fine with the Dirichlet BCs.

The one scenario where it *doesn't* work well for me is actually in *serial 
with hanging nodes*. 

I'll report back here in the upcoming month, or open an issue on GitHub if 
I can pinpoint where it comes from. 

Doug

On Tuesday, December 31, 2019 at 3:47:23 PM UTC-5, David Wells wrote:
>
> Hi Doug, 
>
> If I recall correctly, we added that comment since this is a part of 
> the library that is not very well tested. The code is probably fine. 
>
> Thanks, 
> David 
>
> On Tue, Dec 24, 2019 at 2:33 PM Doug Shi-Dong  > wrote: 
> > 
> > Hello everyone, 
> > 
> > I see that some functions within the constrained_linear_operator.h have 
> a note saying 
> > 
> > Currently, this function may not work correctly for distributed data 
> structures. 
> > 
> > I am currently using it in parallel and seems to be working fine for a 
> very simple linear elasticity problem with hanging nodes and Dirichlet 
> boundary condition. I was wondering what I should be careful about. In 
> which scenarios would those functions fail? 
> > 
> > I couldn't find any issues on GitHub regarding those, or any mention of 
> a specific problem when git blaming. 
> > 
> > Happy holidays! 
> > 
> > Doug 
> > 
>
>



[deal.II] Re: Converting LinearAlgebra::distributed::Vector to TrilinosWrappers::MPI::Vector

2019-12-24 Thread Doug Shi-Dong
Replying to my own post since I found the solution, in case someone else 
ever encounters this.

It was a silly bug from my end. I basically used the empty constructor of 
LinearAlgebra::ReadWriteVector to transfer the LA::dist::Vector into the 
TrilinosWrappers::MPI::Vector. As a consequence, the resulting Trilinos 
vector was empty and would of course not have the same parallel 
distribution as the matrix it was supposed to be multiplied with.

Doug

On Wednesday, December 4, 2019 at 6:25:49 PM UTC-5, Doug Shi-Dong wrote:
>
> Hello,
>
> I am trying to solve a block system using the LinearOperators and Trilinos 
> matrices and vectors.
>
> Currently, all my matrices uses TrilinosWrappers::SparseMatrix, but my 
> vectors use the deal.II LinearAlgebra::distributed::Vector because of how 
> easily it handles ghosted vectors.
>
> When time comes to solve my typical linear system, I directly use the 
> Epetra_Vector, and give it a View, BlockMap, and vector pointer, as done in 
> step-33. For example,
>
> Epetra_Vector b(View,
> system_matrix.trilinos_matrix().RangeMap(),
> right_hand_side.begin());
>
> Note that the RangeMap of this matrix corresponds to the right_hand_side 
> locally relevant degrees of freedom.
>
> However, now that I am using the deal.II LinearOperator, I need to use the 
> TrilinosWrappers::MPI::Vector, to be consistent with the 
> TrilinosWrappers::SparseMatrix. Now this is partly fine, since I can 
> transfer dealii::LA::dist::Vector my right_hand_side to a 
> TrilinosWrappers::MPI::Vector 
> right_hand_side_trilinos using LinearAlgebra::ReadWriteVector. However, the 
> issue is that the BlockMap of right_hand_side_trilinos does not 
> correspond to the system_matrix.trilinos_matrix().RangeMap(). As a result, 
> I can't perform matrix-vector multiplication between the system_matrix and 
> the right_hand_side_trilinos.
>
> The question is basically: How can I convert a 
> LinearAlgebra::distributed::Vector into a TrilinosWrappers::MPI::Vector 
> such that the deal.II Vector's locally relevant degrees of freedom 
> correspond to the Trilinos' Vector BlockMap in order to perform the 
> appropriate matrix-vector multiplication?
>
> Another note: the matrix I'm looking to use is rectangular, and therefore 
> DomainMap() != RangeMap().
>
> Best regards,
>
> Doug
>



[deal.II] Constrained linear operator on distributed data structures

2019-12-24 Thread Doug Shi-Dong
Hello everyone,

I see that some functions within constrained_linear_operator.h have a 
note saying

Currently, this function may not work correctly for distributed data 
structures.

I am currently using it in parallel and it seems to be working fine for a very 
simple linear elasticity problem with hanging nodes and Dirichlet boundary 
conditions. I was wondering what I should be careful about. In which 
scenarios would those functions fail?

I couldn't find any issues on GitHub regarding those, or any mention of a 
specific problem when git blaming.

Happy holidays!

Doug



[deal.II] Converting LinearAlgebra::distributed::Vector to TrilinosWrappers::MPI::Vector

2019-12-04 Thread Doug Shi-Dong
Hello,

I am trying to solve a block system using the LinearOperators and Trilinos 
matrices and vectors.

Currently, all my matrices use TrilinosWrappers::SparseMatrix, but my 
vectors use the deal.II LinearAlgebra::distributed::Vector because of how 
easily it handles ghosted vectors.

When the time comes to solve my typical linear system, I directly use the 
Epetra_Vector, and give it a View, a BlockMap, and a vector pointer, as done in 
step-33. For example,

Epetra_Vector b(View,
system_matrix.trilinos_matrix().RangeMap(),
right_hand_side.begin());

Note that the RangeMap of this matrix corresponds to the right_hand_side 
locally relevant degrees of freedom.

However, now that I am using the deal.II LinearOperator, I need to use the 
TrilinosWrappers::MPI::Vector, to be consistent with the 
TrilinosWrappers::SparseMatrix. Now this is partly fine, since I can 
transfer my dealii::LA::dist::Vector right_hand_side to a 
TrilinosWrappers::MPI::Vector right_hand_side_trilinos using 
LinearAlgebra::ReadWriteVector. However, the 
issue is that the BlockMap of right_hand_side_trilinos does not correspond 
to the system_matrix.trilinos_matrix().RangeMap(). As a result, I can't 
perform matrix-vector multiplication between the system_matrix and 
right_hand_side_trilinos.

The question is basically: how can I convert a 
LinearAlgebra::distributed::Vector into a TrilinosWrappers::MPI::Vector 
such that the deal.II Vector's locally relevant degrees of freedom 
correspond to the Trilinos Vector's BlockMap, in order to perform the 
appropriate matrix-vector multiplication?

Another note: the matrix I'm looking to use is rectangular, and therefore 
DomainMap() != RangeMap().

Best regards,

Doug



Re: [deal.II] Instantiation of vector_tools_integrate_difference

2019-11-06 Thread Doug Shi-Dong
Not sure why it builds fine locally, but that fixed it! Thank you!

Doug

On Wednesday, November 6, 2019 at 11:06:47 AM UTC-5, Wolfgang Bangerth 
wrote:
>
> On 11/6/19 8:49 AM, Doug Shi-Dong wrote: 
> > 
> > Any hints about what could be causing this? 
>
> I thought we had already fixed that a couple of days ago. Can you update 
> to master? 
>
> Best 
>   W. 
>
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>



[deal.II] Re: Docker MPI

2019-10-23 Thread Doug Shi-Dong
Dr. Turcksin,

Thank you for the answer. That makes sense as to why I couldn't find it using 
apt search. CMake's FindMPI was unable to find it though. I might just need to 
give it some hints about the spack MPI location.

Either way, I realised that I'm using some features with bugs that were 
fixed in the current development version.

Are you aware of anyone hosting a "development" Docker build for their own 
continuous integration? I know this can be very expensive to keep up with 
the commits. Otherwise, I guess I should just build my own Docker image with 
my current deal.II fork.

Best regards,

Doug

On Tuesday, October 22, 2019 at 8:18:22 AM UTC-4, Bruno Turcksin wrote:
>
> Doug,
>
> You are using an image build using spack so everything is installed using 
> spack instead of the package manager. If you type `spack find` inside a 
> container, you will see that mpich is installed.
>
> Best,
>
> Bruno
>
> On Monday, October 21, 2019 at 9:53:36 PM UTC-4, Doug Shi-Dong wrote:
>>
>> Hello everyone,
>>
>> I would like to use Travis CI to test pull requests, and I am currently 
>> following the steps described in 
>> https://github.com/dealii/dealii/wiki/Docker-Images
>>
>> Except I am using the following Docker image: 
>> dealii/dealii:v9.1.1-gcc-mpi-fulldepsspack-debugrelease
>>
>> It seems that this Docker imagine does not contain an MPI package in its 
>> installation. Here is a log of my Travis CI build
>>
>> https://travis-ci.org/dougshidong/PHiLiP/builds/600536273
>>
>> The command apt search mpi shows that no packages such as openmpi or 
>> mpich is available. However, oddly enough, DEAL_II_WITH_MPI is ON, when my 
>> CMake checks for it.
>>
>> If no MPI is available, how do I compile the code within this Docker 
>> container?
>>
>> Best regards,
>>
>> Doug
>>
>



[deal.II] Docker MPI

2019-10-21 Thread Doug Shi-Dong
Hello everyone,

I would like to use Travis CI to test pull requests, and I am currently 
following the steps described in 
https://github.com/dealii/dealii/wiki/Docker-Images

Except I am using the following Docker image: 
dealii/dealii:v9.1.1-gcc-mpi-fulldepsspack-debugrelease

It seems that this Docker image does not contain an MPI package in its 
installation. Here is a log of my Travis CI build:

https://travis-ci.org/dougshidong/PHiLiP/builds/600536273

The command apt search mpi shows that no packages such as openmpi or mpich 
are available. However, oddly enough, DEAL_II_WITH_MPI is ON when my CMake 
checks for it.

If no MPI is available, how do I compile the code within this Docker 
container?

Best regards,

Doug



[deal.II] Re: Query regarding DoFTools::dof_indices_with_subdomain_association()

2019-10-10 Thread Doug Shi-Dong
Hello Vachan,

Sounds like you're implementing nodal DG, which is why you only need values 
and gradients at the *quadrature* points from the neighbor. That is something 
you might want to consider rather than communicating them at solution points, 
in case you ever decide to over-integrate.

You could still ask for the unit support points and figure out which of 
>> them are geometrically on a given face.
>
>
> This is definitely possible, but would probably be inefficient if I code 
> for it. Isn't there a function in DoFTools which does this? Because not 
> marking all dofs of ghost cells as relevant will give much savings in 
> communication, I was wondering if DoFTools already doesn't have an 
> implementation for this.
>
>
I don't think there is currently an implementation. You should be able to 
mark your degrees of freedom on the subdomain interface 
using FE_DGQ::has_support_on_face(...). Basically, if you loop over the 
ghost elements, you can loop over the dof indices of the cell and use that 
function to figure out whether each one is on a face. Efficiency shouldn't 
be an issue, especially since this is pre-processing.
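
A rough sketch of that marking step (untested, so treat the details as an 
assumption):

#include <deal.II/base/geometry_info.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe.h>

#include <set>
#include <vector>

// Collect the global indices of ghost-cell DoFs whose shape functions have
// support on at least one face, i.e. the only ghost DoFs a nodal DG scheme
// needs for its face terms.
template <int dim>
std::set<dealii::types::global_dof_index>
mark_face_dofs_of_ghost_cells(const dealii::DoFHandler<dim> &dof_handler)
{
  const dealii::FiniteElement<dim> &fe = dof_handler.get_fe();
  std::vector<dealii::types::global_dof_index> dof_indices(fe.dofs_per_cell);
  std::set<dealii::types::global_dof_index>    face_dofs;

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_ghost())
      {
        cell->get_dof_indices(dof_indices);
        for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
          for (unsigned int f = 0;
               f < dealii::GeometryInfo<dim>::faces_per_cell; ++f)
            if (fe.has_support_on_face(i, f))
              {
                face_dofs.insert(dof_indices[i]);
                break;
              }
      }
  return face_dofs;
}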

 

> However, note that as soon as gradients are involved all the degrees of 
>> freedom contribute to values on faces.
>
>
> I don't have much experience in parallel programming, but I think we can 
> circumvent this by computing gradients at all dof locations in a subdomain 
> and again only communicating data of dofs lying on subdomain interfaces. I 
> might need some correction on this :).
>

Yes, that works if you decide to first compute all your gradients and store 
them. Note that in 3D, you're doing 3x the communication for the face 
gradients, so your efficiency only shows up when using elements of order 4 
or more. 

Depending on where you want to take your code in the future, I'd be careful 
with early code optimization like this. I would just stick with 
communicating all the neighbor's dofs and optimize the code if you really 
see a slowdown. 

Doug



[deal.II] Re: Integrate the solution over a subdomain

2019-10-08 Thread Doug Shi-Dong

On Tuesday, October 8, 2019 at 8:43:23 AM UTC-4, Yang Liu wrote:
>
>
> Dear DEAL.II users,
>
> The DEAL.II software provides the function 
> VectorTools::compute_mean_value() to integrate the solution over the whole 
> domain. 
>
> But now I would like to integrate the solution over a subdomain. 
>
> fCan anyone give me some suggestions? Thanks in advance!
>

Hello,

I guess it's not currently part of the library, since the subdomains are 
determined by the partitioner, so it's rather odd that you need to 
integrate a specific subdomain. What you can do, however, is basically copy 
this code:

https://github.com/dealii/dealii/blob/0ecc60213ae1236616047153348e5996052eb2e2/include/deal.II/numerics/vector_tools.templates.h#L9035-L9105

and add a parameter such that

compute_mean_value(const Mapping<dim, spacedim> &    mapping,
                   const DoFHandler<dim, spacedim> & dof,
                   const Quadrature<dim> &           quadrature,
                   const VectorType &                v,
                   const unsigned int                component,
                   const unsigned int                subdomain_to_integrate)

and change line 9062

if (cell->is_locally_owned())

to

if (cell->is_locally_owned() && cell->subdomain_id() == subdomain_to_integrate)

Not sure if there is a more elegant way. 

Doug



[deal.II] Re: High order mesh from Gmsh

2019-09-20 Thread Doug Shi-Dong
Hello everyone,

Sorry to revive/hijack this thread, but it seems to have the exact 
information/people that might be able to help.

My current goal is to perform some shape optimization with some 
h(p)-refinement.

I am currently using Prof. Heltai's method described in this issue 
<https://github.com/dealii/dealii/issues/5506#issuecomment-352429820>, 
summarized below, to extract a MappingFEField from a known Triangulation and 
Manifold. This initial Triangulation can (must) be linear, and I would use a 
TransfiniteInterpolation to curve the initial grid.

   - have a DoFHandler with spacedim FE_Q/FE_Bernstein describing the 
   *absolute* geometry
   - attach manifolds (as expensive as you wish)
   - call VectorTools::get_position_vector
   - create MappingFEField with the computed position vector
   - discard manifolds and only use the MappingFEField afterwards
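
In code, I picture that recipe roughly as follows. This is only a sketch: 
grid_degree and triangulation are assumed to exist, and fe_grid, 
dof_handler_grid, and position_vector would have to outlive the mapping 
(e.g. as class members), since MappingFEField only keeps pointers to them:

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/mapping_fe_field.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/vector_tools.h>

// dim FE_Q components describing the *absolute* position of each support point.
dealii::FESystem<dim>   fe_grid(dealii::FE_Q<dim>(grid_degree), dim);
dealii::DoFHandler<dim> dof_handler_grid(triangulation);
dof_handler_grid.distribute_dofs(fe_grid);

// Query the (possibly expensive) manifolds once to fill the position vector.
dealii::Vector<double> position_vector(dof_handler_grid.n_dofs());
dealii::VectorTools::get_position_vector(dof_handler_grid, position_vector);

// From here on, the geometry lives entirely in the MappingFEField.
dealii::MappingFEField<dim> mapping_fe_field(dof_handler_grid, position_vector);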

However, I would now like to refine the mesh. This means that the grid 
refinement process needs a Manifold to query the new points. 

Therefore, it seems like I would need to derive some kind of 
PolynomialManifold (as mentioned in a previous post) from the Manifold 
class, which would take a reference to the MappingFEField in its 
constructor. I would then use the MappingFEField to compute the necessary 
values in the overridden virtual functions of the Manifold class. As a 
result, I would have a PolynomialManifold that would allow my triangulation 
to be refined using the current polynomial mapping of the element.

This would go back and forth between the refining process and the 
deformation process. 

   - When a deformation occurs, simply displace the Triangulation->vertices 
   and the MappingFEField->euler_vector. The PolynomialManifold, having a 
   reference to the mapping, would automatically update.
   - When refinement occurs, extract a new MappingFEField using Prof. 
   Heltai's method, construct a new PolynomialManifold, and assign it to the 
   Triangulation.

Does that sound like a feasible plan? Is my implementation description of 
the PolynomialManifold as simple as Prof. Heltai hinted at in the previous 
post?
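
Concretely, I picture something like the sketch below. Everything here is 
made up by me and untested; in particular it glosses over points that lie 
on faces shared by two cells, points outside the triangulation, and the 
distributed case:

#include <deal.II/base/array_view.h>
#include <deal.II/base/point.h>
#include <deal.II/fe/mapping.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/manifold.h>
#include <deal.II/grid/tria.h>

#include <memory>

// A manifold that answers point queries through a polynomial mapping
// (e.g. a MappingFEField): pull the surrounding points back to the
// reference cell, average there, and push the average forward again.
template <int dim, int spacedim = dim>
class PolynomialManifold : public dealii::Manifold<dim, spacedim>
{
public:
  PolynomialManifold(const dealii::Mapping<dim, spacedim>       &mapping,
                     const dealii::Triangulation<dim, spacedim> &tria)
    : mapping(mapping)
    , tria(tria)
  {}

  virtual std::unique_ptr<dealii::Manifold<dim, spacedim>>
  clone() const override
  {
    return std::make_unique<PolynomialManifold<dim, spacedim>>(mapping, tria);
  }

  virtual dealii::Point<spacedim>
  get_new_point(const dealii::ArrayView<const dealii::Point<spacedim>>
                  &surrounding_points,
                const dealii::ArrayView<const double> &weights) const override
  {
    // Flat average, only used to locate the cell the new point belongs to.
    dealii::Point<spacedim> candidate;
    for (unsigned int i = 0; i < surrounding_points.size(); ++i)
      candidate += weights[i] * surrounding_points[i];

    const auto cell_and_ref =
      dealii::GridTools::find_active_cell_around_point(mapping, tria, candidate);

    // Pull back, average in reference coordinates, push forward.
    dealii::Point<dim> reference_average;
    for (unsigned int i = 0; i < surrounding_points.size(); ++i)
      reference_average +=
        weights[i] * mapping.transform_real_to_unit_cell(cell_and_ref.first,
                                                         surrounding_points[i]);

    return mapping.transform_unit_to_real_cell(cell_and_ref.first,
                                               reference_average);
  }

private:
  const dealii::Mapping<dim, spacedim>       &mapping;
  const dealii::Triangulation<dim, spacedim> &tria;
};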

Best regards,

Doug


On Thursday, September 10, 2015 at 2:02:48 AM UTC-4, Praveen C wrote:
>
> Dear all
>
> I would like to use high order meshes for compressible flow computations 
> around objects like airfoils. I would like to use Gmsh for this since it is 
> improving its high order meshing features. There is a series of workshops 
> on high order CFD, see the next one here
>
> http://how4.cenaero.be
>
> and they also provide/recommend using Gmsh. E.g., in the Joukowski airfoil 
> test case
>
> http://how4.cenaero.be/content/bl1-laminar-joukowski-airfoil-re1000
>
> they provide some python code which generates meshes in Gmsh format. If 
> you run Laminar.py it generates mesh with degree 4 quadrilaterals. In each 
> quad, 5x5 = 25 points are generated.
>
> To use this in deal.II, I could create a preprocessor
>
> 1. Read in high order mesh. I cannot use GridIn for this since it probably 
> does not read high order elements.
> 2. Create a triangulation using create_triangulation
> 2. Create a MappingQEulerian field
> 3. Save triangulation to file using GridOut
> 4. Save the MappingQEulerian field into another file
>
> Then in my application program, I will read in the mesh and deformation 
> fields.
>
> Is this possible to do ?
>
> Even if this were possible, what happens for parallel computations, if I 
> want to use parallel::distributed::Triangulation ? Mesh is ok, but the 
> deformation field has to be partitioned.
>
> Thanks
> praveen
>



Re: [deal.II] Instantiation MappingFEField with hp::DoFHandler

2019-09-18 Thread Doug Shi-Dong


On Wednesday, September 18, 2019 at 11:28:45 PM UTC-4, Wolfgang Bangerth 
wrote:
>
>
> Doug, 
>
> It does not greatly surprise me that it doesn't compile -- what isn't 
> tested 
> generally doesn't work. 
>

Makes sense. 

We'd of course be very happy to accept any patches to make things work. The 
> general approach to making stuff work is to *always* use hp::FEValues 
> objects 
> to evaluate solution fields. You might want to look at some of the 
> functions 
> in VectorTools that can either take a ::DoFHandler or hp::DoFHandler to 
> see 
> how they work and how the code is written to accommodate both kinds of DoF 
> handler classes. 
>

I think I should be able to get away with always constructing a high-order 
geometric grid for now, but I'm guessing that it's not going to be the 
only hurdle I encounter with hp. It seems like the same philosophy applies to 
collections, e.g., I should *always* use an FECollection of FESystem of 
FiniteElement since that seems to encompass all possibilities.

If I do end up needing those features, should I open an issue and ask to 
get it assigned to myself? And then refer to that issue for pull requests?

A good strategy to implement these kinds of changes is to split things up 
> into 
> a number of incremental changes. Each of these individually doesn't get 
> you to 
> the end point, but fixes one roadblock at a time. I find it much easier to 
> work this way (using many individual pull requests to get to the final 
> goal) 
> than to try and write one monumentally large one, principally because each 
> step individually can be sent through the test suite. (There are 23 tests 
> right now that use this class, so it's reasonably well tested.) If no 
> tests 
> break, then it can't have been so wrong, and one can move on to the next 
> problem. I've definitely implemented things many times that took a 
> sequence of 
> 10 pull requests, each making incremental progress. Writing several new 
> tests 
> in the process is definitely also helpful. I'd suggest you try that 
> approach.  


Thank you for the tip. I did not realise that pull requests could be used 
for "partial" features (as long as they don't break the code). This is my 
first large(ish) project where other students contribute to the code, and 
I'm trying to mimic the forking workflow of open-source projects. Will 
definitely recommend them to do pull requests more often to reduce merge 
conflicts.
 



[deal.II] Re: DG explicit time integration for linear advection equation with MeshWorker (suggestions)

2019-09-18 Thread Doug
Hello Vachan,

Quick note. I'm only suggesting looping yourself because I am not totally 
familiar with MeshWorker. Looping myself exposed the mechanics a bit more 
for me, which I wanted.

I was pointing to step-33 for the loop, not the operators. I don't remember 
seeing a deal.II example implemented in operator form. 

So, I suggested looping through the cells to compute the operators that are 
not common to all cells. This is usually the mass matrix if your cells are 
not linear (i.e., not a Cartesian mesh). The other operators can be made the 
same for all the cells, except for some cofactor Jacobian matrix to 
transfer derivatives and normals from reference space to physical space. If 
you only ever plan on using a Cartesian mesh, then all your local matrices 
will be the same, and there is no point in assembling a global matrix that will 
operate on your solution. If I were you, I'd take a look at chapters 3 and 6 
of Hesthaven's book, where he builds the same local operators that will be 
used on each cell. Therefore, you wouldn't have an array of differentiation 
matrices, just one small differentiation matrix that applies to all cells. 
Feel free to look at how I do the [mass matrix](
https://github.com/dougshidong/PHiLiP/blob/master/src/dg/dg.cpp#L752)

Step-33 is CG by the way; therefore, faces between two cells at the same 
level (no hanging nodes) don't need to add anything to the residual, so they 
don't show that case. Feel free to look at my [loops](
https://github.com/dougshidong/PHiLiP/blob/master/src/dg/dg.cpp#L246), 
where I loop over all the cells and all the faces. For internal faces, 
only one cell is responsible for evaluating the flux and adding it to both 
sides.
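
The condition boils down to something like this (a simplified sketch of 
what I mean, assuming a serial Triangulation and no hanging nodes; the 
distributed case is discussed below):

#include <deal.II/base/geometry_info.h>
#include <deal.II/dofs/dof_handler.h>

template <int dim>
void loop_over_faces_once(const dealii::DoFHandler<dim> &dof_handler)
{
  for (const auto &cell : dof_handler.active_cell_iterators())
    for (unsigned int f = 0; f < dealii::GeometryInfo<dim>::faces_per_cell; ++f)
      {
        if (cell->face(f)->at_boundary())
          {
            // Boundary face: this cell alone evaluates the boundary flux.
            continue;
          }

        const auto neighbor = cell->neighbor(f);

        // Interior face: the cell with the smaller active cell index "owns"
        // the face. It evaluates the numerical flux once and adds +flux to
        // its own residual and -flux to the neighbor's, so no face is ever
        // computed twice.
        if (cell->active_cell_index() < neighbor->active_cell_index())
          {
            // evaluate the numerical flux, scatter to both cells
          }
      }
}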

If someone wants to chime in with a better way to loop over all the faces 
just once, I would love to know about it too.

You can also keep an eye on the code I linked. It currently does not 
pre-compute the operators, but it's only a matter of moving the loops out 
to pre-processing. 

Praveen,

My loops were indeed inspired by your dflo ;). Note that if you use a 
distributed grid, the user_index() will not be consistent across MPI 
boundaries. Therefore, you might end up with doubly-counted faces. Here is 
a [commit](
https://github.com/dougshidong/PHiLiP/commit/d0d7e6bd663b0cef032c7f953d2b2dddce4950a7#diff-d2da12f45bddf4e965acafc118ff8366)
 that replaces that check with an if-statement that works for 
refined/distributed grids.

On Wednesday, September 18, 2019 at 12:33:43 AM UTC-4, vachan potluri wrote:
>
> Doug and Praveen,
>
> Thanks for your answers. I had a look at step-33. As far as I understand, 
> although the looping through cells is not through MeshWorker, the assembly 
> is still global! So, for a non-cartesian mesh, I think you are suggesting 
> using such a loop over all cells to calculate local matrices. Will these 
> cell-local matrices stored as an array of matrices in the conservation law 
> class?
>
> I now have one more question. In assemble_face_term function of step-33, 
> the normal numerical flux is calculated. This function is called for every 
> cell-neighbor pair from the assemble_system function. This means, if I am 
> not wrong, at every interior face quadrature point, the numerical flux is 
> calculated twice. This might be very costly. Is there a way to avoid this? 
> One can probably force only one cell to calculate numerical flux at a face 
> based on the face normal's orientation. But then how can we communicate the 
> calculated flux to the other cell. Or even better, is it possible to loop 
> over faces? We could then add contributions to both the cells sharing this 
> face without double computation.
>
> Thanks again!
>



Re: [deal.II] Instantiation of Vector< Sacado::Fad::DFad >

2019-09-17 Thread Doug

>
> so 'a' is of type Sacado::Fad::DFad<double>. It then needs to call 
>
>numbers::is_finite(Sacado::Fad::DFad<double>), which doesn't exist. 
> It probably just tries to go through all of the overloads of 
> numbers::is_finite() and wants to see whether it can convert the 
> argument Sacado::Fad::DFad<double> to the argument type of these 
> overloads. The error message you show then would just explain why this 
> one possibility (namely, numbers::is_finite(std::complex<double>&)) 
> does not work. But that doesn't mean that this is the right overload 
> anyway -- I suspect that your compiler produces similar error messages 
> above or below the one you show for all of the other overloads, right? 
>
>
True, the error message gets long pretty quickly, but the undefined 
is_finite was another issue. Even if is_finite exists, the complex 
constructor is still an issue.
 

> I *think* that the solution is to simply provide an overload for 
>numbers::is_finite (const Sacado::Fad::DFad<double> &) 
> Can you try this? You could declare it in your own .cc file before you 
> #include  
>

 Attached is a tiny .cc file that simply instantiates the vector. Copied 
here for convenience.

#include <Sacado.hpp>
namespace dealii{
namespace numbers{
bool is_finite(const Sacado::Fad::DFad<double> &x) {
    (void) x;
    return true;
}
}}
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/la_parallel_vector.templates.h>
int main (int /*argc*/, char * /*argv*/[]){
    using namespace dealii;
    dealii::LinearAlgebra::distributed::Vector<double> vector_double;
    using ADtype = Sacado::Fad::DFad<double>;
    dealii::LinearAlgebra::distributed::Vector<ADtype> vector_ad;
    return 0;
}

I also provide a function for numbers::is_finite(const 
Sacado::Fad::DFad<double> &) to avoid the first set of errors. However, I 
still get the error below.

In file included from 
/home/ddong/Libraries/dealii/install/include/deal.II/base/aligned_vector.h:22:0,
 from 
/home/ddong/Libraries/dealii/install/include/deal.II/lac/vector.h:22,
 from 
/home/ddong/Codes/PHiLiP/src/instantiate_vector_ad.cpp:13:
/home/ddong/Libraries/dealii/install/include/deal.II/lac/la_parallel_vector.templates.h:
 
In instantiation of ‘dealii::LinearAlgebra::distributed::Vector& dealii::LinearAlgebra::distributed::Vector::operator*=(Number) [with Number = Sacado::Fad::DFad; 
MemorySpace = dealii::MemorySpace::Host]’:
/home/ddong/Codes/PHiLiP/src/instantiate_vector_ad.cpp:26:1:   required 
from here
/home/ddong/Libraries/dealii/install/include/deal.II/lac/la_parallel_vector.templates.h:1651:7:
 
error: no matching function for call to 
‘std::complex::complex(const Sacado::Fad::DFad&)’
   AssertIsFinite(factor);
In file included from /usr/include/trilinos/Teuchos_ConfigDefs.hpp:94:0,
 from /usr/include/trilinos/Teuchos_PromotionTraits.hpp:45,
 from 
/usr/include/trilinos/Sacado_Fad_Exp_GeneralFadTraits.hpp:139,
 from /usr/include/trilinos/Sacado.hpp:52,
 from 
/home/ddong/Codes/PHiLiP/src/instantiate_vector_ad.cpp:1:
/usr/include/c++/7/complex:1512:3: note: candidate: constexpr 
std::complex::complex(const std::complex&)
   complex::complex(const complex& __z)
   ^~~
/usr/include/c++/7/complex:1512:3: note:   no known conversion for argument 
1 from ‘const Sacado::Fad::DFad’ to ‘const std::complex&’
/usr/include/c++/7/complex:1219:26: note: candidate: constexpr 
std::complex::complex(const std::complex&)
   _GLIBCXX_CONSTEXPR complex(const complex& __z)
  ^~~
/usr/include/c++/7/complex:1219:26: note:   no known conversion for 
argument 1 from ‘const Sacado::Fad::DFad’ to ‘const 
std::complex&’
/usr/include/c++/7/complex:1209:26: note: candidate: constexpr 
std::complex::complex(double, double)
   _GLIBCXX_CONSTEXPR complex(double __r = 0.0, double __i = 0.0)
  ^~~
/usr/include/c++/7/complex:1209:26: note:   no known conversion for 
argument 1 from ‘const Sacado::Fad::DFad’ to ‘double’
/usr/include/c++/7/complex:1207:26: note: candidate: constexpr 
std::complex::complex(std::complex::_ComplexT)
   _GLIBCXX_CONSTEXPR complex(_ComplexT __z) : _M_value(__z) { }
  ^~~
/usr/include/c++/7/complex:1207:26: note:   no known conversion for 
argument 1 from ‘const Sacado::Fad::DFad’ to 
‘std::complex::_ComplexT {aka __complex__ double}’
/usr/include/c++/7/complex:1202:12: note: candidate: constexpr 
std::complex::complex(const std::complex&)
 struct complex
^~~
/usr/include/c++/7/complex:1202:12: note:   no known conversion for 
argument 1 from ‘const Sacado::Fad::DFad’ to ‘const 
std::complex&’
/usr/include/c++/7/complex:1202:12: note: candidate: constexpr 
std::complex::complex(std::complex&&)
/usr/include/c++/7/complex:1202:12: note:   no known conversion for 
argument 1 from ‘const Sacado::Fad::DFad’ to 
‘std::complex&&’
In file included from 
/home/ddong/Libraries/dealii/install/include/deal.II/base/aligned_vector.h:22:0,
 from 

Re: [deal.II] Instantiation of Vector< Sacado::Fad::DFad >

2019-09-17 Thread Doug

>
> What I meant is: Can you show the compiler error message that illustrates 
> where the assertion is located, what the template arguments are, how it 
> came 
> that we called that function with these template arguments, etc? 
>
>
/home/ddong/Libraries/dealii/include/deal.II/base/numbers.h:583:3: note:  
 no known conversion for argument 1 from ‘const Sacado::Fad::DFad’ 
to ‘const std::complex&’
In file included from 
/home/ddong/Libraries/dealii/include/deal.II/base/cuda.h:21:0,
 from 
/home/ddong/Libraries/dealii/include/deal.II/base/memory_space.h:22,
 from 
/home/ddong/Libraries/dealii/include/deal.II/lac/la_parallel_vector.h:21,
 from 
/home/ddong/Libraries/dealii/source/lac/la_parallel_vector.cc:16:
/home/ddong/Libraries/dealii/include/deal.II/base/exceptions.h:1671:42: 
error: no matching function for call to 
‘std::complex::complex(const Sacado::Fad::DFad&)’
  dealii::ExcNumberNotFinite(std::complex(number)))

It is basically part of the same call to AssertIsFinite to generate the 
Exception. Note that it could hit this assertion much more directly 
since 
/home/ddong/Libraries/dealii/include/deal.II/lac/la_parallel_vector.templates.h 
itself calls AssertIsFinite. e.g. line 1450

In the end, it is basically trying to convert the Sacado type into a 
complex number, which is undefined, whenever it tries to perform some 
vector operations.

Doug



[deal.II] Instantiation MappingFEField with hp::DoFHandler

2019-09-17 Thread Doug
Hello again,

I am trying to use MappingFEField to represent my high-order mesh points 
(which will then be deformed through optimization). Similarly to this 
thread High order mesh from Gmsh 
<https://groups.google.com/forum/#!searchin/dealii/high$20order$20mesh|sort:date/dealii/8ta3DmMCwsE/0AKQVI_0AQAJ>.
 
However, I am using hp finite elements and I would expect different 
geometric element orders to exist for cells with different solution orders.

I see that MappingFEField is currently only instantiated with DoFHandler, 
and not hp::DoFHandler, so I tried to add it to the instantiation file 
"/fe/mapping_fe_field.inst.in". Unfortunately, it seems like the current 
MappingFEField implementation assumes that the finite element is not a 
FECollection, etc. and tries to directly operate on it. 

What would be the best way to have different element geometric orders, 
where I can control the metric degrees of freedom?

Doug



[deal.II] Re: DG explicit time integration for linear advection equation with MeshWorker (suggestions)

2019-09-17 Thread Doug
Hello Vachan,

What you are describing is very similar to Hesthaven's framework of building 
matrix operators and applying them to your degrees of freedom. The main 
advantage of this operator form is to have common mass (if linear 
elements), differentiation, flux, and boundary operators for all your 
elements, since the operators are defined on the reference element. 
Therefore, using a MeshWorker loop to go through every single element to 
build those matrices would result in building redundant local matrices and 
storing them in a global matrix, which can be very expensive memory-wise.

Yes, you could use MeshWorker to compute only your normal fluxes. 

So, it's feasible, but I wouldn't use the MeshWorker loop to build those 
matrices. You are welcome to look at step-33 to see how you can loop 
through your elements and build your different mass matrices. Then 
use a similar loop to apply your interpolation, differentiation, and lifting 
matrices to the local solution to assemble your right-hand side.

Yes, if you build those operators, you will save a significant amount of 
time *for explicit time stepping*, since you will basically have 
concatenated multiple operators (interpolation, differentiation, 
integration, etc.) into a single matrix-vector product.

I think I know what you are referring to with that sparse flux matrix. If 
it's a local matrix, you would want to store the lifting operator, which is 
the mass inverse times your sparse flux matrix, and which is not necessarily a 
sparse matrix (unless the mass matrix is diagonal). Either way, I wouldn't worry too 
much about that operation taking a long time. Your other operations on the 
volume take much longer, and if you decide to go with nonlinear fluxes, the 
time it takes to apply this lifted flux vector is negligible. deal.II has 
a SparseMatrix class in case you want to use it, and you can set the 
SparsityPattern yourself if you want. 

If you were planning on building multiple global matrices, then yes, your 
approach makes sense using SparseMatrix; just be careful about memory. If you 
only plan on doing linear advection, your problems should be simple enough 
that you wouldn't care about memory or computational time. I haven't seen 
people build those global matrices, but it should be faster than assembling at 
every time step. It's the old trade-off between memory and computation.
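
To make the operator form concrete, the explicit update on one cell ends up 
looking roughly like the sketch below once everything is precomputed. The 
names are made up, and the geometric factors are assumed to have already 
been folded into the matrices:

#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>

// Precomputed once (for all cells on a Cartesian mesh, or per cell otherwise):
struct LocalOperators
{
  dealii::FullMatrix<double> vol;  // M^{-1} D^T : volume/differentiation term
  dealii::FullMatrix<double> lift; // M^{-1} F   : lifting of the face flux
};

// One forward-Euler update of the local solution:
//   u_new = u + dt * ( vol * f(u) - lift * f*(u) )
void local_explicit_update(const LocalOperators         &op,
                           const double                  dt,
                           const dealii::Vector<double> &volume_flux, // f(u)
                           const dealii::Vector<double> &normal_flux, // f*(u)
                           dealii::Vector<double>       &local_solution)
{
  dealii::Vector<double> rhs(local_solution.size());
  dealii::Vector<double> tmp(local_solution.size());

  op.vol.vmult(rhs, volume_flux);   // volume contribution
  op.lift.vmult(tmp, normal_flux);  // surface contribution
  rhs -= tmp;

  local_solution.add(dt, rhs);      // explicit time step
}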

Doug

On Tuesday, September 17, 2019 at 3:03:52 AM UTC-4, vachan potluri wrote:
>
> Hello all,
>
>
> I am a beginner in dealii. I want to solve a linear, transient advection 
> equation explicitly in two dimensions using DG. The resulting discrete 
> equation will have a mass matrix as the system matrix and a sum of terms 
> which depend on previous solution (multiplied by mass, differentiation, 
> flux and boundary matrix) as the rhs.
>
> [image: linearAdvection2D.png]
>
> Instead of using MeshWorker::loop for every single time step, I think the 
> following approach would be better. I am using a ghost cell approach to 
> specify the boundary condition: the boundary condition can be specified by 
> an appropriately calculated normal numerical flux.
>
>
>1. Before the any time steps, use a MeshWorker::loop each for each of 
>the four matrices: mass, differentiation, flux and boundary
>2. During each update
>   1. Again use MeshWorker::loop, but this time only to calculate the 
>   normal numerical flux.
>   2. Use the normal numerical flux and the previous solution to 
>   obtain the RHS using appropriate matrix-vector products
>   3. Solve the system
>
> I have few question regarding this approach.
>
>1. Is it feasible
>2. Can it give significant improvement in performance over the case 
>when assembling is done for every time step
>3. (Assuming answers to above questions are positive) For higher 
>orders, the flux and boundary matrices will be very sparse. The normal 
>numerical flux (which will be a vector) will also be sparse. Can the 
>matrix-vector product involving these combinations be optimised by using 
>appropriate sparsity pattern? Can a sparsity pattern be specified for a 
>vector too?
>
>



Re: [deal.II] Instantiation of Vector< Sacado::Fad::DFad >

2019-09-13 Thread Doug
On Monday, September 9, 2019 at 10:13:32 AM UTC-4, Wolfgang Bangerth wrote:
>
> 1.
>
> This seems like we are abusing the real_type for something it wasn't 
> intended 
> to do. Can you open a github issue with the error message you get with the 
> unmodified code? 

 

> 2.
>
> I suspect that that would be useful in its own right. Can you open an 
> issue or 
> pull request for this as well? 
>
>
>  
Should those be two different issues?
 

>
> > *3. I don't know how to fix* 
>

> Where exactly is this conversion necessary? 


This occurs with AssertIsFinite in include/base/exceptions.h. The condition 
to be checked uses the is_finite() function from include/base/numbers.h. 
However, the exception thrown (ExcNumberNotFinite) 
uses std::complex<double>(number) to generate the exception. 
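
For context, the macro looks roughly like this (quoting from memory, so the 
exact form in exceptions.h may differ slightly):

// Roughly what AssertIsFinite expands to in deal.II/base/exceptions.h:
#define AssertIsFinite(number)                 \
  Assert(::dealii::numbers::is_finite(number), \
         ::dealii::ExcNumberNotFinite(std::complex<double>(number)))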



[deal.II] Instantiation of Vector< Sacado::Fad::DFad >

2019-09-06 Thread Doug
Hello again,

I am trying to instantiate a Vector with an AD type such as 
Vector<Sacado::Fad::DFad<double>> by changing

for (SCALAR : REAL_AND_COMPLEX_SCALARS) to for (SCALAR : ALL_SCALAR_TYPES)

in the instantiation file

dealii/source/lac/la_parallel_vector.inst.in

However, it seems 3 issues come up.

1. I can fix this one, so skip ahead if you want.

A bunch of static/dynamic casts of an unsigned int into the AD type, 
through real_type(partitioner->local_size()), but Trilinos' AD type doesn't 
have an *unsigned* int constructor. I simply need to cast the unsigned int 
value to a long first, for example real_type((long)partitioner->local_size()).

2. I can also fix.

dealii/include/deal.II/base/exceptions.h

Requires the definition of 
   dealii::numbers::is_finite(number)

where I can provide a definition in dealii/include/deal.II/base/numbers.h 
for the AD types.

*3. I don't know how to fix*

std::complex<double>(number) needs to be defined. 

Now, this translates 
to std::complex::complex(Sacado::Rad::ADvar 
>&), which of course doesn't exist.

I understand why it's casting into a complex to ensure that we can use the 
exception for all scalar arguments. But this is limiting the behaviour such 
that I can't use AD types.

Any suggestions about how to go with this?


This is somewhat linked in the grand scheme of things with

https://groups.google.com/forum/#!searchin/dealii/automatic$20differentiation|sort:date/dealii/9YohjQr1aro/QdtzHHoWAwAJ

where the goal will be to automatically differentiate the entire Jacobian 
and the sensitivities of the residual with respect to the grid points.

Best regards,

Doug



Re: [deal.II] Parallel distributed hp solution transfer

2019-08-31 Thread Doug

On Tuesday, August 27, 2019 at 3:51:20 AM UTC-4, Marc Fehling wrote:
>
>
> The patch has been merged. Let me know if this fix does the trick for you.
>

It does! Thank you again for the quick support.

Doug 



Re: [deal.II] Parallel distributed hp solution transfer

2019-08-26 Thread Doug
Thank you very much for the quick fix! Looking forward to pull this once it 
goes through all the checks.

Doug

On Sunday, August 25, 2019 at 9:44:29 AM UTC-4, Marc Fehling wrote:
>
>
> I came up with a fix for this issue in the following PR #8637 
> <https://github.com/dealii/dealii/pull/8637> that uses the CellStatus 
> flags to determine active_fe_indices while coarsening.
>
>



Re: [deal.II] Parallel distributed hp solution transfer

2019-08-21 Thread Doug
I am using the master version from 2 days ago, commit c4c4e5209a. Actually, 
it also fails in 2D, but in a different scenario. I run this for 
mpirun=1 and mpirun=4. 

It fails in 2D for mpirun=1.debug, with the same error. It fails in 3D for 
mpirun=4.debug and mpirun=1.release. And it fully succeeds for mpirun=4.release...

I have attached the output from ctest.

On Wednesday, August 21, 2019 at 5:39:11 AM UTC-4, Marc Fehling wrote:
>
> Hi Doug!
>
> On Wednesday, August 21, 2019 at 4:00:49 AM UTC+2, Doug wrote:
>>
>> 8134: void dealii::parallel::distributed::SolutionTransfer> VectorType, DoFHandlerType>::unpack_callback(const typename 
>> dealii::parallel::distributed::Triangulation> space_dimension>::cell_iterator&, typename 
>> dealii::parallel::distributed::Triangulation> space_dimension>::CellStatus, const 
>> boost::iterator_range<__gnu_cxx::__normal_iterator> std::vector > > >&, std::vector&) 
>> [with int dim = 3; VectorType = 
>> dealii::LinearAlgebra::distributed::Vector; DoFHandlerType = 
>> dealii::hp::DoFHandler<3, 3>; typename 
>> dealii::parallel::distributed::Triangulation> space_dimension>::cell_iterator = 
>> dealii::TriaIterator >; typename 
>> dealii::parallel::distributed::Triangulation> space_dimension>::CellStatus = dealii::Triangulation<3, 3>::CellStatus]
>>
>
> It seems that the assertion fails only in the 3D case, which is quite 
> interesting. Are you using the deal.II-9.1 release version, or do you work 
> with the current master branch? I'll try to reproduce the error once I'm at 
> my desk.
>
> It looks like either p::d::SolutionTransfer picks the wrong finite element 
> while packing the solution on cells to be coarsened, or hp::DoFHandler 
> chooses the wrong finite element for the parent cell in case of coarsening. 
> I'll investigate.
>
> Marc
>

UpdateCTestConfiguration  from 
:/home/ddong/Libraries/dealii/build/DartConfiguration.tcl
UpdateCTestConfiguration  from 
:/home/ddong/Libraries/dealii/build/DartConfiguration.tcl
Test project /home/ddong/Libraries/dealii/build
Constructing a list of tests
Done constructing a list of tests
Updating test list for fixtures
Added 0 tests to meet fixture requirements
Checking test dependency graph...
Checking test dependency graph end
test 8132
Start 8132: mpi/solution_transfer_05.mpirun=1.debug

8132: Test command: /usr/bin/cmake 
"-DTRGT=solution_transfer_05.mpirun1.debug.diff" 
"-DTEST=mpi/solution_transfer_05.mpirun=1.debug" "-DEXPECT=PASSED" 
"-DBINARY_DIR=/home/ddong/Libraries/dealii/build/tests/mpi" 
"-DGUARD_FILE=/home/ddong/Libraries/dealii/build/tests/mpi/solution_transfer_05.debug/interrupt_guard.cc"
 "-P" "/home/ddong/Libraries/dealii/build/share/deal.II/scripts/run_test.cmake"
8132: Test timeout computed to be: 600
8132: make[3]: *** [solution_transfer_05.debug/mpirun=1/output] Error 1
8132: make[2]: *** [CMakeFiles/solution_transfer_05.mpirun1.debug.diff.dir/all] 
Error 2
8132: make[1]: *** 
[CMakeFiles/solution_transfer_05.mpirun1.debug.diff.dir/rule] Error 2
8132: make: *** [solution_transfer_05.mpirun1.debug.diff] Error 2
8132: Test mpi/solution_transfer_05.mpirun=1.debug: RUN
8132: ===   OUTPUT BEGIN  
===
8132: Scanning dependencies of target solution_transfer_05.debug
8132: Building CXX object 
CMakeFiles/solution_transfer_05.debug.dir/solution_transfer_05.cc.o
8132: Linking CXX executable 
solution_transfer_05.debug/solution_transfer_05.debug
8132: Built target solution_transfer_05.debug
8132: Generating solution_transfer_05.debug/mpirun=1/output
8132: mpi/solution_transfer_05.mpirun=1.debug: BUILD successful.
8132: mpi/solution_transfer_05.mpirun=1.debug: RUN failed. -- Return code 
134
8132: mpi/solution_transfer_05.mpirun=1.debug: RUN failed. -- Result: 
/home/ddong/Libraries/dealii/build/tests/mpi/solution_transfer_05.debug/mpirun=1/failing_output
8132: mpi/solution_transfer_05.mpirun=1.debug: RUN failed. -- Partial 
output:
8132: JobId ddong-XPS-13-9360 Tue Aug 20 21:50:34 2019
8132: DEAL::   2D solution transfer
8132: 
8132: mpi/solution_transfer_05.mpirun=1.debug: RUN failed. -- Additional 
output on stdout/stderr:
8132: 
8132:  written to before.eps

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-20 Thread Doug
Hello Marc,

I am trying to get the test working and unfortunately it is going a lot 
less smoothly than expected. Please find attached the test 
as solution_transfer_05.cc. Only h-refinement occurs. I have attached the 
before and after grids, which look OK. It is still that error

8134: 
8134: An error occurred in line <405> of file 
 in 
function
8134: void dealii::parallel::distributed::SolutionTransfer::unpack_callback(const typename 
dealii::parallel::distributed::Triangulation::cell_iterator&, typename 
dealii::parallel::distributed::Triangulation::CellStatus, const 
boost::iterator_range<__gnu_cxx::__normal_iterator > > >&, std::vector&) 
[with int dim = 3; VectorType = 
dealii::LinearAlgebra::distributed::Vector; DoFHandlerType = 
dealii::hp::DoFHandler<3, 3>; typename 
dealii::parallel::distributed::Triangulation::cell_iterator = 
dealii::TriaIterator >; typename 
dealii::parallel::distributed::Triangulation::CellStatus = dealii::Triangulation<3, 3>::CellStatus]
8134: The violated condition was: 
8134: dof_handler->get_fe(fe_index).dofs_per_cell == 
it_dofvalues->size()
8134: Additional information: 
8134: The transferred data was packed with a different number of dofs 
than the currently registered FE object assigned to the DoFHandler has.
8134: 

Note that if I decide to refine the cells (uncomment line 150) instead of 
coarsening them (line 149), the solution transfer is successful. Playing 
around with various refinement/coarsening, it seems a bit arbitrary when it 
does work and when it won't. Am I calling anything out of order?

Doug

On Sunday, August 18, 2019 at 9:44:50 PM UTC-4, Marc Fehling wrote:
>
> Hi Doug,
>
> when dealing with distributed meshes, ownership of cells change and we may 
> not know which finite element lives on cells that the process got recently 
> assigned to. Thus, we need to transfer each cell's `active_fe_index`, which 
> we do automatically during coarsening and refinement. However, you set 
> `active_fe_indices` after refinement happened, which works in the serial 
> case, but no longer in the parallel one. Before executing refinement, you 
> need to set `future_fe_indices` that describe to which finite element your 
> cell will be assigned to, and you need to do that before refinement 
> happened! This should resolve both issues.
>
> Further, you initialize `LinearAlgebra::distrtibuted::Vector` objects 
> without any parallel distribution by using this constructor. 
> <https://www.dealii.org/current/doxygen/deal.II/classLinearAlgebra_1_1distributed_1_1Vector.html#a3be6c4ce529bb9b6c13eb831d0a86f55>
>  
> Try using one a different one.
>
> Please see `tests/mpi/solution_transfer_04.cc 
> <https://github.com/dealii/dealii/blob/master/tests/mpi/solution_transfer_04.cc>`
>  
> and `tests/mpi/p_coarsening_and_refinement.cc 
> <https://github.com/dealii/dealii/blob/master/tests/mpi/p_refinement_and_coarsening.cc>`
>  
> for working examples (I guess we should provide one using an actual 
> `SolutionTransfer` object as well), that should hopefully be applicable to 
> your problem. This is a recently added feature: If you have any suggestions 
> for improvement or encounter more problems, feel free to message us!
>
> Marc
>

// -
//
// Copyright (C) 1998 - 2018 by the deal.II authors
//
// This file is part of the deal.II library.
//
// The deal.II library is free software; you can use it, redistribute
// it, and/or modify it under the terms of the GNU Lesser General
// Public License as published by the Free Software Foundation; either
// version 2.1 of the License, or (at your option) any later version.
// The full text of the license can be found in the file LICENSE.md at
// the top level directory of deal.II.
//
// -



#include 

#include 
#include 

#include 
#include 

#include 
#include 
#include 
#include 

#include 
#include 
#include 

#include 

#include 

#include 
#include 

#include 

#include 
#include 

#include "../tests.h"

// a linear function that should be transferred exactly with Q1 and Q2
// elements
temp

Re: [deal.II] Parallel distributed hp solution transfer

2019-08-19 Thread Doug
Yes, will do. I have already fixed that example into working code. I just 
need to clean it up a little and I'll submit a pull request.

Also, I'm encountering this issue where tests fail because a zero error 
ends up being around 1e-13 due to round-off errors. This not only happens 
for my test, but for a lot of other deal.II tests. I see that most of 
deal.II's testing framework relies on picking up tests based on a .cc + 
.output file, making all the tests "regression"-like. I usually have some 
tolerance for comparing doubles, which I thought would be the common way to 
do this. 

How do you provide tests in the deal.II testsuite to account for different 
round-off errors that might occur?

On Monday, August 19, 2019 at 1:17:10 PM UTC-4, Wolfgang Bangerth wrote:
>
> On 8/18/19 10:44 PM, Doug wrote: 
> > 
> > I'd be happy to add this test to the list if that's something you are 
> > interested in. I just have to take the time to read through how to. 
>
> I think this would be fantastic -- this area is new, so there aren't 
> that many tests yet. Anything that does what it is supposed to do will 
> therefore make for a good test, and since you already know how tests 
> look, it shouldn't even be very difficult to convert your code example 
> into a working test. Want to give this a try? 
>
> Best 
>   W. 
>
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>



Re: [deal.II] Parallel distributed hp solution transfer

2019-08-18 Thread Doug
Wow, I did not expect to get such a quick fix of an answer over the 
week-end. Thank you both for taking the time to answer.

The key thing here really was to use set_future_fe_index() instead of 
set_active_fe_index(). Would it make sense to discontinue one of them in 
the future? Otherwise, I would suggest adding the first paragraph you typed 
up in the documentation of the hp finite element module. Test 
tests/mpi/p_coarsening_and_refinement.cc 
<https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2Fdealii%2Fdealii%2Fblob%2Fmaster%2Ftests%2Fmpi%2Fp_refinement_and_coarsening.cc=D=1=AFQjCNFdorxQOizAcJxrFmOZN9v3C7n7IA>
 was 
indeed the perfect example to showcase the p-refinement.

Regarding the Vector constructor, I had replicated the bug even with a 
single core, so any bug related to this did not show up. In my code, I do 
give an MPI communicator.

I have modified the example to make it work. Note that in my original test, 
I was replicating a test where both DG and CG elements are refined 
simultaneously for two different solutions. Since the 
set_future_fe_index() now takes effect at grid refinement instead of at 
the exact moment set_active_fe_index() is called, I wouldn't expect to do the 
refinement for both DoF handlers at the same time.
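
For anyone who finds this thread later, the order of operations that works 
for me is roughly the following. This is only a sketch: fe_collection, 
old_solution, mpi_communicator, dof_handler, and triangulation are assumed 
to exist, and the next fe index is assumed to be valid in the collection:

#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/la_parallel_vector.h>

// 1. Set *future* FE indices (and/or refine flags) before refinement happens.
for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    cell->set_future_fe_index(cell->active_fe_index() + 1);

// 2. Register the old (ghosted) solution for transfer.
dealii::parallel::distributed::SolutionTransfer<
  dim,
  dealii::LinearAlgebra::distributed::Vector<double>,
  dealii::hp::DoFHandler<dim>>
  solution_transfer(dof_handler);
solution_transfer.prepare_for_coarsening_and_refinement(old_solution);

// 3. Execute refinement; active_fe_indices are transferred automatically.
triangulation.execute_coarsening_and_refinement();

// 4. Redistribute dofs and interpolate onto the new vector.
dof_handler.distribute_dofs(fe_collection);
dealii::IndexSet locally_relevant_dofs;
dealii::DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
dealii::LinearAlgebra::distributed::Vector<double> new_solution(
  dof_handler.locally_owned_dofs(), locally_relevant_dofs, mpi_communicator);
solution_transfer.interpolate(new_solution);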

I'd be happy to add this test to the list if that's something you are 
interested in. I just have to take the time to read through how to.

Anyways, it all works as expected right now. Thank you again.

Doug

On Sunday, August 18, 2019 at 9:44:50 PM UTC-4, Marc Fehling wrote:
>
> Hi Doug,
>
> when dealing with distributed meshes, ownership of cells change and we may 
> not know which finite element lives on cells that the process got recently 
> assigned to. Thus, we need to transfer each cell's `active_fe_index`, which 
> we do automatically during coarsening and refinement. However, you set 
> `active_fe_indices` after refinement happened, which works in the serial 
> case, but no longer in the parallel one. Before executing refinement, you 
> need to set `future_fe_indices` that describe to which finite element your 
> cell will be assigned to, and you need to do that before refinement 
> happened! This should resolve both issues.
>
> Further, you initialize `LinearAlgebra::distrtibuted::Vector` objects 
> without any parallel distribution by using this constructor. 
> <https://www.dealii.org/current/doxygen/deal.II/classLinearAlgebra_1_1distributed_1_1Vector.html#a3be6c4ce529bb9b6c13eb831d0a86f55>
>  
> Try using one a different one.
>
> Please see `tests/mpi/solution_transfer_04.cc 
> <https://github.com/dealii/dealii/blob/master/tests/mpi/solution_transfer_04.cc>`
>  
> and `tests/mpi/p_coarsening_and_refinement.cc 
> <https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2Fdealii%2Fdealii%2Fblob%2Fmaster%2Ftests%2Fmpi%2Fp_refinement_and_coarsening.cc=D=1=AFQjCNFdorxQOizAcJxrFmOZN9v3C7n7IA>`
>  
> for working examples (I guess we should provide one using an actual 
> `SolutionTransfer` object as well), that should hopefully be applicable to 
> your problem. This is a recently added feature: If you have any suggestions 
> for improvement or encounter more problems, feel free to message us!
>
> Marc
>



[deal.II] Parallel distributed hp solution transfer

2019-08-17 Thread Doug
Hello,

I am trying to use the parallel::distributed::SolutionTransfer<dim, 
LinearAlgebra::distributed::Vector<double>, hp::DoFHandler<dim>> class and 
I can't seem to use it similarly to the serial version.

I looked through the tests/distributed_grids/solution_transfer_0*.cc tests 
and none of them seem to be testing for the hp refinement. The code I'm 
working on might be a bit large. Instead, I recreated the errors by copying 
the tests/hp/solution_transfer.cc, which contains SolutionTransfer for hp, 
and modified it to use the parallel distributed version of it.

Please find attached a proposed test. Line 160-175 are commented out and 
does the p-refinement. Therefore, the error below is for h-refinement only, 
but with an hp object. Oddly enough, my code works fine for h-refinement, 
but I get the same error below when I do p-refinement.

110: 
110: An error occurred in line <401> of file 
 in 
function
110: void dealii::parallel::distributed::SolutionTransfer::unpack_callback(const typename 
dealii::parallel::distributed::Triangulation::cell_iterator&, typename 
dealii::parallel::distributed::Triangulation::CellStatus, const 
boost::iterator_range<__gnu_cxx::__normal_iterator > > >&, std::vector&) 
[with int dim = 2; VectorType = 
dealii::LinearAlgebra::distributed::Vector; DoFHandlerType = 
dealii::hp::DoFHandler<2, 2>; typename 
dealii::parallel::distributed::Triangulation::cell_iterator = 
dealii::TriaIterator >; typename 
dealii::parallel::distributed::Triangulation::CellStatus = dealii::Triangulation<2, 2>::CellStatus]
110: The violated condition was: 
110: dof_handler->get_fe(fe_index).dofs_per_cell == it_dofvalues->size()
110: Additional information: 
110: The transferred data was packed with a different number of dofs 
than thecurrently registered FE object assigned to the DoFHandler has.
110: 

Now, if I uncomment those lines, and therefore do p-refinement in the 
attached test, I get

110: 
110: An error occurred in line <382> of file 
 in 
function
110: void dealii::parallel::distributed::SolutionTransfer::unpack_callback(const typename 
dealii::parallel::distributed::Triangulation::cell_iterator&, typename 
dealii::parallel::distributed::Triangulation::CellStatus, const 
boost::iterator_range<__gnu_cxx::__normal_iterator > > >&, std::vector&) 
[with int dim = 2; VectorType = 
dealii::LinearAlgebra::distributed::Vector; DoFHandlerType = 
dealii::hp::DoFHandler<2, 2>; typename 
dealii::parallel::distributed::Triangulation::cell_iterator = 
dealii::TriaIterator >; typename 
dealii::parallel::distributed::Triangulation::CellStatus = dealii::Triangulation<2, 2>::CellStatus]
110: The violated condition was: 
110: cell->child(child_index)->active_fe_index() == fe_index
110: Additional information: 
110: This exception -- which is used in many places in the library -- 
usually indicates that some condition which the author of the code thought 
must be satisfied at a certain point in an algorithm, is not fulfilled. An 
example would be that the first part of an algorithm sorts elements of an 
array in ascending order, and a second part of the algorithm later 
encounters an element that is not larger than the previous one.
110: 
110: There is usually not very much you can do if you encounter such an 
exception since it indicates an error in deal.II, not in your own program. 
Try to come up with the smallest possible program that still demonstrates 
the error and contact the deal.II mailing lists with it to obtain help.
110: 

I went through some of the deal.II source code and it's a bit out of my 
depth. Please let me know if I am somehow not p-refining correctly, or if I 
can be of any help to fix this.

Best regards,

Doug

// -
//
// Copyright (C) 1998 - 2018 by the deal.II authors
//
// This file is part of the deal.II library.
//
// The deal.II library is free software; you can use it, redistribute
// it, and/or modify it under the terms of the GNU Lesser General
// Public License as published by the Free Software Foundation; either
// version 2.1 of the License, or (at your option) any later ve

[deal.II] FESystem for non-mixed DG finite element

2019-05-03 Thread Doug
Hello again,

I am now looking into adding vector-valued equations to my current 
framework. The goal will be to do some hp-adaptive Navier-Stokes flow 
simulation using Discontinuous Galerkin. I have read "Handling vector 
valued problems", went through most of the vector-valued examples, and saw 
lecture 19 and 20 from Prof. Bangerth. The common denominator seems to be: 
use FESystem to combine multiple FiniteElement together. 

However, I will not need to use mixed finite elements, and therefore, my 
FiniteElement will always be the same for all components of the solution 
for a given cell. As a result, I would expand my vector solution as a 
linear combination of vector-valued coefficients (degrees of freedom), and 
scalar trial functions. As for the trial function, the vector trial 
function can be taken "component-wise" as explained in step-8. Basically, 
Eq. 7-9 of Bassi and Rebay's paper show how I would only need scalar basis 
functions defined by a single FiniteElement

Bassi, F., and Rebay, S., “High-order accurate discontinuous finite element 
solution of the 2D Euler equations,” *Journal of Computational Physics*, 
vol. 138, 1997, pp. 251–285.


Therefore, it doesn't seem like I would need a FESystem. If anything, the 
FESystem makes the code more complex and less readable in my case, since 
each entry of the vector-valued bases would just be copies of each other. 

Is the use of FESystem somehow simplifying my code in any aspect? 
The two main things I can think of is 

   - Easy access to the "global solution vector" index through the 
   DoFHandler and its re-ordering functions.
   - Performing operator-like operations on similar equations (e.g. 3 
   momentum equations)

Am I missing something? 
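
For concreteness, the two options I am weighing look like this (a sketch; 
degree and nstate are placeholders):

#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_system.h>

template <int dim>
void build_elements(const unsigned int degree)
{
  const unsigned int nstate = dim + 2; // e.g. compressible Euler equations

  // Option 1: a single scalar DG element; the vector-valued nature is
  // handled by hand, with nstate coefficients per scalar basis function.
  dealii::FE_DGQ<dim> fe_scalar(degree);

  // Option 2: an FESystem of nstate identical copies; deal.II then numbers
  // the vector-valued dofs and provides component masks, extractors, etc.
  dealii::FESystem<dim> fe_system(dealii::FE_DGQ<dim>(degree), nstate);

  (void)fe_scalar;
  (void)fe_system;
}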

Thank you again for your time. Hoping I can get up to speed to contribute 
back in the future.

Best regards,

Doug



Re: [deal.II] Non-tensor polynomials with bilinear elements

2019-05-03 Thread Doug
Prof. Bangerth,

Ok, so my initial understanding of deal.II was accurate. It all makes sense 
now. Although FE_DGP spans {1,x,y} in reference space, it does not span 
those bases in physical space. Instead, it spans some combination of 
{1,x,y,xy} with 3 degrees of freedom, which only spans {1,x,y} in physical space 
if the mapping is affine. As a result, a loss of convergence order follows, since 
it cannot properly represent linear functions in physical space.

On the other hand, a bilinear basis on the reference space can indeed represent 
physical linear functions if a bilinear mapping is used! Hence why FE_DGQ 
works. 

If we were working with triangles, the Legendre basis {1,x,y} would be able to 
represent linear functions in physical space since the mapping is automatically 
affine for straight sided elements, hence my wrong assumption that Legendre 
polynomials should produce optimal orders of convergence.

Thank you for the response, it was immensely helpful. 

Doug 



[deal.II] Non-tensor polynomials with bilinear elements

2019-05-02 Thread Doug
Hello,

I am not sure if FE_DGP and FE_DGPMonomial are behaving as expected.

I had recently commented on a thread about distorted meshes (
https://groups.google.com/forum/#!topic/dealii/ZSHIDbp7yfE). I am using DG 
to solve a linear advection problem with a manufactured solution. A 2D 
square with perfectly straight cartesian elements,  MappingQ, FE_DGP 
(Legendre polynomials), would give me optimal (p+1) order of convergence 
with h-refinement.

However, if the h-refinement is followed by a small random distortion 
(through distort_random), the optimal convergence is lost. If a distorted 
mesh is h-refined subsequently (such that the distortion is only on the 
coarse grid), then optimal convergence orders are achieved. As Prof. 
Bangerth mentioned, this refinement results in elements that tend to 
parallelograms, which have a linear Jacobian.

Now, if I use FE_DGQ (tensor Lagrange) or FE_DGQLegendre (tensor-product 
Legendre), I do obtain optimal orders of convergence, no matter the 
distortion.

Based on this, I assume that FEValues, which takes a FiniteElement and a 
Mapping, uses them to evaluate the Jacobian field instead of using a 
bilinear mapping as the documentation of Mapping suggests. So FE_DGP would 
only be able to parametrize the cell geometry with the polynomial space {1, 
x, y} instead of {1, x, y, xy}.

The documentation of FE_DGP does mention something similar to this, but I thought 
this only applied to the solution, not the geometric mapping. I would 
expect the geometric mapping to always use a tensor-product basis in order 
to be able to represent any straight-sided quad, especially since deal.II 
only handles quads. Since FEValues seems to evaluate the Jacobian based on 
the provided FiniteElement, FE_DGP is basically useless since it can only 
handle truly linear elements.

It's not really an issue for me since I will just switch to a 
tensor-product basis. However, I am curious whether or not my understanding 
of what is happening is correct, and if this behaviour is expected.

Best Regards,

Doug



Re: [deal.II] Diffusion equation with DG fails with distorted mesh

2019-04-26 Thread Doug
Hello Gary,

Since we have two independent implementations of DG, I am curious to know if 
you achieve optimal error convergence order of (p+1) on *distorted* grids 
for either linear advection, diffusion, or convection-diffusion. 

Best Regards,

Doug

On Friday, April 26, 2019 at 11:48:58 AM UTC-4, Gary Uppal wrote:
>
> Prof. Bangerth,
>
> Thanks for the reply. I get problems on less distorted meshes as well. 
> For example, in the mesh with a hole shown below.  I eventually need 
> multiple holes and an advection term. Actually, the advection seems ok and 
> I mainly get issues when the diffusion is large. If the distorted mesh is 
> the issue, is there a better way to construct the geometry? Is there not a 
> way to get better accuracy even with the distorted mesh? I've tried 
> refining the mesh and lowering the time step, but that doesn't seem to 
> change the results. 
>
> Thank you,
> Gary   
> [image: with_hole.jpg]
>
> On Fri, 26 Apr 2019 at 04:50, Doug > 
> wrote:
>
>> Prof. Bangerth,
>>
>> I am encountering a similar "issue" regarding distorted grids. I am using 
>> DG for linear advection with a manufactured solution to verify the code.
>>
>> I recover optimal convergence orders (p+1) for straight meshes when 
>> refining in h. I recover optimal orders (p+1) for a sine-transformed grid 
>> with some skewed angles up to around 45 degrees (as seen attached).
>>
>> However, I lose my optimal (p+1) order as soon as I apply any small 
>> amount of random distortion to my final refined grid (see attached). Note 
>> that the distortion factor of 0.15 is used, but this loss of order also 
>> occurs for small distortion factors down to 0.01. Instead of (p+1), I may 
>> recover (p) or even (p-1). This only applies when the final grid is 
>> randomly distorted. If I start from a coarse-ish distorted grid (say 100 
>> cells), and refine globally such that its children aren't distorted, I then 
>> recover my optimal orders again.
>>
>> Since my distorted cells still have reasonable angles no matter their 
>> refinement level, degenerate cells shouldn't be an issue. Also, since I am 
>> not performing adaptive refinement, there are no hanging nodes that would 
>> make the triangulation irregular.
>>
>> Would you happen to know whether some other condition is violated by the 
>> random distortions such that the error estimates no longer hold?
>>
>> Best regards,
>>
>> Doug
>>
>> On Thursday, April 25, 2019 at 11:15:31 PM UTC-4, Wolfgang Bangerth wrote:
>>>
>>> On 4/25/19 2:50 PM, Gary Uppal wrote: 
>>> > 
>>> > I am trying to solve the diffusion equation with Discontinuous 
>>> Galerkin 
>>> > elements. The solution looks good with a regular structured mesh, but 
>>> if I 
>>> > distort the mesh, the solution blows up and does not converge. Is 
>>> there an 
>>> > obvious reason this would happen? I later need a mesh with holes in 
>>> it, so I 
>>> > cannot always use the structured mesh. 
>>> > 
>>> > I use the interior penalty method and get the diffusion matrix using 
>>> > MeshWorker as in Step-39. I compute the mass matrix and solve the 
>>> diffusion 
>>> > equation with an implicit backward Euler method and am using periodic 
>>> boundary 
>>> > conditions. Snapshots of the solution with structured and distorted 
>>> meshes are 
>>> > shown below. Any help is appreciated! 
>>>
>>> Gary -- the wrong solutions happen on cells that are very nearly 
>>> degenerate 
>>> (i.e., are almost triangular). The finite element theory says that the 
>>> error 
>>> between the exact and the numerical solution is bounded by a constant 
>>> times 
>>> some power of h, where the constant depends on the minimal and maximal 
>>> angles 
>>> at the vertices of the cells. Theory then also predicts that this 
>>> constant 
>>> goes to infinity if the maximal angle at one vertex comes close to 180 
>>> degrees 
>>> -- which is exactly what is happening in your case. 
>>>
>>> So choose a mesh that is less distorted and you should be fine. 
>>>
>>> Best 
>>>   W. 
>>>
>>>
>>> -- 
>>>  
>>> Wolfgang Bangerth  email: bang...@colostate.edu 
>>> www: http://www.math.colostate.edu/~bangerth/ 
>>>

Re: [deal.II] FEValues pre-computed values

2019-02-11 Thread Doug
Prof. Bangerth,

Just to clarify: the Jacobian, its inverse transpose, and its determinant 
will be re-computed every time reinit() is called, even if the mesh has not 
changed from the previous time step?

If so, I might end up pre-computing and storing the inverse transpose of the 
Jacobian and the Jacobian determinant to avoid re-computation. However, their 
effect on the total time should be relatively small compared to the flux 
computations. "Premature optimization is the root of all evil."
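A minimal sketch of what that caching could look like, using the deal.II API. 
CellMappingCache and build_mapping_cache are hypothetical names of my own, and 
whether this is worth doing should of course be decided by profiling:

#include <deal.II/base/derivative_form.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/mapping.h>

#include <vector>

// Per-cell mapping data, filled once and reused at every RK stage / time step.
template <int dim>
struct CellMappingCache
{
  std::vector<dealii::DerivativeForm<1, dim, dim>> inverse_jacobians;
  std::vector<double>                              JxW;
};

template <int dim>
std::vector<CellMappingCache<dim>>
build_mapping_cache(const dealii::Mapping<dim>    &mapping,
                    const dealii::DoFHandler<dim> &dof_handler,
                    const dealii::Quadrature<dim> &quadrature)
{
  using namespace dealii;

  FEValues<dim> fe_values(mapping, dof_handler.get_fe(), quadrature,
                          update_inverse_jacobians | update_JxW_values);

  std::vector<CellMappingCache<dim>> cache(
    dof_handler.get_triangulation().n_active_cells());

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      fe_values.reinit(cell); // the expensive part, done only once per cell
      CellMappingCache<dim> &data = cache[cell->active_cell_index()];
      data.inverse_jacobians.resize(quadrature.size());
      data.JxW.resize(quadrature.size());
      for (unsigned int q = 0; q < quadrature.size(); ++q)
        {
          data.inverse_jacobians[q] = fe_values.inverse_jacobian(q);
          data.JxW[q]               = fe_values.JxW(q);
        }
    }
  return cache;
}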

Doug

On Sunday, February 10, 2019 at 2:15:04 PM UTC-5, Wolfgang Bangerth wrote:
>
>
> Doug, 
>
> > I am currently looking into the implementation FEValues. Here is my 
> understanding 
> > 
> >  1. FEValues takes some Mapping, FiniteElement, and Quadrature to 
> evaluate and 
> > store the values with the required UpdateFlags. 
> >  2. FEValues::initialize through the FiniteElement::get_data will then 
> > pre-compute all the required data that are possibly computable on 
> the 
> > reference cell. 
> >  3. FEValues::reinit will then compute/fetch the data on a physical 
> cell, 
> > using the Jacobian information by calling the various 
> fill_fe_values() 
> > functions. 
> > 
> > I see that most of the examples use implicit methods to solve the 
> various 
> > problems. Therefore, the cost of initialize() and reinit() on each cell 
> is 
> > quite low compared to solving the actual system at every step. However, 
> for 
> > explicit methods such as RK, we wouldn't want to re-compute the Jacobian 
> or 
> > reference gradients (\nabla_{physical} \phi = J^{-T} \nabla _{reference} 
> \phi) 
> > at every time step. Therefore, the FEValues declaration would be outside 
> the 
> > "assemble_system()" seen in most tutorials, and passed as an argument as 
> > "assemble_system(FEValues fe_v)". Unfortunately, I am getting lost in 
> the code 
> > where values are pre-computed versus where they are fetched. 
>
> I think that's not necessary. The creation of an FEValues object is 
> expensive, 
> but on meshes with a non-trivial number of cells, the creation of the 
> FEValues 
> object is still cheap compared to what you're going to do with the object 
> on 
> the cells. (I.e., it is expensive compared to the operations on one cell; 
> but 
> the creation is cheap compared to the operations on *all* cells.) 
>
>
> > My question is: What is being re-computed every time I call 
> > FEValues::reinit(cell)? 
>
> initialize() computes everything that can be computed on the reference 
> cell. 
> This includes the values and gradients of the shape functions with regard 
> to 
> the reference coordinates. 
>
> reinit() then only computes everything that depends on the actual cell. 
> This 
> means for example computing the Jacobian of the mapping on a given cell, 
> and 
> multiplying it by the (precomputed) gradients of the shape functions on 
> the 
> reference cell to obtain the gradient on the real cell. 
>
>
> > I am mainly worried about it re-computing Jacobians (its determinant and 
> its 
> > inverse) and evaluating its product with the reference gradients 
> > (\nabla_{reference} phi) every time reinit(cell) is called. Otherwise, 
> the 
> > solution will be to pre-compute and store those outside in some other 
> > container and avoid using FEValues. 
>
> Yes. This (the Jacobian of the mapping and its determinant) is a quantity 
> that 
> changes from each cell to the next, and so it is computed in reinit() 
> because 
> it simply can't be precomputed without knowledge of the actual cell. 
>
> In explicit methods, you will likely visit the same mesh over and over, 
> and so 
> it does make sense to compute things such as the gradient of the shape 
> functions on each cell (rather than the Jacobian or its determinant, which 
> goes into the computation) once at the beginning of the program and then 
> re-use it where necessary. You can of course do this pre-computation using 
> FEValues :-) 
>
> An alternative is the matrix-free framework that can do some of these 
> things 
> as well. You may want to look at the corresponding example programs. 
>
> Best 
>   W. 
>
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



[deal.II] FEValues pre-computed values

2019-02-08 Thread Doug
Hello again,

I am currently looking into the implementation of FEValues. Here is my 
understanding:

   1. FEValues takes some Mapping, FiniteElement, and Quadrature to 
   evaluate and store the values with the required UpdateFlags.
   2. FEValues::initialize, through FiniteElement::get_data, will then 
   pre-compute all the required data that can be computed on the 
   reference cell.
   3. FEValues::reinit will then compute/fetch the data on a physical cell, 
   using the Jacobian information by calling the various fill_fe_values() 
   functions.

I see that most of the examples use implicit methods to solve the various 
problems. Therefore, the cost of initialize() and reinit() on each cell is 
quite low compared to solving the actual system at every step. However, for 
explicit methods such as RK, we wouldn't want to re-compute the Jacobian or 
the reference gradients (\nabla_{physical} \phi = J^{-T} \nabla_{reference} 
\phi) at every time step. Therefore, the FEValues declaration would sit 
outside the "assemble_system()" seen in most tutorials and be passed in as 
an argument, e.g. "assemble_system(FEValues fe_v)". Unfortunately, I am 
getting lost in the code as to where values are pre-computed versus where 
they are fetched.
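A structural sketch of that arrangement (the function and variable names are 
hypothetical, and the update flags are just examples):

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_values.h>

template <int dim>
void assemble_residual(const dealii::DoFHandler<dim> &dof_handler,
                       dealii::FEValues<dim>         &fe_values /*, ... */)
{
  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      fe_values.reinit(cell); // per-cell mapping data is recomputed here
      // ... accumulate the local residual using fe_values ...
    }
}

// Usage, assuming mapping, fe, quadrature, and dof_handler already exist:
//   dealii::FEValues<dim> fe_values(mapping, fe, quadrature,
//                                   dealii::update_gradients |
//                                     dealii::update_JxW_values);
//   for (unsigned int step = 0; step < n_steps; ++step)
//     assemble_residual(dof_handler, fe_values);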

My question is: What is being re-computed every time I call 
FEValues::reinit(cell)?

I am mainly worried about it re-computing Jacobians (its determinant and 
its inverse) and evaluating its product with the reference gradients 
(\nabla_{reference} phi) every time reinit(cell) is called. Otherwise, the 
solution will be to pre-compute and store those outside in some other 
container and avoid using FEValues.

Doug



Re: [deal.II] Automatic Differentiation of the Finite Element Jacobian

2019-02-08 Thread Doug
Thank you both for the amazingly quick reply.

We will very likely implement this at some point in the next few months. We 
have quite a few basics to implement first, since we have just started 
reading through the documentation. We plan on using the deal.II library as 
the workhorse behind an hp-adaptive, parallel flow solver used for 
aerodynamic shape optimization. The derivatives with respect to the mesh 
points will be needed for optimization purposes.

I see that in a previous thread 
(https://groups.google.com/forum/#!searchin/dealii/tpetra%7Csort:date/dealii/I8aLPu5anrM/gBzaaqivDAAJ), 
you plan on switching over to Tpetra, which will also be useful for us 
since we might instantiate those vectors with AD types. Do you have an idea 
of the time frame over which this will be developed?

Doug


On Wednesday, February 6, 2019 at 9:22:17 AM UTC-5, Wolfgang Bangerth wrote:
>
>
> Doug, 
> in addition to what J-P already wrote: 
>
> > We had chosen to use deal.ii since most of it looked templated in terms 
> of 
> > accepting the float type. However, I noticed that the FEValuesBase 
> returns a 
> > 'double', instead of a generic 'value_type' for the Jacobian term. 
>
> That depends on the context. 
>
> Shape functions are indeed defined as phi(x) returning simply a double. 
> That's 
> because of an underlying assumption that x is a location parameterized as 
> doubles. So functions such as shape_value(), shape_grad(), etc really just 
> return doubles. (This includes the Jacobian of the shape functions, i.e., 
> their gradients.) In other words, it's currently not possible to do 
> autodifferentiation of the shape functions with regard to the position x. 
>
> But functions such as get_function_gradient() which compute quantities 
> such as 
>
>u_h(x) = \sum_j U_j phi(x_j) 
>
> are templatized on the data type of the vector U of unknowns. So they 
> should 
> allow computing Jacobians of u_h with regard to the coefficients U_j. This 
> is 
> what one needs to do to compute residual vectors from energy functionals, 
> and 
> Hessian matrices from residual vectors. 
>
> The latter functionality is implemented and routinely used. In other 
> words... 
>
>
> > Our goal would be to differentiate through the entire residual (and 
> possibly 
> > the time step) with respect to the solution variables and *mesh points*. 
>
> ...the former of this works, whereas the latter probably does not. 
>
>
> As J-P already mentioned, the reason for all of this is not a conscious 
> design 
> decision, but that nobody who needed this in the past has implemented it. 
> We'd 
> be quite happy to accept patches in this direction. It is often useful if 
> you 
> posted an example of what you want to do and discussed implementation 
> strategies before starting the work -- this makes merging this 
> functionality 
> later on easier. I see that you've read through a lot of code already, 
> though, 
> so you'll have a good sense of what it'll take. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>
>



[deal.II] Automatic Differentiation of the Finite Element Jacobian

2019-02-05 Thread Doug
Hello,

We chose deal.II since most of it appeared to be templated on the 
floating-point type. However, I noticed that FEValuesBase returns a 
'double' instead of a generic 'value_type' for the Jacobian term.

Our goal would be to differentiate through the entire residual (and 
possibly the time step) with respect to the solution variables and *mesh 
points*. Therefore, the general element Jacobian (and its determinant) 
would need to be differentiated. I see that the Jacobian entries already 
have differentiation functions; however, that would entail manually 
assembling the derivative of the residual with respect to the mesh points. 
We would instead like a function that assembles the residual right-hand 
side, and to pass AD types to that high-level function to obtain dRdX.
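To make the intent concrete, here is a small self-contained toy (deliberately 
not using deal.II, so every name in it is illustrative) that does the "pass AD 
types through the residual assembly" step with a hand-rolled forward-mode dual 
number; in our setting the seeds would be the solution coefficients and, 
eventually, the mesh points:

#include <array>
#include <cstddef>
#include <iostream>

// A forward-mode dual number carrying d(value)/d(seed) for one seed variable.
struct Dual
{
  double value = 0.0;
  double deriv = 0.0;
};

Dual operator+(const Dual &a, const Dual &b)
{
  return {a.value + b.value, a.deriv + b.deriv};
}
Dual operator*(const double a, const Dual &b)
{
  return {a * b.value, a * b.deriv};
}

int main()
{
  // 2-point Gauss quadrature on [0,1] and linear shape functions.
  const std::array<double, 2> quad_x = {0.2113248654, 0.7886751346};
  const std::array<double, 2> quad_w = {0.5, 0.5};
  const auto phi = [](const std::size_t i, const double x) {
    return (i == 0) ? 1.0 - x : x;
  };

  const std::array<double, 2> U_values = {1.0, 3.0}; // local coefficients

  // dR_i/dU_j: seed U_j (deriv = 1), evaluate R_i, read off the derivative.
  // Here R_i(U) = sum_q w_q phi_i(x_q) u_h(x_q), so dR_i/dU_j is a local
  // mass matrix entry; the exact values are 1/3 on and 1/6 off the diagonal.
  for (std::size_t j = 0; j < 2; ++j)
    for (std::size_t i = 0; i < 2; ++i)
      {
        std::array<Dual, 2> U = {{{U_values[0], 0.0}, {U_values[1], 0.0}}};
        U[j].deriv = 1.0;

        Dual R_i;
        for (std::size_t q = 0; q < 2; ++q)
          {
            Dual u_h; // u_h(x_q) = sum_k U_k phi_k(x_q)
            for (std::size_t k = 0; k < 2; ++k)
              u_h = u_h + phi(k, quad_x[q]) * U[k];
            R_i = R_i + quad_w[q] * (phi(i, quad_x[q]) * u_h);
          }
        std::cout << "dR_" << i << "/dU_" << j << " = " << R_i.deriv << '\n';
      }
}

In practice one would use a proper AD package (e.g. Sacado) rather than a 
hand-written dual type, but the flow of seeding the inputs and reading the 
derivatives off the assembled residual is the same.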

Looking at the current implementation in fe_values, mapping_q_generic, and 
the local integrators, it looks relatively easy to template the value type, 
since it is already partially templated.

Is there a reason why those functions were not templated? Is there anything 
we should be careful about before we start templating those functions?

Doug
