Re: [deal.II] Re: Flag cells to be refined more than one levels?

2023-11-15 Thread Peter Munch
Hi Greg,

Good that you found a working solution! However, I would suggest against 
using `VectorTools::interpolate_to_different_mesh`; instead, use 
`SolutionTransfer` or `parallel::distributed::SolutionTransfer` to move the 
refinement information from a coarse mesh to a fine mesh. You can take a 
look at the tutorials to see how these classes are used.
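A minimal sketch of that pattern (untested; it assumes a parallel::distributed::Triangulation<dim> named `tria` and a `levels` vector holding one value per active cell, and the exact class layout differs slightly between deal.II versions):

```cpp
#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/lac/la_parallel_vector.h>

using namespace dealii;

using VectorType = LinearAlgebra::distributed::Vector<double>;

// Store the per-cell refinement information as a piecewise-constant field.
DoFHandler<dim> dof_handler(tria);
dof_handler.distribute_dofs(FE_DGQ<dim>(0));

parallel::distributed::SolutionTransfer<dim, VectorType> transfer(dof_handler);

tria.prepare_coarsening_and_refinement();
transfer.prepare_for_coarsening_and_refinement(levels);
tria.execute_coarsening_and_refinement();

// Redistribute DoFs on the refined mesh and pull the data across.
dof_handler.distribute_dofs(FE_DGQ<dim>(0));
VectorType new_levels(dof_handler.locally_owned_dofs(), tria.get_communicator());
transfer.interpolate(new_levels);
```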

Hope this helps,
Peter
On Saturday, 11 November 2023 at 02:24:23 UTC+1 Wolfgang Bangerth wrote:

> On 11/10/23 17:43, Yuan Wang wrote:
> > And I just found that the bug comes from feeding Vector<int> to
> > interpolate_to_different_mesh() while int is not a valid
> > VectorType::value_type therein. Changing int to double fixes it.
>
> Yes, that is right. We do not seem to enforce this, but we generally 
> expect Vector<T> to only be instantiated for T = some floating point type.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/3e1b26f7-73fe-45de-922b-81d9fa0d38e3n%40googlegroups.com.


[deal.II] Re: MGTransferGlobalCoarsening with Nedelec element

2023-11-15 Thread Peter Munch
Hi Ce Qin,

There is a WIP PR for adding a fallback path to MGTransferMF: 
https://github.com/dealii/dealii/pull/12924. The PR is already 2 years old 
and has merge conflicts with master. But I guess the changes would be 
similar on master.

Best,
Peter

On Wednesday, 15 November 2023 at 08:30:00 UTC+1 qinc...@gmail.com wrote:

> Dear all,
>
> I want to implement a Multigrid solver with global coarsening for 
> Maxwell's equations.
> However, upon reviewing the documentation for MGTransferMF, it appears that
> non-primitive elements are not currently supported. Is there any way to 
> implement
> global coarsening MG solver with the Nedelec element?
>
> Thanks in advance.
>
> Best regards,
> Ce Qin
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/55745114-59e1-442d-8da0-761ea3bb40d2n%40googlegroups.com.


[deal.II] Re: Flag cells to be refined more than one levels?

2023-11-08 Thread Peter Munch
Hi Greg,

I guess what you could do is to treat the vector as a vector associated 
with a DoFHandler using FE_DGQ(0). During each refinement you "interpolate" 
the vector to the new mesh and decrease the values by one. This approach 
naturally extends to a parallel setting.
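An illustrative sketch of that loop (untested; `levels` is the FE_DGQ(0) vector of remaining refinement levels, `max_level` its maximum entry, and `refine_and_transfer()` a user-supplied helper that refines the flagged cells and carries `levels` to the new mesh, e.g. via SolutionTransfer):

```cpp
#include <algorithm>

// One pass per remaining level of refinement.
for (unsigned int step = 0; step < max_level; ++step)
  {
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        {
          std::vector<types::global_dof_index> dof_index(1); // FE_DGQ(0): 1 DoF
          cell->get_dof_indices(dof_index);
          if (levels(dof_index[0]) > 0.5) // cell still has refinement budget
            cell->set_refine_flag();
        }

    // User-supplied helper: refine and move `levels` to the new mesh.
    refine_and_transfer(triangulation, dof_handler, levels);

    // Every cell is now one level closer to its target.
    for (auto &v : levels)
      v = std::max(v - 1.0, 0.0);
  }
```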

Alternatively, you could create a single vector associated with the coarse 
cells, and while looping over the active cells of the refined mesh you would 
recursively call the parent cell until you reach the coarsest cell, whose id 
you can use to access the vector. In the parallel setting, the vector 
associated with the coarse cells would need to be replicated between processes.
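A sketch of that parent walk (untested; `coarse_cell_data` is an illustrative replicated vector with one entry per level-0 cell):

```cpp
// One entry per coarse cell, replicated on all ranks.
std::vector<double> coarse_cell_data(triangulation.n_cells(0));

for (const auto &cell : triangulation.active_cell_iterators())
  if (cell->is_locally_owned())
    {
      // Walk up the refinement hierarchy to the level-0 ancestor.
      typename Triangulation<dim>::cell_iterator coarse = cell;
      while (coarse->level() > 0)
        coarse = coarse->parent();

      const double value = coarse_cell_data[coarse->index()]; // coarse-cell id
      // ... use `value` for this active cell ...
    }
```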

Hope this helps,
PM

On Thursday, 9 November 2023 at 03:32:42 UTC+1 ygre...@gmail.com wrote:

> Dear all,
>
> This might be a somewhat odd feature request but I was wondering whether 
> the functionality described below can be readily achieved in deal.II 
> already:
>
> Say we are given a coarse mesh with every cell never refined, and a vector 
> of non-negative integers associated with each cell. The integer is the 
> level of isotropic refinement to be applied to each cell. The number can be 
> 0, which means not doing anything here; it can be 1, which corresponds to 
> what set_refine_flag() does. But it can also be any number bigger than 1. 
> Say we have a quadrilateral cell and the number associated with it is 2, 
> then the cell is to be refined isotropically once into four children and 
> then every child is refined again, resulting in 16 grandchildren.
>
> If we were to hard code this, would it be sound to simply rely on 
> set_refine_flag() and execute the refinement one level after another for 
> M levels, with M being the maximum number of the given vector? And perhaps 
> to do this we would need iterators triangulation.begin(n), 
> triangulation.end(n) and isotropic_child() involved somehow?
>
> Any help and/or pro tips would be greatly appreciated!
>
> Best regards,
> Yuan
> ​
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/78e80d9e-8b6f-4a99-beee-5f2df247383dn%40googlegroups.com.


Re: [deal.II] Re: step-37 multiple DoFHandler's smoother problems

2023-06-27 Thread Peter Munch
OK. The problem is these lines: 
https://github.com/dealii/dealii/blob/8d68b3192ed1c8b928dfce03ca1d615201859904/include/deal.II/matrix_free/operators.h#L1225-L1235. 
The number of rows is the sum of the number of DoFs of both attached 
DoFHandlers...

Personally, I would not use these matrix-free operators but create the 
operator myself around a MatrixFree object, just like in step-75. Would 
that be possible?
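In that spirit, a stripped-down operator around a MatrixFree object might look roughly like this (a sketch only, loosely modeled on step-75; class and member names are illustrative):

```cpp
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/matrix_free/matrix_free.h>

using namespace dealii;

template <int dim, typename Number>
class MyLaplaceOperator
{
public:
  using VectorType = LinearAlgebra::distributed::Vector<Number>;

  void reinit(const Mapping<dim>              &mapping,
              const DoFHandler<dim>           &dof_handler,
              const AffineConstraints<Number> &constraints,
              const Quadrature<1>             &quadrature)
  {
    matrix_free.reinit(mapping, dof_handler, constraints, quadrature);
  }

  void vmult(VectorType &dst, const VectorType &src) const
  {
    // Last argument: zero dst before accumulating.
    matrix_free.cell_loop(&MyLaplaceOperator::local_apply, this, dst, src, true);
  }

private:
  void local_apply(const MatrixFree<dim, Number>               &data,
                   VectorType                                  &dst,
                   const VectorType                            &src,
                   const std::pair<unsigned int, unsigned int> &cell_range) const
  {
    FEEvaluation<dim, -1, 0, 1, Number> phi(data); // run-time polynomial degree
    for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
      {
        phi.reinit(cell);
        phi.gather_evaluate(src, EvaluationFlags::gradients);
        for (unsigned int q = 0; q < phi.n_q_points; ++q)
          phi.submit_gradient(phi.get_gradient(q), q);
        phi.integrate_scatter(EvaluationFlags::gradients, dst);
      }
  }

  MatrixFree<dim, Number> matrix_free;
};
```

Writing the operator yourself also lets you control which of several attached DoFHandlers the operator acts on, which is exactly where the MatrixFreeOperators classes run into trouble here.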

Out of curiosity, what is your goal?

Best,
PM


On Tuesday, June 27, 2023 at 7:38:48 PM UTC+2 sljoh...@gmail.com wrote:

> Here is step-37 with minimal changes. All that was changed was in the 
> creation of a DG DoFHandler and the fe_system and constrains and what not 
> to go along with it. Then of the functions only the LapalceProblem 
> setup_system() function was altered and nothing else. It was just altered 
> to also be reinit with the dof_handler_DG.
>
> Obviously the LaplaceOperator in my actual code actually uses variables 
> that depend on the DG DoFHandler but that was left out of this example.
>
> Thank you again so much for your time and for all you guys do for this 
> library.
>
> Thanks,
> Sean Johnson
>
> On Tuesday, June 27, 2023 at 10:58:53 AM UTC-6 Sean Johnson wrote:
>
>> Yes of course. I can get one with minimal changes made to step-37. Full 
>> code do you want the entirety of the changed step-37 code or just the 
>> LaplaceProblem class or just the setup_system() function?
>>
>> On Tuesday, June 27, 2023 at 10:53:04 AM UTC-6 peterr...@gmail.com wrote:
>>
>>> I'll try to take a look at the issue tomorrow. Could you attach the full 
>>> code. I have a feeling what might be an issue. 
>>>
>>> Peter
>>>
>>> On Tue, Jun 27, 2023, 18:18 Sean Johnson  wrote:
>>>
 Okay. Thank you.

 On Tuesday, June 27, 2023 at 10:02:16 AM UTC-6 Wolfgang Bangerth wrote:

> On 6/27/23 09:53, Sean Johnson wrote: 
> > I keep seeing other questions getting at least one response. Is 
> there 
> > something else that I should provide in addition to the question or 
> that is 
> > missing or is there somewhere this question has already been 
> addressed that I 
> > cannot find? I am really all ears 
>
> Sean, it's all about who has the expertise to answer questions and who 
> has the 
> time. I don't know anything about matrix-free stuff, so your questions 
> require 
> someone else to have the time and expertise. 
>
> Best 
> W. 
>
> -- 
>  
>
> Wolfgang Bangerth email: bang...@colostate.edu 
> www: http://www.math.colostate.edu/~bangerth/ 
>
>
> -- 
 The deal.II project is located at http://www.dealii.org/
 For mailing list/forum options, see 
 https://groups.google.com/d/forum/dealii?hl=en
 --- 
 You received this message because you are subscribed to the Google 
 Groups "deal.II User Group" group.
 To unsubscribe from this group and stop receiving emails from it, send 
 an email to dealii+un...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/dealii/dab3cc77-bd64-445a-82f5-da55b00a812en%40googlegroups.com
  
 
 .

>>>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/a8ec30df-c93d-46c6-a971-cbbe3e52b299n%40googlegroups.com.


Re: [deal.II] Re: step-37 multiple DoFHandler's smoother problems

2023-06-27 Thread Peter Munch
I'll try to take a look at the issue tomorrow. Could you attach the full
code. I have a feeling what might be an issue.

Peter

On Tue, Jun 27, 2023, 18:18 Sean Johnson  wrote:

> Okay. Thank you.
>
> On Tuesday, June 27, 2023 at 10:02:16 AM UTC-6 Wolfgang Bangerth wrote:
>
>> On 6/27/23 09:53, Sean Johnson wrote:
>> > I keep seeing other questions getting at least one response. Is there
>> > something else that I should provide in addition to the question or
>> that is
>> > missing or is there somewhere this question has already been addressed
>> that I
>> > cannot find? I am really all ears
>>
>> Sean, it's all about who has the expertise to answer questions and who
>> has the
>> time. I don't know anything about matrix-free stuff, so your questions
>> require
>> someone else to have the time and expertise.
>>
>> Best
>> W.
>>
>> --
>> 
>> Wolfgang Bangerth email: bang...@colostate.edu
>> www: http://www.math.colostate.edu/~bangerth/
>>
>>
>> --
> The deal.II project is located at http://www.dealii.org/
> For mailing list/forum options, see
> https://groups.google.com/d/forum/dealii?hl=en
> ---
> You received this message because you are subscribed to the Google Groups
> "deal.II User Group" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to dealii+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/dealii/dab3cc77-bd64-445a-82f5-da55b00a812en%40googlegroups.com
> 
> .
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/CAHw3Rk7sp_B7Rh5dQWk6BJ%3DBuxA2hJZoX35CbP1tOi7XRBQNaQ%40mail.gmail.com.


[deal.II] Re: Could not find partitioner that fits vector

2023-05-07 Thread Peter Munch
Thanks!
Peter

On Sunday, 7 May 2023 at 09:27:34 UTC+2 yy.wayne wrote:

> I'll try add build function to  MGTransferBlockMatrixFree  then.
> Thank you.
>
> Best,
> Wayne
>
> 在2023年5月7日星期日 UTC+8 15:23:12 写道:
>
>> >  I wonder will the unmatched partitioners occur even coding is correct?
>>
>> Yes. The requirements of MG and the level operators are not identical. If 
>> you don't do anything special, the vector are set up for MG and not the 
>> level operators. With the help of external partitioners you can fix this 
>> "problem".
>>
>> I guess you could introduce a new 
>> function MGTransferBlockMatrixFree::build(). Shouldn't to be too much work, 
>> since all you need to do is to delegate the task 
>> to MGTransferMatrixFree::build(). Happy to accept a patch!
>>
>> Peter
>>
>> On Sunday, 7 May 2023 at 08:18:37 UTC+2 yy.wayne wrote:
>>
>>> Thank you Peter,
>>>
>>> Currently I'm using MGTransferBlockMatrixFree where build with external 
>>> partitioners is not supported.
>>> But before moving to modify Block mg transfers, I wonder will the 
>>> unmatched partitioners occur even coding is correct?
>>> My problem is preconditioned with global coarsening h-MG, with 6 dofs 
>>> per node for real and imaginary vectors in 3d,
>>> BlockVectors are applied except for trilinos direct solver on coarsest 
>>> level, where only Vectors rather than BlockVector 
>>> is supported. Transfer between Vector and BlockVector is a potential 
>>> risk though.
>>>
>>> Best,
>>> Wayne
>>>
>>> 在2023年5月6日星期六 UTC+8 16:41:40 写道:
>>>
 Hi Wayne,

 my best guess it that the partitioners in the multigrid operator and 
 the matrix-free operators not match, with the result that a vector with 
 wrong ghosting coming from the multigrid operator is used as src/dst in 
 the 
 matrix-free loop resulting in the assert you show. Previously, one would 
 need to fix the the ghosting in the operator (see 
 https://github.com/dealii/dealii/blob/a091f25c5ff77687fbd8a65f5b336b235f201100/include/deal.II/matrix_free/operators.h#L489-L498)
  
 but nowadays one can pass in external partitioners to MG: see, e.g., 
 https://github.com/peterrum/dealii-multigrid/blob/c50581883c0dbe35c83132699e6de40da9b1b255/multigrid_throughput.cc#L1802.
  
 Similarly one can do this in the context of global-coarsening MG.

 Hope this helps!

 Peter

 On Saturday, 6 May 2023 at 10:22:58 UTC+2 yy.wayne wrote:

> It might fails too for 1st basis order too ...
> But all fails occur when mpi process > 2. If I run with only 2 mpi 
> processes, no error shows.
>
> 在2023年5月6日星期六 UTC+8 15:22:23 写道:
>
>> Hello my friends,
>>
>> I'm testing a matrix-free code that, preconditioned by a geometric 
>> multigrid.
>> The code is not stable that it fails for some fe_degree + 
>> coarse_level + total_mesh levels
>> combination. 
>> For example, any n_mpi_process + 1st order basis elements run 
>> succesfully.
>> But for 2nd order basis, min_level=0 + total_levels=3(*0*,1 and 2) 
>> or min_level=1 + total_levels=5(0,*1*,2,3 and 4) fails *during 
>> SolverGMRES.solve()*. 
>> The error in debug mode is:
>> 
>> An error occurred in line <3403> of file 
>> 
>>  
>> in function
>> unsigned int dealii::internal::VectorDataExchange<dim, Number,
>> VectorizedArrayType>::find_vector_in_mf(const VectorType&, bool) const
>> [with VectorType = dealii::LinearAlgebra::distributed::Vector<double>;
>> int dim = 3; Number = double; VectorizedArrayType =
>> dealii::VectorizedArray<double>]
>> The violated condition was: 
>> false
>> Additional information: 
>> Could not find partitioner that fits vector
>>
>> It might be my own coding mistake. I'm using 9.3.0 may upgrade dealii 
>> to 9.4.0 help ?
>>
>>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/207df089-3f6c-4b84-8747-c4c91e5712b3n%40googlegroups.com.


[deal.II] Re: Could not find partitioner that fits vector

2023-05-07 Thread Peter Munch
>  I wonder will the unmatched partitioners occur even coding is correct?

Yes. The requirements of MG and the level operators are not identical. If 
you don't do anything special, the vectors are set up for MG and not for 
the level operators. With the help of external partitioners you can fix 
this "problem".

I guess you could introduce a new 
function MGTransferBlockMatrixFree::build(). It shouldn't be too much work, 
since all you need to do is to delegate the task 
to MGTransferMatrixFree::build(). Happy to accept a patch!
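For reference, the suggested delegation could look something like this hypothetical sketch (the member name `matrix_free_transfer_vector` is a guess at deal.II internals, not actual code):

```cpp
// Hypothetical build() overload for MGTransferBlockMatrixFree that forwards
// external partitioners to the scalar transfer it wraps. Member and
// parameter names are guesses for illustration only.
template <int dim, typename Number>
void MGTransferBlockMatrixFree<dim, Number>::build(
  const dealii::DoFHandler<dim> &dof_handler,
  const std::vector<std::shared_ptr<const dealii::Utilities::MPI::Partitioner>>
    &external_partitioners)
{
  // Delegate the actual work to the non-block transfer.
  this->matrix_free_transfer_vector[0].build(dof_handler, external_partitioners);
}
```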

Peter

On Sunday, 7 May 2023 at 08:18:37 UTC+2 yy.wayne wrote:

> Thank you Peter,
>
> Currently I'm using MGTransferBlockMatrixFree where build with external 
> partitioners is not supported.
> But before moving to modify Block mg transfers, I wonder will the 
> unmatched partitioners occur even coding is correct?
> My problem is preconditioned with global coarsening h-MG, with 6 dofs per 
> node for real and imaginary vectors in 3d,
> BlockVectors are applied except for trilinos direct solver on coarsest 
> level, where only Vectors rather than BlockVector 
> is supported. Transfer between Vector and BlockVector is a potential risk 
> though.
>
> Best,
> Wayne
>
> 在2023年5月6日星期六 UTC+8 16:41:40 写道:
>
>> Hi Wayne,
>>
>> my best guess it that the partitioners in the multigrid operator and the 
>> matrix-free operators not match, with the result that a vector with wrong 
>> ghosting coming from the multigrid operator is used as src/dst in the 
>> matrix-free loop resulting in the assert you show. Previously, one would 
>> need to fix the the ghosting in the operator (see 
>> https://github.com/dealii/dealii/blob/a091f25c5ff77687fbd8a65f5b336b235f201100/include/deal.II/matrix_free/operators.h#L489-L498)
>>  
>> but nowadays one can pass in external partitioners to MG: see, e.g., 
>> https://github.com/peterrum/dealii-multigrid/blob/c50581883c0dbe35c83132699e6de40da9b1b255/multigrid_throughput.cc#L1802.
>>  
>> Similarly one can do this in the context of global-coarsening MG.
>>
>> Hope this helps!
>>
>> Peter
>>
>> On Saturday, 6 May 2023 at 10:22:58 UTC+2 yy.wayne wrote:
>>
>>> It might fails too for 1st basis order too ...
>>> But all fails occur when mpi process > 2. If I run with only 2 mpi 
>>> processes, no error shows.
>>>
>>> 在2023年5月6日星期六 UTC+8 15:22:23 写道:
>>>
 Hello my friends,

 I'm testing a matrix-free code that, preconditioned by a geometric 
 multigrid.
 The code is not stable that it fails for some fe_degree + coarse_level 
 + total_mesh levels
 combination. 
 For example, any n_mpi_process + 1st order basis elements run 
 succesfully.
 But for 2nd order basis, min_level=0 + total_levels=3(*0*,1 and 2) 
 or min_level=1 + total_levels=5(0,*1*,2,3 and 4) fails *during 
 SolverGMRES.solve()*. 
 The error in debug mode is:
 
 An error occurred in line <3403> of file 
  
 in function
 unsigned int dealii::internal::VectorDataExchange<dim, Number,
 VectorizedArrayType>::find_vector_in_mf(const VectorType&, bool) const
 [with VectorType = dealii::LinearAlgebra::distributed::Vector<double>; int
 dim = 3; Number = double; VectorizedArrayType =
 dealii::VectorizedArray<double>]
 The violated condition was: 
 false
 Additional information: 
 Could not find partitioner that fits vector

 It might be my own coding mistake. I'm using 9.3.0 may upgrade dealii 
 to 9.4.0 help ?



To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/f4918777-9579-485a-8c81-de835cc08ef1n%40googlegroups.com.


[deal.II] Re: Could not find partitioner that fits vector

2023-05-06 Thread Peter Munch


Hi Wayne,

my best guess is that the partitioners in the multigrid operator and the 
matrix-free operators do not match, with the result that a vector with 
wrong ghosting coming from the multigrid operator is used as src/dst in the 
matrix-free loop, resulting in the assert you show. Previously, one would 
need to fix the ghosting in the operator (see 
https://github.com/dealii/dealii/blob/a091f25c5ff77687fbd8a65f5b336b235f201100/include/deal.II/matrix_free/operators.h#L489-L498) 
but nowadays one can pass in external partitioners to MG: see, e.g., 
https://github.com/peterrum/dealii-multigrid/blob/c50581883c0dbe35c83132699e6de40da9b1b255/multigrid_throughput.cc#L1802. 
Similarly, one can do this in the context of global-coarsening MG.
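An untested sketch of the "external partitioners" idea (`level_matrix_free`, `min_level`, `max_level`, and `mg_constrained_dofs` are illustrative names for objects assumed to exist in the application):

```cpp
// Collect the partitioners that the matrix-free level operators actually
// use and hand them to the transfer, so that MG and the operators agree
// on the ghosting of the level vectors.
std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>> partitioners;
for (unsigned int level = min_level; level <= max_level; ++level)
  partitioners.push_back(
    level_matrix_free[level]->get_dof_info().vector_partitioner);

MGTransferMatrixFree<dim, Number> transfer(mg_constrained_dofs);
transfer.build(dof_handler, partitioners);
```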

Hope this helps!

Peter

On Saturday, 6 May 2023 at 10:22:58 UTC+2 yy.wayne wrote:

> It might fails too for 1st basis order too ...
> But all fails occur when mpi process > 2. If I run with only 2 mpi 
> processes, no error shows.
>
> 在2023年5月6日星期六 UTC+8 15:22:23 写道:
>
>> Hello my friends,
>>
>> I'm testing a matrix-free code that, preconditioned by a geometric 
>> multigrid.
>> The code is not stable that it fails for some fe_degree + coarse_level + 
>> total_mesh levels
>> combination. 
>> For example, any n_mpi_process + 1st order basis elements run succesfully.
>> But for 2nd order basis, min_level=0 + total_levels=3(*0*,1 and 2) 
>> or min_level=1 + total_levels=5(0,*1*,2,3 and 4) fails *during 
>> SolverGMRES.solve()*. 
>> The error in debug mode is:
>> 
>> An error occurred in line <3403> of file 
>>  
>> in function
>> unsigned int dealii::internal::VectorDataExchange<dim, Number,
>> VectorizedArrayType>::find_vector_in_mf(const VectorType&, bool) const
>> [with VectorType = dealii::LinearAlgebra::distributed::Vector<double>; int
>> dim = 3; Number = double; VectorizedArrayType =
>> dealii::VectorizedArray<double>]
>> The violated condition was: 
>> false
>> Additional information: 
>> Could not find partitioner that fits vector
>>
>> It might be my own coding mistake. I'm using 9.3.0 may upgrade dealii to 
>> 9.4.0 help ?
>>
>>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/cf899629-f7b0-4ddc-9e7b-bd4ef4574f4fn%40googlegroups.com.


Re: [deal.II] Re: exadg installation error ‘class dealii::CellAccessor<3, 3>’ has no member named ‘as_dof_handler_iterator’

2023-04-13 Thread Peter Munch
As stated above, this is not a deal.II issue. The function has been 
introduced since the last release and is only part of the master branch. I 
have added some ifdefs in https://github.com/exadg/exadg/pull/402. However, 
I am not sure if this is enough to make the code compatible with deal.II 9.4...
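Such a guard could look roughly like this (a sketch, not the actual PR content; the version number to compare against and all identifiers are illustrative):

```cpp
// Illustrative version guard; DEAL_II_VERSION_GTE is deal.II's
// compile-time version-check macro.
#if DEAL_II_VERSION_GTE(9, 5, 0)
  // New API, only available on recent versions.
  face_pair_dof_handler.cell[0] =
    it.cell[0]->as_dof_handler_iterator(dof_handler);
#else
  // Fallback: build the DoFHandler iterator by hand from the
  // Triangulation cell's level and index.
  face_pair_dof_handler.cell[0] =
    typename dealii::DoFHandler<dim>::cell_iterator(
      &dof_handler.get_triangulation(),
      it.cell[0]->level(),
      it.cell[0]->index(),
      &dof_handler);
#endif
```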

Next time please report ExaDG-related issues at the ExaDG page so that the 
developers there are aware of it.

Peter

On Thursday, 13 April 2023 at 07:42:00 UTC+2 bha...@iitb.ac.in wrote:

> Hello,
>
> I hope you are able to find the attached files.
>
> On 2023-04-12 22:01, Wolfgang Bangerth wrote:
>
> On 4/12/23 07:52, 'Sooraj Bharan Soman' via deal.II User Group wrote: 
>
> this is where I encountered the error the output is in file error.txt. I 
> have highlighted these errors in the pdf error.pdf. Kindly let me know if I 
> need to provide any information.
>
>
> I think you've missed a number of the output files mentioned in your email.
> Best
>  W.
>
> -- 
> 
> Wolfgang Bangerth  email: bang...@colostate.edu
>www: http://www.math.colostate.edu/~bangerth/
>
> -- 
>
> Thank You,
>
> Sooraj Bharan Soman
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/2fbad5e0-02d1-4896-afda-b5beded6b6c5n%40googlegroups.com.


[deal.II] Re: exadg installation error ‘class dealii::CellAccessor<3, 3>’ has no member named ‘as_dof_handler_iterator’

2023-04-10 Thread Peter Munch
Hej Sooraj,

CellAccessor::as_dof_handler_iterator() was added after the 9.3 release. It 
looks like ExaDG is not compatible with the 9.3 release but only with 
master.

I would suggest that you report this at the ExaDG project 
(https://github.com/exadg/exadg/discussions) 
and then we can decide if we want to make ExaDG compatible with 9.3 again. 

Peter

On Monday, 10 April 2023 at 13:01:32 UTC+2 bha...@iitb.ac.in wrote:

> Hello,
>
> I am trying to install exadg, but it fails with the following error.
>
> /home/anachronism/computation/exadg/include/exadg/grid/grid_utilities.h:121:48:
>  
> error: ‘class dealii::CellAccessor<3, 3>’ has no member named 
> ‘as_dof_handler_iterator’
>   121 | face_pair_dof_hander.cell[0] = 
> it.cell[0]->as_dof_handler_iterator(dof_handler);
>   |   
>  ^~~
> /home/anachronism/computation/exadg/include/exadg/grid/grid_utilities.h:122:48:
>  
> error: ‘class dealii::CellAccessor<3, 3>’ has no member named 
> ‘as_dof_handler_iterator’
>   122 | face_pair_dof_hander.cell[1] = 
> it.cell[1]->as_dof_handler_iterator(dof_handler);
>
> I am configuring exadg using
>
> cmake .. -DDEAL_II_DIR=~/computation/deal.II -DBUILD_SHARED_LIBS=ON 
> -DPICKUP_TESTS=ON
>
> How can I solve this issue and proceed further? dealii detailed log file 
> and exadg cmake cache files are attached. Please let me know in case any 
> additional information is required.
>
> Thank you
> Sooraj
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/9866f551-ba21-419a-b39a-408059486ff9n%40googlegroups.com.


[deal.II] Re: Marking cells at interface between subdomains

2023-03-21 Thread Peter Munch


Hi Jose,

Not sure. You could use 
Triangulation::global_active_cell_index_partitioner() to initialize a 
distributed vector, access it via CellAccessor::global_active_cell_index(), 
and update the ghost values.
Peter
On Tuesday, March 21, 2023 at 10:47:14 AM UTC+1 jose.a...@gmail.com wrote:

> Hello dealii community,
>
> I am working on a problem with several subdomains. At the interface 
> between them a boundary integral is to be evaluated. I am identifying the 
> interface by comparing the material_id of neighboring cells (or their 
> active_fe_index as I am using a different FESystem per subdomain). In order 
> to speed up the search during assembly, a Vector is previously 
> filled with 1.0 at the cells where the material_id/active_fe_index differ. 
> This approach works in serial but in parallel the material_id() call of a 
> neighbor cell outside the locally owned subdomain always returns 0 (An 
> assertion is missing here). As such, not only the interface between 
> subdomains is marked but also the interface between locally owned 
> subdomains, as shown in the attached picture
>
> My question is, if there is an equivalent to locally owned and relevant 
> dofs for the case of cells, i.e., locally owned and relevant cells such 
> that the material_id/active_fe_index of the neighboring cell outside the 
> locally owned subdomain can be read? Alternatively, is there a built-in 
> method/member which can be used for my purpose or someone has already done 
> it through another approach?
>
> Attached is also the MWE used to obtain the attached screenshot.
>
> Cheers,
> Jose
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/ea5b77e3-50f4-48bb-8269-238e4712a62en%40googlegroups.com.


[deal.II] Re: Remove constraints at some DoF

2023-03-18 Thread Peter Munch
I don't see a remove function in AffineConstraints. What you could do is to 
create a new AffineConstraints instance based on the old one, ignoring the 
unwanted entry. See 
also: 
https://github.com/dealii/dealii/blob/a654d1fe45cc11c9dbbbcaaef6f53320f459baa2/include/deal.II/lac/affine_constraints.templates.h#L526-L541
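A sketch of such a filtered copy (untested; `skip_dof` is an illustrative DoF index, and the reinit() call assumes the older single-IndexSet signature):

```cpp
// Copy all constraint lines from `old_constraints` except the one
// constraining `skip_dof`.
AffineConstraints<double> filtered;
filtered.reinit(old_constraints.get_local_lines());

for (const auto &line : old_constraints.get_lines())
  if (line.index != skip_dof)
    {
      filtered.add_line(line.index);
      filtered.add_entries(line.index, line.entries);
      filtered.set_inhomogeneity(line.index, line.inhomogeneity);
    }

filtered.close();
```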

> I only see mixed order formulation but not  mixed element type in 
tutorials.

Maybe you could take a look at the `tests/` folder. Maybe there is an 
example there!?

Hope this helps,
Peter

On Saturday, March 18, 2023 at 4:09:57 AM UTC+1 yy.wayne wrote:

> Another question relates to singular field that, do we support mixed 
> finite elements (with cg elements and dg elements at different location)? 
> I only see mixed order formulation but not  mixed element type in 
> tutorials.
>
> 在2023年3月18日星期六 UTC+8 09:45:50 写道:
>
>> Hello my friends,
>>
>> I'm working on problems with singularity fields. Is it possible to remove 
>> a constraint line/entry at specific DoF? Say at the edge of a corner in 3d, 
>> the two faces intersect at the corner are already interpolate with zero 
>> boundary condition. However I want to un-constraint the DoF exactly at the 
>> corner rather on faces (for FE_Q elements). Is there a function to remove 
>> constraint on already constrained DoFs?
>>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/d37828be-a8b7-4e6b-a34e-ec4234a45c11n%40googlegroups.com.


[deal.II] Re: Fully distributed triangulation with GMSH

2023-03-15 Thread Peter Munch
Hi Kumar,

take a look 
at 
https://www.dealii.org/developer/doxygen/deal.II/namespaceTriangulationDescription_1_1Utilities.html#aefc3e841bcfd37714a07d04e42c8ffca.

Hope this helps,
Peter

On Wednesday, 15 March 2023 at 01:44:17 UTC+1 kumar.sau...@gmail.com wrote:

> Hi, 
>
> I am trying to perform the mesh partition using the mesh generated from 
> GMSH. The generated mesh is quite big with around 500K elements. 
>
> I am new to deal.ii. But I get the impression that the 
> parallel::distributed::Triangulation works only from the 
> quadrilateral/hexahedral meshes and uses p4est backend.
>
> I tried GridTools::partition_triangulation which tends to repeat the 
> elements on all processor. This is not ideal as the number of elements is 
> too large.
>
> I want to ask if I can use parallel::fullydistributed::Triangulation to 
> partition the tetrahedrals without repeating on all the processors. If so, 
> is there any examples for the same. All the examples I saw tends to work 
> for Hypercube type of meshes.
>
> Thanks,
> Kumar Saurabh
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/87a1718f-438b-40c6-90f3-019f9f992265n%40googlegroups.com.


[deal.II] Re: anisotropic polynomial degree

2023-03-11 Thread Peter Munch
Hi Greg,

unfortunately we do not support anisotropic polynomials: we make the 
assumption that all faces (of the same type) and lines have the same number 
of DoFs. For what do you need such polynomial spaces?

Best,
Peter 

On Saturday, 11 March 2023 at 15:05:25 UTC+1 ygre...@gmail.com wrote:

> Hi there,
>
> I was wondering if finite elements with anisotropic polynomial degrees are 
> possible in deal.II. As an example, for a 3D element, can we construct a 
> tensor product polynomial space of {1,x,y,z,xy,xz,yz, z^2,xz^2,yz^2}, i.e., 
> order 1 in x- and y-directions and 2 in z-direction?
>
> I was looking at, for example, constructors of FE_DGQ() class [1]. The 
> second constructor [2] takes an arbitrary vector of polynomials to build 
> the tensor product polynomial space. This is close to what I want but it 
> seems the argument can only be one-dimensional polynomials, which means 
> equal order on all dimensions.
>
> Would really appreciate any insights and/or tips!
>
> Best,
> Greg
>
> --
> [1] https://dealii.org/current/doxygen/deal.II/classFE__DGQ.html
> [2] 
> https://dealii.org/current/doxygen/deal.II/fe__dgq_8cc_source.html#l00100
>



[deal.II] deal.II at SIAM CSE (Amsterdam)

2023-02-24 Thread Peter Munch


Dear all,

Next week is the SIAM CSE conference in Amsterdam - finally in person 
again. Many of the deal.II developers will be attending the conference and 
will give talks. We are wondering who else is attending the SIAM CSE this 
year.

We will have a common lunch/dinner during the conference (maybe on Monday). 
Let us know if you would be interested in joining us. If you have other 
plans, we are looking forward to getting to know you and chatting with you 
at the coffee breaks and to seeing your presentation!

You can respond either to the list or via a private message 
(peter.mue...@uni-a.de).

Best,
Peter (on behalf of the developer team)



[deal.II] Re: Specify vectorization length

2023-02-23 Thread Peter Munch
It should be part of the 9.4.1 point release.

Peter

On Thursday, 23 February 2023 at 15:20:14 UTC+1 yy.wayne wrote:

> I'm using deal.II 9.4.0. It seems bug are fixed...
> 在2023年2月23日星期四 UTC+8 22:16:47 写道:
>
>> What version of deal.II are you using? I thought we fixed the 
>> vector-valued version: 
>> https://github.com/dealii/dealii/commit/6351f255d16c3cf465752e2ce943726df6bf4ea4
>>
>> Peter
>>
>> On Thursday, 23 February 2023 at 15:13:42 UTC+1 yy.wayne wrote:
>>
>>> I think the problem exists for vector-valued problems.
>>> I modified the vectorization length of step-37 and it works fine.
>>> But for a vector-valued problem, on line 6090 of 
>>> deal.II/matrix_free/fe_evaluation.h:
>>>
>>>- val_in[comp] * weight * jac[comp][comp]
>>>
>>> The jac is defined as const Tensor<2, dim, 
>>> dealii::VectorizedArray>, which doesn't
>>> match the customized VectorizedArrayType.
>>>
>>> Similar on line 6101
>>>
>>>- transpose(invert(inv_t_jac))
>>>
>>> where inv_t_jac is also dealii::VectorizedArray type data.
>>> 在2023年2月23日星期四 UTC+8 18:43:07 写道:
>>>
 Thank you, Peter,

 The error message is quite long and tricky (beyond what my terminal can 
 show) when I set
 VectorizedArray with width=1.
 Knowing that deal.II does support this feature is great. I'll try to debug 
 the code first and come back with
 further progress or questions :)

 Best,
 Wayne

 在2023年2月23日星期四 UTC+8 16:00:43 写道:

> Hi Wayne,
>
> What is your error message?
>
> > I replace all FEEvaluation, MatrixFree and VectorizedArray 
> construction with a VectorizedArray, but make returns some 
> intricate error. Is specifying the vectorization length possible? I think I've 
> seen one in dealii/test ...
>
> Yes, it is possible, and specifying the additional template 
> argument is the right approach.
>
> I have introduced this template parameter for the hyper.deal project: 
> https://github.com/hyperdeal/hyperdeal, where we create a tensor 
> product of MatrixFree instances (where one is vectorized and the other is 
> not).
>
> Take a look at the test: 
> https://github.com/dealii/dealii/blob/3b785178375a8ea233fada96cf987aed6d53583e/tests/matrix_free/ecl.h#L52
>
> The important aspect is that the vector length has to match the ones 
> supported by given ISA. See table 1 in the release paper of deal.II 9.2: 
> https://dealii.org/deal92-preprint.pdf
>
> Peter
>
> On Thursday, February 23, 2023 at 7:09:07 AM UTC+1 yy.wayne wrote:
>
>> Hi,
>> is it possible to specify the vectorization length? I'm running the same 
>> code on a local computer and a server, whose CPUs support SSE2+4.10GHz
>> and AVX512+2.60GHz respectively. The speed up for iteration time is 
>> only 2 on server, which should be 8*2.6 / 2*4.1 > 2.5 times. Therefore 
>> I'd 
>> like to run code with different
>> vectorization length on same computer to test Vectorization's power.
>>
>> I replace all FEEvaluation, MatrixFree and VectorizedArray 
>> construction with a VectorizedArray, but make returns 
>> some 
>> intricate error. Is specifying the vectorization length possible? I think I've 
>> seen one in dealii/test ...
>>
>



[deal.II] Re: Specify vectorization length

2023-02-23 Thread Peter Munch
What version of deal.II are you using? I thought we fixed the vector-valued 
version: 
https://github.com/dealii/dealii/commit/6351f255d16c3cf465752e2ce943726df6bf4ea4

Peter

On Thursday, 23 February 2023 at 15:13:42 UTC+1 yy.wayne wrote:

> I think the problem exists for vector-valued problems.
> I modified the vectorization length of step-37 and it works fine.
> But for a vector-valued problem, on line 6090 of 
> deal.II/matrix_free/fe_evaluation.h:
>
>- val_in[comp] * weight * jac[comp][comp]
>
> The jac is defined as const Tensor<2, dim, 
> dealii::VectorizedArray>, which doesn't
> match the customized VectorizedArrayType.
>
> Similar on line 6101
>
>- transpose(invert(inv_t_jac))
>
> where inv_t_jac is also dealii::VectorizedArray type data.
> 在2023年2月23日星期四 UTC+8 18:43:07 写道:
>
>> Thank you, Peter,
>>
>> The error message is quite long and tricky (beyond what my terminal can show) 
>> when I set
>> VectorizedArray with width=1.
>> Knowing that deal.II does support this feature is great. I'll try to debug 
>> the code first and come back with
>> further progress or questions :)
>>
>> Best,
>> Wayne
>>
>> 在2023年2月23日星期四 UTC+8 16:00:43 写道:
>>
>>> Hi Wayne,
>>>
>>> What is your error message?
>>>
>>> > I replace all FEEvaluation, MatrixFree and VectorizedArray 
>>> construction with a VectorizedArray, but make returns some 
>>> intricate error. Is specifying the vectorization length possible? I think I've 
>>> seen one in dealii/test ...
>>>
>>> Yes, it is possible, and specifying the additional template argument 
>>> is the right approach.
>>>
>>> I have introduced this template parameter for the hyper.deal project: 
>>> https://github.com/hyperdeal/hyperdeal, where we create a tensor 
>>> product of MatrixFree instances (where one is vectorized and the other is 
>>> not).
>>>
>>> Take a look at the test: 
>>> https://github.com/dealii/dealii/blob/3b785178375a8ea233fada96cf987aed6d53583e/tests/matrix_free/ecl.h#L52
>>>
>>> The important aspect is that the vector length has to match the ones 
>>> supported by given ISA. See table 1 in the release paper of deal.II 9.2: 
>>> https://dealii.org/deal92-preprint.pdf
>>>
>>> Peter
>>>
>>> On Thursday, February 23, 2023 at 7:09:07 AM UTC+1 yy.wayne wrote:
>>>
 Hi,
 is it possible to specify the vectorization length? I'm running the same code on a local computer and a server, whose CPUs support 
 SSE2+4.10GHz
 and AVX512+2.60GHz respectively. The speed up for iteration time is 
 only 2 on server, which should be 8*2.6 / 2*4.1 > 2.5 times. Therefore I'd 
 like to run code with different
 vectorization length on same computer to test Vectorization's power.

 I replace all FEEvaluation, MatrixFree and VectorizedArray construction 
 with a VectorizedArray, but make returns some intricate 
 error. Is specifying the vectorization length possible? I think I've seen one in 
 dealii/test ...

>>>



[deal.II] Re: Specify vectorization length

2023-02-23 Thread Peter Munch
Hi Wayne,

What is your error message?

> I replace all FEEvaluation, MatrixFree and VectorizedArray construction 
with a VectorizedArray, but make returns some intricate 
error. Is specifying the vectorization length possible? I think I've seen one in 
dealii/test ...

Yes, it is possible, and specifying the additional template argument is 
the right approach.

I have introduced this template parameter for the hyper.deal 
project: https://github.com/hyperdeal/hyperdeal, where we create a tensor 
product of MatrixFree instances (where one is vectorized and the other is 
not).

Take a look at the 
test: 
https://github.com/dealii/dealii/blob/3b785178375a8ea233fada96cf987aed6d53583e/tests/matrix_free/ecl.h#L52

The important aspect is that the vector length has to match the ones 
supported by given ISA. See table 1 in the release paper of deal.II 
9.2: https://dealii.org/deal92-preprint.pdf
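
In code, the width is the last template argument throughout the matrix-free stack; a sketch of the pattern (`dim` and `fe_degree` are placeholders, and the `reinit` arguments are elided):

```cpp
// Force the scalar (width-1) path instead of the ISA default; the same
// VectorizedArrayType must be used consistently everywhere.
using VectorizedArrayType = VectorizedArray<double, 1>;

MatrixFree<dim, double, VectorizedArrayType> matrix_free;
// matrix_free.reinit(mapping, dof_handler, constraints, quadrature, data);

FEEvaluation<dim, fe_degree, fe_degree + 1, 1, double, VectorizedArrayType>
  phi(matrix_free);
```

The width chosen here must be one of those supported by the instruction set (see the table referenced above), e.g. 1, 2, 4, or 8 lanes for double.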

Peter

On Thursday, February 23, 2023 at 7:09:07 AM UTC+1 yy.wayne wrote:

> Hi,
> is it possible to specify the vectorization length? I'm running the same code on a local computer and a server, whose CPUs support 
> SSE2+4.10GHz
> and AVX512+2.60GHz respectively. The speed up for iteration time is only 2 
> on server, which should be 8*2.6 / 2*4.1 > 2.5 times. Therefore I'd like to 
> run code with different
> vectorization length on same computer to test Vectorization's power.
>
> I replace all FEEvaluation, MatrixFree and VectorizedArray construction 
> with a VectorizedArray, but make returns some intricate 
> error. Is specifying the vectorization length possible? I think I've seen one in 
> dealii/test ...
>



Re: [deal.II] Convergence failure in step 0.

2023-02-14 Thread Peter Munch
Hi Wayne,

it seems that you are using multigrid on a locally refined mesh. Have you 
tried globally refined meshes? Those do not have inner boundaries, so the 
issue should not occur there. Does it?

You might have forgotten to attach the constraints to the transfer 
operator: 
https://www.dealii.org/developer/doxygen/deal.II/classMGTransferBlockMatrixFree.html#afd5ace82516e421e37af4572aabf4098.
 
Also, you might have forgotten to apply edge constraints. This is how we do 
it in one of our 
codes: 
https://github.com/peterrum/dealii-multigrid/blob/c50581883c0dbe35c83132699e6de40da9b1b255/include/operator.h#L191-L226.
 
Did you set the edge 
matrices: 
https://www.dealii.org/developer/doxygen/deal.II/classMultigrid.html#a6d1eff544f75da8c308fdc42c6c0c74d?
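
For reference, wiring the interface (edge) matrices into the Multigrid object follows the step-16 pattern; a sketch, where `mg_matrices`, `mg_interface_matrices`, `coarse_solver`, `mg_transfer`, and `mg_smoother` are assumed to be set up already:

```cpp
// Edge contributions are needed for multigrid on locally refined meshes;
// mg_interface_matrices is an MGLevelObject filled alongside the level
// matrices (cf. step-16).
mg::Matrix<VectorType> mg_matrix(mg_matrices);
mg::Matrix<VectorType> mg_interface_up(mg_interface_matrices);
mg::Matrix<VectorType> mg_interface_down(mg_interface_matrices);

Multigrid<VectorType> mg(mg_matrix, coarse_solver, mg_transfer,
                         mg_smoother, mg_smoother);
mg.set_edge_matrices(mg_interface_down, mg_interface_up);
```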

As a proof of concept, you could also try out the global-coarsening 
multigrid infrastructure (step-75); there, by definition, there are no 
internal boundaries.

Peter

On Tuesday, 14 February 2023 at 09:25:28 UTC+1 yy.wayne wrote:

> Thanks for your reply Martin.
>
> The reason my Chebyshev preconditioner breaks is the (inverse) diagonal 
> vector I get from MatrixFree operator contains negative entries. 
> In my corresponding matrix-based code with exactly same geometry and 
> boundary condition, the matrix diagonal has no negaive entries.
>
> I apply a "compute_nonzero_tangential_flux_constraints_on_level" function 
> similar to the normal_flux one. The geometry of this problem is a shell. 
> When that BC is applied on outer boundary there's no negative entries, but 
> the error comes when applied on inner boundary. I'm still debugging to see 
> where the error comes from.
>
> I have a question: does FEEvaluation automatically deal with constraints? It 
> seems a matrixfree_operator.vmult solution 
> equals matrixbased_matrix.vmult + AffineConstraints.distribute.
>
> Best
> Wayne
>
> 在2023年2月14日星期二 UTC+8 15:33:18 写道:
>
>> To extend over what Wolfgang said, the most likely causes for failure 
>> with the Chebyshev preconditioner are:
>>
>> - the vector supplying the matrix diagonal (via the field 
>> AdditionalData::preconditioner) contains Inf or NaN entries, which in turn 
>> result from either a ill-formed differential operator or a bug in the code 
>> computing the diagonal, or
>>
>> - a coarse level holding only constrained degrees of freedom, which are 
>> not treated properly so that the SolverCG making an eigenvalue estimate 
>> needs to work with an unsolvable linear system (right hand side not in the 
>> image of the matrix).
>>
>> For the first case, you would check the diagonal on each level and the 
>> code leading to that, in the second the treatment of constraints.
>>
>> Best,
>> Martin
>> On 14.02.23 06:33, 'yy.wayne' via deal.II User Group wrote:
>>
>> Thank you Wolfgang. I didn't dive much into the Chebyshev preconditioner 
>> because I'm not familiar with it.  
>> I assume it's because the CG solver inside Chebyshev preconditioner 
>> didn't converge or what.
>> But as you stated, if NaN should only get from NaN elements and divided 
>> by zero, then I may find the error. 
>> I'll follow your debug suggestion.   
>>
>> Best
>> Wayne
>>
>> 在2023年2月14日星期二 UTC+8 12:48:42 写道:
>>
>>>
>>> > It's strange that the iteration breaks at step 0. If I try right 
>>> > preconditioning, it breaks at step 1 with nan. Besides, if there is 
>>> only one 
>>> > multigrid level, the problem is solved correctly with 1 iteration, so 
>>> matrices 
>>> > should be correct. The coarse solution is good enough for smoothing. 
>>> > 
> The question is, in which case may a CG or GMRES solver break at step 
>>> 0 with nan? 
>>>
>>> NaN errors, like segmentation faults, are relatively easy to find 
>>> because you 
>>> know that every NaN you get is wrong. So you check that the matrix and 
>>> right 
>>> hand side and solution vector you give to the solver don't have any 
>>> NaNs. If 
>>> they don't, and you get NaNs out, then the problem likely lies with the 
>>> preconditioner, so you single step through the solver around the place 
>>> where 
>>> the preconditioner is applied and check in which operation the NaN 
>>> appears. 
>>>
>>> In your case, you seem to have already found out that it's the Chebyshev 
>>> preconditioner. The question is why. Is it because the matrix the 
>>> preconditioner uses has NaNs in it? Is it because the preconditioner 
>>> makes an 
>>> assumption that it can divide by certain matrix entries, but they are 
>>> zero? 
>>> Single stepping in a debugger will give you the answer :-) 
>>>
>>> Best 
>>> W. 
>>>
>>> -- 
>>>  
>>> Wolfgang Bangerth email: bang...@colostate.edu 
>>> www: http://www.math.colostate.edu/~bangerth/ 
>>>
>>>

[deal.II] Re: some problems about step37

2023-01-27 Thread Peter Munch
In this tutorial, multigrid uses Chebyshev sweeps around a point-Jacobi 
smoother. The latter needs the (inverse of the) diagonal.
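
Conceptually, the diagonal is obtained without ever assembling the matrix by applying the local cell operator to unit vectors, as in step-37's local_compute_diagonal; a sketch, where `phi` is an FEEvaluation and the operator-specific evaluate/integrate part is elided:

```cpp
// For each local DoF i: apply the cell operator to the unit vector e_i
// and keep entry i of the result; that is exactly the i-th diagonal
// entry of the (never assembled) cell matrix.
for (unsigned int i = 0; i < phi.dofs_per_cell; ++i)
  {
    for (unsigned int j = 0; j < phi.dofs_per_cell; ++j)
      phi.submit_dof_value(VectorizedArray<double>(), j); // zero out
    phi.submit_dof_value(make_vectorized_array(1.0), i);  // unit vector e_i

    // ... evaluate, multiply by coefficients, integrate (operator-specific)

    diagonal[i] = phi.get_dof_value(i);
  }
// The Jacobi/Chebyshev smoother then scales by 1/diagonal[i].
```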

Hope this helps.
Peter

On Friday, January 27, 2023 at 11:07:21 AM UTC+1 13274...@163.com wrote:

> Hello everyone! 
> I am learning step-37 and want to use matrix-free methods in my 
> program. Now I am confused about the use of the two functions 
> compute_diagonal and local_compute_diagonal, and about why we need to 
> compute the elements on the diagonal. Thanks for your reply! 
>
>
>



Re: [deal.II] Assemble system slowing down the program

2023-01-08 Thread Peter Munch
Change:

> float H_plus(Vector solution_elastic, const auto cell, const unsigned 
> int q_point, FEValues<2> fe_values_damage);

to:

> float H_plus(Vector solution_elastic, const auto cell, const unsigned 
> int q_point, const FEValues<2> &fe_values_damage);

Peter
On Sunday, 8 January 2023 at 11:11:34 UTC+1 ce21...@smail.iitm.ac.in wrote:

> Thank you, Prof. Munch.
>
> I tried to pass the FEValues object as a parameter to the H_plus function 
> as follows:
>
> I changed the function declaration from: 
> float H_plus(Vector solution_elastic, const auto cell,const 
> unsigned int q_point);
> to this:
> float H_plus(Vector solution_elastic, const auto cell,const 
> unsigned int q_point,
>  FEValues<2> fe_values_damage);
>
> I made the same change in the function definition also.
>
> I also changed the call to my function from:
> H_plus(solution_elastic,cell,q_index) 
> to this: H_plus(solution_elastic,cell,q_index,fe_values_damage)
>
> But I am getting the following error:
> error: use of deleted function ‘dealii::FEValues<2>::FEValues(const 
> dealii::FEValues<2>&)’
>   761 |{ float H_call = 
> H_plus(solution_elastic,cell,q_index,fe_values_damage);
>
>
> On Sunday, January 8, 2023 at 2:32:24 PM UTC+5:30 peterr...@gmail.com 
> wrote:
>
>> You are creating a new instance of FEValues at each quadrature point. 
>> This is a very expensive operation, since all the shape functions are 
>> evaluated there. Try to reuse one instance by passing it to the function as a parameter.
>>
>> Hope that helps!
>> Peter
>>
>> On Sunday, 8 January 2023 at 09:58:41 UTC+1 ce21...@smail.iitm.ac.in 
>> wrote:
>>
>>> Following is my H_plus function:
>>>
>>> float PhaseField::H_plus(Vector solution_elastic
>>> , const auto cell,const unsigned int q_point)
>>> {
>>> QGauss<2> quadrature_formula_damage(fe_damage.degree + 1);
>>>
>>> FEValues<2> fe_values_damage(fe_damage,
>>> quadrature_formula_damage,
>>> update_gradients |
>>> update_quadrature_points );
>>>
>>> fe_values_damage.reinit(cell);
>>>
>>> int node = 0;
>>>
>>> /* Initialising all strains as zero */
>>> float e_xx = 0.000;
>>> float e_yy = 0.000;
>>> float e_xy = 0.000;
>>>
>>> /*calculating strains*/
>>> for (const auto vertex : cell->vertex_indices())
>>> {
>>> int a = (cell->vertex_dof_index(vertex, 0));
>>> e_xx = e_xx + 
>>> solution_elastic[a*2]*fe_values_damage.shape_grad(node, q_point)[0];
>>> e_yy = e_yy + 
>>> solution_elastic[a*2+1]*fe_values_damage.shape_grad(node, q_point)[1];
>>> e_xy = e_xy 
>>> +0.5*(solution_elastic[a*2]*fe_values_damage.shape_grad(node, q_point)[1]
>>> 
>>>   
>>>  +solution_elastic[a*2+1]*fe_values_damage.shape_grad(node, q_point)[0]);
>>> node = node +1;
>>> }
>>>
>>> const auto &x_q = fe_values_damage.quadrature_point(q_point);
>>> float H_plus_val;
>>>
>>> /*This is the actual quantity I want to evaluate for each quadrature 
>>> point*/
>>> H_plus_val = 0.5*lambda(x_q)*(pow((e_xx+e_yy),2))
>>> + 
>>> mu(x_q)*((pow(e_xx,2))+2*(pow(e_xy,2)) + (pow(e_yy,2)));
>>>
>>> return H_plus_val;
>>> }
>>>
>>> H_plus function is called in assemble_damage for each quadrature point
>>>
>>> I am also attaching some lines of code from assemble_damage where H_plus 
>>> is being called
>>>
>>> for (const auto &cell : dof_handler_damage.active_cell_iterators())
>>> {
>>>
>>> fe_values_damage.reinit(cell);
>>> cell_matrix_damage = 0;
>>> cell_rhs_damage= 0;
>>>
>>>
>>> float gc = 0.27; //energy release rate
>>> float l = 0.015;
>>> float H;
>>>
>>> for (const unsigned int q_index : 
>>> fe_values_damage.quadrature_point_indices())
>>> {float H_call = H_plus(solution_elastic,cell,q_index);
>>>
>>>
>>> if (H_call > H_vector[4*cell_number + q_index])
>>> {
>>> H = H_call;
>>> }
>>> else
>>> {
>>> H = H_vector[4*cell_number + q_index];
>>> }
>>> H_vector_new[4*cell_number + q_index] = H;
>>> for (const unsigned int i : fe_values_damage.dof_indices())
>>> {
>>>
>>>
>>> for (const unsigned int j : fe_values_damage.dof_indices())
>>> {
>>>
>>> const auto &x_q = fe_values_damage.quadrature_point(q_index);
>>>
>>>
>>> cell_matrix_damage(i, j) +=
>>> // contribution to stiffness from -laplace u term
>>> 
>>> Conductivity_damage(x_q)*fe_values_damage.shape_grad(i, q_index) * // 
>>> kappa*grad phi_i(x_q)
>>> fe_values_damage.shape_grad(j, q_index) * // grad 
>>> phi_j(x_q)
>>> fe_values_damage.JxW(q_index)// dx
>>> +
>>> // Contribution to stiffness from u term
>>>
>>> 
>>> 

Re: [deal.II] Assemble system slowing down the program

2023-01-08 Thread Peter Munch
You are creating a new instance of FEValues at each quadrature point. This 
is a very expensive operation, since all the shape functions are evaluated 
there. Try to reuse one instance by passing it to the function as a parameter.

Hope that helps!
Peter
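
Restructured along those lines, the assembly constructs FEValues once and passes it down; a sketch using the thread's own names (not the complete code from below):

```cpp
// Construct once, outside the cell loop ...
FEValues<2> fe_values_damage(fe_damage,
                             quadrature_formula_damage,
                             update_gradients | update_quadrature_points);

for (const auto &cell : dof_handler_damage.active_cell_iterators())
  {
    fe_values_damage.reinit(cell); // ... and only reinit per cell
    for (const unsigned int q_index :
         fe_values_damage.quadrature_point_indices())
      {
        // pass the existing object by const reference instead of
        // rebuilding it inside H_plus
        const float H_call =
          H_plus(solution_elastic, cell, q_index, fe_values_damage);
        // ...
      }
  }
```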

On Sunday, 8 January 2023 at 09:58:41 UTC+1 ce21...@smail.iitm.ac.in wrote:

> Following is my H_plus function:
>
> float PhaseField::H_plus(Vector solution_elastic
> , const auto cell,const unsigned int q_point)
> {
> QGauss<2> quadrature_formula_damage(fe_damage.degree + 1);
>
> FEValues<2> fe_values_damage(fe_damage,
> quadrature_formula_damage,
> update_gradients |
> update_quadrature_points );
>
> fe_values_damage.reinit(cell);
>
> int node = 0;
>
> /* Initialising all strains as zero */
> float e_xx = 0.000;
> float e_yy = 0.000;
> float e_xy = 0.000;
>
> /*calculating strains*/
> for (const auto vertex : cell->vertex_indices())
> {
> int a = (cell->vertex_dof_index(vertex, 0));
> e_xx = e_xx + 
> solution_elastic[a*2]*fe_values_damage.shape_grad(node, q_point)[0];
> e_yy = e_yy + 
> solution_elastic[a*2+1]*fe_values_damage.shape_grad(node, q_point)[1];
> e_xy = e_xy 
> +0.5*(solution_elastic[a*2]*fe_values_damage.shape_grad(node, q_point)[1]
>   
>  +solution_elastic[a*2+1]*fe_values_damage.shape_grad(node, 
> q_point)[0]);
> node = node +1;
> }
>
> const auto &x_q = fe_values_damage.quadrature_point(q_point);
> float H_plus_val;
>
> /*This is the actual quantity I want to evaluate for each quadrature 
> point*/
> H_plus_val = 0.5*lambda(x_q)*(pow((e_xx+e_yy),2))
> + 
> mu(x_q)*((pow(e_xx,2))+2*(pow(e_xy,2)) + (pow(e_yy,2)));
>
> return H_plus_val;
> }
>
> H_plus function is called in assemble_damage for each quadrature point
>
> I am also attaching some lines of code from assemble_damage where H_plus 
> is being called
>
> for (const auto &cell : dof_handler_damage.active_cell_iterators())
> {
>
> fe_values_damage.reinit(cell);
> cell_matrix_damage = 0;
> cell_rhs_damage= 0;
>
>
> float gc = 0.27; //energy release rate
> float l = 0.015;
> float H;
>
> for (const unsigned int q_index : 
> fe_values_damage.quadrature_point_indices())
> {float H_call = H_plus(solution_elastic,cell,q_index);
>
>
> if (H_call > H_vector[4*cell_number + q_index])
> {
> H = H_call;
> }
> else
> {
> H = H_vector[4*cell_number + q_index];
> }
> H_vector_new[4*cell_number + q_index] = H;
> for (const unsigned int i : fe_values_damage.dof_indices())
> {
>
>
> for (const unsigned int j : fe_values_damage.dof_indices())
> {
>
> const auto &x_q = fe_values_damage.quadrature_point(q_index);
>
>
> cell_matrix_damage(i, j) +=
> // contribution to stiffness from -laplace u term
> 
> Conductivity_damage(x_q)*fe_values_damage.shape_grad(i, q_index) * // 
> kappa*grad phi_i(x_q)
> fe_values_damage.shape_grad(j, q_index) * // grad 
> phi_j(x_q)
> fe_values_damage.JxW(q_index)// dx
> +
> // Contribution to stiffness from u term
>
> 
> ((1+(2*l*H)/gc)*(1/pow(l,2))*fe_values_damage.shape_value(i, q_index) *   
>  // phi_i(x_q)
> fe_values_damage.shape_value(j, q_index) * // 
> phi_j(x_q)
> fe_values_damage.JxW(q_index)); // 
> dx
> }
>
> cell_rhs_damage(i) += (fe_values_damage.shape_value(i, q_index) * 
> // phi_i(x_q)
> (2/(l*gc))* H*
> fe_values_damage.JxW(q_index));// dx
> }
> }
>
> /*Adding the local k and local f to global k and global f*/
> cell->get_dof_indices(local_dof_indices);
> for (const unsigned int i : fe_values_damage.dof_indices())
> {
> for (const unsigned int j : fe_values_damage.dof_indices())
> system_matrix_damage.add(local_dof_indices[i],
> local_dof_indices[j],
> cell_matrix_damage(i, j));
>
> system_rhs_damage(local_dof_indices[i]) += cell_rhs_damage(i);
> }
> cell_number = cell_number + 1;
>
> }
>
> Thanks and regards
> Wasim
>
> On Saturday, January 7, 2023 at 11:59:49 PM UTC+5:30 blais...@gmail.com 
> wrote:
>
>> There might be many things that can be done to improve the speed of this 
>> function. You can ask yourself the following questions as guidance:
>> - Does the function allocate memory?
>> - Could it be inlined?
>> - Are you calling the function inside the DOF loops or inside the 
>> quadrature loop?
>>
>> Then I would time the function to measure if this is actually the real 
>> culprit or if it could be 

[deal.II] Re: Boundary conditions besides Dirichlet BC for Multigrid

2022-12-20 Thread Peter Munch
Hi Wayne,

you can attach your own constraints as shown here: 
https://github.com/dealii/dealii/blob/2376c62a4b89953b74801852983eb8556930d54d/tests/numerics/no_flux_18.cc#L1136-L1146
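
In the linked test, this is done roughly as follows (a sketch; the contents of the constraints depend on your boundary conditions, and the object names are placeholders):

```cpp
// Build the level constraints (e.g., normal/tangential-flux constraints)
// yourself and hand them to MGConstrainedDoFs as user constraints.
MGConstrainedDoFs mg_constrained_dofs;
mg_constrained_dofs.initialize(dof_handler);

for (unsigned int level = 0; level < triangulation.n_global_levels(); ++level)
  {
    AffineConstraints<double> user_constraints;
    // ... fill with the level version of your flux constraints ...
    user_constraints.close();
    mg_constrained_dofs.add_user_constraints(level, user_constraints);
  }
```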

Hope this helps!
Peter

On Wednesday, 21 December 2022 at 02:44:42 UTC+1 yy.wayne wrote:

> Hi guys, I have a question on setting the boundary constraints for 
> mg_constrained_dofs object.
> In many problems where the only boundary conditions(for AffineConstraints) 
> are Dirichlet BC and hanging nodes, mg_constrained knows them by 
> MGConstrainedDoFs::make_zero_boundary_constraints(the Dirichlet boundary 
> id, not necessarily zero) and 
> MGConstrainedDoFs::initialize(dof_handler)(the hanging nodes), respectively.
> However, in case other BCs like compute_normal_flux_constraints or 
> compute_no_normal_flux_constraints exist, mg_constrained_dofs has no way 
> to include them. Therefore the MGLevelObject mg_matrices is intrinsically 
> different from the global system_matrix. Then mg_matrices[0] cannot be a 
> good approximation of the coarsest grid. 
> Is there some hint for such cases?
>



[deal.II] Re: matrix-free vector FEM with Nedelec space

2022-12-19 Thread Peter Munch
Hi Abbas, Vinta, and Wayne,

it's nice to see that there is increasing interest regarding doing 
matrix-free computations with Nedelec elements! It's a very interesting and 
cool topic.

Unfortunately, there has been no progress regarding our Nedelec elements on 
our (matrix-free) side in the last few months. We have started to integrate 
RT support in MatrixFree. See the Niklas Wik's Master thesis: 
https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1676116=8895. 
The reason for starting with RT instead of Nedelec is that the groups, 
which are involved, are primarily interested in the solution of the 
incompressible Navier Stokes equations.

However, the generalization done for RT (anisotropic tensor product 
evaluation, lexicographic definition of shape functions done by Niklas Wik 
and Julius Witter) is also useful for the future generalization needed for 
Nedelec. Martin Kronbichler and Katharina Kormann have been working on the 
lexicographic definition of shape functions in the Nedelec element class. 
We will create a PR for that soon.

Starting with that, we will continue to work on Nedelec elements in the 
MatrixFree infrastructure. At that stage maybe one of you would be also 
interested to take over some implementation tasks?

In one of the threads, there was a discussion of "DG vs. Nedelec". In Niklas 
Wik's Master thesis, there is a comparison of the performance of DG and of 
RT. Maybe that is interesting for you as well, since the performance 
characteristics of Nedelec will be similar.

Regards,
Peter

On Sunday, December 18, 2022 at 12:43:48 PM UTC+1 Vinta Yinda Wang wrote:

> Hello everyone!
> I want to implement the matrix-free method in computational 
> electromagnetic problems. However, the popular Nedelec basis functions are 
> not supported.
>
> Anyway, I am overjoyed to find that some contributors and users have 
> contributed to the same problems as I concern. The support for RT elements 
> had been implemented several months before.
>
> https://groups.google.com/g/dealii/c/aTxIA0aeX4c/m/JJUTK2eOCQAJ
> https://groups.google.com/g/dealii/c/J23wf7M-Blw/m/-m7XNVJqCQAJ
>
> I wonder if there has been any progress on the *Nedelec* space for this 
> problem? Or maybe I can develop it on my own. How much work will it take?
>
> Thank you all!
>
>



[deal.II] Re: Getting global DoF indices of a face

2022-11-20 Thread Peter Munch
Hi Alexander,

You are using FE_DGQ, which by definition only has DoFs in the cell 
interior. Switch to quadratic FE_Q, and then your idea with the dummy 
DoFHandler to globally uniquely identify faces should work! I am not sure 
why you create an FESystem, though.

Peter

On Friday, 18 November 2022 at 18:30:38 UTC+1 Alexander Kiselyov wrote:

> Dear deal.II users,
>
> I'm currently developing a way to uniquely identify a boundary face on
> a mesh. My idea has been to gather global DoF indices on a face and
> hash them together. However, `get_dof_indices` method of a face
> accessor doesn't return any useful information, and
> `FESystem::max_dofs_per_face()`, surprisingly, returns 0. See the
> attachment for a minimal example. The problem persists when running on
> a single process. The version of deal.II used is 9.4.0.
>
> What am I doing wrong here? Is there a better way to globally identify
> a face?
>
> Best regards,
> Alexander Kiselyov
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/54a20ebc-b6e2-479d-8eed-d236e1ee90cdn%40googlegroups.com.


[deal.II] Re: How to specify finite element component in MatrixFree?

2022-11-17 Thread Peter Munch
Hi Wayne,

currently MGTransferBlockGlobalCoarsening takes in the constructor only a 
single MGTransferGlobalCoarsening instance and uses this for all blocks 
(i.e., uses the same AffineConstraints instance for all 
blocks): 
https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.h#L691-L692
 

However, this can simply be generalized by adding a second constructor that 
takes a vector of MGTransferGlobalCoarsening objects. This should not be 
too much work, since MGTransferBlockGlobalCoarsening is derived from 
MGTransferGlobalCoarsening, which contains all the logic. The new 
constructor would pass false here 
https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.templates.h#L3379
and the function 
https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.templates.h#L3385-L3393
would need to be generalized to access the entries within the vector.

Hope this helps,
Peter

On Friday, 18 November 2022 at 02:53:29 UTC+1 yy.wayne wrote:

> Hi Peter, sorry for coming back to this discussion again. 
>
> I've applied the vmult function and compute diagonal function for a 
> complex MatrixFree object, and tested these functions. But there's one last 
> ingredient absent in the multi-component MatrixFree framework with 
> p-coarsening, transfers.
>
> Different from your spirk article, where it has two components with the same 
> constraints, in my case the Dirichlet BC on real dofs and imaginary dofs 
> are different. However when constructing MGTransferBlockGlobalCoarsening or 
> MGTransferGlobalCoarsening, a MGTwoLevelTransfer object is needed but it 
> only receive a single dofhandler and constraint. The problem here is the 
> single constraint, since dofhandler for real and imaginary parts are same. 
> I haven't figured out the solution yet.
>
> Or I've notice that a GMG block transfer class MGTransferBlockMatrixFree 
> 
>  receives 
> a vector of  MGConstrainedDoFs. Your implementation of a complex 
> Helmholtz MatrixFree method with p-coarsening is very helpful; it addresses 
> most of my problems, differing only in the constraints for the two 
> components.
>
> Best,
> Wayne
> 在2022年11月4日星期五 UTC+8 09:40:50 写道:
>
>> Thanks for you patience and wonderful answer :)
>>
>> Best,
>> Wayne
>>
>> 在2022年11月4日星期五 UTC+8 00:06:28 写道:
>>
>>> Hi Wayne,
>>>
>>> in that case I would provide the same DoFHandler to MatrixFree twice and 
>>> two different AffineConstraint objects. See the version of 
>>> MatrixFree::reinit() that takes vectors 
>>> https://www.dealii.org/developer/doxygen/deal.II/classMatrixFree.html#abb081e301fadbfd94e0169611bae3838
>>> .
>>>
>>> Peter
>>>
>>> On Thursday, 3 November 2022 at 11:18:58 UTC+1 yy.wayne wrote:
>>>
 Hi Peter,

 Thanks for your comprehensive reply, really helpful. By assemble 
 actually I mean the MatrixFree integrator part which corresponds to 
 assembling in MatrixBased form :). But you resolve my question, since my 
 main concern is one scalar DoFHandler describing both real and imaginary 
 part. A last question is what will you do when real and imaginary parts 
 have different boundary conditions, if only one DoFHandler is provided? 

 Best,
 Wayne
 在2022年11月3日星期四 UTC+8 15:08:03 写道:

> Hi Wayne,
>
> > The solution in HeatProblem is a non-complex one then(half size of 
> src and dst processed in MatrixFree operator)?
>
> We are solving equation (10) from the paper; it is the  reformulated 
> version of equation (9), which is a complex system. The heat equation is 
> indeed not complex; but the diagonalization approach in the case of fully 
> implicit stage parallel IRK makes the system complex (inter term of 
> equation (2) consists of complex blocks that need to be solved).
>
> > In your github code, you actually assemble a block matrix system(in 
> MatrixFree) but didn't renumber dofs as done in most Stokes examples? 
> There's no multi dof_handlers.
>
> I am not sure which lines you are referring to, but if I assemble the 
> matrix, it only happens for the coarse-grid problem; also, I don't think I 
> set up the matrix for the complex system and instead simply used 
> Chebyshev iterations around point Jacobi. I have another project where 
> we actually assemble the matrix.
>
> I am not sure what happens in the Stokes examples. But my guess is 
> that they use a FESystem consisting of a FE for pressure and a vectorial 
> FE 
> for velocity. To be able to use block vector in this case the DoFs need 
> to 
> be renumbered so that DoFs of a component are contiguous, which is not 
> the 

[deal.II] Re: Solve distributed BlockVector with direct solver

2022-11-05 Thread Peter Munch
If it is not too expensive, I would copy the block vector to the normal 
distributed vector. For this purpose, I use functions like these:

```
template <typename VectorType, typename BlockVectorType>
void
split_up_components_fast(const VectorType &src, BlockVectorType &dst)
{
  for (unsigned int i = 0, j = 0;
       i < src.get_partitioner()->locally_owned_size();
       ++j)
    for (unsigned int b = 0; b < dst.n_blocks(); ++b)
      dst.block(b).local_element(j) = src.local_element(i++);
}

template <typename BlockVectorType, typename VectorType>
void
merge_components_fast(const BlockVectorType &src, VectorType &dst)
{
  for (unsigned int i = 0, j = 0;
       i < dst.get_partitioner()->locally_owned_size();
       ++j)
    for (unsigned int b = 0; b < src.n_blocks(); ++b)
      dst.local_element(i++) = src.block(b).local_element(j);
}
```

However, you have to make sure that the DoFs are enumerated consistently for 
LinearAlgebra::distributed::Vector.

Peter
On Saturday, 5 November 2022 at 07:21:40 UTC+1 yy.wayne wrote:

> Hello,
>
> I assembled a matrix for a multi-component problem. I want to solve it 
> with a direct solver, but neither Trilinos solver(SolverDirect amesos_klu) 
> nor SparseDirectUMFPACK solver receive a 
> LinearAlgebra::distributed::BlockVector as input. The Trilinos direct 
> solver receives Vector and BlockVector, while SparseDirectUMFPACK solver 
> receives LinearAlgebra::distributed::Vector. What direct solver should I 
> use?
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/29454063-2eb4-4feb-9275-00a213ccac1an%40googlegroups.com.


[deal.II] Re: How to specify finite element component in MatrixFree?

2022-11-03 Thread Peter Munch
Hi Wayne,

in that case I would provide the same DoFHandler to MatrixFree twice, together 
with two different AffineConstraints objects. See the version of 
MatrixFree::reinit() that takes vectors: 
https://www.dealii.org/developer/doxygen/deal.II/classMatrixFree.html#abb081e301fadbfd94e0169611bae3838.

Peter

On Thursday, 3 November 2022 at 11:18:58 UTC+1 yy.wayne wrote:

> Hi Peter,
>
> Thanks for your comprehensive reply, really helpful. By assemble actually 
> I mean the MatrixFree integrator part which corresponds to assembling in 
> MatrixBased form :). But you resolve my question, since my main concern is 
> one scalar DoFHandler describing both real and imaginary part. A last 
> question is what will you do when real and imaginary parts have different 
> boundary conditions, if only one DoFHandler is provided? 
>
> Best,
> Wayne
> 在2022年11月3日星期四 UTC+8 15:08:03 写道:
>
>> Hi Wayne,
>>
>> > The solution in HeatProblem is a non-complex one then(half size of src 
>> and dst processed in MatrixFree operator)?
>>
>> We are solving equation (10) from the paper; it is the  reformulated 
>> version of equation (9), which is a complex system. The heat equation is 
>> indeed not complex; but the diagonalization approach in the case of fully 
>> implicit stage parallel IRK makes the system complex (inter term of 
>> equation (2) consists of complex blocks that need to be solved).
>>
>> > In your github code, you actually assemble a block matrix system(in 
>> MatrixFree) but didn't renumber dofs as done in most Stokes examples? 
>> There's no multi dof_handlers.
>>
>> I am not sure which lines you are referring to, but if I assemble the 
>> matrix it only happens for the coarse-grid problem; also I don't think I 
>> did setup the matrix for the complex system and instead used simply 
>> Chebyshev iterations around point Jacobi. I have another project, where we 
>> actually assemble the matrix.
>>
>> I am not sure what happens in the Stokes examples. But my guess is that 
>> they use a FESystem consisting of a FE for pressure and a vectorial FE for 
>> velocity. To be able to use block vector in this case the DoFs need to be 
>> renumbered so that DoFs of a component are contiguous, which is not the 
>> case normally.
>>
>> In my case, I am using a single scalar DoFHandler. This 
>> describes both the real and the imaginary part. As a consequence, one only 
>> has to pass a single DoFHandler/AffineConstraints to MatrixFree. This is a 
>> somewhat different approach than is done in the other tutorials and has 
>> been adopted from our two-phase solver adaflo (
>> https://github.com/kronbichler/adaflo), where we extensively use it to 
>> reduce overhead for the level-set problem (single DoFHandler that describes 
>> the level-set field, normal, and curvature). MatrixFree works quite nicely 
>> (just like in the case if you would use FESystem) if you use it with a 
>> scalar DoFHandler and pass a BlockVector to the matrix-free loops. I think 
>> we don't have a tutorial for this way to work with block vectors in the 
>> context of MatrixFree. However, we made lot of progress in improving the 
>> usability in the last two years, since we use this feature in 2-3 projects 
>> with external collaboration partners. Unfortunately, we have not moved all 
>> utility functions to deal.II yet.
>>
>> > Can I implement p-multigrid precondition with given 
>> MGTransferBlockGlobalCoarsening?
>>
>> Yes. It is used in the code.
>>
>> > It's a parallel version but vectors are not initialized with 
>> solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). So 
>> matrix_free->initialize_dof_vector has same function done?
>>
>> Yes, initialize_dof_vector() does the same, just smarter. A general 
>> recommendation: always use initialize_dof_vector(). If MatrixFree notices 
>> that you use the internal partitioner, it can take some shortcuts. Otherwise, 
>> it needs to do some additional checks to figure out if the vectors are 
>> compatible.
>>
>> Hope this helps,
>> Peter
>>
>> On Thursday, 3 November 2022 at 03:47:05 UTC+1 yy.wayne wrote:
>>
>>> Thank you peter.
>>>
>>> It looks awesome! I've got a few question about it.
>>>
>>>1. In your github code, you actually assemble a block matrix 
>>>system(in MatrixFree) but didn't renumber dofs as done in most Stokes 
>>>examples? There's no multi dof_handlers. The solution in HeatProblem is 
>>> a 
>>>non-complex one then(half size of src and dst processed in MatrixFree 
>>>operator)?
>>>2. Can I implement p-multigrid precondition with given 
>>>MGTransferBlockGlobalCoarsening?
>>>3. It's a parallel version but vectors are not initialized with 
>>>solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). 
>>> So 
>>>matrix_free->initialize_dof_vector has same function done?
>>>
>>> Best,
>>> Wayne
>>> 在2022年11月3日星期四 UTC+8 04:44:53 写道:
>>>
 Hi

 The block version of MGTransferGlobalCoarsening is 
 

[deal.II] Re: How to specify finite element component in MatrixFree?

2022-11-03 Thread Peter Munch
Hi Wayne,

> The solution in HeatProblem is a non-complex one then(half size of src 
and dst processed in MatrixFree operator)?

We are solving equation (10) from the paper; it is the reformulated 
version of equation (9), which is a complex system. The heat equation is 
indeed not complex; but the diagonalization approach in the case of the fully 
implicit stage-parallel IRK makes the system complex (the inner term of 
equation (2) consists of complex blocks that need to be solved).

> In your github code, you actually assemble a block matrix system(in 
MatrixFree) but didn't renumber dofs as done in most Stokes examples? 
There's no multi dof_handlers.

I am not sure which lines you are referring to, but if I assemble the 
matrix, it only happens for the coarse-grid problem; also, I don't think I 
set up the matrix for the complex system and instead simply used 
Chebyshev iterations around point Jacobi. I have another project where we 
actually assemble the matrix.

I am not sure what happens in the Stokes examples. But my guess is that 
they use an FESystem consisting of an FE for pressure and a vectorial FE for 
velocity. To be able to use a block vector in this case, the DoFs need to be 
renumbered so that the DoFs of a component are contiguous, which is 
normally not the case.

In my case, I am using a single scalar DoFHandler. This describes 
both the real and the imaginary part. As a consequence, one only has to 
pass a single DoFHandler/AffineConstraints to MatrixFree. This is a somewhat 
different approach than is done in the other tutorials and has been 
adopted from our two-phase solver adaflo 
(https://github.com/kronbichler/adaflo), where we extensively use it to reduce 
overhead for the level-set problem (a single DoFHandler that describes the 
level-set field, normal, and curvature). MatrixFree works quite nicely 
(just like in the case where you would use FESystem) if you use it with a 
scalar DoFHandler and pass a BlockVector to the matrix-free loops. I think 
we don't have a tutorial for this way of working with block vectors in the 
context of MatrixFree. However, we have made a lot of progress in improving the 
usability in the last two years, since we use this feature in 2-3 projects 
with external collaboration partners. Unfortunately, we have not moved all 
utility functions to deal.II yet.

> Can I implement p-multigrid precondition with given 
MGTransferBlockGlobalCoarsening?

Yes. It is used in the code.

> It's a parallel version but vectors are not initialized with 
solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). So 
matrix_free->initialize_dof_vector has same function done?

Yes, initialize_dof_vector() does the same, just smarter. A general 
recommendation: always use initialize_dof_vector(). If MatrixFree notices 
that you use the internal partitioner, it can take some shortcuts. Otherwise, it 
needs to do some additional checks to figure out if the vectors are 
compatible.

Hope this helps,
Peter

On Thursday, 3 November 2022 at 03:47:05 UTC+1 yy.wayne wrote:

> Thank you peter.
>
> It looks awesome! I've got a few question about it.
>
>1. In your github code, you actually assemble a block matrix system(in 
>MatrixFree) but didn't renumber dofs as done in most Stokes examples? 
>There's no multi dof_handlers. The solution in HeatProblem is a 
> non-complex 
>one then(half size of src and dst processed in MatrixFree operator)?
>2. Can I implement p-multigrid precondition with given 
>MGTransferBlockGlobalCoarsening?
>3. It's a parallel version but vectors are not initialized with 
>solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). So 
>matrix_free->initialize_dof_vector has same function done?
>
> Best,
> Wayne
> 在2022年11月3日星期四 UTC+8 04:44:53 写道:
>
>> Hi
>>
>> The block version of MGTransferGlobalCoarsening is 
>> MGTransferBlockGlobalCoarsening (see  
>> https://www.dealii.org/developer/doxygen/deal.II/classMGTransferBlockGlobalCoarsening.html
>> ).
>>
>> I don't know the details of step-29. But what I would recommend you to do 
>> is to take a look at 
>> https://github.com/peterrum/dealii-spirk/blob/bb52df5d1023f9c41ca51a082be769bc0171bf75/include/operator.h#L616-L660.
>>  
>> Here, I am solving a complex-valued Helmholtz operator (in my case a sum of 
>> mass and Laplace matrix, which arises from complex implicit Runge-Kutta 
>> formulation; see https://arxiv.org/abs/2209.06700). This complex code 
>> was the motivation for writing MGTransferBlockGlobalCoarsening in the fist 
>> place^^
>>
>> I think the code should answer most of your questions.
>>
>> Regarding
>>
>> > FEEvaluation<...,1> real(matrix_free_data, 0) and FEEvaluation<...,1> 
>> real(matrix_free_data, 1)
>>
>> I guess you have to replace the "1" by a "0" in the second FEEval.
>>
>> Hope this helps,
>> Peter
>>
>> On Wednesday, 2 November 2022 at 17:55:44 UTC+1 yy.wayne wrote:
>>
>>> Well a sad thing I notice is a function used for p-multigrid in step-75: 
>>> 

[deal.II] Re: How to specify finite element component in MatrixFree?

2022-11-02 Thread Peter Munch
Hi

The block version of MGTransferGlobalCoarsening is 
MGTransferBlockGlobalCoarsening (see  
https://www.dealii.org/developer/doxygen/deal.II/classMGTransferBlockGlobalCoarsening.html).

I don't know the details of step-29. But what I would recommend you to do 
is to take a look 
at 
https://github.com/peterrum/dealii-spirk/blob/bb52df5d1023f9c41ca51a082be769bc0171bf75/include/operator.h#L616-L660.
 
Here, I am solving a complex-valued Helmholtz operator (in my case a sum of 
mass and Laplace matrix, which arises from a complex implicit Runge-Kutta 
formulation; see https://arxiv.org/abs/2209.06700). This complex code was 
the motivation for writing MGTransferBlockGlobalCoarsening in the first 
place^^

I think the code should answer most of your questions.

Regarding
> FEEvaluation<...,1> real(matrix_free_data, 0) and FEEvaluation<...,1> 
real(matrix_free_data, 1)

I guess you have to replace the "1" by a "0" in the second FEEval.

Hope this helps,
Peter

On Wednesday, 2 November 2022 at 17:55:44 UTC+1 yy.wayne wrote:

> Well, a sad thing I noticed is that a function used for p-multigrid in step-75, 
> reinit() in MGTwoLevelTransfer, is implemented only for 
> LinearAlgebra::distributed::Vector, not 
> LinearAlgebra::distributed::BlockVector.
> [image: 202211-3.png]
>
> I'm fine with writing some unimplemented functions myself, but some guidance 
> is really helpful. While going through this problem I found different 
> choices and not sure what's the best way having this done. (For example, 
> LinearAlgebraTrilinos::MPI::BlockVector or 
> LinearAlgebra::distributed::BlockVector? Whether we need 
> locally_relevant_dofs when reinit solution vector?(It's not used here 
> stokes_computation 
> )
>  
> Or is the non-specialized MGTransferLevel Class works as well for p 
> coarsening?)
>
> In short, I'm solving step-29(2 components) with matrix-free and 
> p-multigrid technique. Is current dealii capable of solving it? If not 
> what's the fastest way to implement it? Thanks!
> 在2022年11月2日星期三 UTC+8 21:40:36 写道:
>
>> I tried to modify my code that FEEvaluation is constructed without 
>> matrix_free_data as in the above .cc in dealii/test, but get following 
>> error information:
>> "FEEvaluation was initialized without a matrix-free object. 
>> Integer indexing is not possible".
>>
>> Guess the only method to do MatrixFree problem with multi-components now 
>> is through BlockVector in matrix_vector_Stokes test 
>> ?
>>  
>> Since I never wrote a dealii project with blocked DoFs before, I'm afraid 
>> code besides the MatrixFree parts should be changed as well... Should I try to 
>> rearrange the step-29-like problem with block vectors? 
>>
>> 在2022年11月2日星期三 UTC+8 17:24:23 写道:
>>
>>> In test/matrix-free/assemble_matrix_02.cc 
>>> 
>>>  a 
>>> Stokes matrix is assembled in MatrixFree form, and the DoFs corresponding to 
>>> velocity and pressure are accessed via FEEvaluation<...,dim> velocity(..., 0) 
>>> and FEEvaluation<...,1> velocity(..., dim), respectively.
>>>
>>> I have a multi-comp system two as in step-29 with real and imaginary 
>>> parts. But initializing the 2 FEEvaluations by FEEvaluation<...,1> 
>>> real(matrix_free_data, 0) and FEEvaluation<...,1> real(matrix_free_data, 1) 
>>> gives error, because matrix_free_data.n_components() is 1 not 2. Maybe it's 
>>> because I'm using fe_collection, not just FESystem in the 
>>> assemble_matrix_02 test.
>>>
>>> If I have a FEEvaluation<...,dim=2> integrator including real and 
>>> imaginary parts, is there a way to decouple them like in FEValues (for 
>>> example, fe.system_to_component_index, or fe_values_real = 
>>> fe_values[FEValuesExtractors])?
>>>
>>> Best,
>>> Wayne
>>>
>>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/0def21bf-6523-42f2-bec4-f25c9e3b0279n%40googlegroups.com.


[deal.II] Re: Apply boundary integral to diagonal in MatrixFree

2022-11-02 Thread Peter Munch
Hi,

you are right, such a function is unfortunately missing in deal.II. It has 
been on my TODO list for a long time, but I didn't get around to implementing 
it, since I haven't had to use DG for a while and in most of my 
applications I can move the BCs to the RHS.

Setting up the matrix is not particularly difficult. You can take a look at 
our CFD code 
(https://github.com/exadg/exadg/blob/66a38ee6c0a03bbcc003ff4c19d94354face492b/include/exadg/operators/operator_base.cpp#L801-L827, 
https://github.com/exadg/exadg/blob/66a38ee6c0a03bbcc003ff4c19d94354face492b/include/exadg/operators/operator_base.cpp#L1681-L1967). 
Computing the diagonal is a bit more difficult. What the function in 
deal.II does is compute a column of the element stiffness matrix and 
apply the constraints right away, without ever having to assemble the system 
matrix. I am not sure how much work it is to extend the function so that it 
also considers face integrals; I think the logic should be similar.

> I think I will write new functions if there's no appropriate one.

That would be great!

Hope this helps,
Peter


On Tuesday, 1 November 2022 at 13:25:09 UTC+1 yy.wayne wrote:

> Maybe I should describe my question again.
>
> In short, to use Chebyshev smoother and solve mg coarse grid straightly, 
> we need *MatrixFreeTools::compute_diagonal and 
> MatrixFreeTools::compute_matrix,* respectively.
>
> However, for problems involving a Robin boundary condition where a boundary 
> integral is required, those 2 functions will not work (I think). That's because 
> FEFaceEvaluation should be included as well.
>
> I think I will write new functions if there's no appropriate one. Any 
> hint, especially on FEFaceEvaluation and multi-component MatrixFree 
> formulation is welcomed.
>
> 在2022年10月27日星期四 UTC+8 21:50:39 写道:
>
>> To write the boundary_integral in MatrixFree for step-29 is kind of 
>> confusing as well...
>> For FEFaceEvaluation with 2 components(real and imaginary parts), whether 
>> a index correspond to src(alpha and beta) or basis function(phi_0 for phi 
>> and phi_1 for psi) is a problem. I cannot test this code yet due to the 
>> former question.
>> [image: 20221027_1.png]
>> [image: 20221027_2.png]
>>
>> 在2022年10月27日星期四 UTC+8 16:01:31 写道:
>>
>>> Hi there,
>>>
>>> I've been applying MatrixFree method to Step-29 where face integrals are 
>>> involved, different from step-37, step-75 and so on. Firstly I think loop() 
>>> instead of cell_loop() should be used. Secondly, similar to step-75, 
>>> Chebyshev smoother and a coarsest level matrix are needed in my code. In 
>>> step-75 we get diagonal vector by *MatrixFreeTools::compute_diagonal 
>>> and MatrixFreeTools::compute_matrix*.
>>>
>>> However, the face integral makes them different. Both functions only 
>>> take cell_operations(FEEvaluation), not face_operations(FEFaceEvaluation). 
>>> My question is how to compute diagonal and matrix when my weak form has 
>>> face integrals terms (only boundary integral, no interface integral). 
>>>
>>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/2e80dd35-9e39-48c3-8494-7114bf8a18cdn%40googlegroups.com.


Re: [deal.II] Re: measuring cpu and wall time for assembly routine

2022-10-23 Thread Peter Munch
Now, I am lost. The fast one is 3 times slower!?

Peter

On Sunday, 23 October 2022 at 10:33:38 UTC+2 Simon wrote:

> Certainly.
> When using the slow path, i.e. MappingQ in version 9.3.2, the cpu time is 
> about 6.3 seconds. 
> In case of the fast path, i.e. MappingQGeneric in version 9.3.2, the cpu 
> time is about 18.7 seconds.
> Crudely,  the .reinit function associated with the FEPointEvaluation 
> objects is called about 1.2 million times.
>
> Best,
> Simon
>
> Am Sa., 22. Okt. 2022 um 17:09 Uhr schrieb Peter Munch <
> peterr...@gmail.com>:
>
>> Happy about that! May I ask you to post the results here. I am curious 
>> since I never actually compared timings (and only blindly trusted Martin).
>>
>> Thanks,
>> Peter
>>
>> On Saturday, 22 October 2022 at 16:46:16 UTC+2 Simon wrote:
>>
>>> Yes, the issue is resolved and the computation time decreased 
>>> significantly.
>>>
>>> Thank you all!
>>>
>>> -Simon
>>>
>>> Am Sa., 22. Okt. 2022 um 12:57 Uhr schrieb Peter Munch <
>>> peterr...@gmail.com>:
>>>
>>>> You are right. Release 9.3 uses the slow path for MappingQ. The reason 
>>>> is that here 
>>>> https://github.com/dealii/dealii/blob/ccfaddc2bab172d9d139dabc044d028f65bb480a/include/deal.II/matrix_free/fe_point_evaluation.h#L708-L711
>>>>  
>>>> we check for MappingQGeneric. At that time MappingQ and MappingQGeneric 
>>>> were different classes. In the meantime, we have merged the classes so 
>>>> that 
>>>> in release 9.4 and on master this is not an issue anymore. Is there a 
>>>> chance that you update deal.II. Alternatively, you could use 
>>>> MappingQGeneric instead of MappingQ.
>>>>
>>>> Hope this resolves this issue!
>>>>
>>>> Peter
>>>>
>>>> On Friday, 21 October 2022 at 10:59:35 UTC+2 Simon wrote:
>>>>
>>>>> I revised the appendix from my last message a little bit and attach 
>>>>> now a minimal working example (just 140 lines) along with a 
>>>>> CMakeLists.txt.
>>>>> After checking the profiling results from valgrind, the combination of 
>>>>> MappingQ with FE_Q takes *not* the fast path.
>>>>>
>>>>> For info: I use dealii version 9.3.2
>>>>>
>>>>> Best,
>>>>> Simon
>>>>>
>>>>> Am Do., 20. Okt. 2022 um 18:11 Uhr schrieb Simon Wiesheier <
>>>>> simon.w...@gmail.com>:
>>>>>
>>>>>> " When you use FEPointEvaluation, you should construct it only once 
>>>>>> and re-use the same object for different points. Furthermore, you should 
>>>>>> also avoid to create "p_dofs" and the "std::vector" near the  I was not 
>>>>>> clear with my original message. Anyway, the problem is the FEValues 
>>>>>> object 
>>>>>> that gets used. I am confused by your other message that you use FE_Q 
>>>>>> together with MappingQ - that combination should be supported and if it 
>>>>>> is 
>>>>>> not, we should take a look at a (reduced) code from you. "
>>>>>>
>>>>>> I added a snippet of my code (see appendix) in which I describe the 
>>>>>> logic as to what I am doing with FEPointEvaluation. 
>>>>>> In fact, constructing FEPointEvaluation (and the vector p_dofs) once 
>>>>>> and re-using them brings only minor changes as the overall costs are 
>>>>>> dominated by the call to reinit(). 
>>>>>> But, of course, it helps at least.
>>>>>>
>>>>>> I am surprised too that the fast path is not used. Maybe you can 
>>>>>> identify a problem in my code.
>>>>>> Thank you!
>>>>>>
>>>>>> Best,
>>>>>> Simon
>>>>>>
>>>>>> Am Do., 20. Okt. 2022 um 17:02 Uhr schrieb Martin Kronbichler <
>>>>>> kronbichl...@gmail.com>:
>>>>>>
>>>>>>> Dear Simon,
>>>>>>>
>>>>>>> When you use FEPointEvaluation, you should construct it only once 
>>>>>>> and re-use the same object for different points. Furthermore, you 
>>>>>>> should 
>>>>>>> also avoid to create "p_dofs" and the "std::vector&quo

Re: [deal.II] Re: measuring cpu and wall time for assembly routine

2022-10-22 Thread Peter Munch
Happy to hear that! May I ask you to post the results here? I am curious, 
since I never actually compared timings (and only blindly trusted Martin).

Thanks,
Peter

On Saturday, 22 October 2022 at 16:46:16 UTC+2 Simon wrote:

> Yes, the issue is resolved and the computation time decreased 
> significantly.
>
> Thank you all!
>
> -Simon
>
> Am Sa., 22. Okt. 2022 um 12:57 Uhr schrieb Peter Munch <
> peterr...@gmail.com>:
>
>> You are right. Release 9.3 uses the slow path for MappingQ. The reason is 
>> that here 
>> https://github.com/dealii/dealii/blob/ccfaddc2bab172d9d139dabc044d028f65bb480a/include/deal.II/matrix_free/fe_point_evaluation.h#L708-L711
>>  
>> we check for MappingQGeneric. At that time MappingQ and MappingQGeneric 
>> were different classes. In the meantime, we have merged the classes so that 
>> in release 9.4 and on master this is not an issue anymore. Is there a 
>> chance that you update deal.II. Alternatively, you could use 
>> MappingQGeneric instead of MappingQ.
>>
>> Hope this resolves this issue!
>>
>> Peter
>>
>> On Friday, 21 October 2022 at 10:59:35 UTC+2 Simon wrote:
>>
>>> I revised the appendix from my last message a little bit and attach now 
>>> a minimal working example (just 140 lines) along with a CMakeLists.txt.
>>> After checking the profiling results from valgrind, the combination of 
>>> MappingQ with FE_Q takes *not* the fast path.
>>>
>>> For info: I use dealii version 9.3.2
>>>
>>> Best,
>>> Simon
>>>
>>> Am Do., 20. Okt. 2022 um 18:11 Uhr schrieb Simon Wiesheier <
>>> simon.w...@gmail.com>:
>>>
>>>> " When you use FEPointEvaluation, you should construct it only once and 
>>>> re-use the same object for different points. Furthermore, you should also 
>>>> avoid to create "p_dofs" and the "std::vector" near the  I was not clear 
>>>> with my original message. Anyway, the problem is the FEValues object that 
>>>> gets used. I am confused by your other message that you use FE_Q together 
>>>> with MappingQ - that combination should be supported and if it is not, we 
>>>> should take a look at a (reduced) code from you. "
>>>>
>>>> I added a snippet of my code (see appendix) in which I describe the 
>>>> logic as to what I am doing with FEPointEvaluation. 
>>>> In fact, constructing FEPointEvaluation (and the vector p_dofs) once 
>>>> and re-using them brings only minor changes as the overall costs are 
>>>> dominated by the call to reinit(). 
>>>> But, of course, it helps at least.
>>>>
>>>> I am surprised too that the fast path is not used. Maybe you can 
>>>> identify a problem in my code.
>>>> Thank you!
>>>>
>>>> Best,
>>>> Simon
>>>>
>>>> Am Do., 20. Okt. 2022 um 17:02 Uhr schrieb Martin Kronbichler <
>>>> kronbichl...@gmail.com>:
>>>>
>>>>> Dear Simon,
>>>>>
>>>>> When you use FEPointEvaluation, you should construct it only once and 
>>>>> re-use the same object for different points. Furthermore, you should also 
>>>>> avoid to create "p_dofs" and the "std::vector" near the  I was not clear 
>>>>> with my original message. Anyway, the problem is the FEValues object that 
>>>>> gets used. I am confused by your other message that you use FE_Q together 
>>>>> with MappingQ - that combination should be supported and if it is not, we 
>>>>> should take a look at a (reduced) code from you.
>>>>>
>>>>> Regarding the high timings: There is some parallelization by tasks 
>>>>> that gets done inside the constructor of FEValues. This has good intents 
>>>>> for the case that we are in 3D and have a reasonable amount of work to 
>>>>> do. 
>>>>> However, you are in 1D (if I read your code correctly), and then it is 
>>>>> having adverse effects. The reason is that the constructor of FEValues is 
>>>>> very likely completely dominated by memory allocation. When we have 1 
>>>>> thread, everything is fine, but when we have multiple threads working 
>>>>> they 
>>>>> will start to interfere with each other when they request memory through 
>>>>> malloc(), which has to be coordinated by the operating system (and thus 
>>>>> gets slower). In fact, the big gap between compute time and wall time shows that there is a lot of time wasted by "system time" that does not do actual work on the cores.

Re: [deal.II] Re: measuring cpu and wall time for assembly routine

2022-10-22 Thread Peter Munch
You are right. Release 9.3 uses the slow path for MappingQ. The reason is 
that here 
https://github.com/dealii/dealii/blob/ccfaddc2bab172d9d139dabc044d028f65bb480a/include/deal.II/matrix_free/fe_point_evaluation.h#L708-L711
 
we check for MappingQGeneric. At that time MappingQ and MappingQGeneric 
were different classes. In the meantime, we have merged the classes so that 
in release 9.4 and on master this is not an issue anymore. Is there a 
chance that you could update deal.II? Alternatively, you could use 
MappingQGeneric instead of MappingQ.

Hope this resolves this issue!

Peter

On Friday, 21 October 2022 at 10:59:35 UTC+2 Simon wrote:

> I revised the appendix from my last message a little bit and now attach a 
> minimal working example (just 140 lines) along with a CMakeLists.txt.
> After checking the profiling results from valgrind, the combination of 
> MappingQ with FE_Q does *not* take the fast path.
>
> For info: I use dealii version 9.3.2
>
> Best,
> Simon
>
> Am Do., 20. Okt. 2022 um 18:11 Uhr schrieb Simon Wiesheier <
> simon.w...@gmail.com>:
>
>> " When you use FEPointEvaluation, you should construct it only once and 
>> re-use the same object for different points. Furthermore, you should also 
>> avoid to create "p_dofs" and the "std::vector" near the  I was not clear 
>> with my original message. Anyway, the problem is the FEValues object that 
>> gets used. I am confused by your other message that you use FE_Q together 
>> with MappingQ - that combination should be supported and if it is not, we 
>> should take a look at a (reduced) code from you. "
>>
>> I added a snippet of my code (see appendix) in which I describe the logic 
>> as to what I am doing with FEPointEvaluation. 
>> In fact, constructing FEPointEvaluation (and the vector p_dofs) once and 
>> re-using them brings only minor changes as the overall costs are dominated 
>> by the call to reinit(). 
>> But, of course, it helps at least.
>>
>> I am surprised too that the fast path is not used. Maybe you can identify 
>> a problem in my code.
>> Thank you!
>>
>> Best,
>> Simon
>>
>> Am Do., 20. Okt. 2022 um 17:02 Uhr schrieb Martin Kronbichler <
>> kronbichl...@gmail.com>:
>>
>>> Dear Simon,
>>>
>>> When you use FEPointEvaluation, you should construct it only once and 
>>> re-use the same object for different points. Furthermore, you should also 
>>> avoid to create "p_dofs" and the "std::vector" near the  I was not clear 
>>> with my original message. Anyway, the problem is the FEValues object that 
>>> gets used. I am confused by your other message that you use FE_Q together 
>>> with MappingQ - that combination should be supported and if it is not, we 
>>> should take a look at a (reduced) code from you.
>>>
>>> Regarding the high timings: There is some parallelization by tasks that 
>>> gets done inside the constructor of FEValues. This has good intents for the 
>>> case that we are in 3D and have a reasonable amount of work to do. However, 
>>> you are in 1D (if I read your code correctly), and then it is having 
>>> adverse effects. The reason is that the constructor of FEValues is very 
>>> likely completely dominated by memory allocation. When we have 1 thread, 
>>> everything is fine, but when we have multiple threads working they will 
>>> start to interfere with each other when they request memory through 
>>> malloc(), which has to be coordinated by the operating system (and thus 
>>> gets slower). In fact, the big gap between compute time and wall time shows 
>>> that there is a lot of time wasted by "system time" that does not do actual 
>>> work on the cores.
>>>
>>> I guess the library could have a better measure of when to spawn tasks 
>>> in FEValues in similar context, but it is a lot of work to get this right. 
>>> (This is why I keep avoiding it in critical functions.)
>>>
>>> Best,
>>> Martin
>>>
>>>
>>> On 20.10.22 16:47, Simon Wiesheier wrote:
>>>
>>> Update:
>>>
>>> I profiled my program with valgrind --tool=callgrind and could figure 
>>> out that
>>> FEPointEvaluation creates an FEValues object along with a quadrature 
>>> object under the hood. 
>>> Closer inspection revealed that all constructors, destructors,... 
>>> associated with FEPointEvaluation 
>>> need roughly 5000 instructions more (per call!).
>>> That said, FEValues is indeed the faster approach, at least for FE_Q 
>>> elements.
>>>
>>> export DEAL_II_NUM_THREADS=1
>>> eliminated the gap between cpu and wall time. 
>>> Using FEValues directly, I get cpu time 19.8 seconds
>>> and in the case of FEPointEvaluation cpu time = 21.9 seconds;
>>> Wall times are in the same ballpark. 
>>> Out of curiosity, why does multi-threading produce such high wall times 
>>> (200 seconds) in my case? 
>>>
>>> These times are far too big given that the solution of the linear system 
>>> takes only about 13 seconds.
>>> But based on what all of you have said, there is probably no other way 
>>> to implement my problem. 
>>>
>>> Best
>>> Simon
>>>
>>> Am Do., 20. Okt. 

Re: [deal.II] Re: measuring cpu and wall time for assembly routine

2022-10-20 Thread Peter Munch
> FEPointEvaluation creates an FEValues object along with a quadrature 
object under the hood.
Closer inspection revealed that all constructors, destructors,... 
associated with FEPointEvaluation
need roughly 5000 instructions more (per call!).
That said, FEValues is indeed the faster approach, at least for FE_Q 
elements.

What type of Mapping are you using? If you take a look 
at 
https://github.com/dealii/dealii/blob/ad13824e599601ee170cb2fd1c7c3099d3d5b0f7/source/matrix_free/fe_point_evaluation.cc#L40-L95
 
you can see when the fast path of FEPointEvaluation is taken. Indeed, the 
slow path (FEValues) is taken here. One question: are you running in release 
or debug mode?

Hope this brings us closer to the issue,
Peter

On Thursday, 20 October 2022 at 16:47:17 UTC+2 Simon wrote:

> Update:
>
> I profiled my program with valgrind --tool=callgrind and could figure out 
> that
> FEPointEvaluation creates an FEValues object along with a quadrature 
> object under the hood. 
> Closer inspection revealed that all constructors, destructors,... 
> associated with FEPointEvaluation 
> need roughly 5000 instructions more (per call!).
> That said, FEValues is indeed the faster approach, at least for FE_Q 
> elements.
>
> export DEAL_II_NUM_THREADS=1
> eliminated the gap between cpu and wall time. 
> Using FEValues directly, I get cpu time 19.8 seconds
> and in the case of FEPointEvaluation cpu time = 21.9 seconds;
> Wall times are in the same ballpark. 
> Out of curiosity, why does multi-threading produce such high wall times 
> (200 seconds) in my case? 
>
> These times are far too big given that the solution of the linear system 
> takes only about 13 seconds.
> But based on what all of you have said, there is probably no other way 
> to implement my problem. 
>
> Best
> Simon
>
> Am Do., 20. Okt. 2022 um 11:55 Uhr schrieb Simon Wiesheier <
> simon.w...@gmail.com>:
>
>> Dear Martin and Wolfgang,
>>
>> " You seem to be looking for FEPointEvaluation. That class is shown in 
>> step-19 and provides, for simple FiniteElement types, a much faster way to 
>> evaluate solutions at arbitrary points within a cell. Do you want to give 
>> it a try? "
>>
>> I implemented the FEPointEvaluation approach like this:
>>
>> FEPointEvaluation<1,1> fe_eval(mapping,
>> FE_Q<1>(1),
>> update_gradients | 
>> update_values); 
>> fe_eval.reinit(cell, 
>> make_array_view(std::vector<Point<1>>{ref_point_energy_vol}));
>> Vector<double> p_dofs(2);
>> cell->get_dof_values(solution_global, p_dofs);
>> fe_eval.evaluate(make_array_view(p_dofs),
>> EvaluationFlags::values | 
>> EvaluationFlags::gradients);
>> double val = fe_eval.get_value(0);
>> Tensor<1,1> grad = fe_eval.get_gradient(0);
>>
>> I am using FE_Q elements of degree one and a MappingQ object also of 
>> degree one.
>>
>> Frankly, I do not really understand the measured computation times. 
>> My program has several loadsteps with nested Newton iterations:
>> Loadstep 1:
>> Assembly 1: cpu time 12.8 sec  wall time 268.7 sec
>> Assembly 2: cpu time 17.7 sec  wall time 275.2 sec 
>> Assembly 3: cpu time 22.3 sec  wall time 272.6 sec 
>> Assembly 4: cpu time 23.8 sec  wall time 271.3sec 
>> Loadstep 2:
>> Assembly 1: cpu time 14.3 sec  wall time 260.0 sec
>> Assembly 2: cpu time 16.9 sec  wall time 262.1 sec 
>> Assembly 3: cpu time 18.5 sec  wall time 270.6 sec 
>> Assembly 4: cpu time 17.1 sec  wall time 262.2 sec 
>> ...
>>
>> Using FEValues instead of FEPointEvaluation, the results are:
>> Loadstep 1:
>> Assembly 1: cpu time 23.9 sec  wall time 171.0 sec
>> Assembly 2: cpu time 32.5 sec  wall time 168.9 sec 
>> Assembly 3: cpu time 33.2 sec  wall time 168.0 sec 
>> Assembly 4: cpu time 32.7 sec  wall time 166.9 sec 
>> Loadstep 2:
>> Assembly 1: cpu time 24.9 sec  wall time 168.0 sec 
>> Assembly 2: cpu time 34.7 sec  wall time 167.3 sec 
>> Assembly 3: cpu time 33.9 sec  wall time 167.8 sec 
>> Assembly 4: cpu time 34.3 sec  wall time 167.7 sec 
>> ...
>>
>> Clearly, the fluctuations using FEValues are smaller than in case of 
>> FEPointEvaluation. 
>> Anyway, using FEPointEvaluation the cpu time is smaller but the wall time 
>> substantially bigger. 
>> If I am not mistaken, the values cpu time 34.3 sec and wall time 167.7 
>> sec mean that
>> the cpu needs 34.3 sec to execute my assembly routine and has to wait in 
>> the
>> remaining 167.7-34.3 seconds. 
>> This huge gap between cpu and wall time has to be related to what I do 
>> with FEValues or FEPointEvaluation
>> as cpu and wall time are nearly balanced if I use neither of them.
>> What might be the problem?
>>
>> Best
>> Simon
>>
>>
>>
>>
>>
>> Am Mi., 19. Okt. 2022 um 22:34 Uhr schrieb Wolfgang Bangerth <
>> bang...@colostate.edu>:
>>
>>> On 10/19/22 08:45, Simon Wiesheier wrote:
>>> > 
>>> > What I want to do boils down to the following:
>>> > Given the reference co-ordinates of a point 'p', along with the cell 
>>> on 
>>> 

[deal.II] Re: Run time analysis for step-75 with matrix-free or matrix-based method

2022-10-19 Thread Peter Munch
Hi Wayne,

your numbers make total sense. Don't forget that you are running at high 
order: degree=6! The number of non-zeroes per element stiffness matrix is 
((degree + 1)^dim)^2, and the cost of computing the element stiffness matrix 
is even ((degree + 1)^dim)^3 if I am not mistaken (3 nested loops: i, j, and 
q). Higher orders are definitely made for matrix-free algorithms!

Out of curiosity: how large is the setup cost of MG in the case of the 
matrix-free run? As a comment: don't be surprised that the setup costs are 
relatively high compared to the solution process: you are probably setting 
up a new Triangulation-, DoFHandler-, MatrixFree-, ... -object per level. In 
many simulations, you can reuse these objects, since you don't perform AMR 
every time step. 

Peter

On Wednesday, 19 October 2022 at 10:38:34 UTC+2 yy.wayne wrote:

> Hello everyone,
>
> I modified step-75 a little bit and tried to test its runtime. However, the 
> result is kind of inexplicable from my point of view, especially the 
> *disproportionate 
> assemble time and solve time*. Here are some changes:
> 1. a matrix-based version of step75 is constructed to compare with 
> matrix-free one.
> 2. no mesh refinement and no GMG, and fe_degree is constant across all 
> cells within every cycle. Fe_degree adds one after each cycle. I make this 
> setting to compare runtime due to fe_degree.
> 3. a direct solver on the coarsest grid. I think it won't affect runtime since 
> the coarsest grid never changes
>
> For final cycle it has fe_degree=6 and DoFs=111,361.
> For matrix-based method, overall runtime is 301s where setup system(84s) 
> and solve system(214s) take up most. In step-75 solve system actually did 
> both multigrid matrices assembling, smoother construction, and CG solving. 
> Runtime of this case is shown:
> [image: matrix-based.png]
> On each level I print time assembling level matrix. *The solve system is 
> mostly decomposed to MG matrices assembling(83.9+33.6+...=133s), smoother 
> set up(65s), coarse grid solve(6s) and CG solve(2.56).* My doubt is why 
> actual CG solve only takes 2.56 out of 301 seconds for this problem? The 
> time spent on assembling and smoother construction accounts for so much 
> that it seems a burden.
>
> For matrix-free method however, runtime is much smaller without assembling 
> matrices. Besides, CG solve cost more because of more computation required 
> by matrix-free, I guess. But that *smoother construction time reduces 
> significantly* as well is beyond my expectation.
> [image: matrix-free.png]
>
> Matrix-free framework saves assembling time but it seems too efficient to 
> be real. The text in bold is my main confusion. Could someone share some 
> experience on matrix-free and multigrid methods' time consumption?
>
> Best,
> Wayne
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/36311361-2ecb-4b81-b196-dd89d2263534n%40googlegroups.com.


[deal.II] Re: Trying to increase precision of floating point values in step3 of tutorial

2022-10-10 Thread Peter Munch
> It is binary and so outputs as many digits as there are in a 32-bit 
floating point number.

@WB Which is equivalent to "deal.ii outputs the solution vector values upto 
7 decimal" if I am not mistaken.

To be able to switch between double and single precision would be nice. 
It's not the first time I have heard someone wanting this feature. If you would 
be willing to implement this, that would be great.

But for the time being, you could also quickly change some floats (e.g. 
https://github.com/dealii/dealii/blob/a95f7e5f2e7d15c10670ee93313dd357178a18b3/source/base/data_out_base.cc#L1547-L1548)
 
to doubles.

Peter

On Monday, 10 October 2022 at 15:48:11 UTC+2 ce21...@smail.iitm.ac.in wrote:

> Thank you.
>
> I used the setprecision function to output the result to desired accuracy.
> However, I noticed that deal.ii writes the output to the vtk file only up to 
> a certain precision (6).
> Is there a way to write output to the vtk file according to required 
> precision?
>
> Thank you
>
> On Thursday, September 29, 2022 at 6:13:28 PM UTC+5:30 
> bruno.t...@gmail.com wrote:
>
>> Hello,
>>
>> What do you mean exactly by "deal.ii outputs the solution vector values 
>> upto 7 decimal"? Which function are you using? If you are using a function 
>> like std::cout to inspect the values in the vector, the default precision 
>> is 6 (I think) but it can be changed using std::setprecision (see 
>> https://en.cppreference.com/w/cpp/io/manip/setprecision)
>>
>> Best,
>>
>> Bruno
>>
>> On Thursday, September 29, 2022 at 4:24:49 AM UTC-4 
>> ce21...@smail.iitm.ac.in wrote:
>>
>>> Hello everyone.
>>> Is there a way to increase precision of floating point values in 
>>> deal.ii? I am modifying step3 and checking my solution against a linear 
>>> exact solution. I am getting machine precision (around 10^-16 ) for 
>>> 4,16...4096 elements. However, when I refine further, I am getting an error 
>>> of 10^-6 and 10^-5 for 16384 and 65536 elements. What I realized is that 
>>> deal.ii outputs the solution vector values up to 7 decimal places only and 
>>> hence the error begins to show up for very fine meshes.
>>> Is there a way to increase my precision to, say, up to 14 decimal places?
>>> I tried using long double instead of double for my various variables but 
>>> it gives the following error:
>>> error: no matching function for call to 
>>> ‘interpolate_boundary_values(dealii::DoFHandler<2, 2>&, int, 
>>> dealii::Functions::ConstantFunction<2>, std::map<dealii::types::global_dof_index, double>&)’
>>>   296 |boundary_values);
>>>
>>> Thank You!
>>>
>>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/bb6f50fc-7339-4132-a1c8-2758dd58fc66n%40googlegroups.com.


Re: [deal.II] Re: Matrix-based p-multigrid preconditioning

2022-10-06 Thread Peter Munch

Yes.

Peter

On 07.10.22 07:44, 'yy.wayne' via deal.II User Group wrote:
I see. So that's why we create new dof_handlers in the solve_with_gmg() 
function instead of continuing to use dof_handler as in other tutorials. If 
I want to build mg_matrices and their SparsityPattern, they should be 
based on dof_handlers as well, is that correct?


Thank you

Written on Friday, 7 October 2022 at 13:35:16 UTC+8:

Hej,

> One thing I'm confused is that why we use
"make_hanging_node_constraints" and "interpolate_boundary_values"
for a new constraints in solve_with_gmg in step-75, instead of the
boundary_constraints as done in previous MG tutorials.

Your observation is correct. It is related to the fact that
p-multigrid in deal.II uses the new global-coarsening
infrastructure (it was introduced in release 9.3). In the other
tutorials, the local-smoothing infrastructure is used.

In a nutshell, the difference is that local smoothing is working
on refinement levels, while global coarsening works on the
complete active levels of multiple Triangulation/DoFHandler
objects. In the case of h-multigrid, one works on multiple meshes,
which are created by globally coarsening the finest mesh (this is
the reason for the name "global coarsening"); in the case of
p-multigrid, one would work with the same Triangulation but would
distribute different FEs on different DoFHandler objects. Since
you work on the active level, you can simply use the functions you
would use if you don't work on refinement levels.

For more details see our publication: https://arxiv.org/abs/2203.12292

Hope this helps,
Peter

On Wednesday, 5 October 2022 at 14:17:03 UTC+2 yy.wayne wrote:

Hello sir,

One thing I'm confused is that why we use
"make_hanging_node_constraints" and
"interpolate_boundary_values" for a new constraints in
solve_with_gmg in step-75, instead of the boundary_constraints
as done in previous MG tutorials.

Written on Tuesday, 4 October 2022 at 00:11:49 UTC+8:

Hi!

The "Possibilities for extensions" section of step-75
gives you a few hints on how to achieve p-multigrid
preconditioning with matrix-based methods:

https://www.dealii.org/current/doxygen/deal.II/step_75.html#Solvewithmatrixbasedmethods

If you look for an example implementation, the "hpbox"
tool demonstrates both matrix-based and matrix-free
implementations of the p-multigrid method.
https://zenodo.org/record/6425947
https://github.com/marcfehling/hpbox/

Best,
Marc

On Monday, October 3, 2022 at 3:39:51 AM UTC-6 yy.wayne wrote:

Hello,

I plan to test the convergence of polynomial multigrid
preconditioning on a problem, with a matrix-based
framework. However, step-75 is a bit confusing since I
cannot clearly separate classes contributed by
matrix-free and p-multigrid.

Specifically, in step-16 (with MeshWorker) and step-56
(assemble cell_matrix by hand) there are
assemble_system() and assemble_multigrid(), while in
step-75 there are solve_system() which calls
solve_with_gmg and mg_solve. Is that due to matrix-free
or p-multigrid? By the way, is there a code gallery
for matrix-based polynomial-multigrid ?

Thanks



--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/fa4b497d-7357-0c21-7056-30b0d2a35736%40gmail.com.


[deal.II] Re: Matrix-based p-multigrid preconditioning

2022-10-06 Thread Peter Munch
Hej,

> One thing I'm confused is that why we use "make_hanging_node_constraints" 
and "interpolate_boundary_values" for a new constraints in solve_with_gmg 
in step-75, instead of the boundary_constraints as done in previous MG 
tutorials.

Your observation is correct. It is related to the fact that p-multigrid in 
deal.II uses the new global-coarsening infrastructure (it was introduced in 
release 9.3). In the other tutorials, the local-smoothing infrastructure is 
used. 

In a nutshell, the difference is that local smoothing is working on 
refinement levels, while global coarsening works on the complete active 
levels of multiple Triangulation/DoFHandler objects. In the case of 
h-multigrid, one works on multiple meshes, which are created by globally 
coarsening the finest mesh (this is the reason for the name "global 
coarsening"); in the case of p-multigrid, one would work with the same 
Triangulation but would distribute different FEs on different DoFHandler 
objects. Since you work on the active level, you can simply use the 
functions you would use if you don't work on refinement levels.
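As a rough sketch of what that means in practice (untested fragment, requires deal.II with the global-coarsening classes of release 9.3 and later; `tria`, `min_degree`, and `max_degree` are placeholders), p-multigrid keeps one DoFHandler per polynomial level on the same Triangulation:

```cpp
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_q.h>

#include <memory>
#include <vector>

using namespace dealii;

// One DoFHandler per polynomial level, all on the same Triangulation:
std::vector<std::unique_ptr<DoFHandler<2>>> dof_handlers;
for (unsigned int degree = min_degree; degree <= max_degree; ++degree)
  {
    auto dh = std::make_unique<DoFHandler<2>>(tria);
    dh->distribute_dofs(FE_Q<2>(degree));
    dof_handlers.push_back(std::move(dh));
  }
// Transfer between consecutive levels would then be built with
// MGTwoLevelTransfer and collected in MGTransferGlobalCoarsening.
```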

For more details see our publication: https://arxiv.org/abs/2203.12292

Hope this helps,
Peter

On Wednesday, 5 October 2022 at 14:17:03 UTC+2 yy.wayne wrote:

> Hello sir,
>
> One thing I'm confused is that why we use "make_hanging_node_constraints" 
> and "interpolate_boundary_values" for a new constraints in solve_with_gmg 
> in step-75, instead of the boundary_constraints as done in previous MG 
> tutorials.
>
> Written on Tuesday, 4 October 2022 at 00:11:49 UTC+8:
>
>> Hi!
>>
>> The "Possibilities for extensions" section of step-75 gives you a few 
>> hints on how to achieve p-multigrid preconditioning with matrix-based 
>> methods:
>>
>> https://www.dealii.org/current/doxygen/deal.II/step_75.html#Solvewithmatrixbasedmethods
>>
>> If you look for an example implementation, the "hpbox" tool demonstrates 
>> both matrix-based and matrix-free implementations of the p-multigrid method.
>> https://zenodo.org/record/6425947
>> https://github.com/marcfehling/hpbox/
>>
>> Best,
>> Marc
>>
>> On Monday, October 3, 2022 at 3:39:51 AM UTC-6 yy.wayne wrote:
>>
>>> Hello,
>>>
> I plan to test the convergence of polynomial multigrid preconditioning on a 
> problem, with a matrix-based framework. However, step-75 is a bit confusing 
> since I cannot clearly separate classes contributed by matrix-free and 
> p-multigrid. 
>>>
>>> Specifically, in step-16 (with MeshWorker) and step-56 (assemble 
>>> cell_matrix by hand) there are assemble_system() and assemble_multigrid(), 
>>> while in step-75 there are solve_system() which calls solve_with_gmg and 
> mg_solve. Is that due to matrix-free or p-multigrid? By the way, is there a 
>>> code gallery for matrix-based polynomial-multigrid ? 
>>>
>>> Thanks
>>>
>>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/856696f6-57d6-48f9-8649-035d6d513553n%40googlegroups.com.


[deal.II] Re: Importing Tetrahedron mesh from Abaqus into dealii

2022-09-17 Thread Peter Munch
Hi Masoud,

it seems that read_abaqus() calls read_ucd(), and the latter has not been 
generalized for simplex meshes yet. 
See: 
https://github.com/dealii/dealii/blob/ee0a4995de1009875a4a5572ca14bde8a1979ad5/source/grid/grid_in.cc#L948-L950.
 
I am a bit surprised that you did not get an error message. Would you be 
willing to generalize the function? You could take a look at read_vtk() or 
read_msh(), which have been generalized for arbitrary mesh types. 
Alternatively, you could use one of the latter two functions.

Hope this helps,
Peter

On Saturday, 17 September 2022 at 13:38:07 UTC+2 masou...@gmail.com wrote:

> Hi all,
>
> I import Hexahedron mesh from Abaqus *inp file into dealii by the 
> following lines of code:
>
> Triangulation<3> mesh;
> GridIn<3> grid_in;
> grid_in.attach_triangulation(mesh);
> std::ifstream input_mesh("cube.inp");
> grid_in.read_abaqus(input_mesh);
>
> and it works fine:
> [image: Screenshot 2022-09-17 at 12.34.39.png]
>
> But, with the same lines of code, for importing Tetrahedron mesh I get:
> [image: Screenshot 2022-09-17 at 12.33.40.png]
>
> which obviously is not correct. What should I do here? 
>
> Best regards,
> Masoud
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/87c1b752-9149-4d1b-9d72-51c6fa74461en%40googlegroups.com.


[deal.II] Re: Does deal.II support reading a Matrix Market file and plotting its structure?

2022-09-04 Thread Peter Munch
I am not familiar with these files. But the example given on their webpage 
(https://math.nist.gov/MatrixMarket/mmio/c/example_read.c) shows how to 
read such a file in C. The same code can be used to fill a deal.II 
SparsityPattern (see SparsityPattern::add_entries()). You can use e.g. 
SparsityPattern::print_svg() to output the structure in SVG format, which 
you can open in your browser.

Hope this helps,
Peter

On Sunday, 4 September 2022 at 05:18:11 UTC+2 ztdep...@gmail.com wrote:

> Does deal.II support reading a Matrix Market file and plotting its structure?
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/693643a1-9c6f-43af-add1-6828ae1a5034n%40googlegroups.com.


[deal.II] Re: Evaluating a field at a point

2022-07-29 Thread Peter Munch
I guess for your purpose, `VectorTools::point_values()` and 
`VectorTools::point_gradients()` are more useful, and they work in parallel. 
If the locations of the points do not change, they are cached in 
RemotePointEvaluation.
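As a rough sketch of the caching pattern (untested fragment, requires deal.II; names as of roughly release 9.4, and `points`, `triangulation`, `mapping`, `dof_handler`, `solution` are placeholders):

```cpp
#include <deal.II/base/mpi_remote_point_evaluation.h>
#include <deal.II/numerics/vector_tools.h>

using namespace dealii;

// Set up once: the expensive search for owning cells and reference
// coordinates is stored in the cache.
Utilities::MPI::RemotePointEvaluation<2> cache;
// cache.reinit(points, triangulation, mapping);

// Every time step: only the cheap evaluation is repeated.
// const std::vector<double> values =
//   VectorTools::point_values<1>(cache, dof_handler, solution);
```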

> If I am not refining the mesh, is there a more efficient why of 
accomplishing this? I assume by storing the specific cell that loc is in? 

Indeed, this is happening internally in RemotePointEvaluation. It stores 
the cell and the reference positions. In a second step the actual 
evaluation is done efficiently by FEPointEvaluation. If the interface is 
too high level for your purposes, you can take a look at the 
implementations of the setup routines, which uses 
GridTools::internal::distributed_compute_point_locations.

Hope this helps,
Peter

On Thursday, 28 July 2022 at 22:15:43 UTC+2 andre...@gmail.com wrote:

> I currently have the solution to a PDE stored in a Vector<double> solution. 
> I need to compute the solution at some Point<2> loc. I accomplish this 
> using the function 
>
> VectorTools::point_value(dof_handler, solution, loc, val);
>
> which fills val with the value I care about. Similarly, I get the 
> gradient with point_gradient. 
>
> This accomplishes what I need but if I have a lot of points like loc and 
> have to do this every timestep then it quickly becomes a computational 
> bottleneck. 
>
> If I am not refining the mesh, is there a more efficient why of 
> accomplishing this? I assume by storing the specific cell that loc is in? 
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/0b0b141a-5f0d-4793-9ca1-12414bb8f779n%40googlegroups.com.


Re: [deal.II] Re: Using AMR with MatrixFree

2022-07-08 Thread Peter Munch
> I was able to make it work with FCL, though, 
surprisingly, KellyErrorEstimator gets confused (tries to access DoFs on 
another processor) unless MatrixFreeTools::categorize_by_boundary_ids is run at 
the system setup.

The reason for that lies in the following 
lines: 
https://github.com/dealii/dealii/blob/dff43525d7b37aa672d0825b979acacb1bb86ff4/include/deal.II/matrix_free/tools.h#L231-L235.
 
However, one does not need that. Instead, you have to guarantee that 
the vector you are passing to KellyErrorEstimator contains all DoFs on 
all ghost cells. However, MatrixFree (and 
MatrixFree::initialize_dof_vector()) does some optimizations to reduce 
communication overhead. What you need to do is to create a new vector with 
correct ghosting before KellyErrorEstimator and copy over the locally owned 
data and update the ghost values. This is not too expensive. 
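A sketch of that re-ghosting step (untested fragment, requires a recent deal.II; `dof_handler`, `solution`, and `mpi_communicator` are placeholders, with `solution` being the MatrixFree-initialized vector):

```cpp
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/la_parallel_vector.h>

using namespace dealii;

// Build a vector whose ghost range covers all locally relevant DoFs,
// not only the reduced set MatrixFree communicates.
LinearAlgebra::distributed::Vector<double> ghosted(
  dof_handler.locally_owned_dofs(),
  DoFTools::extract_locally_relevant_dofs(dof_handler),
  mpi_communicator);

// Copy the owned data over and fetch the ghost entries once.
ghosted.copy_locally_owned_data_from(solution);
ghosted.update_ghost_values();

// `ghosted` can now be passed to KellyErrorEstimator::estimate().
```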

> in the case of AMR+FCL the face loop is actually over subfaces so no 
additional action by user is needed? 

Indeed.

> And is there any opportunity to query subfaces? I can't see any clear way 
to query subfaces from Triangulation or to correlate the cell from a 
matrix-free loop with a cell iterator from Triangulation or DoFHandler.

Are you thinking of a function like MatrixFree::get_cell_iterator()? 
See: 
https://www.dealii.org/developer/doxygen/deal.II/classMatrixFree.html#aad2ce5ee5867360c16019d618e250ce4
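
For illustration, a hedged sketch of mapping from a matrix-free loop back to DoFHandler iterators (the variable names are made up; the call itself is the documented MatrixFree::get_cell_iterator()):

```cpp
// Inside a cell loop: `cell` is the batch index, and each batch holds up to
// VectorizedArray<double>::size() physical cells (lanes).
for (unsigned int lane = 0;
     lane < matrix_free.n_active_entries_per_cell_batch(cell);
     ++lane)
  {
    const auto cell_iterator = matrix_free.get_cell_iterator(cell, lane);
    // cell_iterator behaves like a DoFHandler<dim>::cell_iterator, so the
    // usual Triangulation/DoFHandler queries (neighbors, faces, ...) apply.
  }
```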

Peter

On Friday, 8 July 2022 at 14:23:55 UTC+2 Alexander Kiselyov wrote:

> Thank you for the answer, Peter!
>
> I was able to make it work with FCL, though, surprisingly, 
> KellyErrorEstimator gets confused (tries to access DoFs on another 
> processor) unless MatrixFreeTools::categorize_by_boundary_ids is run at the 
> system setup.
>
> Regarding ECL+AMR technical details, do I understand correctly that in the 
> case of AMR+FCL the face loop is actually over subfaces so no additional 
> action by user is needed? And is there any opportunity to query subfaces? I 
> can't see any clear way to query subfaces from Triangulation or to 
> correlate the cell from a matrix-free loop with a cell iterator from 
> Triangulation or DoFHandler.
>
> Best regards,
> Alexander Kiselyov
>
> On Thu, 2022-07-07 at 12:55 -0700, Peter Munch wrote:
>
> The problem is that ECL is not working with AMR, yet.
>
> A technical issue is that one cannot that easily loop over 2*dim faces and 
> compute the fluxes for each but also has to go over the subfaces when the 
> neighbor is refined. One might be able to hide this in FEFaceEvaluation and 
> load the values from the neighbors there and do the interpolation there; 
> but I am not 100% sure about that; particularly if this approach is 
> conserving, i.e., the accumulated flux is the same on both sides as is the 
> case in FCL. 
>
> We should add an assert somewhere, but I am not sure where to do that. I'll add a 
> comment to MatrixFree::loop_cell_centric().
>
> Peter
>
> On Thursday, 7 July 2022 at 21:41:28 UTC+2 Alexander Kiselyov wrote:
>
> Dear deal.II users,
>
> I am currently learning how to use mesh refinement by developing a toy 2D 
> advection solver with discontinuous Galerkin elements. This solver is built 
> using matrix-free approach with ECL and MPI, and is based on a previous 
> version which worked properly with a fixed mesh.
>
> I have added mesh refinement by closely following step-26 and step-40. I 
> have encountered two problems:
>
>
>- when running with mpirun -n 1 the simulation finishes normally, but 
>the solution gets increasingly non-contiguous with time (see the attached 
>image);
>- when running with mpirun -n 2 I'm getting the following error on the 
>second iteration (after a single refinement step):
>
>
> An error occurred in line <8305> of file  in function
>
> void dealii::FEFaceEvaluation<dim, fe_degree, n_q_points_1d,
> n_components_, Number, VectorizedArrayType>::reinit(unsigned int, unsigned
> int) [with int dim = 2; int fe_degree = 2; int n_q_points_1d = 3; int
> n_components_ = 1; Number = double; VectorizedArrayType =
> dealii::VectorizedArray<double>]
>
> The violated condition was: 
>
> ::dealii::deal_II_exceptions::internals::compare_less_than(index, 
> this->matrix_free->get_mapping_info() .face_data_by_cells[this->quad_no] 
> .quadrature_point_offsets.size())
>
> Additional information: 
>
> Index 415 is not in the half-open range [0,392).
>
>
> It occurs during advection operator application (vmult_add) in the time 
> loop in the iteration after the first refinement, see the code fragments 
> below.
>
> I am quite at a loss here (somehow dof vectors are initialized 
> incorrectly?), is there any obvious error I am missing?
>
> Best regards,
> Alexander Kiselyov
>
> --

[deal.II] Re: Using AMR with MatrixFree

2022-07-07 Thread Peter Munch
The problem is that ECL is not working with AMR, yet.

A technical issue is that one cannot that easily loop over 2*dim faces and 
compute the fluxes for each but also has to go over the subfaces when the 
neighbor is refined. One might be able to hide this in FEFaceEvaluation and 
load the values from the neighbors there and do the interpolation there; 
but I am not 100% sure about that; particularly if this approach is 
conserving, i.e., the accumulated flux is the same on both sides as is the 
case in FCL. 

We should add an assert somewhere, but I am not sure where to do that. I'll add a 
comment to MatrixFree::loop_cell_centric().

Peter

On Thursday, 7 July 2022 at 21:41:28 UTC+2 Alexander Kiselyov wrote:

> Dear deal.II users,
>
> I am currently learning how to use mesh refinement by developing a toy 2D 
> advection solver with discontinuous Galerkin elements. This solver is built 
> using matrix-free approach with ECL and MPI, and is based on a previous 
> version which worked properly with a fixed mesh.
>
> I have added mesh refinement by closely following step-26 and step-40. I 
> have encountered two problems:
>
>
>- when running with mpirun -n 1 the simulation finishes normally, but 
>the solution gets increasingly non-contiguous with time (see the attached 
>image);
>- when running with mpirun -n 2 I'm getting the following error on the 
>second iteration (after a single refinement step):
>
>
> An error occurred in line <8305> of file  in function
>
> void dealii::FEFaceEvaluation<dim, fe_degree, n_q_points_1d,
> n_components_, Number, VectorizedArrayType>::reinit(unsigned int, unsigned
> int) [with int dim = 2; int fe_degree = 2; int n_q_points_1d = 3; int
> n_components_ = 1; Number = double; VectorizedArrayType =
> dealii::VectorizedArray<double>]
>
> The violated condition was: 
>
> ::dealii::deal_II_exceptions::internals::compare_less_than(index, 
> this->matrix_free->get_mapping_info() .face_data_by_cells[this->quad_no] 
> .quadrature_point_offsets.size())
>
> Additional information: 
>
> Index 415 is not in the half-open range [0,392).
>
>
> It occurs during advection operator application (vmult_add) in the time 
> loop in the iteration after the first refinement, see the code fragments 
> below.
>
> I am quite at a loss here (somehow dof vectors are initialized 
> incorrectly?), is there any obvious error I am missing?
>
> Best regards,
> Alexander Kiselyov
>
> --
>
> I'm using deal.II 9.4.0 on a Linux system (NixOS). The attached image is 
> calculated with the transport velocity a(x, y) = [ y x ], the initial 
> state is a symmetric Maxwell-like "fuzzy circle".
>
> Time loop (VectorType = dealii::LinearAlgebra::distributed::Vector<double>, 
> complete_operator is a composition of mass matrix and stationary 
> advection operators):
>
>   output_result();
>
>
>   VectorType rhs, f_tmp, f_tmp2;
>
>   for (auto iter = 0; iter < N_iters; ++iter) {
>
> // Assemble time-dependent rhs
>
> matrix_free->initialize_dof_vector(rhs);
>
> matrix_free->initialize_dof_vector(f_tmp);
>
> matrix_free->initialize_dof_vector(f_tmp2);
>
> rhs = adv_rhs;
>
> rhs *= dt;
>
> mass_operator.vmult_add(rhs, old_solution);
>
> f_tmp = old_solution;
>
> f_tmp *= -dt*(1-theta);
>
> adv_operator.vmult_add(rhs, f_tmp);
>
>
> solve_one_timestep(complete_operator, rhs);
>
> refine_mesh(min_refinement, min_refinement + refinement_steps);
>
> output_result(iter+1);
>
> old_solution = solution;
>
>   }
>
>
>
> setup_system():
>
>   dof_handler.distribute_dofs(fe);
>
>
>   constraints.clear();
>
>   dealii::DoFTools::make_hanging_node_constraints(dof_handler, constraints);
>
>   constraints.close();
>
>
>   typename MF::AdditionalData additional_data;
>
>   const auto update_flags = dealii::update_values | dealii::update_gradients 
> | dealii::update_JxW_values | dealii::update_quadrature_points | 
> dealii::update_normal_vectors | dealii::update_jacobians | 
> dealii::update_inverse_jacobians;
>
>   additional_data.mapping_update_flags = update_flags;
>
>   additional_data.mapping_update_flags_inner_faces = update_flags;
>
>   additional_data.mapping_update_flags_boundary_faces = update_flags;
>
>   if (use_ecl) {
>
> additional_data.mapping_update_flags_faces_by_cells = update_flags;
>
> additional_data.hold_all_faces_to_owned_cells = true;
>
> dealii::MatrixFreeTools::categorize_by_boundary_ids(tria, 
> additional_data);
>
>   }
>
>
>   if (!matrix_free)
>
> matrix_free.reset(new MF());
>
>   matrix_free->reinit(mapping, dof_handler, constraints, quad, 
> additional_data);
>
>   adv_operator.initialize(matrix_free);
>
>   mass_operator.initialize(matrix_free);
>
>
>   matrix_free->initialize_dof_vector(solution);
>
>   matrix_free->initialize_dof_vector(old_solution);
>
>   matrix_free->initialize_dof_vector(adv_rhs);
>
>   AdvectionOperator::build_adv_rhs(*matrix_free, adv_rhs);
>
>
>   complete_operator.initialize(matrix_free);
>
>
>
> 

[deal.II] Re: DoFTools::make_periodicity_constraints error when working with FEColleciton

2022-07-04 Thread Peter Munch
Hi Rose,

the assert was definitely not positioned at the right place and checked on 
every face - even on non-active ones. I have moved it in the 
PR https://github.com/dealii/dealii/pull/14091. Now this assert is not 
triggered.

Generally, I am wondering if hp and PBC is working. At least, I cannot find 
a test in the hp test folder. Such a test should have triggered the assert.

Hope the patch works!

Peter

On Monday, 4 July 2022 at 18:09:41 UTC+2 jose.a...@gmail.com wrote:

> Dear dealii community,
>
> I am still trying to decipher what causes the error. It occurs in the 
> first assert of make_periodicity_constraints:
>
> AssertDimension(
> face_1->get_fe(face_1->nth_active_fe_index(0)).n_unique_faces(), 1);
>
> So I created vectors to store the active_fe_index and the 
> nth_active_fe_index(0), replicated the fe_index_is_active assert and the 
> get_fe() call:
>
> active_fe_index.reinit(triangulation.n_active_cells());
> nth_active_fe_index.reinit(triangulation.n_active_cells());
>
> active_fe_index = -1.0;
> nth_active_fe_index = -1.0;
>
> for (const auto &cell :
> dof_handler.active_cell_iterators())
> if (cell->is_locally_owned())
> {
> active_fe_index(cell->active_cell_index()) = cell->active_fe_index();
>
> if (cell->at_boundary())
> for (const auto &face_id : cell->face_indices())
> if (cell->face(face_id)->at_boundary())
> {
> AssertThrow(cell->face(face_id)->get_active_fe_indices().size() == 1, 
> dealii::ExcMessage("Error!"));
>
> nth_active_fe_index(cell->active_cell_index()) = cell->face(face_id)->
> nth_active_fe_index(0);
>
> AssertThrow(cell->face(face_id)->fe_index_is_active(cell->face(face_id)->
> nth_active_fe_index(0)), dealii::ExcMessage("Error!"));
>
> cell->face(face_id)->get_fe(cell->face(face_id)->nth_active_fe_index(0));
> }
> }
>
> I don't get any error from this loop and the from the visual inspection of 
> .vtu file containing both vectors one can see that they indeed coincide. 
> Maybe the problem is coming from GridTools::collect_periodic_faces as the 
> face iterators are created by it?
>
> Furthermore I also noticed that if no global refinement is performed the 
> executable runs without error in serial but still gets the error in 
> parallel.
>
> Attached is the updated MWE. A true/false argument can be passed to the 
> executable to perform one global refinement or not.
>
> Cheers,
> Jose
>
> On Wednesday, June 29, 2022 at 4:45:04 PM UTC+2 jose.a...@gmail.com wrote:
>
>> Dear dealii community,
>>
>> I am working on a gradient enhanced crystal plasticity code where the 
>> displacement field is continuous but the slip fields can be discontinuous. 
>> To this end, I am using a hp::FECollection. For example, in the 
>> case of two crystals with one slip system I have a hp::FECollection<2> 
>> with
>>
>> fe_collection[0] = [FE_Q<2> FE_Q<2> FE_Q<2>  FE_Nothing<2>]
>> fe_collection[1] = [FE_Q<2> FE_Q<2> FE_Nothing<2> FE_Q<2> ]
>>
>> where the first two FE_Q<2> correspond to the displacement field in 2D 
>> and the latter to the slip field.
>>
>> My reference problem consist of the unit square under simple shear with 
>> periodic boundary conditions on the x-direction. When I call the 
> DoFTools::make_periodicity_constraints method, I am getting the following error when I have subdomains 
>> with different values of fe_index
>>
>> An error occurred in line <1690> of file  in function
>> const dealii::FiniteElement<dim, spacedim>&
>> dealii::DoFAccessor<structdim, dim, spacedim, level_dof_access>::get_fe(unsigned int)
>> const [with int structdim = 1; int dim = 2; int spacedim = 2; bool
>> level_dof_access = false]
>> The violated condition was: 
>> fe_index_is_active(fe_index) == true
>> Additional information: 
>> This function can only be called for active FE indices
>>
>> I wrote a minimum working example based on my code to showcase the error, 
>> which I have attached to this post. The executable takes a true or false 
>> argument to assign different material identifiers to different subdomains 
>> or not. If there are no subdomains and therefore, and more importantly, the 
>> size of the hp::FECollection is 1, the code runs with no problem.
>>
>> My question is whether my algorithm is wrong, or whether the method 
>> DoFTools::make_periodicity_constraints does not currently support 
>> hp::FECollection objects with sizes bigger 
>> than one.
>>
>> Cheers,
>> Jose
>>
>



[deal.II] Re: Different resulting meshes in spatial adaptivity for d::p::d::triangulation and d::triangulation

2022-05-24 Thread Peter Munch
Hi Maurice,

deal.II 9.0.1 is quite old, don't you want to update ;)

If I understand correctly, this is the first refinement step. So results should be the 
same if you do everything the same. Question: with how many MPI processes 
are you running the `p::d::triangulation` simulation? Could you try to run 
with one process? If that works and it only happens in parallel, my best 
guess is that you have forgotten to update the ghost values of the vector. 
Is the problematic cell near an internal boundary? Are you running deal.II 
in debug mode?
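
If missing ghost exchanges turn out to be the cause, the fix is usually a one-line call before evaluating the refinement criterion (a hedged sketch; `solution` is an illustrative LinearAlgebra::distributed::Vector, and the zeroing call is named zero_out_ghosts() in older releases):

```cpp
// Ghost entries of a distributed vector only hold valid data after an
// explicit exchange; reading them on ghost cells before this call yields
// stale values and thus different refinement flags per process count.
solution.update_ghost_values();

// ... mark cells for refinement using locally owned *and* ghost cells ...

// Before writing into the vector again, reset the ghost state:
solution.zero_out_ghost_values();
```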

Peter

On Tuesday, 24 May 2022 at 17:23:30 UTC+2 maurice@googlemail.com wrote:

> Dear deal.II community,
>
> While parallelizing an in-house phase-field fracture code with spatial 
> adaptivity using the p::d::triangulation we observed different meshes using 
> the spatial adaptivity for the serial and the distributed triangulation.
>
> We mark cells for refinement if at least one scalar value falls below a 
> given threshold within a cell.
>
> To see where this might come from, we counted the number of cells before 
> and after the call of *triangulation.prepare_coarsening_and_refinement()* 
> in the serial and distributed case.
> The number of user-defined cells to be refined is the same (number of 
> cells before calling *triangulation.prepare_coarsening_and_refinement()*). 
> However, after this call, the number differs and therefore different meshes 
> are observed.
>
> Ideally, we would expect that there is no difference for the same problem 
> (same mesh, same parameters, etc.).
>
> We are using dealii 9.0.1 and do not set any smoothing operation in the 
> constructor of the serial and the parallel meshes.
>
> Is such a behavior expected, or is it difficult to compare the result for 
> the serial and the distributed triangulation one by one?
>
> Attached is also the resulting mesh (left parallel, right serial) as well 
> as the phase-field value at a node before refinement (just to make sure 
> that the problems are the same for the serial and distributed version).
>
> Thanks for your help in advance.
> Best regards, 
> Maurice Rohracker
>
>
>



[deal.II] Re: Using MatrixFree functionality without distributed arrays

2022-05-23 Thread Peter Munch
Hi Corbin,

Vector and LinearAlgebra::Vector should work with 
MatrixFree. If not, let us know what (compilation) errors you get.

Alternatively, you can use LinearAlgebra::distributed::Vector  and 
run the program with a single process.

Hope that helps,
Peter
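
A hedged sketch of what such a serial, matrix-free operator application could look like (assumes `dof_handler` and a reinit()ed `matrix_free` object exist; the mass-operator-like body is only a placeholder):

```cpp
// Plain serial vectors, per the suggestion above; no MPI involved.
constexpr unsigned int dim = 2;

Vector<double> src(dof_handler.n_dofs());
Vector<double> dst(dof_handler.n_dofs());

matrix_free.cell_loop(
  [](const MatrixFree<dim, double> &data,
     Vector<double>                &dst,
     const Vector<double>          &src,
     const std::pair<unsigned int, unsigned int> &cell_range) {
    FEEvaluation<dim, -1> phi(data); // -1: run-time polynomial degree
    for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
      {
        phi.reinit(cell);
        phi.gather_evaluate(src, EvaluationFlags::values);
        for (unsigned int q = 0; q < phi.n_q_points; ++q)
          phi.submit_value(phi.get_value(q), q);
        phi.integrate_scatter(EvaluationFlags::values, dst);
      }
  },
  dst, src);
```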

On Monday, 23 May 2022 at 10:29:03 UTC+2 corbin@gmail.com wrote:

> Hello everyone,
>
> I've been reading through some of the matrix-free tutorials---step-67 in 
> particular. I'd like to write something similar, an RKDG solver for 
> hyperbolic problems for which I can use explicit RK time integration.
>
>- Does it make sense to use the MatrixFree functionality with regular 
>Vector operands? The tutorials often make use of 
>LinearAlgebra::distributed::Vector objects to store the vectors.
>- I don't want to use the latter because I'm driving the code from 
>Python, and any use of MPI or distributed memory could be a headache in 
>communicating data back and forth between the two languages. Additionally, 
>I'm solving many small problems as opposed to a single large one.
>- Ultimately, I'm looking for a fast way to apply the elemental 
>inverse mass matrices and face integrations, so MatrixFree functionality 
>seemed the logical choice.
>
> Best, 
> Corbin
>
>



[deal.II] Re: Computing the global index values from vertex iterator

2022-02-21 Thread Peter Munch
That should fix the issue: https://github.com/dealii/dealii/pull/13429
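
Until the fix lands, a hedged workaround that avoids the DoFAccessor internals altogether is to collect vertex DoFs via cell iterators (names are illustrative; only cell-level accessors are used, which are stable across versions):

```cpp
// Map each vertex index to the DoF indices attached to it, using only
// cell->vertex_index() and cell->vertex_dof_index().
std::map<unsigned int, std::vector<types::global_dof_index>> vertex_to_dofs;
const unsigned int n_dofs_per_vertex =
  dof_handler.get_fe().n_dofs_per_vertex();

for (const auto &cell : dof_handler.active_cell_iterators())
  for (const unsigned int v : cell->vertex_indices())
    {
      auto &dofs = vertex_to_dofs[cell->vertex_index(v)];
      if (dofs.empty())
        for (unsigned int i = 0; i < n_dofs_per_vertex; ++i)
          dofs.push_back(cell->vertex_dof_index(v, i));
    }
```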

Peter

On Sunday, 20 February 2022 at 22:06:02 UTC+1 reza@gmail.com wrote:

> Hi all,
>
> I am maintaining a code containing a function to obtain the global dof 
> numbers of a vertex based on a given vertex iterator (without any use of 
> the cell iterator). The implementation for this function and a sample 
> program using it is shown below:
>
> #include <deal.II/grid/tria.h>
> #include <deal.II/grid/grid_generator.h>
> #include <deal.II/dofs/dof_handler.h>
> #include <deal.II/dofs/dof_accessor.h>
> #include <deal.II/fe/fe_q.h>
> #include <deal.II/fe/fe_system.h>
> #include <iostream>
> using namespace dealii;
> template <int dim>
> std::vector<types::global_dof_index> get_vertex_dofs(
>   const typename Triangulation<dim>::active_vertex_iterator &vertex,
>   const DoFHandler<dim> &dof_handler)
> {
>   DoFAccessor<0, DoFHandler<dim>, false> vertex_dofs(
>   &(dof_handler.get_triangulation()),
>   vertex->level(),
>   vertex->index(),
>   &dof_handler);
>   const auto n_dofs = dof_handler.get_fe().dofs_per_vertex;
>   std::vector<types::global_dof_index> dofs(n_dofs);
>   for (unsigned int i = 0; i < n_dofs; ++i)
>   {
> dofs[i] = vertex_dofs.vertex_dof_index(0, i);
>   }
>   return dofs;
> }
>
> int main() {
>   Triangulation<2> triangulation;
>   GridGenerator::hyper_cube(triangulation);
>   DoFHandler<2> dof_handler(triangulation);
>   const FESystem<2> finite_element(FE_Q<2>(1), 2);
>   dof_handler.distribute_dofs(finite_element);
>   auto vertex = triangulation.begin_active_vertex();
>   auto vertex_end = triangulation.end_vertex();
>   for (; vertex != vertex_end; ++vertex)
>   {
> const auto dofs = get_vertex_dofs(vertex, dof_handler);
> std::cout << dofs[0] << ", " << dofs[1] << std::endl;
>   }
>   return 0;
> }
>
>
> I am able to run this code with dealii 9.2.0 and see the following output:
>
> 0, 1
> 2, 3
> 4, 5
> 6, 7
>
> However, when I try to run it with a newer versions of dealii, the code 
> fails to compile. For example, if I compile the aforementioned code with 
> the latest dealii (commit 855afe3f4c), I get an error because the type 
> DoFAccessor<0, DoFHandler<dim>, false> does not exist anymore. Therefore, I changed the 
> type to DoFAccessor<0, dim, dim, false>. However, the code still fails to 
> compile and shows this error to me:
>
> In file included 
> from /home/reza_rastak/dealii/include/deal.II/dofs/dof_accessor.h:2135,
> 
>  from /home/reza_rastak/dealii/include/deal.II/dofs/dof_handler.h:32,
> 
>  from /home/reza_rastak/dealsims/dealii-examples/vertex-dof/vertex_dof.cc
> :3:
> /home/reza_rastak/dealii/include/deal.II/dofs/dof_accessor.templates.h: In 
> instantiation of 'unsigned int dealii::DoFAccessor<structdim, dim, 
> spacedim, lda>::nth_active_fe_index(unsigned int) const [with int structdim 
> = 0; int dim = 2; int spacedim = 2; bool level_dof_access = false]':
> /home/reza_rastak/dealii/include/deal.II/dofs/dof_accessor.templates.h:1544:17:
>   
>  required from 'dealii::types::global_dof_index 
> dealii::DoFAccessor<structdim, dim, spacedim, lda>::vertex_dof_index(unsigned int, unsigned int, unsigned int) const 
> [with int structdim = 0; int dim = 2; int spacedim = 2; bool 
> level_dof_access = false; dealii::types::global_dof_index = unsigned int]'
> /home/reza_rastak/dealsims/dealii-examples/vertex-dof/vertex_dof.cc:27:43:  
>  required from 'std::vector<dealii::types::global_dof_index> get_vertex_dofs(const typename 
> dealii::Triangulation<dim>::active_vertex_iterator&, const 
> dealii::DoFHandler<dim>&) [with int dim = 2; typename 
> dealii::Triangulation<dim>::active_vertex_iterator 
> = dealii::TriaActiveIterator<dealii::TriaAccessor<0, 2, 2> >]'
> /home/reza_rastak/dealsims/dealii-examples/vertex-dof/vertex_dof.cc:43:58:  
>  required from here
> /home/reza_rastak/dealii/include/deal.II/dofs/dof_accessor.templates.h:1488:31:
>  error: 'const 
> class dealii::DoFAccessor<0, 2, 2, false>' has no member named 
> 'present_index'
>  1488 | this->present_index,
>   | ~~^
>
>
> I dug into the deal.ii codebase, and it seems the error is caused because 
> TriaAccessor<0, dim, spacedim> is not a subclass of TriaAccessorBase<0, 
> dim, spacedim> and thus it does not have a protected member named 
> present_index. However, this protected member is needed within many of the 
> member functions of the DoFAccessor class.
>
> Could you help me to solve this compile issue and keep this function in my 
> codebase?
>
> Thanks,
>
> Reza Rastak
>
>



[deal.II] Re: Periodic bc for dim=1

2022-02-07 Thread Peter Munch
What do you mean by "does not work for 1D"? In hyper.deal 
(https://github.com/hyperdeal/hyperdeal), we also have PBC in "1D".

Peter

On Tuesday, 8 February 2022 at 06:34:42 UTC+1 Praveen C wrote:

> Dear all
>
> I was trying to write a code that works for 1/2/3d navier stokes.
>
> But it seems like collect_periodic_faces does not work for 1d ? Is this 
> the case, and is there any other way to deal with this situation ?
>
> Thanks
> praveen



[deal.II] Re: Vlasov equation integration issue (hyper.deal)

2022-02-03 Thread Peter Munch
Hi Alexander,

Nice that you found hyper.deal!

Since this seems to be more of a hyper.deal-related issue, I would suggest 
moving the discussion to: https://github.com/hyperdeal/hyperdeal/discussions. 
At first glance, the code seems to be correct, but I have one or two 
ideas on how to simplify the problem so that it is easier to debug!

Thanks,
Peter

On Thursday, 3 February 2022 at 12:12:20 UTC+1 Alexander Kiselyov wrote:

> Dear all,
>
> I'm having a mysterious issue with hyper.deal, but I suspect the reason 
> might lie in my mistakes in using or misunderstanding deal.II.
>
> I'm currently trying to develop a 6-D Vlasov FEM solver using hyper.deal 
> accounting for both electric and magnetic field as a first step towards 
> a self-consistent Vlasov-Maxwell solver. The current physical problem is 
> collisionless plasma in a constant magnetic field B. From my 
> understanding, spherically symmetric velocity distribution should not 
> change in time in this case (magnetic field would induce its rotation 
> around B vector).
>
> Diagnostics tools are density ($\int f(r,v) dv$), velocity distribution 
> ($\int f(r,v) dr$), full f(r,v) output (r, v and f(r,v) at each 
> quadrature point) and advection vector (transport velocity) field. The 
> issue is that when using a function depending on v as an advection 
> vector in the df/dv term of the Vlasov equation artifacts appear (below 
> the examples are for the Lorenz force without electric field, but it is 
> observable for something artificial like $(v \cdot B) B$ too). They 
> manifest themselves as ever increasing values "along" magnetic field for 
> odd quadrature points in velocity distribution plots (see attached 
> images). What the strangest thing is that saving solution values at 
> quadrature points, doing naïve integration (i.e. just summing f values) 
> and plotting using Gnuplot does _not_ reveal such artifacts: the 
> velocity distribution remains spherically symmetric. It is puzzling that 
> their amplitude does _not_ depend on the usual numerical parameters: dt 
> and grid density.
>
> See `artifact_vtu.png` for velocity distribution calculated using 
> deal.II/hyper.deal routines, `artifact_naive.png` for a naïve velocity 
> distribution calculation. Advection vector (transport velocity) fields 
> for coordinate and velocity spaces are shown in `advection_x.png` at v = 
> (0.014088, -0.38909, -0.36091) and in `advection_v.png` at r = 
> (-0.014088, -0.014088, -0.0625). Numerical parameters are dt = 0.01, T = 
> 0.1, the grid is a hypercube [-0.5; 0.5]^6 with the size of 8 
> subdivisions along each dimension. Magnetic field is constant B = (1, 0, 
> 0), normalized in such a way that $F_{Lorenz} = q [v \times B]$.
>
> The velocity distribution routine is implemented along the same lines as 
> `hyperdeal::VectorTools::velocity_space_integration` 
> (`include/vector_tools.h`):
>
> data.template cell_loop<Vector_Out, Vector_In>(
> [&](const auto&, Vector_Out& dest, const Vector_In& src_,
> const auto cell) mutable
> {
> phi.reinit(cell);
> phi.read_dof_values(src_);
> phi_x.reinit(cell.x);
> phi_v.reinit(cell.v);
>
> const VectorizedArrayType* data_ptr_src = phi.get_data_ptr();
> VectorizedArrayTypeV* data_ptr_dst = phi_v.begin_dof_values();
>
> const auto nv_factor = dealii::Utilities::pow(n_points, dim_v);
> const auto N_lanes = phi.n_vectorization_lanes_filled();
>
> for (unsigned int qv = 0; qv < phi_v.n_q_points; ++qv) {
> VectorizedArrayTypeV sum_x = 0.0;
>
> for (unsigned int qx = 0; qx < phi_x.n_q_points; ++qx) {
> const auto q = qx + qv*nv_factor;
> for (unsigned int lane = 0; lane < N_lanes; ++lane)
> sum_x += data_ptr_src[q][lane] * phi_x.JxW(qx)[lane];
> }
>
> data_ptr_dst[qv] = sum_x;
> }
>
> phi_v.distribute_local_to_global(dest);
> },
> dst,
> src
> );
>
> The extraction of solution values at quadrature points is quite similar 
> (`include/output.h`, line 427):
>
> matrix_free.template cell_loop<int, VectorType>(
> [&phi, &out, &phi_x, &phi_v](const auto&, int&,
> const VectorType& src,
> const auto cell) mutable
> {
> phi.reinit(cell);
> phi.read_dof_values(src);
> phi_x.reinit(cell.x);
> const unsigned int index_v = cell.v / VectorizedArrayTypeV::size();
> const unsigned int lane_v = cell.v % VectorizedArrayTypeV::size();
> phi_v.reinit(cell.v);
>
> const VectorizedArrayType* data_ptr_src = phi.get_data_ptr();
>
> const auto nv_factor = dealii::Utilities::pow(n_points, dim_v);
> const auto N_lanes = phi.n_vectorization_lanes_filled();
> for (unsigned int qv = 0; qv < phi_v.n_q_points; ++qv) {
> for (unsigned int qx = 0; qx < phi_x.n_q_points; ++qx) {
> const auto qx_point = phi_x.quadrature_point(qx);
> const auto qv_point = phi_v.quadrature_point(qv);
> const auto q_value = data_ptr_src[qx + qv*nv_factor];
>
> for (unsigned int lane = 0; lane < N_lanes; ++lane) {
> double cache = 0.0;
> // Write coordinates
> for (unsigned int d = 0; d < dim_x; ++d) {
> // all lanes filled
> cache = qx_point[d][lane];
> out.write(reinterpret_cast<const char*>(&cache), sizeof cache);
> }
> for (unsigned int d = 0; d < 

[deal.II] Re: Meshing complex geometries

2021-11-07 Thread Peter Munch
Hi Kaushik, 

> Is there a plan for relaxing this limitation in the future? Is there any 
other way to do this?

Unfortunately, there are currently no active developments regarding 
anisotropic refinement. Would it be an option to refine isotropically? 
The latter works very well with parallel::distributed::Triangulation in parallel.

PM

On Sunday, 7 November 2021 at 13:48:04 UTC+1 k.d...@gmail.com wrote:

> Forgot to attach the figure. Here it is. 
>
> On Sunday, November 7, 2021 at 7:45:02 AM UTC-5 Kaushik Das wrote:
>
>> Hi all,
>> I am trying to mesh a complex geometry by first creating a triangulation 
>> of the bounding box of the geometry and then successively refining the 
>> cells that are within the geometry.  I use a hp::FECollection to assign  
>> FE_Nothing() 
>> to cells that are outside the boundary and FE_Q(1) to cells inside 
>> the geometry. Please see the attached figure, which shows only FE_Q cells. 
>> A limitation that I am facing is I cannot use an anisotropic refinement 
>> case because make_hanging_node_constraints does not support 
>> hp::FECollection. 
>> Is there a plan for relaxing this limitation in the future? Is there any 
>> other way to do this?
>> I want to keep the axis-aligned nature of cells that are generated this 
>> way. It is an additive manufacturing application. Therefore, using an 
>> external tet mesh generator is not a good option for me. 
>> Thanks,
>> Kaushik 
>>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/652623a8-1308-4cce-93b0-79c646a0410bn%40googlegroups.com.


[deal.II] Re: Computing the solution gradient at the quadrature point on a face

2021-08-01 Thread Peter Munch
The return type should be `*Tensor<2,dim,NumberType>*` shouldn't it?

Hope this helps. If not, could you describe how the code does not work? Is it 
not compiling? Is the result wrong?

PM
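
For context, a hedged sketch of evaluating solution gradients on a face with FEFaceValues (assuming a vector-valued solution field; the extractor name and `n_q_points_f` are illustrative):

```cpp
// Assumes `fe_face_values` was constructed with update_gradients and
// reinit()ed on the current cell/face. For a vector-valued field, using an
// FEValuesExtractors::Vector makes each entry a Tensor<2, dim> (the
// gradient of a vector field), matching the type in the question.
const FEValuesExtractors::Vector displacement(0);

std::vector<Tensor<2, dim>> solution_grads(n_q_points_f);
fe_face_values[displacement].get_function_gradients(solution, solution_grads);

for (unsigned int f_q_point = 0; f_q_point < n_q_points_f; ++f_q_point)
  {
    const Tensor<2, dim> &grad_u = solution_grads[f_q_point];
    // ... e.g., accumulate grad_u * fe_face_values.JxW(f_q_point) ...
  }
```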

On Sunday, 1 August 2021 at 04:09:45 UTC+2 lian...@gmail.com wrote:

> Dear All,
>
> I want to do a surface integration which needs to evaluate the gradients 
> of the solution on the face. The following code does not work 
>
> fe_face_values.get_function_gradients(solution, 
> solution_grads);
> 
> for (unsigned int f_q_point = 0; f_q_point < n_q_points_f; ++f_q_point)
>   {
> const Tensor<2,dim,NumberType> &grad_u = 
> solution_grads[f_q_point];
> *...*
>   }
>
> Can anybody give a hint on that?
>
> Thanks,
> Michael
>



[deal.II] Re: std::set<Point<dim>> : missing operator(s)

2021-06-27 Thread Peter Munch
Hi Simon,

you don't need to modify the Point class. You can simply pass a comparator to 
the constructor of the set. See the first answer 
at https://stackoverflow.com/questions/2620862/using-custom-stdset-comparator.

Hope this helps,
PM

On Sunday, 27 June 2021 at 12:15:40 UTC+2 Simon wrote:

> Dear all,
>
> I am trying to create a set of Points, i.e. std::set<Point<dim>>. 
> I get a quite extensive error message when trying to insert a point in the 
> set but for my understanding of the message, the only problem is the 
> missing operator< for two points. 
> (Well, I could make use of a std::vector instead, but a std::set fits much 
> better for my issue since (i) I need to insert points at arbitrary 
> positions quite often and (ii) I want to make sure that a given Point 
> appears only once in the container.)
>
> But irrespective of what container I use, I would like to supplement the 
> Point Class by adding the 'operator<' like in the following:
>
> template <int dim>
> bool Point<dim>::operator< (const Point<dim>& p)
> {
> //...some Asserts
> return this->norm() < p.norm(); }
>
> Of course I can not write this piece of code since there is no declaration 
> for a member function 'operator<' in the Point Class. 
> For that reason my next idea was to modify the file <deal.II/base/point.h> 
> by simply adding the missing operator< declaration and the corresponding 
> definition. So I did this but without effect, probably because all the 
> files of the dealii-library are already compiled and executing my program 
> via 'make run' did not see my changes in the Point Class at all. (I also 
> added a dummy command, std::cout << "...Test..." << std::endl;, but this did not appear on my screen either.)
>
> I am only familiar with basic cmake- and make-commands. 
> My question is if it is possible to recompile one specific file of the 
> library again?
> So if this were possible I would appreciate getting some guidance for 
> implementing this.
> I would also be interested in writing a patch for that issue. 
>
> Best
> Simon
>



[deal.II] dealii-9.3.0 release candidate 1

2021-05-22 Thread Peter Munch

Dear all,

We have tagged a first release candidate for the upcoming deal.II 9.3.0 
release. You can find the sources and a generated offline documentation 
here:


  https://github.com/dealii/dealii/releases/tag/v9.3.0-rc1

Alternatively, you can switch to the dealii-9.3 branch in your local git 
repository


  git remote update
  git checkout v9.3.0-rc1

It would be great if you could test it on your machine with your typical 
configuration! If no further regressions show up, we will release 
Saturday May 29. The release progress is tracked in the following GitHub 
issue:


  https://github.com/dealii/dealii/issues/12117

Thanks!

David, Matthias, Peter
on behalf of the deal.II developer team



Re: [deal.II] Unique FE evaluation on arbitrary points

2021-05-12 Thread Peter Munch

Hi David,

> I want to evaluate each point out of the cloud only once in case of a 
distributed triangulation.


A quick (and dirty) solution would be to evaluate and perform the 
communication non-uniquely and pick out one of the received values at 
the end. See also VectorTools::evaluate_at_points() and the flag 
VectorTools::EvaluationFlags::insert: although this behavior is 
intentionally not documented, the function will take the first value, 
i.e., the one from the process with the lowest rank.


> I would probably need to go a step further and implement a consensus 
algorithm in order to decide only for one specific process, which seems 
not trivial to me.


You are right. One would need an additional communication step (with or 
without CA) to make the evaluation/communication more efficient. I was 
thinking about introducing an additional flag to the constructor of 
RemotePointEvaluation, but have not implemented it yet, since in our use 
cases we either need the average values (DG) or we set up the 
communication pattern in every time step due to some very dynamic 
systems (multi-phase flow).


Anyway, I am happy to discuss how to implement the needed feature. The 
code that would need to be touched is: 
https://github.com/dealii/dealii/blob/654b14481eeff7e744297025ea2078ab115611ac/source/grid/grid_tools.cc#L5905-L5909, 
i.e., a consistent modification of send_components and recv_components. 
I am happy to write a draft if you want.


Hope this helps,
Peter

On 12.05.21 09:08, 'David' via deal.II User Group wrote:

Dear all,

I'm currently looking for a way to evaluate my global solution on an 
arbitrary cloud of points and stumbled across the (new) class 
`Utilities::MPI::RemotePointEvaluation`. In general, the class 
does exactly what I want with one particular exception; I want to 
evaluate each point out of the cloud only once in case of a 
distributed triangulation. If I understand it correctly this property 
is fulfilled in case the boolean function `is_map_unique()` returns 
true, but I don’t see any option to filter or tell the class that I 
actually want the map to be unique.


Assuming that the class cannot handle this case I tried to pre-filter 
the vertices by hand: For me, the most natural choice to fulfill the 
unique evaluation of the points is to assign them to a process, so 
that I only work on ‘locally owned vertices’. I compute a bounding box 
using the iterator filter `LocallyOwnedCell()`, filter the received 
cloud of points on the process and finally plug them into the 
`RemotePointEvaluation`. However, this does not work in case points of 
the cloud are located at the boundary of the bounding box, because 
they will technically be part of both bounding boxes of the involved 
processes. I would probably need to go a step further and implement a 
consensus algorithm in order to decide only for one specific process, 
which seems not trivial to me.


Before starting now to implement a more advanced solution I would like 
to ask if there is an obvious way I’m not aware of. Is it possible to 
enforce a unique evaluation using one of the ‘PointEvaluation’ classes 
or is it possible to let deal.II decide for an owner of an arbitrary 
vertex? I have also seen the function 
‘find_active_cell_around_point()’ which might help, but it might end 
up in similar problem as well.


Thanks in advance and best regards,

David






[deal.II] Publications based on deal.II

2021-05-08 Thread Peter Munch
All,

as you may know, we try to list all publications based on deal.II at
 http://dealii.org/publications.html
We use this list to justify the effort we spend on writing this software,
both
to our universities as well as the funding agencies that support its
development.

In anticipation of the next release, we would like to update this page. If
you
have (or know of) a publication that uses deal.II for numerical results and
that isn't already listed, please let us know so we can put it on there. For
this purpose, publications also include MSc, Diploma or PhD theses, or
anything else that may seem appropriate.

We've made the process of submitting entries much simpler than in the past.
Here are a number of ways you can let us know:
* Open a new issue via
  https://github.com/dealii/publication-list/issues/new and just copy-paste
  either a bibtex entry, or just the reference information
* If you want to save us a bit of work, follow the process here:
  https://github.com/dealii/publication-list/blob/master/README.md
* Or email the information directly to me or one of the other principal
  developers.

Thanks
Peter



[deal.II] Re: deal.II-based FSI matrix-free solver / looking for a post-doc

2021-04-18 Thread Peter Munch
Good luck at the defence! Hope it goes well!

Peter

On Sunday, 18 April 2021 at 00:44:44 UTC+2 mtwich...@gmail.com wrote:

> Dear all,
>  I have developed a parallel matrix-free monolithic FSI solver in the ALE 
> frame of reference. Exemplary results [1, 2]; more details in my PhD 
> thesis [3].
>
> *Highlights*:
>
> -> The proposed monolithic predictor-corrector time integration scheme 
> requires solving the generalized Stokes problem with discontinuous variable 
> coefficients. This has to be done one or two times per time step, depending 
> on the variant of the scheme.
>
> -> I have introduced a new multilevel preconditioner; for a 3D case the 
> time required to solve a system consisting of 421 million unknowns using 96 
> cores is 178 seconds.
>
> -> As demonstrated by the movie[1], I am able to solve 3D problems at 
> moderate Reynolds numbers (time step size of 0.01 did not result in any 
> stability issues with Re=2k). 
>
> ->The preconditioner from the thesis can still be greatly improved. I have 
> a proof of concept demonstrating the basic properties of a new idea of 
> smoother. The results look very promising and I'm curious how it would 
> work. Moreover, the algorithm is suitable for GPU that could result in 
> further speedup.
>  
> [1] 3D "Turek benchmark" at Re = 2000 (30M DoF)
> https://youtu.be/7TaC1dTfAQY
> [2] Water hammer in elastic tube (12M DoF)
> https://youtu.be/TRnlt_eC32Q
>
> Those results were obtained as a part of my PhD thesis. I have finally 
> managed to submit it in January, the defence will be held in 3 weeks 
> (expect reviews next week).
> I am looking for a post-doc (preferably deal.II-related). I'll be happy to 
> get an opportunity to continue my work.
>
> Best,
> Michał Wichrowski
>



Re: [deal.II] Re: Step-7 Extension : How to get the runtime at each cycle to plot it

2021-03-22 Thread Peter Munch
In that case, you might want to take a look at the `Timer` and/or 
`TimerOutput` classes. How to use these is shown in a couple of the later 
tutorials, e.g., step-28.

Hope this helps,
Peter

On Monday, 22 March 2021 at 09:20:28 UTC+1 pushkar...@gmail.com wrote:

> Dear Peter
>
> I am able to plot the dofs vs error in julia but actually I am looking for 
> a way to output runtime at each cycle so that I can plot runtime Vs error.
>
> Regards 
> Pushkar
>
> On Mon, Mar 22, 2021 at 12:45 PM Peter Munch  wrote:
>
>> Dear Pushkar,
>>
>> it seems that the convergence table is already output during the 
>> execution of the program. So all you have to do is read, postprocess, 
>> and plot the data in your favorite programming language. You might want to 
>> check out R (as shown in 
>> https://www.dealii.org/developer/doxygen/deal.II/step_3.html#UsingRandggplot2togenerateplots),
>> Python, or Matlab. Alternatively, you might want to load the data into a 
>> spreadsheet, where you can easily modify the style of the axes. 
>>
>> Does this help? Was your question targeted in this direction?
>>
>> Peter
>>
>> On Monday, 22 March 2021 at 07:33:16 UTC+1 pushkar...@gmail.com wrote:
>>
>>> Dear Deal.II Community
>>>
>>> I am currently trying to implement the extensions in Step-7 tutorial 
>>> program wherein I have to plot the runtime Vs error in order to get a 
>>> better idea about the convergence characteristic of the solver as suggested 
>>> in the documentation.Any help in this regard will be beneficial.
>>>
>>> Regards
>>> Pushkar
>>>
>



[deal.II] Re: Step-7 Extension : How to get the runtime at each cycle to plot it

2021-03-22 Thread Peter Munch
Dear Pushkar,

it seems that the convergence table is already output during the 
execution of the program. So all you have to do is read, postprocess, 
and plot the data in your favorite programming language. You might want to 
check out R (as shown in 
https://www.dealii.org/developer/doxygen/deal.II/step_3.html#UsingRandggplot2togenerateplots),
Python, or Matlab. Alternatively, you might want to load the data into a 
spreadsheet, where you can easily modify the style of the axes. 

Does this help? Was your question targeted in this direction?

Peter

On Monday, 22 March 2021 at 07:33:16 UTC+1 pushkar...@gmail.com wrote:

> Dear Deal.II Community
>
> I am currently trying to implement the extensions in Step-7 tutorial 
> program wherein I have to plot the runtime Vs error in order to get a 
> better idea about the convergence characteristic of the solver as suggested 
> in the documentation.Any help in this regard will be beneficial.
>
> Regards
> Pushkar
>
>



[deal.II] Marc Fehling appointed principal developer of deal.II

2021-03-13 Thread Peter Munch


Dear all,

it's our pleasure to announce that we have appointed Marc Fehling as a 
principal developer of deal.II! I guess many of you already know him 
through meetings at workshops or interactions here or on GitHub but 
nevertheless let me introduce Marc to you quickly.

Marc has started using deal.II during his PhD at University of 
Wuppertal/Jülich Research Centre in Germany, where he worked on the 
parallelization of the hp-framework in deal.II. You can find the final 
version of his thesis here: https://juser.fz-juelich.de/record/878206

Last year, Marc joined Wolfgang Bangerth at Colorado State University (Fort 
Collins, CO, USA) as a post-doc.

Happy coding, Marc!

Best
Peter and the principal developers
