Re: [deal.II] Re: Could not find partitioner that fits vector

2023-05-15 Thread 'yy.wayne' via deal.II User Group
Thank you, Daniel.
That's great, my mistake.
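
For reference, a minimal sketch of the pattern Daniel points out below, i.e.
calling make_zero_boundary_constraints() once per boundary_ids/ComponentMask
pair. The boundary ids 0 and 1 and the names fe, dof_handler and
mg_constrained_dofs are placeholders, not taken from the original code:

MGConstrainedDoFs mg_constrained_dofs;
mg_constrained_dofs.initialize(dof_handler);

// First Dirichlet BC: constrain only the first vector component on boundary 0.
mg_constrained_dofs.make_zero_boundary_constraints(
  dof_handler, {0}, fe.component_mask(FEValuesExtractors::Scalar(0)));

// Second Dirichlet BC: constrain a different component on boundary 1.
mg_constrained_dofs.make_zero_boundary_constraints(
  dof_handler, {1}, fe.component_mask(FEValuesExtractors::Scalar(1)));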

Best, 
Wayne

On Monday, May 15, 2023 at 21:24:58 UTC+8, Daniel wrote:

> Wayne,
>
> > I have another suggestion: MGConstrainedDoFs::make_zero_boundary_constraints()
> > seems to accept only one boundary_ids set + ComponentMask combination. So if
> > there are two Dirichlet BCs that mask different components in vector-valued
> > problems, more make_zero_boundary_constraints() calls might need to be added.
> > Not a big problem though.
>
> The documentation (
> https://www.dealii.org/current/doxygen/deal.II/classMGConstrainedDoFs.html#ad9f0da6bd4cb834c8ea3798d8ddacb92)
>  
> says:
>
> [...]
> This function can be called multiple times to allow considering different 
> sets of boundary_ids for different components.
> [...]
>
> Best,
> Daniel
>
> On Mon, May 15, 2023 at 8:57 AM 'yy.wayne' via deal.II User Group <
> dea...@googlegroups.com> wrote:
>
>> The modification is easy:
>> 1) on line 518 of include/deal.II/multigrid/mg_transfer_matrix_free.h 
>> <https://github.com/dealii/dealii/blob/master/include/deal.II/multigrid/mg_transfer_matrix_free.h>,
>>  
>> change first build() to:
>> void
>>   build(const DoFHandler<dim> &dof_handler,
>>         const std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>
>>           &external_partitioners =
>>             std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>());
>> 2) on line 768 of source/multigrid/mg_transfer_matrix_free.cc 
>> <https://github.com/dealii/dealii/blob/master/source/multigrid/mg_transfer_matrix_free.cc>,
>>  
>> change first build() to
>> template <int dim, typename Number>
>> void
>> MGTransferBlockMatrixFree<dim, Number>::build(
>>   const DoFHandler<dim> &dof_handler,
>>   const std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>
>>     &external_partitioners)
>> {
>>   AssertDimension(this->matrix_free_transfer_vector.size(), 1);
>>   this->matrix_free_transfer_vector[0].build(dof_handler,
>>                                              external_partitioners);
>> }
>>
>> I'm not familiar with GitHub pull requests, so I just list the modification
>> here. I tested the build() function both with and without
>> external_partitioners, and neither shows an error. By introducing
>> external_partitioners, the earlier bug disappears.
>>
>> I have another suggestion: MGConstrainedDoFs::make_zero_boundary_constraints()
>> seems to accept only one boundary_ids set + ComponentMask combination. So if
>> there are two Dirichlet BCs that mask different components in vector-valued
>> problems, more make_zero_boundary_constraints() calls might need to be added.
>> Not a big problem though.
>>
>> Best,
>> Wayne
>>
>>
>> On Sunday, May 7, 2023 at 15:37:24 UTC+8, Peter wrote:
>>
>>> Thanks!
>>> Peter
>>>
>>> On Sunday, 7 May 2023 at 09:27:34 UTC+2 yy.wayne wrote:
>>>
>>>> I'll try to add a build() function to MGTransferBlockMatrixFree then.
>>>> Thank you.
>>>>
>>>> Best,
>>>> Wayne
>>>>
>>>> On Sunday, May 7, 2023 at 15:23:12 UTC+8, Peter wrote:
>>>>
>>>>> > I wonder: will the unmatched partitioners occur even if the coding is correct?
>>>>>
>>>>> Yes. The requirements of MG and the level operators are not identical.
>>>>> If you don't do anything special, the vectors are set up for MG and not for
>>>>> the level operators. With the help of external partitioners you can fix this
>>>>> "problem".
>>>>>
>>>>> I guess you could introduce a new
>>>>> function MGTransferBlockMatrixFree::build(). It shouldn't be too much work,
>>>>> since all you need to do is to delegate the task
>>>>> to MGTransferMatrixFree::build(). Happy to accept a patch!
>>>>>
>>>>> Peter
>>>>>
>>>>> On Sunday, 7 May 2023 at 08:18:37 UTC+2 yy.wayne wrote:
>>>>>
>>>>>> Thank you Peter,
>>>>>>
>>>>>> Currently I'm using MGTransferBlockMatrixFree, where build() with
>>>>>> external partitioners is not supported.
>>>>>> But before moving on to modify the block MG transfers, I wonder: will the
>>>>>> unmatched partitioners occur even if the coding is correct?
>>>>>> My problem is preconditioned with a global-coarsening h-MG, with 6 DoFs
>>>>>> per node for the real and imaginary vectors in 3d.
>>>>>> BlockVectors are used except for the Trilinos direct solver on the
>>>>>> coarsest

[deal.II] Re: Could not find partitioner that fits vector

2023-05-15 Thread 'yy.wayne' via deal.II User Group
The modification is easy:
1) on line 518 of include/deal.II/multigrid/mg_transfer_matrix_free.h 
<https://github.com/dealii/dealii/blob/master/include/deal.II/multigrid/mg_transfer_matrix_free.h>, 
change first build() to:

void
  build(const DoFHandler<dim> &dof_handler,
        const std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>
          &external_partitioners =
            std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>());
2) on line 768 of source/multigrid/mg_transfer_matrix_free.cc 
<https://github.com/dealii/dealii/blob/master/source/multigrid/mg_transfer_matrix_free.cc>, 
change first build() to

template <int dim, typename Number>
void
MGTransferBlockMatrixFree<dim, Number>::build(
  const DoFHandler<dim> &dof_handler,
  const std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>
    &external_partitioners)
{
  AssertDimension(this->matrix_free_transfer_vector.size(), 1);
  this->matrix_free_transfer_vector[0].build(dof_handler,
                                             external_partitioners);
}

I'm not familiar with GitHub pull requests, so I just list the modification here.
I tested the build() function both with and without external_partitioners, and
neither shows an error. By introducing external_partitioners, the earlier bug
disappears.
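
For completeness, a hedged sketch of how the new build() overload can then be
used: collect the level partitioners from the level MatrixFree objects and hand
them to the transfer. The names mg_matrix_free (one MatrixFree object per
level), min_level, max_level and mg_transfer are placeholders, not from the
original code:

std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>
  external_partitioners;
for (unsigned int level = min_level; level <= max_level; ++level)
  external_partitioners.push_back(
    mg_matrix_free[level]->get_vector_partitioner());

mg_transfer.build(dof_handler, external_partitioners);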

I have another suggestion: MGConstrainedDoFs::make_zero_boundary_constraints()
seems to accept only one boundary_ids set + ComponentMask combination. So if
there are two Dirichlet BCs that mask different components in vector-valued
problems, more make_zero_boundary_constraints() calls might need to be added.
Not a big problem though.

Best,
Wayne


On Sunday, May 7, 2023 at 15:37:24 UTC+8, Peter wrote:

> Thanks!
> Peter
>
> On Sunday, 7 May 2023 at 09:27:34 UTC+2 yy.wayne wrote:
>
>> I'll try to add a build() function to MGTransferBlockMatrixFree then.
>> Thank you.
>>
>> Best,
>> Wayne
>>
>> On Sunday, May 7, 2023 at 15:23:12 UTC+8, Peter wrote:
>>
>>> > I wonder: will the unmatched partitioners occur even if the coding is correct?
>>>
>>> Yes. The requirements of MG and the level operators are not identical.
>>> If you don't do anything special, the vectors are set up for MG and not for the
>>> level operators. With the help of external partitioners you can fix this
>>> "problem".
>>>
>>> I guess you could introduce a new
>>> function MGTransferBlockMatrixFree::build(). It shouldn't be too much work,
>>> since all you need to do is to delegate the task
>>> to MGTransferMatrixFree::build(). Happy to accept a patch!
>>>
>>> Peter
>>>
>>> On Sunday, 7 May 2023 at 08:18:37 UTC+2 yy.wayne wrote:
>>>
 Thank you Peter,

 Currently I'm using MGTransferBlockMatrixFree, where build() with external
 partitioners is not supported.
 But before moving on to modify the block MG transfers, I wonder: will the
 unmatched partitioners occur even if the coding is correct?
 My problem is preconditioned with a global-coarsening h-MG, with 6 DoFs
 per node for the real and imaginary vectors in 3d.
 BlockVectors are used except for the Trilinos direct solver on the coarsest
 level, where only Vector rather than BlockVector
 is supported. The transfer between Vector and BlockVector is a potential
 risk though.

 Best,
 Wayne

 On Saturday, May 6, 2023 at 16:41:40 UTC+8, Peter wrote:

> Hi Wayne,
>
> my best guess is that the partitioners in the multigrid operator and
> the matrix-free operators do not match, with the result that a vector with
> wrong ghosting coming from the multigrid operator is used as src/dst in the
> matrix-free loop, resulting in the assert you show. Previously, one would
> need to fix the ghosting in the operator (see 
> https://github.com/dealii/dealii/blob/a091f25c5ff77687fbd8a65f5b336b235f201100/include/deal.II/matrix_free/operators.h#L489-L498)
>  
> but nowadays one can pass in external partitioners to MG: see, e.g., 
> https://github.com/peterrum/dealii-multigrid/blob/c50581883c0dbe35c83132699e6de40da9b1b255/multigrid_throughput.cc#L1802.
>  
> Similarly one can do this in the context of global-coarsening MG.
>
> Hope this helps!
>
> Peter
>
> On Saturday, 6 May 2023 at 10:22:58 UTC+2 yy.wayne wrote:
>
>> It might fail for a 1st-order basis too ...
>> But all failures occur when the number of MPI processes is > 2. If I run with
>> only 2 MPI processes, no error shows.
>>
>> On Saturday, May 6, 2023 at 15:22:23 UTC+8, yy.wayne wrote:
>>
>>> Hello my friends,
>>>
>>> I'm testing a matrix-free code preconditioned by a geometric multigrid.
>>> The code is not stable, in that it fails for some fe_degree + coarse_level
>>> + total_mesh_levels combinations.
>>> For example, any n_mpi_process + 1st-order basis elements run
>>> successfully.
>>> But for a 2nd-order basis, min_level=0 + total_levels=3 (*0*, 1 and 2)
>>> or min_level=1 + total_levels=5 (0, *1*, 2, 3 and 4) fails *during
>>> SolverGMRES.solve()*.
>>> The error in debug mode is:
>>> 
>>> An error occurred in line <3403> of file <...> in function
>>> unsigned int dealii::internal::VectorDataExchange<dim, Number, VectorizedArrayType>

[deal.II] Re: Could not find partitioner that fits vector

2023-05-07 Thread 'yy.wayne' via deal.II User Group
I'll try to add a build() function to MGTransferBlockMatrixFree then.
Thank you.

Best,
Wayne

On Sunday, May 7, 2023 at 15:23:12 UTC+8, Peter wrote:

> > I wonder: will the unmatched partitioners occur even if the coding is correct?
>
> Yes. The requirements of MG and the level operators are not identical. If
> you don't do anything special, the vectors are set up for MG and not for the
> level operators. With the help of external partitioners you can fix this
> "problem".
>
> I guess you could introduce a new
> function MGTransferBlockMatrixFree::build(). It shouldn't be too much work,
> since all you need to do is to delegate the task
> to MGTransferMatrixFree::build(). Happy to accept a patch!
>
> Peter
>
> On Sunday, 7 May 2023 at 08:18:37 UTC+2 yy.wayne wrote:
>
>> Thank you Peter,
>>
>> Currently I'm using MGTransferBlockMatrixFree, where build() with external
>> partitioners is not supported.
>> But before moving on to modify the block MG transfers, I wonder: will the
>> unmatched partitioners occur even if the coding is correct?
>> My problem is preconditioned with a global-coarsening h-MG, with 6 DoFs per
>> node for the real and imaginary vectors in 3d.
>> BlockVectors are used except for the Trilinos direct solver on the coarsest
>> level, where only Vector rather than BlockVector
>> is supported. The transfer between Vector and BlockVector is a potential risk
>> though.
>>
>> Best,
>> Wayne
>>
>> On Saturday, May 6, 2023 at 16:41:40 UTC+8, Peter wrote:
>>
>>> Hi Wayne,
>>>
>>> my best guess is that the partitioners in the multigrid operator and the
>>> matrix-free operators do not match, with the result that a vector with wrong
>>> ghosting coming from the multigrid operator is used as src/dst in the
>>> matrix-free loop, resulting in the assert you show. Previously, one would
>>> need to fix the ghosting in the operator (see 
>>> https://github.com/dealii/dealii/blob/a091f25c5ff77687fbd8a65f5b336b235f201100/include/deal.II/matrix_free/operators.h#L489-L498)
>>>  
>>> but nowadays one can pass in external partitioners to MG: see, e.g., 
>>> https://github.com/peterrum/dealii-multigrid/blob/c50581883c0dbe35c83132699e6de40da9b1b255/multigrid_throughput.cc#L1802.
>>>  
>>> Similarly one can do this in the context of global-coarsening MG.
>>>
>>> Hope this helps!
>>>
>>> Peter
>>>
>>> On Saturday, 6 May 2023 at 10:22:58 UTC+2 yy.wayne wrote:
>>>
 It might fail for a 1st-order basis too ...
 But all failures occur when the number of MPI processes is > 2. If I run with
 only 2 MPI processes, no error shows.

 On Saturday, May 6, 2023 at 15:22:23 UTC+8, yy.wayne wrote:

> Hello my friends,
>
> I'm testing a matrix-free code preconditioned by a geometric multigrid.
> The code is not stable, in that it fails for some fe_degree + coarse_level
> + total_mesh_levels combinations.
> For example, any n_mpi_process + 1st-order basis elements run
> successfully.
> But for a 2nd-order basis, min_level=0 + total_levels=3 (*0*, 1 and 2)
> or min_level=1 + total_levels=5 (0, *1*, 2, 3 and 4) fails *during
> SolverGMRES.solve()*.
> The error in debug mode is:
> 
> An error occurred in line <3403> of file <...> in function
> unsigned int dealii::internal::VectorDataExchange<dim, Number,
> VectorizedArrayType>::find_vector_in_mf(const VectorType&, bool) const
> [with VectorType = dealii::LinearAlgebra::distributed::Vector<double>;
> int dim = 3; Number = double; VectorizedArrayType =
> dealii::VectorizedArray<double>]
> The violated condition was: 
> false
> Additional information: 
> Could not find partitioner that fits vector
>
> It might be my own coding mistake. I'm using 9.3.0; may upgrading deal.II
> to 9.4.0 help?
>
>



[deal.II] Re: Could not find partitioner that fits vector

2023-05-07 Thread 'yy.wayne' via deal.II User Group
Thank you Peter,

Currently I'm using MGTransferBlockMatrixFree, where build() with external
partitioners is not supported.
But before moving on to modify the block MG transfers, I wonder: will the
unmatched partitioners occur even if the coding is correct?
My problem is preconditioned with a global-coarsening h-MG, with 6 DoFs per
node for the real and imaginary vectors in 3d.
BlockVectors are used except for the Trilinos direct solver on the coarsest
level, where only Vector rather than BlockVector
is supported. The transfer between Vector and BlockVector is a potential risk
though.

Best,
Wayne

On Saturday, May 6, 2023 at 16:41:40 UTC+8, Peter wrote:

> Hi Wayne,
>
> my best guess is that the partitioners in the multigrid operator and the
> matrix-free operators do not match, with the result that a vector with wrong
> ghosting coming from the multigrid operator is used as src/dst in the
> matrix-free loop, resulting in the assert you show. Previously, one would
> need to fix the ghosting in the operator (see 
> https://github.com/dealii/dealii/blob/a091f25c5ff77687fbd8a65f5b336b235f201100/include/deal.II/matrix_free/operators.h#L489-L498)
>  
> but nowadays one can pass in external partitioners to MG: see, e.g., 
> https://github.com/peterrum/dealii-multigrid/blob/c50581883c0dbe35c83132699e6de40da9b1b255/multigrid_throughput.cc#L1802.
>  
> Similarly one can do this in the context of global-coarsening MG.
>
> Hope this helps!
>
> Peter
>
> On Saturday, 6 May 2023 at 10:22:58 UTC+2 yy.wayne wrote:
>
>> It might fail for a 1st-order basis too ...
>> But all failures occur when the number of MPI processes is > 2. If I run with
>> only 2 MPI processes, no error shows.
>>
>> On Saturday, May 6, 2023 at 15:22:23 UTC+8, yy.wayne wrote:
>>
>>> Hello my friends,
>>>
>>> I'm testing a matrix-free code preconditioned by a geometric multigrid.
>>> The code is not stable, in that it fails for some fe_degree + coarse_level +
>>> total_mesh_levels combinations.
>>> For example, any n_mpi_process + 1st-order basis elements run
>>> successfully.
>>> But for a 2nd-order basis, min_level=0 + total_levels=3 (*0*, 1 and 2)
>>> or min_level=1 + total_levels=5 (0, *1*, 2, 3 and 4) fails *during
>>> SolverGMRES.solve()*.
>>> The error in debug mode is:
>>> 
>>> An error occurred in line <3403> of file <...> in function
>>> unsigned int dealii::internal::VectorDataExchange<dim, Number,
>>> VectorizedArrayType>::find_vector_in_mf(const VectorType&, bool) const
>>> [with VectorType = dealii::LinearAlgebra::distributed::Vector<double>; int
>>> dim = 3; Number = double; VectorizedArrayType =
>>> dealii::VectorizedArray<double>]
>>> The violated condition was: 
>>> false
>>> Additional information: 
>>> Could not find partitioner that fits vector
>>>
>>> It might be my own coding mistake. I'm using 9.3.0; may upgrading deal.II to
>>> 9.4.0 help?
>>>
>>>



[deal.II] Re: Could not find partitioner that fits vector

2023-05-06 Thread 'yy.wayne' via deal.II User Group
It might fail for a 1st-order basis too ...
But all failures occur when the number of MPI processes is > 2. If I run with
only 2 MPI processes, no error shows.

On Saturday, May 6, 2023 at 15:22:23 UTC+8, yy.wayne wrote:

> Hello my friends,
>
> I'm testing a matrix-free code preconditioned by a geometric multigrid.
> The code is not stable, in that it fails for some fe_degree + coarse_level +
> total_mesh_levels combinations.
> For example, any n_mpi_process + 1st-order basis elements run successfully.
> But for a 2nd-order basis, min_level=0 + total_levels=3 (*0*, 1 and 2)
> or min_level=1 + total_levels=5 (0, *1*, 2, 3 and 4) fails *during
> SolverGMRES.solve()*.
> The error in debug mode is:
> 
> An error occurred in line <3403> of file <...> in function
> unsigned int dealii::internal::VectorDataExchange<dim, Number,
> VectorizedArrayType>::find_vector_in_mf(const VectorType&, bool) const
> [with VectorType = dealii::LinearAlgebra::distributed::Vector<double>; int
> dim = 3; Number = double; VectorizedArrayType =
> dealii::VectorizedArray<double>]
> The violated condition was: 
> false
> Additional information: 
> Could not find partitioner that fits vector
>
> It might be my own coding mistake. I'm using 9.3.0; may upgrading deal.II to
> 9.4.0 help?
>
>



Re: [deal.II] Re: Remove constraints at some DoF

2023-03-19 Thread 'yy.wayne' via deal.II User Group
That's a helpful tutorial. Thank you.

Best,
Wayne

On Monday, March 20, 2023 at 02:43:26 UTC+8, Wolfgang Bangerth wrote:

> On 3/17/23 21:09, 'yy.wayne' via deal.II User Group wrote:
> > Another question related to singular fields: do we support mixed finite
> > elements (with CG elements and DG elements at different locations)?
> > I only see mixed-order formulations but not mixed element types in the
> > tutorials.
>
> The canonical reference for using different elements in different parts of 
> the 
> domain is step-46.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



[deal.II] Re: Remove constraints at some DoF

2023-03-18 Thread 'yy.wayne' via deal.II User Group
Thanks Peter. I appreciate your feedback.
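
As a side note, a minimal sketch of what Peter suggests below, i.e. building a
new AffineConstraints object from the old one while skipping the entry to be
dropped. The names constraints, locally_relevant_dofs and corner_dof are
placeholders, not from the original code:

AffineConstraints<double> filtered_constraints;
filtered_constraints.reinit(locally_relevant_dofs);
for (const auto &line : constraints.get_lines())
  if (line.index != corner_dof) // keep everything except the corner DoF
    {
      filtered_constraints.add_line(line.index);
      filtered_constraints.add_entries(line.index, line.entries);
      filtered_constraints.set_inhomogeneity(line.index, line.inhomogeneity);
    }
filtered_constraints.close();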

Best,
Wayne

On Saturday, March 18, 2023 at 16:24:55 UTC+8, Peter wrote:

> I don't see a remove function in AffineConstraints. What you could do is
> to create a new AffineConstraints instance based on the old one by ignoring
> the unwanted entry. See also: 
> https://github.com/dealii/dealii/blob/a654d1fe45cc11c9dbbbcaaef6f53320f459baa2/include/deal.II/lac/affine_constraints.templates.h#L526-L541
>
> > I only see mixed order formulation but not  mixed element type in 
> tutorials.
>
> Maybe you could take a look at the `tests/` folders. Maybe there is an 
> example!?
>
> Hope this helps,
> Peter
>
> On Saturday, March 18, 2023 at 4:09:57 AM UTC+1 yy.wayne wrote:
>
>> Another question related to singular fields: do we support mixed finite
>> elements (with CG elements and DG elements at different locations)?
>> I only see mixed-order formulations but not mixed element types in the
>> tutorials.
>>
>> On Saturday, March 18, 2023 at 09:45:50 UTC+8, yy.wayne wrote:
>>
>>> Hello my friends,
>>>
>>> I'm working on problems with singular fields. Is it possible to
>>> remove a constraint line/entry at a specific DoF? Say at the edge of a corner
>>> in 3d, the two faces that intersect at the corner are already interpolated
>>> with a zero boundary condition. However, I want to un-constrain the DoF
>>> exactly at the corner rather than on the faces (for FE_Q elements). Is there
>>> a function to remove constraints on already-constrained DoFs?
>>>
>>



[deal.II] Re: Remove constraints at some DoF

2023-03-17 Thread 'yy.wayne' via deal.II User Group
Another question related to singular fields: do we support mixed finite
elements (with CG elements and DG elements at different locations)?
I only see mixed-order formulations but not mixed element types in the tutorials.

On Saturday, March 18, 2023 at 09:45:50 UTC+8, yy.wayne wrote:

> Hello my friends,
>
> I'm working on problems with singular fields. Is it possible to remove
> a constraint line/entry at a specific DoF? Say at the edge of a corner in 3d,
> the two faces that intersect at the corner are already interpolated with a zero
> boundary condition. However, I want to un-constrain the DoF exactly at the
> corner rather than on the faces (for FE_Q elements). Is there a function to
> remove constraints on already-constrained DoFs?
>



[deal.II] Remove constraints at some DoF

2023-03-17 Thread 'yy.wayne' via deal.II User Group
Hello my friends,

I'm working on problems with singular fields. Is it possible to remove a
constraint line/entry at a specific DoF? Say at the edge of a corner in 3d,
the two faces that intersect at the corner are already interpolated with a zero
boundary condition. However, I want to un-constrain the DoF exactly at the
corner rather than on the faces (for FE_Q elements). Is there a function to
remove constraints on already-constrained DoFs?



Re: [deal.II] Is laplace operator well-conditioned?

2023-03-14 Thread 'yy.wayne' via deal.II User Group
I get it now. Thanks for your reply.
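
In formulas, the point of the exchange below (the weak form the original
question refers to, and the growth of the condition number mentioned in the
reply):

  \text{find } u \in V : \quad (\nabla u, \nabla v) = (f, v) \quad \forall v \in V,
  \qquad
  \kappa(A) = \mathcal{O}\!\left(h^{-2}\right),

i.e. the Laplace matrix A is symmetric positive definite, but its condition
number grows on fine meshes, which is why multigrid rather than a plain
iterative solver is the method of choice.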

Best,
Wayne

On Wednesday, March 15, 2023 at 00:19:43 UTC+8, Wolfgang Bangerth wrote:

>
> > This question might be silly. Is the Laplace operator a positive definite
> > system and easy to solve? Does it become ill-conditioned when
> > the mesh size and mesh quality decrease?
>
> "ill-conditioned" and "difficult to solve" are two different things. The 
> condition number of the discretized Laplace operator (i.e., the Laplace 
> matrix) grows like O(1/h^2), so it becomes quite large on fine meshes. But 
> linear systems with this matrix are relatively easy to solve if you use 
> multigrid methods.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



[deal.II] Is laplace operator well-conditioned?

2023-03-14 Thread 'yy.wayne' via deal.II User Group
Hello, 

I'm kind of lost in the nature of the Laplace operator.
Poisson equations are easy to solve numerically, and they have a
(∇u, ∇v)-type weak form.
However, in step-22 it
says the Laplace operator is ill-conditioned
and expensive to invert with an iterative method.

This question might be silly. Is the Laplace operator a positive definite
system and easy to solve? Does it become ill-conditioned when
the mesh size and mesh quality decrease?



Re: [deal.II] Mumps and superludist do not scale

2023-02-24 Thread 'yy.wayne' via deal.II User Group
Yes, I think handling the transfer with a mapping is more reasonable...
Thanks for your suggestion.
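
As a side note, a minimal sketch of the "dst[mapping[i]] = src[i]" idea
Wolfgang describes below. The names block_src (the distributed BlockVector),
flat_dst (the non-block coarse-level vector) and mapping (a precomputed table
from the local index of an element of block b to the corresponding, locally
owned global index in the flat numbering) are placeholders, not from the
original code:

for (unsigned int b = 0; b < block_src.n_blocks(); ++b)
  for (unsigned int i = 0; i < block_src.block(b).locally_owned_size(); ++i)
    flat_dst(mapping[b][i]) = block_src.block(b).local_element(i);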

Best,
Wayne

On Friday, February 24, 2023 at 20:27:57 UTC+8, Wolfgang Bangerth wrote:

> On 2/24/23 05:22, 'yy.wayne' via deal.II User Group wrote:
> > I use Trilinos because the MatrixFree method relies on
> > LinearAlgebra::distributed Vector or BlockVector.
> > When solving the coarsest level of the MatrixFree multigrid problem directly,
> > the Trilinos direct solver only
> > handles non-blocked vectors, so I transfer between BlockVectors and
> > Vectors. The ordering of
> > DoFs is destroyed when the fe degree rises to 3 or higher, causing the
> > transfer error.
>
> The problem with renumbering by component is that the DoFs owned by each 
> process are no longer a contiguous range, and that just isn't allowed by 
> the 
> vector class you are trying to use. It needs to be a contiguous index 
> range.
>
> If you need to copy from block vectors to non-block vectors, you might 
> just 
> have to copy in a way that doesn't just map from src[i] to dst[i], but 
> from 
> src[i] to dst[mapping[i]] where mapping takes care of the translation of 
> corresponding vector elements.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



Re: [deal.II] Mumps and superludist do not scale

2023-02-24 Thread 'yy.wayne' via deal.II User Group
Well, I renumbered the DoFs because of this discussion 
<https://groups.google.com/g/dealii/c/MXQO5Nhcpws/m/kteyeUOcCAAJ#:~:text=II%20User%20Group-,Well%20it%20takes%20several%20steps%20to%20Renumber%20on%20the%20coarsest%20level.,-1)%20For%20a>.
I use Trilinos because the MatrixFree method relies on
LinearAlgebra::distributed Vector or BlockVector.
When solving the coarsest level of the MatrixFree multigrid problem directly,
the Trilinos direct solver only
handles non-blocked vectors, so I transfer between BlockVectors and
Vectors. The ordering of
DoFs is destroyed when the fe degree rises to 3 or higher, causing the
transfer error.

Best,
Wayne

On Friday, February 24, 2023 at 19:54:45 UTC+8, Wolfgang Bangerth wrote:

> On 2/24/23 04:37, 'yy.wayne' via deal.II User Group wrote:
> > 
> > However, another problem pops up. When testing a vector-valued problem,
> > renumbering DoFs component-wise makes the reinit of vectors fail. If
> > DoFRenumbering::component_wise is called,
> >
> > 1. LinearAlgebra::distributed::Vector<double>
> >    v1(dof_handler.locally_owned_dofs(), mpi_communicator);
> >    v1.reinit(dof_handler.locally_owned_dofs(), mpi_communicator);
> >    succeeds.
> > 2. LinearAlgebra::distributed::Vector<double> v2;
> >    v2.reinit(dof_handler.locally_owned_dofs(), mpi_communicator);
> >    fails.
> >
> > Is that related to the Trilinos data type?
>
> If you do not arrange the different components of a vector-valued finite 
> element into a block vector (i.e., if you use a single vector for all 
> components of the solution), then there is no need to call 
> DoFRenumbering::component_wise().
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



Re: [deal.II] Mumps and superludist do not scale

2023-02-24 Thread 'yy.wayne' via deal.II User Group
Sorry I forgot to present the error.

An error occurred in line <164> of file <...> in function
void dealii::Utilities::MPI::Partitioner::set_owned_indices(const 
dealii::IndexSet&)
The violated condition was: 
locally_owned_indices.is_contiguous() == true
Additional information: 
The index set specified in locally_owned_indices is not contiguous.

Stacktrace:
---
#0  /home/ubuntu/deal.II/installed/lib/libdeal_II.g.so.9.4.0: 
dealii::Utilities::MPI::Partitioner::set_owned_indices(dealii::IndexSet 
const&)
#1  /home/ubuntu/deal.II/installed/lib/libdeal_II.g.so.9.4.0: 
dealii::Utilities::MPI::Partitioner::Partitioner(dealii::IndexSet const&, 
ompi_communicator_t* const&)
...
#9  /home/ubuntu/deal.II/installed/lib/libdeal_II.g.so.9.4.0: 
std::shared_ptr<dealii::Utilities::MPI::Partitioner> 
std::make_shared<...>(dealii::IndexSet const&, 
ompi_communicator_t* const&)
#10  /home/ubuntu/deal.II/installed/lib/libdeal_II.g.so.9.4.0: 
dealii::LinearAlgebra::distributed::Vector<...>::reinit(dealii::IndexSet const&, 
ompi_communicator_t* const&)
#11  /home/ubuntu/deal.II/installed/lib/libdeal_II.g.so.9.4.0: 
dealii::LinearAlgebra::distributed::Vector<...> [stack trace truncated]

On Friday, February 24, 2023, yy.wayne wrote:

> Step-32 and step-50 are important for the parallelization of TrilinosWrappers
> data types.
> For someone who has to write code with Trilinos data types, here are the
> necessary modifications
> from serial to parallel programs:
>
>1.  Change triangulation to parallel::distributed::triangulation 
>rather than  
> GridTools::partition_triangulation() 
>
> <https://www.dealii.org/current/doxygen/deal.II/namespaceGridTools.html#a99eba8e3b388258eda37a2724579dd1d>
>   
>or  parallel::shared::Triangulation 
>
> <https://www.dealii.org/current/doxygen/deal.II/classparallel_1_1shared_1_1Triangulation.html>
>.
>2.  SparsityPattern and solution/rhs Vector initialization specified 
>for Trilinos, see Step-42
>3.  Choose locally owned cells or level cells during assembling.
>
> Though they both allow parallelism, step-42's DoFTools::make_sparsity_pattern 
> <https://www.dealii.org/current/doxygen/deal.II/group__constraints.html#gaf78e864edbfba7e0a7477457bfb96b26>
> has better performance
> in the matrix LU decomposition (30%-50% less time) than step-50's.
>
> However, another problem pops up. When testing a vector-valued problem,
> renumbering DoFs component-wise makes the reinit of vectors fail. If
> DoFRenumbering::component_wise is called,
>
>    1. LinearAlgebra::distributed::Vector<double>
>       v1(dof_handler.locally_owned_dofs(), mpi_communicator);
>       v1.reinit(dof_handler.locally_owned_dofs(), mpi_communicator);
>       succeeds.
>    2. LinearAlgebra::distributed::Vector<double> v2;
>       v2.reinit(dof_handler.locally_owned_dofs(), mpi_communicator);
>       fails.
>
> Is that related to the Trilinos data type?
> On Thursday, February 23, 2023 at 22:44:57 UTC+8, Wolfgang Bangerth wrote:
>
>> On 2/23/23 07:42, 'yy.wayne' via deal.II User Group wrote: 
>> > So if step-4 doesn't support parallel then what I got is reasonable. 
>>
>> Correct. 
>> W. 
>>
>> -- 
>>  
>> Wolfgang Bangerth email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>>
>>



Re: [deal.II] Mumps and superludist do not scale

2023-02-24 Thread 'yy.wayne' via deal.II User Group
Step-32 and step-50 are important for the parallelization of TrilinosWrappers 
data types.
For someone who has to write code with Trilinos data types, here are the 
necessary modifications
from serial to parallel programs:

   1. Change the triangulation to parallel::distributed::Triangulation rather 
      than GridTools::partition_triangulation() 
      <https://www.dealii.org/current/doxygen/deal.II/namespaceGridTools.html#a99eba8e3b388258eda37a2724579dd1d> 
      or parallel::shared::Triangulation 
      <https://www.dealii.org/current/doxygen/deal.II/classparallel_1_1shared_1_1Triangulation.html>.
   2. SparsityPattern and solution/rhs Vector initialization specific to 
      Trilinos, see step-42 (and the reinit sketch below).
   3. Choose locally owned cells or level cells during assembly.

Though they both allow parallelism, step-42's DoFTools::make_sparsity_pattern 
<https://www.dealii.org/current/doxygen/deal.II/group__constraints.html#gaf78e864edbfba7e0a7477457bfb96b26> 
has better performance
in the matrix LU decomposition (30%-50% less time) than step-50's.

However, another problem pops up. When testing a vector-valued problem, 
renumbering DoFs component-wise makes the reinit of vectors fail. If 
DoFRenumbering::component_wise is called,

   1. LinearAlgebra::distributed::Vector<double> 
      v1(dof_handler.locally_owned_dofs(), mpi_communicator); 
      v1.reinit(dof_handler.locally_owned_dofs(), mpi_communicator); 
      succeeds.
   2. LinearAlgebra::distributed::Vector<double> v2; 
      v2.reinit(dof_handler.locally_owned_dofs(), mpi_communicator); 
      fails.

Is that related to the Trilinos data type?
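
For reference, a sketch (not from the original post) of the step-40-style way
to reinit a LinearAlgebra::distributed::Vector with a locally owned plus a
ghost ("locally relevant") index set, as referred to in item 2 above; note
that this vector class requires the locally owned index range to be contiguous:

IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
IndexSet locally_relevant_dofs;
DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

LinearAlgebra::distributed::Vector<double> v;
v.reinit(locally_owned_dofs, locally_relevant_dofs, mpi_communicator);
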
On Thursday, February 23, 2023 at 22:44:57 UTC+8, Wolfgang Bangerth wrote:

> On 2/23/23 07:42, 'yy.wayne' via deal.II User Group wrote:
> > So if step-4 doesn't support parallel then what I got is reasonable.
>
> Correct.
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



Re: [deal.II] Mumps and superludist do not scale

2023-02-23 Thread 'yy.wayne' via deal.II User Group
Basically I modify the SparseMatrix and Vector types to the TrilinosWrappers ones.
So if step-4 doesn't support parallelism, then what I got is reasonable.
On Thursday, February 23, 2023 at 22:40:14 UTC+8, Wolfgang Bangerth wrote:

> On 2/23/23 07:37, 'yy.wayne' via deal.II User Group wrote:
> > **
> > 
> > I'm making a big mistake then. I thought step-4 and step-40's only
> > differences are the parallel
> > distributed triangulation and handling ghost DoFs.
>
> I don't know what you're doing in your modification of step-4, so I really 
> can't tell. But step-4 itself has no notion that there are other 
> processors. 
> Unless you modified it extensively, every process will build and store the 
> entire matrix, and I assume that you are also only calling MUMPS and 
> SuperLUU 
> with that entire matrix.
>
> But of course I don't know -- I don't have the code you are running.
>
> Best
> W>
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



Re: [deal.II] Mumps and superludist do not scale

2023-02-23 Thread 'yy.wayne' via deal.II User Group
I'm making a big mistake then. I thought step-4 and step-40's only 
differences are the parallel
distributed triangulation and handling ghost DoFs.
Thank you.

Best,
Wayne

On Thursday, February 23, 2023 at 22:32:39 UTC+8, Wolfgang Bangerth wrote:

>
> yy.wayne:
>
> > I'm using mumps and superludist through Trilinos' Amesos wrapper 
> directly.
> > The code is simple, just step-4 with TrilinosWrappers::SparseMatrix and
> > LinearAlgebra::distributed::Vector.
> > Here's the result from my virtual machine for a 98k DoFs problem.
> > Snipaste_2023-02-23_18-31-08.png
> > 
> > I've test a 300k+ DoFs problem with Amesos_mumps on server as well.
> > Multiple processes is not faster than one process.
>
> It's not clear to me what you are doing. step-4 is not parallel, and 
> unless 
> you extended it like we show in step-40, for example, all that happens 
> when 
> you run the program with
> mpirun -np 4 ./step-4
> is that the same program is run four times on four different processes, 
> but 
> all these four copies do is run the exact same instructions. This might 
> also 
> explain why you see the output printed multiple times. If that is so, you 
> shouldn't be surprised that that takes exactly as long as running one 
> instance.
>
> If you want to test these kinds of solvers, start with step-40.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



[deal.II] Re: Specify vectorization length

2023-02-23 Thread 'yy.wayne' via deal.II User Group
Great! Thank you Peter.

Best,
Wayne

On Thursday, February 23, 2023 at 22:21:50 UTC+8, Peter wrote:

> It should be part of the 9.4.1 point release.
>
> Peter
>
> On Thursday, 23 February 2023 at 15:20:14 UTC+1 yy.wayne wrote:
>
>> I'm using deal.II 9.4.0. It seems the bug has been fixed...
>> On Thursday, February 23, 2023 at 22:16:47 UTC+8, Peter wrote:
>>
>>> What version of deal.II are you using? I thought we fixed the 
>>> vector-valued version: 
>>> https://github.com/dealii/dealii/commit/6351f255d16c3cf465752e2ce943726df6bf4ea4
>>>
>>> Peter
>>>
>>> On Thursday, 23 February 2023 at 15:13:42 UTC+1 yy.wayne wrote:
>>>
 I think the problem exists for vector-valued problem.
 I modified the vectorization length of step-37 and it works fine.
 But for vector value problem, on line 6090 of 
 deal.II/matrix_free/fe_evaluation.h:

- val_in[comp] * weight * jac[comp][comp]

 The jac is defined as const Tensor<2, dim,
 dealii::VectorizedArray<Number>>, which doesn't
 match the customized VectorizedArrayType.

 Similar on line 6101

- transpose(invert(inv_t_jac))

 where inv_t_jac is also dealii::VectorizedArray type data.
 On Thursday, February 23, 2023 at 18:43:07 UTC+8, yy.wayne wrote:

> Thank you Peter,
>
> The error message is quite long and tricky (beyond what my terminal can
> show) when I set
> a VectorizedArray with width=1.
> Knowing that deal.II does support this feature is great. I'll try to debug
> the code first and come back with
> further progress or questions :)
>
> Best,
> Wayne
>
> On Thursday, February 23, 2023 at 16:00:43 UTC+8, Peter wrote:
>
>> Hi Wayne,
>>
>> What is your error message?
>>
>> > I replace all FEEvaluation, MatrixFree and VectorizedArray 
>> construction with a VectorizedArray, but make returns 
>> some 
>> intricate error. Is specify vectorization length possible? I think I've 
>> seen one in dealii/test ...
>>
>> Yes, it is possible and the to specify the additional template 
>> argument is the right approach.
>>
>> I have introduced this template parameter for the hyper.deal project: 
>> https://github.com/hyperdeal/hyperdeal, where we create a tensor 
>> product of MatrixFree instances (where one is vectorized and the other 
>> is 
>> not).
>>
>> Take a look at the test: 
>> https://github.com/dealii/dealii/blob/3b785178375a8ea233fada96cf987aed6d53583e/tests/matrix_free/ecl.h#L52
>>
>> The important aspect is that the vector length has to match the ones 
>> supported by given ISA. See table 1 in the release paper of deal.II 9.2: 
>> https://dealii.org/deal92-preprint.pdf
>>
>> Peter
>>
>> On Thursday, February 23, 2023 at 7:09:07 AM UTC+1 yy.wayne wrote:
>>
>>> Hi,
>>> is it possible to specify a vectorization length (smaller than the maximum width)?
>>> I'm running same code on local computer and server, whose cpu 
>>> supports SSE2+4.10GHz
>>> and AVX512+2.60GHz respectively. The speed up for iteration time is 
>>> only 2 on server, which should be 8*2.6 / 2*4.1 > 2.5 times. Therefore 
>>> I'd 
>>> like to run code with different
>>> vectorization length on same computer to test Vectorization's power.
>>>
>>> I replace all FEEvaluation, MatrixFree and VectorizedArray 
>>> construction with a VectorizedArray, but make returns 
>>> some 
>>> intricate error. Is specify vectorization length possible? I think I've 
>>> seen one in dealii/test ...
>>>
>>



[deal.II] Re: Specify vectorization length

2023-02-23 Thread 'yy.wayne' via deal.II User Group
I'm using deal.II 9.4.0. It seems the bug has been fixed...
On Thursday, February 23, 2023 at 22:16:47 UTC+8, Peter wrote:

> What version of deal.II are you using? I thought we fixed the 
> vector-valued version: 
> https://github.com/dealii/dealii/commit/6351f255d16c3cf465752e2ce943726df6bf4ea4
>
> Peter
>
> On Thursday, 23 February 2023 at 15:13:42 UTC+1 yy.wayne wrote:
>
>> I think the problem exists for vector-valued problem.
>> I modified the vectorization length of step-37 and it works fine.
>> But for vector value problem, on line 6090 of 
>> deal.II/matrix_free/fe_evaluation.h:
>>
>>- val_in[comp] * weight * jac[comp][comp]
>>
>> The jac is defined as const Tensor<2, dim,
>> dealii::VectorizedArray<Number>>, which doesn't
>> match the customized VectorizedArrayType.
>>
>> Similar on line 6101
>>
>>- transpose(invert(inv_t_jac))
>>
>> where inv_t_jac is also dealii::VectorizedArray type data.
>> On Thursday, February 23, 2023 at 18:43:07 UTC+8, yy.wayne wrote:
>>
>>> Thank you Peter,
>>>
>>> The error message is quite long and tricky (beyond what my terminal can show)
>>> when I set
>>> a VectorizedArray with width=1.
>>> Knowing that deal.II does support this feature is great. I'll try to debug
>>> the code first and come back with
>>> further progress or questions :)
>>>
>>> Best,
>>> Wayne
>>>
>>> On Thursday, February 23, 2023 at 16:00:43 UTC+8, Peter wrote:
>>>
 Hi Wayne,

 What is your error message?

 > I replace all FEEvaluation, MatrixFree and VectorizedArray 
 construction with a VectorizedArray, but make returns some 
 intricate error. Is specify vectorization length possible? I think I've 
 seen one in dealii/test ...

 Yes, it is possible and the to specify the additional template argument 
 is the right approach.

 I have introduced this template parameter for the hyper.deal project: 
 https://github.com/hyperdeal/hyperdeal, where we create a tensor 
 product of MatrixFree instances (where one is vectorized and the other is 
 not).

 Take a look at the test: 
 https://github.com/dealii/dealii/blob/3b785178375a8ea233fada96cf987aed6d53583e/tests/matrix_free/ecl.h#L52

 The important aspect is that the vector length has to match the ones 
 supported by given ISA. See table 1 in the release paper of deal.II 9.2: 
 https://dealii.org/deal92-preprint.pdf

 Peter

 On Thursday, February 23, 2023 at 7:09:07 AM UTC+1 yy.wayne wrote:

> Hi,
> is it possible to specify a vectorization length (smaller than the maximum width)?
> I'm running same code on local computer and server, whose cpu supports 
> SSE2+4.10GHz
> and AVX512+2.60GHz respectively. The speed up for iteration time is 
> only 2 on server, which should be 8*2.6 / 2*4.1 > 2.5 times. Therefore 
> I'd 
> like to run code with different
> vectorization length on same computer to test Vectorization's power.
>
> I replace all FEEvaluation, MatrixFree and VectorizedArray 
> construction with a VectorizedArray, but make returns some 
> intricate error. Is specify vectorization length possible? I think I've 
> seen one in dealii/test ...
>




[deal.II] Re: Specify vectorization length

2023-02-23 Thread 'yy.wayne' via deal.II User Group
I think the problem exists for vector-valued problems.
I modified the vectorization length of step-37 and it works fine.
But for a vector-valued problem, on line 6090 of 
deal.II/matrix_free/fe_evaluation.h:

   - val_in[comp] * weight * jac[comp][comp]

jac is defined as const Tensor<2, dim, dealii::VectorizedArray<Number>>, 
which doesn't match the customized VectorizedArrayType.

Similarly, on line 6101

   - transpose(invert(inv_t_jac))

where inv_t_jac is also dealii::VectorizedArray typed data.
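
A sketch (not from the original code) of the extra template argument that
selects the vectorization width explicitly, here width 1; dim, fe_degree
(a compile-time constant) and the surrounding setup are placeholders:

using VectorizedArrayType = dealii::VectorizedArray<double, 1>;

dealii::MatrixFree<dim, double, VectorizedArrayType> matrix_free;
// ... matrix_free.reinit(...) as usual ...

// dim components for the vector-valued case discussed above:
dealii::FEEvaluation<dim, fe_degree, fe_degree + 1, dim, double, VectorizedArrayType>
  phi(matrix_free);
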
On Thursday, February 23, 2023 at 18:43:07 UTC+8, yy.wayne wrote:

> Thank you Peter,
>
> The error message is quite long and tricky (beyond what my terminal can show) 
> when I set
> a VectorizedArray with width=1.
> Knowing that deal.II does support this feature is great. I'll try to debug the 
> code first and come back with
> further progress or questions :)
>
> Best,
> Wayne
>
> On Thursday, February 23, 2023 at 16:00:43 UTC+8, Peter wrote:
>
>> Hi Wayne,
>>
>> What is your error message?
>>
>> > I replace all FEEvaluation, MatrixFree and VectorizedArray construction 
>> with a VectorizedArray, but make returns some intricate 
>> error. Is specify vectorization length possible? I think I've seen one in 
>> dealii/test ...
>>
>> Yes, it is possible and the to specify the additional template argument 
>> is the right approach.
>>
>> I have introduced this template parameter for the hyper.deal project: 
>> https://github.com/hyperdeal/hyperdeal, where we create a tensor product 
>> of MatrixFree instances (where one is vectorized and the other is not).
>>
>> Take a look at the test: 
>> https://github.com/dealii/dealii/blob/3b785178375a8ea233fada96cf987aed6d53583e/tests/matrix_free/ecl.h#L52
>>
>> The important aspect is that the vector length has to match the ones 
>> supported by given ISA. See table 1 in the release paper of deal.II 9.2: 
>> https://dealii.org/deal92-preprint.pdf
>>
>> Peter
>>
>> On Thursday, February 23, 2023 at 7:09:07 AM UTC+1 yy.wayne wrote:
>>
>>> Hi,
>>> is it possible to specify a vectorization length (smaller than the maximum width)?
>>> I'm running same code on local computer and server, whose cpu supports 
>>> SSE2+4.10GHz
>>> and AVX512+2.60GHz respectively. The speed up for iteration time is only 
>>> 2 on server, which should be 8*2.6 / 2*4.1 > 2.5 times. Therefore I'd like 
>>> to run code with different
>>> vectorization length on same computer to test Vectorization's power.
>>>
>>> I replace all FEEvaluation, MatrixFree and VectorizedArray construction 
>>> with a VectorizedArray, but make returns some intricate 
>>> error. Is specify vectorization length possible? I think I've seen one in 
>>> dealii/test ...
>>>
>>



[deal.II] Re: Specify vectorization length

2023-02-23 Thread 'yy.wayne' via deal.II User Group
Thank you Peter,

The error message is quite long and tricky (beyond what my terminal can show) 
when I set
a VectorizedArray with width=1.
Knowing that deal.II does support this feature is great. I'll try to debug the 
code first and come back with
further progress or questions :)

Best,
Wayne

On Thursday, February 23, 2023 at 16:00:43 UTC+8, Peter wrote:

> Hi Wayne,
>
> What is your error message?
>
> > I replace all FEEvaluation, MatrixFree and VectorizedArray construction 
> with a VectorizedArray, but make returns some intricate 
> error. Is specify vectorization length possible? I think I've seen one in 
> dealii/test ...
>
> Yes, it is possible, and specifying the additional template argument is 
> the right approach.
>
> I have introduced this template parameter for the hyper.deal project: 
> https://github.com/hyperdeal/hyperdeal, where we create a tensor product 
> of MatrixFree instances (where one is vectorized and the other is not).
>
> Take a look at the test: 
> https://github.com/dealii/dealii/blob/3b785178375a8ea233fada96cf987aed6d53583e/tests/matrix_free/ecl.h#L52
>
> The important aspect is that the vector length has to match the ones 
> supported by given ISA. See table 1 in the release paper of deal.II 9.2: 
> https://dealii.org/deal92-preprint.pdf
>
> Peter
>
> On Thursday, February 23, 2023 at 7:09:07 AM UTC+1 yy.wayne wrote:
>
>> Hi,
>> is it possible to specify a vectorization length (smaller than the maximum width)?
>> I'm running same code on local computer and server, whose cpu supports 
>> SSE2+4.10GHz
>> and AVX512+2.60GHz respectively. The speed up for iteration time is only 
>> 2 on server, which should be 8*2.6 / 2*4.1 > 2.5 times. Therefore I'd like 
>> to run code with different
>> vectorization length on same computer to test Vectorization's power.
>>
>> I replace all FEEvaluation, MatrixFree and VectorizedArray construction 
>> with a VectorizedArray, but make returns some intricate 
>> error. Is specify vectorization length possible? I think I've seen one in 
>> dealii/test ...
>>
>



[deal.II] Specify vectorization length

2023-02-22 Thread 'yy.wayne' via deal.II User Group
Hi,
is it possible to specify a vectorization length (smaller than the maximum width)?
I'm running the same code on a local computer and a server, whose CPUs support 
SSE2+4.10GHz and AVX512+2.60GHz respectively. The speed-up for the iteration time 
is only 2 on the server, which should be 8*2.6 / 2*4.1 > 2.5 times. Therefore I'd 
like to run the code with different vectorization lengths on the same computer to 
test vectorization's power.

I replace all FEEvaluation, MatrixFree and VectorizedArray constructions 
with a VectorizedArray of a specified width, but make returns some intricate 
error. Is specifying the vectorization length possible? I think I've seen an 
example in dealii/tests ...



[deal.II] Re: Spack error with cgal and arbox

2023-02-22 Thread 'yy.wayne' via deal.II User Group
I followed the first solution in "dealii and hdf5 compile problem" (google.com) 
and the deal.II code compiled successfully. I've run several tests in parallel 
and there seems to be no error.

For someone who may have a similar problem, this is what I've done:

   1. Install all deal.II dependencies with spack.
   2. Build deal.II from source rather than in spack. The second line in 
      spack-build-out (you generate this file if building deal.II with spack 
      once) is spack's cmake configuration for deal.II. I disabled the '-G 
      Ninja' cmake option and set 'KOKKOS_DIR=/path/to/spack/kokkos' explicitly 
      in the cmake configuration.
   3. Follow "dealii and hdf5 compile problem" (google.com) to comment out some 
      lines in configure_50_hdf5.cmake.


On Wednesday, February 22, 2023 at 23:32:11 UTC+8, yy.wayne wrote:

> Hi,
>
> Thanks for your response. I set -DKOKKOS_DIR for cmake and now "make tests" 
> is good.
> I'll see the discussion in "dealii and hdf5 compile problem" (google.com) 
> .
> On Wednesday, February 22, 2023 at 21:55:50 UTC+8, Bruno wrote:
>
>> Hi,
>>
>> Don't set any variable when using spack. Please post the exact spack 
>> command you are using to install deal.II with spack. In general, you want 
>> to let spack pick the dependencies for you. I would also advise that you 
>> restart from a clean state and remove the configuration files created by 
>> spack.
>>
>> Best,
>>
>> Bruno
>>
>>
>> On Wednesday, February 22, 2023 at 7:53:24 AM UTC-5 marco@gmail.com 
>> wrote:
>>
>>> Hi Wayne,
>>>
>>> CGAL requires C++17, hence you should add a proper compiler flag (
>>> https://github.com/dealii/dealii/wiki/deal.II-in-Spack#compiler-flags) 
>>> if you want to have it.
>>>
>>> Adding *cppflags="-std=c++17"* should fix that. 
>>>
>>> For what concerns your last message, here's a related discussion: 
>>> https://groups.google.com/g/dealii/c/nodblNqEjd4
>>>
>>> Best,
>>> Marco
>>> On Wednesday, February 22, 2023 at 13:36:12 UTC+1, yy.wayne wrote:
>>>
 Since spack keeps producing an error, I install directly, with the dependency 
 libraries installed by spack.
 Deal.II's cmake and 'make install' run successfully. But it shows an error when 
 running 'make test':

 ...
 An error occurred in line <170> of file <...> in function
 void dealii::SparsityTools::{anonymous}::partition_metis(const 
 dealii::SparsityPattern&, const std::vector<unsigned int>&, unsigned int, 
 std::vector<unsigned int>&)
 The violated condition was:
 ierr == 1
 Additional information: 
 An error with error number -3 occurred while calling a METIS function
 ...

 The METIS_DIR was set to metis-5.1.0. I changed METIS_DIR to parmetis, 
 but the error remains.

 Besides, I fail to build step-1 after this faulty installation. cmake 
 passed but make gives the following error:
 [image: Snipaste_2023-02-22_20-36-58.png]

 On Wednesday, February 22, 2023 at 15:16:40 UTC+8, yy.wayne wrote:

> The spack-build-out.txt with the middle content deleted. The original file 
> was too big.
> 在2023年2月22日星期三 UTC+8 15:12:25 写道:
>
>> I set the variant "~arborx". There's only a ninja warning but the installation 
>> exits with ERROR. However, in spack-build-out.txt every library seems to be 
>> detected successfully. Why does ninja return an ERROR?
>
>



[deal.II] Re: Spack error with cgal and arborx

2023-02-22 Thread 'yy.wayne' via deal.II User Group
Hi,

Thanks for your response. I set -DKOKKOS_DIR for cmake and now "make tests" 
is good.
I'll look at the discussion in "dealii and hdf5 compile problem" (google.com).
On Wednesday, February 22, 2023 at 21:55:50 UTC+8, the following was written:

> Hi,
>
> Don't set any variable when using spack. Please post the exact spack 
> command you are using to install deal.II with spack. In general, you want 
> to let spack pick the dependencies for you. I would also advise that you 
> restart from a clean state and remove the configuration files created by 
> spack.
>
> Best,
>
> Bruno
>
>
> On Wednesday, February 22, 2023 at 7:53:24 AM UTC-5 marco@gmail.com 
> wrote:
>
>> Hi Wayne,
>>
>> CGAL requires C++17, hence you should add a proper compiler flag (
>> https://github.com/dealii/dealii/wiki/deal.II-in-Spack#compiler-flags) 
>> if you want to have it.
>>
>> Adding *cppflags="-std=c++17"* should fix that. 
>>
>> For what concerns your last message, here's a related discussion: 
>> https://groups.google.com/g/dealii/c/nodblNqEjd4
>>
>> Best,
>> Marco
>> On Wednesday, February 22, 2023 at 13:36:12 UTC+1, yy.wayne wrote:
>>
>>> Since spack gives an error, I install deal.II directly, with the dependency 
>>> libraries installed by spack.
>>> Deal.II's cmake and 'make install' run successfully, but 'make test' shows an 
>>> error:
>>>
>>> ...
>>> An error occurred in line <170> of file 
>>> 
>>>  
>>> in function
>>> void dealii::SparsityTools::{anonymous}::partition_metis(const 
>>> dealii::SparsityPattern&, const std::vector&, unsigned int, 
>>> std::vector&)
>>> The violated condition was:
>>> ierr == 1
>>> Additional information: 
>>> An error with error number -3 occurred while calling a METIS function
>>> ...
>>>
>>> The METIS_DIR was set to metis-5.1.0. I changed METIS_DIR to parmetis, 
>>> but the error remains.
>>>
>>> Besides, I fail to build step-1 after this faulty installation. cmake 
>>> passes but make gives the following error:
>>> [image: Snipaste_2023-02-22_20-36-58.png]
>>>
 On Wednesday, February 22, 2023 at 15:16:40 UTC+8, the following was written:
>>>
 The spack-build-out.txt with the middle content deleted. The original file 
 was too big.
 On Wednesday, February 22, 2023 at 15:12:25 UTC+8, the following was written:

> I set the variant "~arborx". There's only a ninja warning but the installation 
> exits with ERROR. However, in spack-build-out.txt every library seems to be 
> detected successfully. Why does ninja return an ERROR?





Re: [deal.II] Install scalapack and mumps fail with MPI fortran error

2023-02-20 Thread 'yy.wayne' via deal.II User Group
I went in the wrong direction then... There were some Trilinos warnings on 
my VM machine, 
but the deal.II install was successful, so I skipped over them. Locating the error is 
very helpful. 
Thank you Wolfgang!

Best,
Wayne



On Tuesday, February 21, 2023 at 11:38:27 UTC+8, the following was written:

>
> > Thank you Wolfgang. I shifted from mumps to superlu_dist. Trilinos is 
> > compiled with superlu_dist completely, but another error shows up. It 
> says:
> > 
> > * Compiler setup check for DEAL_II_HAVE_USABLE_FLAGS_DEBUG failed with:
> > * Change Dir:
> > 
> /home/wyy/libs/candi/install/tmp/build/deal.II-master/check_compiler_setup/CheckCompilerSetupDEAL_II_HAVE_USABLE_FLAGS_DEBUG
> > * Unable to compile a simple test program. Trying to drop "-fuse-ld=gold"
> > from the linker flags.
>
> The real error is actually above that:
>
> /home/wyy/libs/candi/install/trilinos-release-13-2-0/lib/libstratimikosbelos.so:
>  
>
> error: undefined reference to 'Tpetra::MultiVector Kokkos::Compat::KokkosDeviceWrapperNode 
> >::get1dCopy(Teuchos::ArrayView const&, unsigned long) const'
> /home/wyy/libs/candi/install/trilinos-release-13-2-0/lib/libstratimikosbelos.so:
>  
>
> error: undefined reference to 'Tpetra::MultiVector Kokkos::Compat::KokkosDeviceWrapperNode 
> >::getLocalLength() const'
> /home/wyy/libs/candi/install/trilinos-release-13-2-0/lib/libstratimikosbelos.so:
>  
>
> error: undefined reference to 'Tpetra::MultiVector Kokkos::Compat::KokkosDeviceWrapperNode 
> >::getGlobalLength() const'
> /home/wyy/libs/candi/install/trilinos-release-13-2-0/lib/libstratimikosbelos.so:
>  
>
> error: undefined reference to 'Tpetra::MultiVector Kokkos::Compat::KokkosDeviceWrapperNode 
> >::getNumVectors() const'
> /home/wyy/libs/candi/install/trilinos-release-13-2-0/lib/libstratimikosbelos.so:
>  
>
> error: undefined reference to 'Tpetra::MultiVector Kokkos::Compat::KokkosDeviceWrapperNode 
> >::getVector(unsigned long) const'
> /home/wyy/libs/candi/install/trilinos-release-13-2-0/lib/libstratimikosamesos2.so:
>  
>
> error: undefined reference to 'Amesos2::KLU2 long long, Kokkos::Compat::KokkosDeviceWrapperNode Kokkos::HostSpace> >, Tpetra::MultiVector Kokkos::Compat::KokkosDeviceWrapperNode 
> > 
> >::KLU2(Teuchos::RCP Kokkos::Compat::KokkosDeviceWrapperNode 
> > 
> const>, Teuchos::RCP Kokkos::Compat::KokkosDeviceWrapperNode 
> > 
> >, Teuchos::RCP Kokkos::Compat::KokkosDeviceWrapperNode 
> > 
> const>)'
> /home/wyy/libs/candi/install/trilinos-release-13-2-0/lib/libstratimikosamesos2.so:
>  
>
> error: undefined reference to 
> 'Amesos2::TachoSolver int, long long, Kokkos::Compat::KokkosDeviceWrapperNode Kokkos::HostSpace> >, Tpetra::MultiVector Kokkos::Compat::KokkosDeviceWrapperNode 
> > 
> >::TachoSolver(Teuchos::RCP Kokkos::Compat::KokkosDeviceWrapperNode 
> > 
> const>, Teuchos::RCP Kokkos::Compat::KokkosDeviceWrapperNode 
> > 
> >, Teuchos::RCP Kokkos::Compat::KokkosDeviceWrapperNode 
> > 
> const>)'
> collect2: error: ld returned 1 exit status
>
> I must admit that I don't know enough about Trilinos and candi to say 
> where 
> the problem is -- maybe others have seen this problem before.
>
> Best
> WB
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



[deal.II] Re: Install scalapack and mumps fail with MPI fortran error

2023-02-17 Thread 'yy.wayne' via deal.II User Group
Sorry, I didn't mention that I'm using candi to install it.

On Friday, February 17, 2023 at 16:03:30 UTC+8, the following was written:

> I upgraded gcc, g++ and gfortran to 9.3.1 but the error remains.
> Another question is that SuperLUDist wasn't auto-enabled by Trilinos. Any hint 
> on this?
> [image: trilinos_candia_configure.png]
> Best, 
> Wayne
> On Thursday, February 16, 2023 at 19:49:19 UTC+8, the following was written:
>
>> Hello,
>>
>> I am trying to install deal.II on a CentOS virtual machine. It fails when 
>> installing scalapack and mumps, both related to MPI Fortran. Is it due to 
>> OpenMPI compatibility?
>>
>



[deal.II] Install scalapack and mumps fail with MPI fortran error

2023-02-16 Thread 'yy.wayne' via deal.II User Group
Hello,

I am trying to install deal.II on a CentOS virtual machine. It fails when 
installing scalapack and mumps, both related to MPI Fortran. Is it due to 
OpenMPI compatibility?

CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a <min>...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


-- The C compiler identification is GNU 9.3.1
-- The Fortran compiler identification is GNU 9.3.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/lib64/openmpi/bin/mpicc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - failed
-- Check for working Fortran compiler: /usr/lib64/openmpi/bin/mpif90
-- Check for working Fortran compiler: /usr/lib64/openmpi/bin/mpif90 - broken
CMake Error at 
/home/123456/libs/candi-master/candi_install/cmake-3.20.5-linux-x86_64/share/cmake-3.20/Modules/CMakeTestFortranCompiler.cmake:51
 (message):
  The Fortran compiler

"/usr/lib64/openmpi/bin/mpif90"

  is not able to compile a simple test program.

  It fails with the following output:

Change Dir: 
/home/123456/libs/candi-master/candi_install/tmp/build/scalapack-2.1.0/CMakeFiles/CMakeTmp

Run Build Command(s):/opt/rh/devtoolset-9/root/usr/bin/gmake -f Makefile 
cmTC_238a2/fast && /opt/rh/devtoolset-9/root/usr/bin/gmake  -f 
CMakeFiles/cmTC_238a2.dir/build.make CMakeFiles/cmTC_238a2.dir/build
gmake[1]: Entering directory 
'/home/123456/libs/candi-master/candi_install/tmp/build/scalapack-2.1.0/CMakeFiles/CMakeTmp'
Building Fortran object CMakeFiles/cmTC_238a2.dir/testFortranCompiler.f.o
/usr/lib64/openmpi/bin/mpif90   -g -O3 -fallow-argument-mismatch  -fPIE -c 
/home/123456/libs/candi-master/candi_install/tmp/build/scalapack-2.1.0/CMakeFiles/CMakeTmp/testFortranCompiler.f
 -o CMakeFiles/cmTC_238a2.dir/testFortranCompiler.f.o
gfortran: error: unrecognized command line option 
‘-fallow-argument-mismatch’; did you mean ‘-Wno-argument-mismatch’?
gmake[1]: *** [CMakeFiles/cmTC_238a2.dir/build.make:78: 
CMakeFiles/cmTC_238a2.dir/testFortranCompiler.f.o] Error 1
gmake[1]: Leaving directory 
'/home/123456/libs/candi-master/candi_install/tmp/build/scalapack-2.1.0/CMakeFiles/CMakeTmp'
gmake: *** [Makefile:127: cmTC_238a2/fast] Error 2



  

  CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
  CMakeLists.txt:2 (project)


-- Configuring incomplete, errors occurred!
See also 
"/home/123456/libs/candi-master/candi_install/tmp/build/scalapack-2.1.0/CMakeFiles/CMakeOutput.log".
See also 
"/home/123456/libs/candi-master/candi_install/tmp/build/scalapack-2.1.0/CMakeFiles/CMakeError.log".-- The C compiler identification is GNU 9.3.1
-- The Fortran compiler identification is GNU 9.3.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/lib64/openmpi/bin/mpicc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Check for working Fortran compiler: /usr/lib64/openmpi/bin/mpif90 - skipped
-- Checking whether /usr/lib64/openmpi/bin/mpif90 supports Fortran 90
-- Checking whether /usr/lib64/openmpi/bin/mpif90 supports Fortran 90 - yes
-- checking that C and Fortran compilers can link
-- checking that C and Fortran compilers can link - OK
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE  
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.27.1") 
-- Checking for one of the modules 'lapack-netlib;lapack'
-- Checking for one of the modules 'blas-netlib;blas'
-- Performing Test LAPACK_real64_links
-- Performing Test LAPACK_real64_links - Success
-- Performing Test LAPACK_real32_links
-- Performing Test LAPACK_real32_links - Success
-- Found LAPACK: /usr/lib64/liblapack.so;/usr/lib64/libblas.so  found 
components: Netlib 
-- Checking for one of the modules 'ompi-c'
-- Performing Test MPI_C_links
-- Performing Test MPI_C_links - Success
-- Checking for one of the modules 'ompi-fort'
-- Performing Test 

Re: [deal.II] Convergence failure in step 0.

2023-02-14 Thread 'yy.wayne' via deal.II User Group
I'll do some further tests. Your advice on locating the NaN error really helps 
me. Thank you Wolfgang :)

Best,
Wayne
On Wednesday, February 15, 2023 at 12:17:44 UTC+8, the following was written:

> On 2/14/23 19:04, 'yy.wayne' via deal.II User Group wrote:
> > 
> > Yes the diagonal could be negative when k rises. Will Chebyshev 
> preconditioner 
> > smoother converge
> > when there are negative entries?
>
> I have no idea :-)
>
>
> > In my case k hasn't rise so large that diagonal entries is negative. The 
> > negative entries shows up only
> > when compute_normal_flux constraints is applied. I think 
> > AffineConstraints::distribute_local_to_global 
> > <https://www.dealii.org/current/doxygen/deal.II/classAffineConstraints.html#a373fbdacd8c486e675b8d2bff8943192>
> > is the cause since in matrix-based assemble, this function add a 
> positive 
> > value on the constrained
> > dofs while matrix-free does not. Therefore the diagonals are different 
> between 
> > matrix-free and matrix-based.
>
> That's possible. I don't know enough about the matrix-free framework to 
> tell 
> one way or the other, but maybe some of my colleagues here will know.
>
> Best
> W.
>
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



Re: [deal.II] Convergence failure in step 0.

2023-02-14 Thread 'yy.wayne' via deal.II User Group
Hi Wolfgang,

Yes, the diagonal could be negative when k rises. Will the Chebyshev 
preconditioner/smoother converge 
when there are negative entries?

In my case k hasn't risen so large that the diagonal entries become negative. The 
negative entries show up only
when the compute_normal_flux constraints are applied. I think 
AffineConstraints::distribute_local_to_global 
<https://www.dealii.org/current/doxygen/deal.II/classAffineConstraints.html#a373fbdacd8c486e675b8d2bff8943192:~:text=Thus,%20if%20a%20degree,in%20the%20global%20matrix.>
is the cause, since in the matrix-based assembly this function adds a positive 
value on the constrained 
dofs while the matrix-free path does not. Therefore the diagonals are different 
between matrix-free and matrix-based.

Best,
Wayne

On Wednesday, February 15, 2023 at 00:39:41 UTC+8, the following was written:

> On 2/14/23 07:15, 'yy.wayne' via deal.II User Group wrote:
> > The weak problem I'm solving is   +  - 
> > k^2 = RHS, which is vector Maxwell wave equation (for nodal basis).
> > [...]
> > 1) Is it reasonable the diagonal computed by MatrixFree operator with 
> > compute_normal_flux() has negative entries?
>
> The diagonal entries of the matrix, for this bilinear form, are 
> integrals of the form
>
> A_ii = \int (curl phi_i)*(curl phi_i) + (div phi_i)*(div phi_i)
> - k^2 phi_i * phi_i
>
> In other words, A_ii is a sum of squares, but one of the squares is 
> negative. If k is small, the sum may still be positive, but for sure if 
> k is large, the diagonal entries of A will be negative.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] Convergence failure in step 0.

2023-02-14 Thread 'yy.wayne' via deal.II User Group
Thanks for your reply Martin.

The reason my Chebyshev preconditioner breaks is that the (inverse) diagonal 
vector I get from the MatrixFree operator contains negative entries. 
In my corresponding matrix-based code with exactly the same geometry and 
boundary conditions, the matrix diagonal has no negative entries.
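
(This is roughly how I check the level diagonals; a rough sketch only, assuming
the inverse diagonal is available as in the MatrixFreeOperators framework and
that mg_matrices, min_level and max_level are names from my own code:)

for (unsigned int level = min_level; level <= max_level; ++level)
  {
    // DiagonalMatrix holding 1/diag(A_level), as filled by compute_diagonal()
    const auto &inverse_diagonal =
      mg_matrices[level].get_matrix_diagonal_inverse()->get_vector();

    for (const double entry : inverse_diagonal)
      if (!numbers::is_finite(entry) || entry <= 0.)
        std::cout << "suspicious inverse diagonal entry " << entry
                  << " on level " << level << std::endl;
  }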

I apply a "compute_nonzero_tangential_flux_constraints_on_level" function 
similar to the normal_flux one. The geometry of this problem is a shell. 
When that BC is applied on outer boundary there's no negative entries, but 
the error comes when applied on inner boundary. I'm still debugging to see 
where the error comes from.

I also have a question: does FEEvaluation automatically deal with constraints? 
It seems a matrixfree_operator.vmult solution 
equals matrixbased_matrix.vmult + AffineConstraints.distribute.
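
(A rough sketch of the comparison I have in mind; matrixfree_operator,
matrixbased_matrix and constraints are names from my own code, not deal.II,
and the observed agreement is just what I see in my tests:)

LinearAlgebra::distributed::Vector<double> src, dst_mf, dst_mb;
matrixfree_operator.initialize_dof_vector(src);
matrixfree_operator.initialize_dof_vector(dst_mf);
matrixfree_operator.initialize_dof_vector(dst_mb);
src = 1.;

matrixfree_operator.vmult(dst_mf, src);   // matrix-free operator application

matrixbased_matrix.vmult(dst_mb, src);    // assembled matrix application
constraints.distribute(dst_mb);           // then fill in constrained entries

// In my tests dst_mf and dst_mb agree up to roundoff, which is what made me
// wonder whether FEEvaluation handles the constraints automatically.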

Best
Wayne

On Tuesday, February 14, 2023 at 15:33:18 UTC+8, the following was written:

> To extend over what Wolfgang said, the most likely causes for failure with 
> the Chebyshev preconditioner are:
>
> - the vector supplying the matrix diagonal (via the field 
> AdditionalData::preconditioner) contains Inf or NaN entries, which in turn 
> result from either a ill-formed differential operator or a bug in the code 
> computing the diagonal, or
>
> - a coarse level holding only constrained degrees of freedom, which are 
> not treated properly so that the SolverCG making an eigenvalue estimate 
> needs to work with an unsolvable linear system (right hand side not in the 
> image of the matrix).
>
> For the first case, you would check the diagonal on each level and the 
> code leading to that, in the second the treatment of constraints.
>
> Best,
> Martin
> On 14.02.23 06:33, 'yy.wayne' via deal.II User Group wrote:
>
> Thank you Wolfgang. I didn't dive much into the Chebyshev preconditioner 
> because I'm not familiar with it.  
> I assumed it was because the CG solver inside the Chebyshev preconditioner didn't 
> converge, or something like that.
> But as you stated, if NaN can only come from NaN elements or division by 
> zero, then I may find the error. 
> I'll follow your debugging suggestion.   
>
> Best
> Wayne
>
> On Tuesday, February 14, 2023 at 12:48:42 UTC+8, the following was written:
>
>>
>> > It's strange that the iteration breaks at step 0. If I try right 
>> > preconditioning, it breaks at step 1 with nan. Besides, if there is 
>> only on 
>> > multigrid level, the problem is solved correctly with 1 iteration, so 
>> matrices 
>> > should be correct. The coarse solution is good enough for smoothing. 
>> > 
>> > The question is, in which case may a CG or GMRES solver breaks at step 
>> 0 with nan? 
>>
>> NaN errors, like segmentation faults, are relatively easy to find because 
>> you 
>> know that every NaN you get is wrong. So you check that the matrix and 
>> right 
>> hand side and solution vector you give to the solver don't have any NaNs. 
>> If 
>> they don't, and you get NaNs out, then the problem likely lies with the 
>> preconditioner, so you single step through the solver around the place 
>> where 
>> the preconditioner is applied and check in which operation the NaN 
>> appears. 
>>
>> In your case, you seem to have already found out that it's the Chebyshev 
>> preconditioner. The question is why. Is it because the matrix the 
>> preconditioner uses has NaNs in it? Is it because the preconditioner 
>> makes an 
>> assumption that it can divide by certain matrix entries, but they are 
>> zero? 
>> Single stepping in a debugger will give you the answer :-) 
>>
>> Best 
>> W. 
>>
>> -- 
>>  
>> Wolfgang Bangerth email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>>
>
>



Re: [deal.II] Convergence failure in step 0.

2023-02-13 Thread 'yy.wayne' via deal.II User Group
Thank you Wolfgang. I didn't dive much into the Chebyshev preconditioner 
because I'm not familiar with it. 
I assumed it was because the CG solver inside the Chebyshev preconditioner didn't 
converge, or something like that.
But as you stated, if NaN can only come from NaN elements or division by 
zero, then I may find the error. 
I'll follow your debugging suggestion.  

Best
Wayne

On Tuesday, February 14, 2023 at 12:48:42 UTC+8, the following was written:

>
> > It's strange that the iteration breaks at step 0. If I try right 
> > preconditioning, it breaks at step 1 with nan. Besides, if there is only 
> on 
> > multigrid level, the problem is solved correctly with 1 iteration, so 
> matrices 
> > should be correct. The coarse solution is good enough for smoothing.
> > 
> > The question is, in which case may a CG or GMRES solver breaks at step 0 
> with nan?
>
> NaN errors, like segmentation faults, are relatively easy to find because 
> you 
> know that every NaN you get is wrong. So you check that the matrix and 
> right 
> hand side and solution vector you give to the solver don't have any NaNs. 
> If 
> they don't, and you get NaNs out, then the problem likely lies with the 
> preconditioner, so you single step through the solver around the place 
> where 
> the preconditioner is applied and check in which operation the NaN appears.
>
> In your case, you seem to have already found out that it's the Chebyshev 
> preconditioner. The question is why. Is it because the matrix the 
> preconditioner uses has NaNs in it? Is it because the preconditioner makes 
> an 
> assumption that it can divide by certain matrix entries, but they are 
> zero? 
> Single stepping in a debugger will give you the answer :-)
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



[deal.II] Convergence failure in step 0.

2023-02-13 Thread 'yy.wayne' via deal.II User Group
Hi,
I'm solving a multi-component problem in the MatrixFree framework, 
preconditioned by geometric multigrid. The code runs well for several cases 
except one (with a different geometry and boundary condition). The error is 
"Iterative method reported convergence failure in step 0. 
 The residual in the last step was nan".
It's strange that the iteration breaks at step 0. If I try right 
preconditioning, it breaks at step 1 with NaN. Besides, if there is only one 
multigrid level, the problem is solved correctly with 1 iteration, so the 
matrices should be correct. The coarse solution is good enough for 
smoothing.

The question is, in which case may a CG or GMRES solver break at step 0 
with NaN?
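
(A rough sketch of the kind of pre-solve sanity check suggested in the replies
above; system_rhs and solution are names from my own code:)

// An l2_norm() that is NaN or Inf flags a bad entry somewhere in the vector.
AssertThrow(numbers::is_finite(system_rhs.l2_norm()),
            ExcMessage("right-hand side contains NaN/Inf entries"));
AssertThrow(numbers::is_finite(solution.l2_norm()),
            ExcMessage("initial guess contains NaN/Inf entries"));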



[deal.II] Re: Monitor memory usage of the program

2023-02-12 Thread 'yy.wayne' via deal.II User Group
Thank you Bruno!

On Wednesday, February 8, 2023 at 02:23:35 UTC+8, the following was written:

> Hello,
>
> Personally, I prefer to use profilers when monitoring the memory usage. 
> This is because what really matters is how much memory the operating system 
> is allocating for your code. For multiple reasons this can be different from 
> what your code requires.
>
> Best,
>
> Bruno
>
> On Tuesday, February 7, 2023 at 2:11:38 AM UTC-5 yy.wayne wrote:
>
>> Hello everyone,
>> I am trying to monitor the memory consumption of my program.
>> Most classes provide a memory_consumption() function to realize this 
>> demand.
>> However, such a function is missing for 2 processes:
>> 1) the memory of an LU-factorized matrix (a UMFPACK factorization like 
>> SparseDirectUMFPACK, or a KLU solver like TrilinosWrappers::SolverDirect);
>> 2) the memory consumption of an iterative solver (CG or GMRES).
>>
>> Is there a recommended way to track memory usage during these 2 
>> processes?
>>
>>



[deal.II] Monitor memory usage of the program

2023-02-06 Thread 'yy.wayne' via deal.II User Group
Hello everyone,
I am trying to monitor the memory consumption of my program.
Most classes provide a memory_consumption() function for this purpose.
However, such a function is missing for 2 processes:
1) the memory of an LU-factorized matrix (a UMFPACK factorization like 
SparseDirectUMFPACK, or a KLU solver like TrilinosWrappers::SolverDirect);
2) the memory consumption of an iterative solver (CG or GMRES).

Is there a recommended way to track memory usage during these 2 processes?
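
(As a workaround, a rough sketch of what I could do with the OS-level memory
statistics deal.II exposes; taking a snapshot before and after the
factorization or solve is my own bookkeeping idea, not a library
recommendation:)

#include <deal.II/base/utilities.h>

#include <iostream>

void report_memory_growth()
{
  // VmPeak/VmRSS etc. are reported in kB, as read from the operating system.
  dealii::Utilities::System::MemoryStats stats_before, stats_after;
  dealii::Utilities::System::get_memory_stats(stats_before);

  // ... e.g. SparseDirectUMFPACK::factorize(matrix), or the CG/GMRES solve ...

  dealii::Utilities::System::get_memory_stats(stats_after);
  std::cout << "resident memory grew by "
            << (stats_after.VmRSS - stats_before.VmRSS) << " kB" << std::endl;
}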



Re: [deal.II] Interpolate H1 shape function to H1curl shape function

2023-01-18 Thread 'yy.wayne' via deal.II User Group
Thanks for your reply.

I thought about this problem over the past month. The fundamental cause of 
the difference lies in the different 
spaces built by the Nedelec and CG basis functions. Nedelec elements create a 
singularity at the stage corner 
while CG does not. The visualization shows what the solution is. There is a defect 
for certain geometries when simulating 
EM waves with CG elements.
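
(Following the suggestion in the quoted reply below, this is roughly how I
would write both discretizations into one file and let the visualization
program form their difference; dof_handler_q, dof_handler_nedelec, solution_q
and solution_nedelec are names of mine, with both DoFHandlers living on the
same triangulation:)

DataOut<dim> data_out;

std::vector<DataComponentInterpretation::DataComponentInterpretation>
  interpretation(dim, DataComponentInterpretation::component_is_part_of_vector);

data_out.add_data_vector(dof_handler_q, solution_q,
                         std::vector<std::string>(dim, "E_nodal"),
                         interpretation);
data_out.add_data_vector(dof_handler_nedelec, solution_nedelec,
                         std::vector<std::string>(dim, "E_edge"),
                         interpretation);

data_out.build_patches(3); // subdivide so the higher-order solution is visible
std::ofstream output("comparison.vtu");
data_out.write_vtu(output);
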
On Wednesday, January 18, 2023 at 11:10:57 UTC+8, the following was written:

>
> > I'm solving a 3-d vector Maxwell's wave equation problem with nodal 
> basis 
> > functions. To overcome spurious solution and singularity, the 
> "regularized 
> > Maxwell equation 
> > <https://www.researchgate.net/publication/233149933_Regularized_Maxwell_Equations_and_Nodal_Finite_Elements_for_Electromagnetic_Field_Computations>"
>  
> is applied. The question is, I cannot judge if my solution is same as the 
> one solved with Nedelec elements (shown in the figure, FE_Q(3) on the left 
> with "Nonlinear subdivision Level" and Nedelec(0) on the right) from 
> visualization. Even if the results are identical, FE_Q has dof on nodes and 
> FE_Nedelec has dof on edge, so FE_Q's visualization results always  have 
> a asymptotic effect around singularity on the central edge. Is there a away 
> to display nodal-based result as edge-based? FETools::interpolate does not 
> support this because Nedelec is non-primitive.
>
> I think it took all of us a month to get back to your question because 
> it's 
> not quite clear what it is you want to do. If I understand correctly, you 
> want 
> to *compare* the two solutions somehow, and for that you tried (i) to 
> interpolate the Nedelec solution to the FE_Q space and (ii) to interpolate 
> the 
> FE_Q solution onto the Nedelec space.
>
> The first of these does not immediately work because the Nedelec solution 
> is, 
> in general, discontinuous at nodes and so the interpolation operator is 
> not 
> easily defined. The second of these does not immediately work because 
> FETools::interpolate() does not work for Nedelec target elements.
>
> But in the end, what that points out is that it isn't quite clear to me 
> what 
> you are trying to do. The two solutions live in different function spaces 
> that 
> are not nested, and so interpolation in either direction can not preserve 
> the 
> solution: Even if FETools::interpolate() would work, the FE_Q solution 
> would 
> not be equal to the interpolated function.
>
> As a consequence, I don't quite know what you would learn from comparing 
> the 
> interpolated FE_Q solution and the Nedelec solution. My suggestion would 
> be to 
> treat both the FE_Q solution and the Nedelec solution as L2 functions 
> (namely, 
> piecewise polynomials, possibly discontinuous). Then compare them as such. 
> In 
> fact, that would be easy enough if you just output both in a graphical 
> format 
> via DataOut and show both solutions at the same time; typical 
> visualization 
> programs also allow you to define derived variables that can, for example, 
> be 
> the difference between the two solutions which will help you understand 
> how 
> the two differ.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



Re: [deal.II] Differences between cell_iterators() and mg_cell_iterators()

2023-01-03 Thread 'yy.wayne' via deal.II User Group
That's great. Thank you.

On Wednesday, January 4, 2023 at 03:32:48 UTC+8, the following was written:

> This is not documented well and the doxygen types shown are not very
> helpful. In the end it boils down to the following:
> *_mg_* functions return level or multigrid iterators that have an
> accessor (that is inside the iterator) with the template argument
> level_dof_access = true. The only difference, if I remember correctly,
> is that get_active_or_mg_dof_indices() will return the result of
> get_mg_dof_indices() instead of get_dof_indices(). So, if you use
> level iterators, you can use get_active_or_mg_dof_indices() in your
> assembly and it will automatically do the right thing. This means you
> can use the same assembler for active and mg levels.
>
> See 
> https://www.dealii.org/developer/doxygen/deal.II/classDoFCellAccessor.html#a15ef35e919b1005dc6050f9bca96956d
>
> On Sun, Jan 1, 2023 at 8:45 AM 'yy.wayne' via deal.II User Group
>  wrote:
> >
> > I'm a bit confused on the effect of cell_iterators (similar ones are 
> cell_iterators_on_level and dof_handler.begin()) and mg_cell_iterators. In 
> step16, begin_mg() is used when assembling multigrid, while in step56 we 
> use cell_iterators rather than mg_cell_iterators. I think they both loop 
> through all cells, active or not. In documentation the difference is 
> mg_cell_iterators "return an iterator in their level-cell form". May some 
> expert explain this a little further?
> >
>
>
>
> -- 
> Timo Heister
> http://www.math.clemson.edu/~heister/
>



[deal.II] Differences between cell_iterators() and mg_cell_iterators()

2023-01-01 Thread 'yy.wayne' via deal.II User Group
I'm a bit confused about the effect of cell_iterators (similar ones are 
cell_iterators_on_level and dof_handler.begin()) and mg_cell_iterators. In 
step-16, begin_mg() is used when assembling the multigrid, while in step-56 we 
use cell_iterators rather than mg_cell_iterators. I think they both loop 
through all cells, active or not. In the documentation the difference is that 
mg_cell_iterators "return an iterator in their level-cell form".  Could some 
expert explain this a little further?
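
(A rough sketch of what I take away from Timo's reply above: the same local
assembly can be reused for active and level cells because
get_active_or_mg_dof_indices() dispatches to the right index set. dof_handler,
fe, level and local_assemble() are names/placeholders of mine:)

std::vector<types::global_dof_index> dof_indices(fe.n_dofs_per_cell());

// Level (multigrid) cells: get_active_or_mg_dof_indices() -> get_mg_dof_indices()
for (const auto &cell : dof_handler.mg_cell_iterators_on_level(level))
  if (cell->is_locally_owned_on_level())
    {
      cell->get_active_or_mg_dof_indices(dof_indices);
      // local_assemble(cell, dof_indices);
    }

// Active cells: get_active_or_mg_dof_indices() -> get_dof_indices()
for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    {
      cell->get_active_or_mg_dof_indices(dof_indices);
      // local_assemble(cell, dof_indices);
    }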



[deal.II] Re: Boundary conditions besides Dirichlet BC for Multigrid

2022-12-20 Thread 'yy.wayne' via deal.II User Group
Wow, that's wonderful.
Thank you Peter. You saved me a lot of time.

On Wednesday, December 21, 2022 at 15:37:12 UTC+8, the following was written:

> Hi Wayne,
>
> you can attach your own constraints as shown here: 
> https://github.com/dealii/dealii/blob/2376c62a4b89953b74801852983eb8556930d54d/tests/numerics/no_flux_18.cc#L1136-L1146
>
> Hope this helps!
> Peter
>
> On Wednesday, 21 December 2022 at 02:44:42 UTC+1 yy.wayne wrote:
>
>> Hi guys, I have a question on setting the boundary constraints for 
>> mg_constrained_dofs object.
>> In many problems where the only boundary conditions(for 
>> AffineConstraints) are Dirichlet BC and hanging nodes, mg_constrained knows 
>> them by MGConstrainedDoFs::make_zero_boundary_constraints(the Dirichlet 
>> boundary id, not necessarily zero) and 
>> MGConstrainedDoFs::initialize(dof_handler)(the hanging nodes), respectively.
>> However, in case other BC like compute_normal_flux_constraints or 
>> compute_no_normal_flux_constraints exists, mg_constrained_dofs has no way 
>> to include it. Therefore the MGLevelObject mg_matrices is intrinsically 
>> different to global system_matrix. Then mg_matrices[0] cannot be a good 
>> approximation of coarsest grid. 
>> Is there some hint for such cases?
>>
>



[deal.II] Re: Boundary conditions besides Dirichlet BC for Multigrid

2022-12-20 Thread 'yy.wayne' via deal.II User Group
An alternative choice is applying the boundary conditions weakly (during 
assembly). Will a weakly imposed boundary condition behave worse than one imposed 
explicitly through the AffineConstraints class, when it is not a Dirichlet BC?
On Wednesday, December 21, 2022 at 09:44:42 UTC+8, the following was written:

> Hi guys, I have a question on setting the boundary constraints for 
> mg_constrained_dofs object.
> In many problems where the only boundary conditions(for AffineConstraints) 
> are Dirichlet BC and hanging nodes, mg_constrained knows them by 
> MGConstrainedDoFs::make_zero_boundary_constraints(the Dirichlet boundary 
> id, not necessarily zero) and 
> MGConstrainedDoFs::initialize(dof_handler)(the hanging nodes), respectively.
> However, in case other BC like compute_normal_flux_constraints or 
> compute_no_normal_flux_constraints exists, mg_constrained_dofs has no way 
> to include it. Therefore the MGLevelObject mg_matrices is intrinsically 
> different to global system_matrix. Then mg_matrices[0] cannot be a good 
> approximation of coarsest grid. 
> Is there some hint for such cases?
>



[deal.II] Boundary conditions besides Dirichlet BC for Multigrid

2022-12-20 Thread 'yy.wayne' via deal.II User Group
Hi guys, I have a question on setting the boundary constraints for the 
mg_constrained_dofs object.
In many problems where the only boundary conditions (for AffineConstraints) 
are Dirichlet BCs and hanging nodes, mg_constrained_dofs knows about them via 
MGConstrainedDoFs::make_zero_boundary_constraints() (for the Dirichlet boundary 
ids, where the values are not necessarily zero) and 
MGConstrainedDoFs::initialize(dof_handler) (for the hanging nodes), respectively.
However, in case other BCs like compute_normal_flux_constraints or 
compute_no_normal_flux_constraints exist, mg_constrained_dofs has no way 
to include them. Therefore the MGLevelObject mg_matrices is intrinsically 
different from the global system_matrix. Then mg_matrices[0] cannot be a good 
approximation on the coarsest grid. 
Is there some hint for such cases?
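
(If I read the test linked in Peter's reply above correctly, additional
per-level constraints can be attached to MGConstrainedDoFs; a rough sketch,
where the way the level normal-flux constraints are actually computed is left
as a comment, and dirichlet_id is a placeholder of mine:)

MGConstrainedDoFs mg_constrained_dofs;
mg_constrained_dofs.initialize(dof_handler);
mg_constrained_dofs.make_zero_boundary_constraints(dof_handler, {dirichlet_id});

for (unsigned int level = 0; level < triangulation.n_global_levels(); ++level)
  {
    AffineConstraints<double> user_constraints;
    // ... fill user_constraints with the normal-flux (or other) conditions
    //     restricted to this level, as the linked no_flux_18 test does ...
    user_constraints.close();
    mg_constrained_dofs.add_user_constraints(level, user_constraints);
  }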



Re: [deal.II] Importing .msh file from CUBIT

2022-12-07 Thread 'yy.wayne' via deal.II User Group
I never noticed that limitation before. Thank you.

On Wednesday, December 7, 2022 at 22:54:57 UTC+8, the following was written:

> That error is independent of the mesh format that you are using. deal.II 
> does not support boundary id on internal faces. I agree that Cubit is 
> better for hex meshes but I think most people use GMSH because it's easier 
> to get. With Cubit, you can also export the file as an ExodusII file and we 
> use Trilinos to read that format so that may give you better support across 
> Cubit versions. This doesn't change the fact that deal.II does not support 
> boundary id on internal faces.
>
> Bruno
>
> Le mer. 7 déc. 2022 à 09:32, 'yy.wayne' via deal.II User Group <
> dea...@googlegroups.com> a écrit :
>
>> I see ... I forgot that Cubit and Coreform Cubit are different; the 
>> former is for government use and the latter for commercial & academic use.
>> I can import an .inp mesh exported from Coreform Cubit when no SideSets 
>> are assigned. With a SideSet it gives the following error: 
>> [image: Snipaste_2022-12-07_22-27-05.png]
>> Cubit is easier to use than GMSH when generating hex meshes and saves a 
>> lot of time. Any recommended open-source software to generate 
>> hex meshes that deal.II can read?
>> On Wednesday, December 7, 2022 at 22:18:10 UTC+8, the following was written:
>>
>>> I don't know what's the relationship between Trelis and Cubit. I got 
>>> Cubit directly from Sandia https://cubit.sandia.gov/ but it's not 
>>> available to everybody. It looks like Coreform has its own versioning that 
>>> doesn't match the one from Sandia :-( Try something that's from 2021 or 
>>> older.
>>>
>>> Best,
>>>
>>> Bruno
>>>
>>> Le mer. 7 déc. 2022 à 05:20, 'yy.wayne' via deal.II User Group <
>>> dea...@googlegroups.com> a écrit :
>>>
>>>> Sorry but I want to add a question here. What's the recommanded  way to 
>>>> import a mesh from cubit to deal.ii?
>>>> I tried read_abaqus and read_ucd with following error:
>>>> [image: Snipaste_2022-12-07_18-14-08.png]
>>>> I‘m using Cubit 2022.4. In deal.II documentation it supports Cubit 11.x 
>>>> to 15.x, does it mean Trelis 15.x? [Cubit download 
>>>> <https://coreform.com/products/downloads/>]
>>>> At least in Trelis 16.5 there's no "Export using Cubit ID's" to uncheck.
>>>>
>>>> On Wednesday, December 7, 2022 at 00:30:51 UTC+8, the following was written:
>>>>
>>>>> On 12/6/22 08:36, HIMAL MAGAR wrote: 
>>>>> > I am trying to import .msh file that I have created in CUBIT in 
>>>>> step-17. 
>>>>> > However, the code doesn't read the file and produces following 
>>>>> error: 
>>>>>
>>>>> The error happens in read_ucd(), but that is clearly the wrong 
>>>>> function for 
>>>>> reading .msh files: You need to use read_msh(). 
>>>>>
>>>>> That said, the error happens at the very top of the function. I 
>>>>> believe that 
>>>>> you are trying to read from a file that doesn't exist or can't be 
>>>>> opened. Did 
>>>>> you spell the file name and directory name correctly in your program? 
>>>>> Does the 
>>>>> file you are trying to read from actually exist? 
>>>>>
>>>>> Best 
>>>>> W. 
>>>>>
>>>>> -- 
>>>>>  
>>>>>
>>>>> Wolfgang Bangerth email: bang...@colostate.edu 
>>>>> www: http://www.math.colostate.edu/~bangerth/ 
>>>>>
>>>>>

[deal.II] Re: Electromagnetic problem in 3d

2022-12-05 Thread 'yy.wayne' via deal.II User Group
Thank you Abbas! That's wonderful.

On Monday, December 5, 2022 at 23:24:12 UTC+8, the following was written:

> Here you go and good luck. 
>
> On Monday, December 5, 2022 at 4:52:22 PM UTC+2 yy.wayne wrote:
>
>> I'm extending step-81, the time-harmonic Maxwell problem, into 3d; however 
>> the result is
>> not correct. Both Nedelec and NedelecSZ elements were tested. All 
>> physical 
>> formulas are kept the same in 2d and 3d, with an absorbing boundary 
>> condition rather than a 
>> PML, but the 2d test is correct even with only a short distance to the boundary 
>> (1.5 wavelengths) and low 
>> resolution (6 cells per wavelength). Is there a tutorial on EM in 3 dimensions?
>>
>



[deal.II] Electromagnetic problem in 3d

2022-12-05 Thread 'yy.wayne' via deal.II User Group
I'm extending step-81, the time-harmonic Maxwell problem, into 3d; however, 
the result is
not correct. Both Nedelec and NedelecSZ elements were tested. All 
physical 
formulas are kept the same in 2d and 3d, with an absorbing boundary condition 
rather than a 
PML, but the 2d test is correct even with only a short distance to the boundary 
(1.5 wavelengths) and low 
resolution (6 cells per wavelength). Is there a tutorial on EM in 3 dimensions?

/* -
 *
 * Copyright (C) 2021 - 2022 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE.md at
 * the top level directory of deal.II.
 *
 * -

 *
 * Author: Marc Fehling, Colorado State University, 2021
 * Peter Munch, Technical University of Munich and Helmholtz-Zentrum
 *  hereon, 2021
 * Wolfgang Bangerth, Colorado State University, 2021
 */

#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 

#include 
#include 

#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 

#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 

#include 
#include 
#include 

// For load balancing we will assign individual weights on cells, and for that
// we will use the class parallel::CellWeights.
#include 

// The solution function requires a transformation from Cartesian to polar
// coordinates. The GeometricUtilities::Coordinates namespace provides the
// necessary tools.
#include 
#include 

// The following include files will enable the MatrixFree functionality.
#include 
#include 
#include 

// We will use LinearAlgebra::distributed::Vector for linear algebra operations.
#include 

// We are left to include the files needed by the multigrid solver.
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#define OMEGA 1.0/4*M_PI*1e9
#define C_CONST 3e8

namespace Nodal_EM
{
  using namespace dealii;
  const unsigned int fe_degree = 0;


  template <int dim>
  Tensor<1, dim, std::complex<double>> J_a_func(const Point<dim> &point,
                                                types::material_id /*id*/)
  {
    Tensor<1, dim, std::complex<double>> J_a;
    double dipole_radius = 0.2;
    //  Point<dim> dipole_position(0., 0., 0.);
    //  Tensor<1, dim> dipole_orientation{{0., 0., 1.}};
    Point<dim> dipole_position(0., 0.);
    Tensor<1, dim> dipole_orientation{{0., 1.}};
    std::complex<double> dipole_strength = 1;

    const auto distance = (dipole_position - point).norm() / dipole_radius;
    if (distance > 1.)
      return J_a;
    double scale = std::cos(distance * M_PI / 2.) *
                   std::cos(distance * M_PI / 2.) / (M_PI / 2. - 2. / M_PI) /
                   dipole_radius / dipole_radius;
    J_a = dipole_strength * dipole_orientation * scale;
    return J_a;
  }

  template <int dim>
  DEAL_II_ALWAYS_INLINE inline Tensor<1, dim, std::complex<double>>
  tangential_part(const Tensor<1, dim, std::complex<double>> &tensor,
                  const Tensor<1, dim> &                      normal)
  {
    auto result = tensor;
    result[0]   = normal[1] * (tensor[0] * normal[1] - tensor[1] * normal[0]);
    result[1]   = -normal[0] * (tensor[0] * normal[1] - tensor[1] * normal[0]);
    // result in 2d is -n x n x tensor, with the z component = 0.
    return result;

    // result[0] = normal[1] * tensor[2] - normal[2] * tensor[1];
    // result[1] = -(normal[0] * tensor[2] - normal[2] * tensor[0]);
    // result[2] = normal[0] * tensor[1] - normal[1] * tensor[0];
    // return result;
  }

  template <int dim>
  DEAL_II_ALWAYS_INLINE inline Tensor<1, dim, std::complex<double>>
  cross(const Tensor<1, dim> &                      normal,
        const Tensor<1, dim, std::complex<double>> &tensor)
  {
    auto result = tensor;
    // result[0]   = normal[1] * (tensor[0] * normal[1] - tensor[1] * normal[0]);
    // result[1]   = -normal[0] * (tensor[0] * normal[1] - tensor[1] * normal[0]);
    // return result;
    if(dim 

[deal.II] Re: MatrixFree support for Nedelec element in deal.II

2022-12-05 Thread 'yy.wayne' via deal.II User Group
Thanks Abbas,

I considered the DG method, because DG allows discontinuities and 
makes it easier to implement boundary conditions than FE_Q. 
However, DG requires more DoFs than continuous finite elements, especially in 
the frequency domain. 

Best,
Wayne

On Monday, December 5, 2022 at 22:35:12 UTC+8, the following was written:

> Maybe you can look into using DG for discretizing the problem? You're 
> already using penalty terms so might as well. 
> One thing I am afraid of is that Nedelec elements inherently satisfy the 
> div free condition, while DG might need careful treatment or your 
> preconditioner would fail at high frequency. 
> Just my 2 cents. Maybe you know someone else who can help you with this.   
>
>
> On Monday, December 5, 2022 at 4:29:35 PM UTC+2 yy.wayne wrote:
>
>> Hi deal.II community,
>>
>> I'm working on MatrixFree electromagnetic problems with deal.II. For 2d 
>> TE waves FE_Q element works good, but for cases like in step-81 and in 3d, 
>> Nedelec element in the appropriate one for EM.
>>
>> I posted a discussion about this 3 months ago. May I ask what's the 
>> current state of MatrixFree support for Nedelec elements in deal.II? 
>> Implementing MatrixFree Nedelec is quite difficult for me. Currently I plan 
>> to solve 3d EM problem with FE_Q elements with some penalty terms, but of 
>> course that's not a nice policy because of spurious solution and 
>> singularity when solving EM numerically with nodal elements.
>>
>



[deal.II] MatrixFree support for Nedelec element in deal.II

2022-12-05 Thread 'yy.wayne' via deal.II User Group
Hi deal.II community,

I'm working on MatrixFree electromagnetic problems with deal.II. For 2d TE 
waves the FE_Q element works well, but for cases like the one in step-81 and in 3d, 
the Nedelec element is the appropriate one for EM.

I posted a discussion about this 3 months ago. May I ask what the current 
state of MatrixFree support for Nedelec elements in deal.II is? Implementing 
MatrixFree Nedelec is quite difficult for me. Currently I plan to solve the 3d 
EM problem with FE_Q elements and some penalty terms, but of course that's 
not a nice approach because of spurious solutions and singularities when solving 
EM numerically with nodal elements.



Re: [deal.II] How to do DoFRenumbering::component_wise for multilevel grid

2022-11-28 Thread 'yy.wayne' via deal.II User Group
I was debugging in debug mode. I just changed make to release mode and the 
error is gone! Why does running in debug mode 
cause this error?

As for the side note, that's very helpful! I worried about the cost of writing 
the mesh but didn't know what techniques I could use.
That's exactly what I want. Thank you.
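
(For completeness, a rough sketch of the in-memory exchange described in the
quoted side note, as I understand it; tria_coarse, tria and dim are names from
my own code:)

std::string serialized_mesh;
if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
  {
    std::ostringstream out;
    GridOut().write_vtk(tria_coarse, out); // or any other GridOut format
    serialized_mesh = out.str();
  }

// Send the string to all other ranks instead of going through the disk:
serialized_mesh = Utilities::MPI::broadcast(MPI_COMM_WORLD, serialized_mesh, 0);

std::istringstream in(serialized_mesh);
GridIn<dim> grid_in;
grid_in.attach_triangulation(tria);
grid_in.read_vtk(in);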

Best,
Wayne

在2022年11月29日星期二 UTC+8 12:30:15 写道:

>
> > The error is from compute_component_wise on level = 0, the result it 
> returns 
> > don't equal to dof_handler.n_dofs(level).
> > I added a MPI_Barrier between first and second loop but error is not 
> eliminated.
> > Snipaste_2022-11-28_12-53-02.png
>
> I do not see this error. Are you running in debug mode? And with how many 
> MPI 
> processes?
>
> As a side note: The detour by writing the data to a file and then reading 
> it 
> again is very inefficient, in particular if you have many MPI processes. 
> If 
> you want to use GridOut/GridIn, let the former write the data into an 
> object 
> of type std::ostringstream, then use Utilities::MPI::broadcast() to send 
> the 
> string you get out of std::ostringstream to the other processes, and give 
> it 
> to a std::istringstream object. Then use GridIn on the std::istringstream 
> object. This way, no data transfer to or from disk is necessary -- it all 
> happens in memory.
>
> Best
> Wolfgang
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>
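
A minimal sketch of the in-memory approach described above might look like the
following (untested; dim, the serial tria_coarse built on rank 0, and the
target triangulation tria are placeholders taken from the code discussed in
this thread):

```
#include <deal.II/base/mpi.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/grid_out.h>

#include <sstream>

// Serialize the mesh built on rank 0 into a string, broadcast the string,
// and read it back on every process -- no disk access involved.
std::string serialized_mesh;
if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
  {
    std::ostringstream out;
    GridOut grid_out;
    grid_out.write_vtk(tria_coarse, out);
    serialized_mesh = out.str();
  }
serialized_mesh = Utilities::MPI::broadcast(MPI_COMM_WORLD, serialized_mesh, 0);

std::istringstream in(serialized_mesh);
GridIn<dim> grid_in;
grid_in.attach_triangulation(tria);
grid_in.read_vtk(in);
```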



Re: [deal.II] How to do DoFRenumbering::component_wise for multilevel grid

2022-11-27 Thread 'yy.wayne' via deal.II User Group
I think I have located the error.
The bug does not come from the mesh, but from the FESystem. For 
tests/multigrid/renumbering_06.cc, if the finite element on line 90 is changed 
to 2 components, we get the same error.

在2022年11月28日星期一 UTC+8 12:59:59 写道:

> The error is from compute_component_wise on level = 0, the result it 
> returns don't equal to dof_handler.n_dofs(level). 
> I added a MPI_Barrier between first and second loop but error is not 
> eliminated. 
> [image: Snipaste_2022-11-28_12-53-02.png]
>
> 在2022年11月28日星期一 UTC+8 12:49:15 写道:
>
>> On 11/27/22 21:09, 'yy.wayne' via deal.II User Group wrote: 
>> > 
>> > Sorry that I forget to mention *it only breaks when runnning with MPI, 
>> run in 
>> > serialization is good.* PersistentTriangulation Class may not address 
>> this error. 
>> > The intention of write & read a mesh here is to create a not-to-coarse 
>> coarse 
>> > grid (so it's refine several times before output). 
>> > I read the output grid and expect all cells are on level = 0. 
>> Preserving the 
>> > multigrid structure is not preferred in this case. 
>>
>> Oh, I misread your message then. What is the error you observe? 
>>
>> Looking at your code, when you have this... 
>>
>> if(Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0) 
>> { 
>> Triangulation<dim> tria_coarse; 
>> GridGenerator::hyper_shell(tria_coarse, 
>> center, 
>> inner_r, 
>> outer_r, 
>> n_cells); 
>> tria_coarse.refine_global(3); 
>>
>> // write and re-read the mesh, so it becomes coarse mesh 
>> std::cout << "write mesh" << std::endl; 
>> GridOut grid_out; 
>> grid_out.set_flags(GridOutFlags::Vtu(true)); 
>> std::ofstream out("coarse_mesh.vtk"); 
>> grid_out.write_vtk(tria_coarse, out); 
>> out.close(); 
>> } 
>>
>> {// read coarse grid 
>> tria.clear(); 
>> std::cout << "read mesh" << std::endl; 
>> GridIn<dim> gridin; 
>> gridin.attach_triangulation(tria); 
>> std::ifstream fin("coarse_mesh.vtk"); 
>> gridin.read_vtk(fin); 
>> fin.close(); 
>> std::cout << "read mesh done" << std::endl; 
>> } 
>>
>> ...then you will need a MPI_Barrier between the two blocks to make sure 
>> that 
>> process 1 doesn't start reading the file before process 0 has completed 
>> writing it. 
>>
>> Best 
>> W. 
>>
>> -- 
>>  
>> Wolfgang Bangerth email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>>
>>



Re: [deal.II] How to do DoFRenumbering::component_wise for multilevel grid

2022-11-27 Thread 'yy.wayne' via deal.II User Group
Hi Prof. Bangerth,

Sorry that I forgot to mention: *it only breaks when running with MPI; running 
in serial is fine.* The PersistentTriangulation class may not address 
this error. 
The intention of writing & reading a mesh here is to create a not-too-coarse 
coarse grid (so it is refined several times before output). 
I read the output grid and expect all cells to be on level = 0. Preserving 
the multigrid structure is not desired in this case.

Does using a plain GridOut destroy the mesh structure even though I expect to 
discard the multigrid information?

Best,
Wayne
在2022年11月28日星期一 UTC+8 11:48:55 写道:

> Thanks for your quick reply.
>
> I write mesh this way in order to create a refined coarse mesh. I tried 
> generate a distributed mesh in serial but it gives other bugs when further 
> refine it, therefore decide to write a mesh generated serialized and read 
> it back.
> I'll learn the PersistentTriangulation class then, thank you.
>
> Best,
> Wayne
>
> 在2022年11月28日星期一 UTC+8 11:26:04 写道:
>
>> On 11/27/22 19:14, 'yy.wayne' via deal.II User Group wrote: 
>> > 
>> > I get an error from compute_component_wise with level=0. I reduced the 
>> problem 
>> > to a minimal working example, where I write a mesh and read to create a 
>> dense 
>> > coarse level. The error shows even there's only one triangulation 
>> level. I 
>> > assume the write & read mesh is causing this bug, since other part is 
>> similiar 
>> > to dealii/tests/multigrid/renumbering_06.cc. 
>>
>> Writing the mesh just outputs the active cells, and loses the hierarchy 
>> that 
>> resulted from refinement. When you read it back in, you get a mesh with 
>> just 
>> one level. 
>>
>> You need a way to write and read the mesh that preserves the refinement 
>> levels. You can either do this via serialization, or use the 
>> PersistentTriangulation class. 
>>
>> Best 
>> W. 
>> -- 
>>  
>> Wolfgang Bangerth email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>>
>>



Re: [deal.II] How to do DoFRenumbering::component_wise for multilevel grid

2022-11-27 Thread 'yy.wayne' via deal.II User Group
Thanks for your quick reply.

I write the mesh this way in order to create a refined coarse mesh. I tried 
generating a distributed mesh in serial but it gives other bugs when refining 
it further, so I decided to write out a serially generated mesh and read 
it back.
I'll look into the PersistentTriangulation class then, thank you.

Best,
Wayne

在2022年11月28日星期一 UTC+8 11:26:04 写道:

> On 11/27/22 19:14, 'yy.wayne' via deal.II User Group wrote:
> > 
> > I get an error from compute_component_wise with level=0. I reduced the 
> problem 
> > to a minimal working example, where I write a mesh and read to create a 
> dense 
> > coarse level. The error shows even there's only one triangulation level. 
> I 
> > assume the write & read mesh is causing this bug, since other part is 
> similiar 
> > to dealii/tests/multigrid/renumbering_06.cc.
>
> Writing the mesh just outputs the active cells, and loses the hierarchy 
> that 
> resulted from refinement. When you read it back in, you get a mesh with 
> just 
> one level.
>
> You need a way to write and read the mesh that preserves the refinement 
> levels. You can either do this via serialization, or use the 
> PersistentTriangulation class.
>
> Best
> W.
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>



Re: [deal.II] Solve block Trilinos matrix directly

2022-11-22 Thread 'yy.wayne' via deal.II User Group
Hi Prof. Bangerth,

By comparing residuals between normal matrix, block matrix, and direct and 
iterative solvers as you said, I believe the mistake is somewhere else. 
Thank you.

Best,
Wayne

在2022年11月22日星期二 UTC+8 01:25:12 写道:

> On 11/21/22 07:45, 'yy.wayne' via deal.II User Group wrote:
> > 
> > However, by copy to and from between 
> LinearAlgebra::distributed::BlockVector 
> > and LinearAlgebra::distributed::Vector, it produces wrong result. Though 
> I 
> > cannot confirm the wrong result comes from transfer between Vector and 
> > BlockVector(I tested simple x=inv(A)*b and result is right, but treating 
> it as 
> > the coarse solver for bigger problem the result is wrong), I reassembled 
> the 
> > matrix in Trilinos BlockSparseMatrix and solve it with a iterative 
> solver and 
> > the result is correct. While solving coarse problem iteratively works 
> for very 
> > small matrix, my system is indefinite so I have to use a direct solver.
> > 
> > So given a BlockSparseMatrix and BlockVector src and dst, is there a 
> > recommended strategy to solve it directly(or block diagonal 
> precondition)? The 
> > matrix has form [A, B; -B^T A]
>
> No, not without copying data back and forth.
>
> But you can test whether the copying introduces the error. Let's say you 
> have 
> copied the data into a nonblock system, solved it, and copied it back to 
> the 
> block system. Let's say your block system consists of a matrix A, right 
> hand 
> side b, and the solution vector x you obtained by copying the solution of 
> the 
> nonblock version into x. All of these objects consist of blocks.
>
> Then if x solved the nonblock version, you would expect that the residual,
> b-Ax
> is small. You can test that via the BlockSparseMatrix::residual() 
> function. 
> Specifically, you would expect that
> ||b-Ax|| / ||b||
> is, say, 1e-12. You can compute this via
> BlockVector r(...);
> double relative_residual = A.residual (r,x,b) / b.l2_norm();
>
> If the relative residual is not small, then something must have gone wrong 
> during the copying step. If it is small, then the solution of the nonblock 
> system, copied into the block vector x, is also the solution of the block 
> system.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
>
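
For concreteness, the check described above might look roughly like this
(an untested sketch; A, x, and b stand for the block matrix, the copied-back
solution, and the right-hand side, all with matching block structure):

```
// residual() computes r = b - A x and returns ||b - A x||.
LinearAlgebra::distributed::BlockVector<double> r;
r.reinit(b); // same block structure and partitioning as b

const double relative_residual = A.residual(r, x, b) / b.l2_norm();

// If the copying steps are consistent, this should be on the order of 1e-12.
```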



[deal.II] Solve block Trilinos matrix directly

2022-11-21 Thread 'yy.wayne' via deal.II User Group
I've posted a similar question before (Solve distributed BlockVector with 
direct solver), where I had a Trilinos SparseMatrix (the system has 2 
components) and a LinearAlgebra::distributed::BlockVector as dst and src for 
that SparseMatrix. I chose LinearAlgebra::distributed::BlockVector because the 
bigger problem is intended to be solved by MatrixFree operators, with a direct 
solver on the coarsest grid.

However, copying back and forth between 
LinearAlgebra::distributed::BlockVector and 
LinearAlgebra::distributed::Vector produces a wrong result. Though I 
cannot confirm that the wrong result comes from the transfer between Vector and 
BlockVector (I tested a simple x = inv(A)*b and the result is right, but treating 
it as the coarse solver for the bigger problem the result is wrong), I reassembled 
the matrix as a Trilinos BlockSparseMatrix, solved it with an iterative 
solver, and the result is correct. While solving the coarse problem iteratively 
works for a very small matrix, my system is indefinite, so I have to use a 
direct solver.

So, given a BlockSparseMatrix and BlockVector src and dst, is there a 
recommended strategy to solve the system directly (or with a block-diagonal 
preconditioner)? The matrix has the form [A, B; -B^T, A].



[deal.II] Re: How to specify finite element component in MatrixFree?

2022-11-17 Thread 'yy.wayne' via deal.II User Group
Thank you Peter.

Best.
Wayne

在2022年11月18日星期五 UTC+8 13:32:27 写道:

> Hi Wayne,
>
> currently MGTransferBlockGlobalCoarsening takes in the constructor only a 
> single MGTransferGlobalCoarsening instance and uses this for all blocks 
> (i.e., uses the same AffineConstraints instance for all blocks): 
> https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.h#L691-L692
>  
>
> However, this can be simply generalized to add a second constructor that 
> takes a vector of MGTransferGlobalCoarsening objects. This should not be 
> too much work, since MGTransferBlockGlobalCoarsening is derived from 
> MGTransferGlobalCoarsening, which contains all the logic. The new 
> constructor would have false here 
> https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.templates.h#L3379
>  
> and the function 
> https://github.com/dealii/dealii/blob/37cb4ca903175732469461227567cd22f8476dd9/include/deal.II/multigrid/mg_transfer_global_coarsening.templates.h#L3385-L3393
>  
> would need to be generalized to access entries within the vector.
>
> Hope this helps,
> Peter
>
> On Friday, 18 November 2022 at 02:53:29 UTC+1 yy.wayne wrote:
>
>> Hi Peter, sorry for coming back to this discussion again. 
>>
>> I've applied the vmult function and compute diagonal function for a 
>> complex MatrixFree object, and tested these functions. But there's one last 
>> ingredient absent in the multi-component MatrixFree framework with 
>> p-coarsening, transfers.
>>
>> Different to the your spirk article where it has two components with same 
>> constraints, in my case the Dirichlet BC on real dofs and imaginary dofs 
>> are different. However when constructing MGTransferBlockGlobalCoarsening or 
>> MGTransferGlobalCoarsening, a MGTwoLevelTransfer object is needed but it 
>> only receive a single dofhandler and constraint. The problem here is the 
>> single constraint, since dofhandler for real and imaginary parts are same. 
>> I haven't figured out the solution yet.
>>
>> Or I've notice that a GMG block transfer class MGTransferBlockMatrixFree 
>> 
>>  receives 
>> a vector of  MGConstrainedDoFs. Your implementation of a complex 
>> Helmholtz MatrixFree method with p-coarsening is very helpful, it address 
>> most of my problems, but only differ in different constrains for two 
>> components.
>>
>> Best,
>> Wayne
>> 在2022年11月4日星期五 UTC+8 09:40:50 写道:
>>
>>> Thanks for you patience and wonderful answer :)
>>>
>>> Best,
>>> Wayne
>>>
>>> 在2022年11月4日星期五 UTC+8 00:06:28 写道:
>>>
 Hi Wayne,

 in that case I would provide the same DoFHandler to MatrixFree twice 
 and two different AffineConstraint objects. See the version of 
 MatrixFree::reinit() that takes vectors 
 https://www.dealii.org/developer/doxygen/deal.II/classMatrixFree.html#abb081e301fadbfd94e0169611bae3838
 .

 Peter

 On Thursday, 3 November 2022 at 11:18:58 UTC+1 yy.wayne wrote:

> Hi Peter,
>
> Thanks for your comprehensive reply, really helpful. By assemble 
> actually I mean the MatrixFree integrator part which corresponds to 
> assembling in MatrixBased form :). But you resolve my question, since my 
> main concern is one scalar DoFHandler describing both real and imaginary 
> part. A last question is what will you do when real and imaginary parts 
> have different boundary conditions, if only on DoFHandler is provided? 
>
> Best,
> Wayne
> 在2022年11月3日星期四 UTC+8 15:08:03 写道:
>
>> Hi Wayne,
>>
>> > The solution in HeatProblem is a non-complex one then(half size of 
>> src and dst processed in MatrixFree operator)?
>>
>> We are solving equation (10) from the paper; it is the  reformulated 
>> version of equation (9), which is a complex system. The heat equation is 
>> indeed not complex; but the diagonalization approach in the case of 
>> fully 
>> implicit stage parallel IRK makes the system complex (inter term of 
>> equation (2) consists of complex blocks that need to be solved).
>>
>> > In your github code, you actually assemble a block matrix system(in 
>> MatrixFree) but didn't renumber dofs as done in most Stokes examples? 
>> There's no multi dof_handlers.
>>
>> I am not sure which lines you are referring to, but if I assemble the 
>> matrix it only happens for the coarse-grid problem; also I don't think I 
>> did setup the matrix for the complex system and instead used simply 
>> Chebyshev iterations around point Jacobi. I have another project, where 
>> we 
>> actually assemble the matrix.
>>
>> I am not sure what happens in the Stokes examples. But my guess is 
>> that they use a FESystem consisting of a FE for pressure and 

[deal.II] Re: How to specify finite element component in MatrixFree?

2022-11-17 Thread 'yy.wayne' via deal.II User Group
Hi Peter, sorry for coming back to this discussion again. 

I've implemented the vmult and compute-diagonal functions for a complex 
MatrixFree object, and tested these functions. But there's one last 
ingredient missing in the multi-component MatrixFree framework with 
p-coarsening: the transfers.

Different from your spirk article, where the two components have the same 
constraints, in my case the Dirichlet BCs on the real DoFs and the imaginary DoFs 
are different. However, when constructing MGTransferBlockGlobalCoarsening or 
MGTransferGlobalCoarsening, an MGTwoLevelTransfer object is needed, but it 
only receives a single DoFHandler and constraint object. The problem here is the 
single constraint, since the DoFHandlers for the real and imaginary parts are the same. 
I haven't figured out a solution yet.

Also, I've noticed that the GMG block transfer class MGTransferBlockMatrixFree 
receives 
a vector of MGConstrainedDoFs. Your implementation of a complex Helmholtz 
MatrixFree method with p-coarsening is very helpful; it addresses most of my 
problems, differing only in the constraints for the two components.

Best,
Wayne
在2022年11月4日星期五 UTC+8 09:40:50 写道:

> Thanks for you patience and wonderful answer :)
>
> Best,
> Wayne
>
> 在2022年11月4日星期五 UTC+8 00:06:28 写道:
>
>> Hi Wayne,
>>
>> in that case I would provide the same DoFHandler to MatrixFree twice and 
>> two different AffineConstraint objects. See the version of 
>> MatrixFree::reinit() that takes vectors 
>> https://www.dealii.org/developer/doxygen/deal.II/classMatrixFree.html#abb081e301fadbfd94e0169611bae3838
>> .
>>
>> Peter
>>
>> On Thursday, 3 November 2022 at 11:18:58 UTC+1 yy.wayne wrote:
>>
>>> Hi Peter,
>>>
>>> Thanks for your comprehensive reply, really helpful. By assemble 
>>> actually I mean the MatrixFree integrator part which corresponds to 
>>> assembling in MatrixBased form :). But you resolve my question, since my 
>>> main concern is one scalar DoFHandler describing both real and imaginary 
>>> part. A last question is what will you do when real and imaginary parts 
>>> have different boundary conditions, if only on DoFHandler is provided? 
>>>
>>> Best,
>>> Wayne
>>> 在2022年11月3日星期四 UTC+8 15:08:03 写道:
>>>
 Hi Wayne,

 > The solution in HeatProblem is a non-complex one then(half size of 
 src and dst processed in MatrixFree operator)?

 We are solving equation (10) from the paper; it is the  reformulated 
 version of equation (9), which is a complex system. The heat equation is 
 indeed not complex; but the diagonalization approach in the case of fully 
 implicit stage parallel IRK makes the system complex (inter term of 
 equation (2) consists of complex blocks that need to be solved).

 > In your github code, you actually assemble a block matrix system(in 
 MatrixFree) but didn't renumber dofs as done in most Stokes examples? 
 There's no multi dof_handlers.

 I am not sure which lines you are referring to, but if I assemble the 
 matrix it only happens for the coarse-grid problem; also I don't think I 
 did setup the matrix for the complex system and instead used simply 
 Chebyshev iterations around point Jacobi. I have another project, where we 
 actually assemble the matrix.

 I am not sure what happens in the Stokes examples. But my guess is that 
 they use a FESystem consisting of a FE for pressure and a vectorial FE for 
 velocity. To be able to use block vector in this case the DoFs need to be 
 renumbered so that DoFs of a component are contiguous, which is not the 
 case normally.

 In my case, I am using a single a single scalar DoFHandler. This 
 describes both the real and the imaginary part. As a consequence, one only 
 has to pass a single DoFHander/AffineConstraints to MatrixFree. This is 
 somewhat different approach than it is done in the other tutorials and has 
 been adopted from our two-phase solver adaflo (
 https://github.com/kronbichler/adaflo), where we extensively use it 
 reduce overhead for the level-set problem (single DoFHandler that 
 describes 
 the level-set field, normal, and curvature). MatrixFree works quite nicely 
 (just like in the case if you would use FESystem) if you use it with a 
 scalar DoFHandler and pass a BlockVector to the matrix-free loops. I think 
 we don't have a tutorial for this way to work with block vectors in the 
 context of MatrixFree. However, we made lot of progress in improving the 
 usability in the last two years, since we use this feature in 2-3 projects 
 with external collaboration partners. Unfortunately, we have not moved all 
 utility functions to deal.II yet.

 > Can I implement p-multigrid precondition with given 
 MGTransferBlockGlobalCoarsening?

 Yes. It is used in the code.

 > It's a parallel 

[deal.II] Re: Solve distributed BlockVector with direct solver

2022-11-05 Thread 'yy.wayne' via deal.II User Group
Thank you Peter
在2022年11月5日星期六 UTC+8 15:21:42 写道:

> If it is not too expensive, I would copy the block vector to the normal 
> distributed vector. For this purpose, I use functions like these:
>
> ```
> template <typename VectorType, typename BlockVectorType>
> void
> split_up_components_fast(const VectorType &src, BlockVectorType &dst)
> {
>   for (unsigned int i = 0, j = 0;
>        i < src.get_partitioner()->locally_owned_size();
>        ++j)
>     for (unsigned int b = 0; b < dst.n_blocks(); ++b)
>       dst.block(b).local_element(j) = src.local_element(i++);
> }
>
> template <typename BlockVectorType, typename VectorType>
> void
> merge_components_fast(const BlockVectorType &src, VectorType &dst)
> {
>   for (unsigned int i = 0, j = 0;
>        i < dst.get_partitioner()->locally_owned_size();
>        ++j)
>     for (unsigned int b = 0; b < src.n_blocks(); ++b)
>       dst.local_element(i++) = src.block(b).local_element(j);
> }
> ```
>
> However, you have to make sure that the DoFs are enumerated consistently for 
> LinearAlgebra::distributed::Vector.
>
> Peter
> On Saturday, 5 November 2022 at 07:21:40 UTC+1 yy.wayne wrote:
>
>> Hello,
>>
>> I assembled a matrix for a multi-component problem. I want to solve it 
>> with a direct solver, but neither Trilinos solver(SolverDirect amesos_klu) 
>> nor SparseDirectUMFPACK solver receive a 
>> LinearAlgebra::distributed::BlockVector as input. The Trilinos direct 
>> solver receives Vector and BlockVector, while SparseDirectUMFPACK solver 
>> receives LinearAlgebra::distributed::Vector. What direct solver should I 
>> use?
>>
>
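
A usage sketch of the helpers quoted above (untested; block_src/block_dst are
the block vectors, locally_owned_dofs is the owned index set of the
multi-component DoFHandler the coarse matrix was assembled with, and the DoFs
are assumed to be enumerated consistently with the interleaving of the
helpers, as noted above):

```
// Flatten the block vector, hand it to the direct solver of your choice,
// and copy the solution back into block form.
LinearAlgebra::distributed::Vector<double> flat_src(locally_owned_dofs,
                                                    MPI_COMM_WORLD);
LinearAlgebra::distributed::Vector<double> flat_dst(locally_owned_dofs,
                                                    MPI_COMM_WORLD);

merge_components_fast(block_src, flat_src);    // BlockVector -> flat Vector

// ... solve the flat system here to obtain flat_dst from flat_src ...

split_up_components_fast(flat_dst, block_dst); // flat Vector -> BlockVector
```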



[deal.II] Solve distributed BlockVector with direct solver

2022-11-05 Thread 'yy.wayne' via deal.II User Group
Hello,

I assembled a matrix for a multi-component problem. I want to solve it with 
a direct solver, but neither the Trilinos solver (SolverDirect, amesos_klu) nor 
the SparseDirectUMFPACK solver accepts a 
LinearAlgebra::distributed::BlockVector as input. The Trilinos direct 
solver accepts Vector and BlockVector, while the SparseDirectUMFPACK solver 
accepts LinearAlgebra::distributed::Vector. What direct solver should I 
use?



[deal.II] Re: How to specify finite element component in MatrixFree?

2022-11-03 Thread 'yy.wayne' via deal.II User Group
Thanks for your patience and wonderful answer :)

Best,
Wayne

在2022年11月4日星期五 UTC+8 00:06:28 写道:

> Hi Wayne,
>
> in that case I would provide the same DoFHandler to MatrixFree twice and 
> two different AffineConstraint objects. See the version of 
> MatrixFree::reinit() that takes vectors 
> https://www.dealii.org/developer/doxygen/deal.II/classMatrixFree.html#abb081e301fadbfd94e0169611bae3838
> .
>
> Peter
>
> On Thursday, 3 November 2022 at 11:18:58 UTC+1 yy.wayne wrote:
>
>> Hi Peter,
>>
>> Thanks for your comprehensive reply, really helpful. By assemble actually 
>> I mean the MatrixFree integrator part which corresponds to assembling in 
>> MatrixBased form :). But you resolve my question, since my main concern is 
>> one scalar DoFHandler describing both real and imaginary part. A last 
>> question is what will you do when real and imaginary parts have different 
>> boundary conditions, if only on DoFHandler is provided? 
>>
>> Best,
>> Wayne
>> 在2022年11月3日星期四 UTC+8 15:08:03 写道:
>>
>>> Hi Wayne,
>>>
>>> > The solution in HeatProblem is a non-complex one then(half size of src 
>>> and dst processed in MatrixFree operator)?
>>>
>>> We are solving equation (10) from the paper; it is the  reformulated 
>>> version of equation (9), which is a complex system. The heat equation is 
>>> indeed not complex; but the diagonalization approach in the case of fully 
>>> implicit stage parallel IRK makes the system complex (inter term of 
>>> equation (2) consists of complex blocks that need to be solved).
>>>
>>> > In your github code, you actually assemble a block matrix system(in 
>>> MatrixFree) but didn't renumber dofs as done in most Stokes examples? 
>>> There's no multi dof_handlers.
>>>
>>> I am not sure which lines you are referring to, but if I assemble the 
>>> matrix it only happens for the coarse-grid problem; also I don't think I 
>>> did setup the matrix for the complex system and instead used simply 
>>> Chebyshev iterations around point Jacobi. I have another project, where we 
>>> actually assemble the matrix.
>>>
>>> I am not sure what happens in the Stokes examples. But my guess is that 
>>> they use a FESystem consisting of a FE for pressure and a vectorial FE for 
>>> velocity. To be able to use block vector in this case the DoFs need to be 
>>> renumbered so that DoFs of a component are contiguous, which is not the 
>>> case normally.
>>>
>>> In my case, I am using a single a single scalar DoFHandler. This 
>>> describes both the real and the imaginary part. As a consequence, one only 
>>> has to pass a single DoFHander/AffineConstraints to MatrixFree. This is 
>>> somewhat different approach than it is done in the other tutorials and has 
>>> been adopted from our two-phase solver adaflo (
>>> https://github.com/kronbichler/adaflo), where we extensively use it 
>>> reduce overhead for the level-set problem (single DoFHandler that describes 
>>> the level-set field, normal, and curvature). MatrixFree works quite nicely 
>>> (just like in the case if you would use FESystem) if you use it with a 
>>> scalar DoFHandler and pass a BlockVector to the matrix-free loops. I think 
>>> we don't have a tutorial for this way to work with block vectors in the 
>>> context of MatrixFree. However, we made lot of progress in improving the 
>>> usability in the last two years, since we use this feature in 2-3 projects 
>>> with external collaboration partners. Unfortunately, we have not moved all 
>>> utility functions to deal.II yet.
>>>
>>> > Can I implement p-multigrid precondition with given 
>>> MGTransferBlockGlobalCoarsening?
>>>
>>> Yes. It is used in the code.
>>>
>>> > It's a parallel version but vectors are not initialized with 
>>> solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). So 
>>> matrix_free->initialize_dof_vector has same function done?
>>>
>>> Yes, initialize_dof_vector() does the same just smarter. A general 
>>> recommendation: always use initialize_dof_vector. If MatrixFree notices 
>>> that you use the internal partitioner, it can do some short cuts. Else, it 
>>> needs to do some additional checks to figure out if the vectors are 
>>> comptible.
>>>
>>> Hope this helps,
>>> Peter
>>>
>>> On Thursday, 3 November 2022 at 03:47:05 UTC+1 yy.wayne wrote:
>>>
 Thank you peter.

 It looks awesome! I've got a few question about it.

1. In your github code, you actually assemble a block matrix 
system(in MatrixFree) but didn't renumber dofs as done in most Stokes 
examples? There's no multi dof_handlers. The solution in HeatProblem is 
 a 
non-complex one then(half size of src and dst processed in MatrixFree 
operator)?
2. Can I implement p-multigrid precondition with given 
MGTransferBlockGlobalCoarsening?
3. It's a parallel version but vectors are not initialized with 
solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). 
 So 
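
For readers of this thread, a rough sketch of the setup Peter suggests above
(the same scalar DoFHandler registered twice with two different
AffineConstraints objects, so that the real and imaginary components can carry
different boundary conditions) could look like the following; dof_handler,
constraints_real, constraints_imag, mapping, and fe_degree are placeholders
and the snippet is untested:

```
typename MatrixFree<dim, double>::AdditionalData additional_data;
additional_data.mapping_update_flags =
  update_values | update_gradients | update_JxW_values;

const std::vector<const DoFHandler<dim> *> dof_handlers = {&dof_handler,
                                                           &dof_handler};
const std::vector<const AffineConstraints<double> *> constraints =
  {&constraints_real, &constraints_imag};
const std::vector<Quadrature<1>> quadratures = {QGauss<1>(fe_degree + 1)};

MatrixFree<dim, double> matrix_free;
matrix_free.reinit(mapping, dof_handlers, constraints, quadratures,
                   additional_data);

// In the cell operation, the second constructor argument selects which
// DoFHandler/constraints pair an FEEvaluation refers to:
//   FEEvaluation<dim, fe_degree, fe_degree + 1, 1, double> real(matrix_free, 0);
//   FEEvaluation<dim, fe_degree, fe_degree + 1, 1, double> imag(matrix_free, 1);
```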

[deal.II] Re: How to specify finite element component in MatrixFree?

2022-11-03 Thread 'yy.wayne' via deal.II User Group
Hi Peter,

Thanks for your comprehensive reply, really helpful. By assemble actually I 
mean the MatrixFree integrator part which corresponds to assembling in 
MatrixBased form :). But you have resolved my question, since my main concern is 
one scalar DoFHandler describing both the real and imaginary parts. A last 
question: what would you do when the real and imaginary parts have different 
boundary conditions, if only one DoFHandler is provided? 

Best,
Wayne
在2022年11月3日星期四 UTC+8 15:08:03 写道:

> Hi Wayne,
>
> > The solution in HeatProblem is a non-complex one then(half size of src 
> and dst processed in MatrixFree operator)?
>
> We are solving equation (10) from the paper; it is the  reformulated 
> version of equation (9), which is a complex system. The heat equation is 
> indeed not complex; but the diagonalization approach in the case of fully 
> implicit stage parallel IRK makes the system complex (inter term of 
> equation (2) consists of complex blocks that need to be solved).
>
> > In your github code, you actually assemble a block matrix system(in 
> MatrixFree) but didn't renumber dofs as done in most Stokes examples? 
> There's no multi dof_handlers.
>
> I am not sure which lines you are referring to, but if I assemble the 
> matrix it only happens for the coarse-grid problem; also I don't think I 
> did setup the matrix for the complex system and instead used simply 
> Chebyshev iterations around point Jacobi. I have another project, where we 
> actually assemble the matrix.
>
> I am not sure what happens in the Stokes examples. But my guess is that 
> they use a FESystem consisting of a FE for pressure and a vectorial FE for 
> velocity. To be able to use block vector in this case the DoFs need to be 
> renumbered so that DoFs of a component are contiguous, which is not the 
> case normally.
>
> In my case, I am using a single scalar DoFHandler. This describes 
> both the real and the imaginary part. As a consequence, one only has to 
> pass a single DoFHandler/AffineConstraints to MatrixFree. This is a somewhat 
> different approach than is done in the other tutorials and has been 
> adopted from our two-phase solver adaflo (
> https://github.com/kronbichler/adaflo), where we extensively use it to 
> reduce overhead for the level-set problem (single DoFHandler that describes 
> the level-set field, normal, and curvature). MatrixFree works quite nicely 
> (just like in the case if you would use FESystem) if you use it with a 
> scalar DoFHandler and pass a BlockVector to the matrix-free loops. I think 
> we don't have a tutorial for this way to work with block vectors in the 
> context of MatrixFree. However, we made a lot of progress in improving the 
> usability in the last two years, since we use this feature in 2-3 projects 
> with external collaboration partners. Unfortunately, we have not moved all 
> utility functions to deal.II yet.
>
> > Can I implement p-multigrid precondition with given 
> MGTransferBlockGlobalCoarsening?
>
> Yes. It is used in the code.
>
> > It's a parallel version but vectors are not initialized with 
> solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). So 
> matrix_free->initialize_dof_vector has same function done?
>
> Yes, initialize_dof_vector() does the same just smarter. A general 
> recommendation: always use initialize_dof_vector. If MatrixFree notices 
> that you use the internal partitioner, it can do some short cuts. Else, it 
> needs to do some additional checks to figure out if the vectors are 
> compatible.
>
> Hope this helps,
> Peter
>
> On Thursday, 3 November 2022 at 03:47:05 UTC+1 yy.wayne wrote:
>
>> Thank you peter.
>>
>> It looks awesome! I've got a few question about it.
>>
>>1. In your github code, you actually assemble a block matrix 
>>system(in MatrixFree) but didn't renumber dofs as done in most Stokes 
>>examples? There's no multi dof_handlers. The solution in HeatProblem is a 
>>non-complex one then(half size of src and dst processed in MatrixFree 
>>operator)?
>>2. Can I implement p-multigrid precondition with given 
>>MGTransferBlockGlobalCoarsening?
>>3. It's a parallel version but vectors are not initialized with 
>>solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). So 
>>matrix_free->initialize_dof_vector has same function done?
>>
>> Best,
>> Wayne
>> 在2022年11月3日星期四 UTC+8 04:44:53 写道:
>>
>>> Hi
>>>
>>> The block version of MGTransferGlobalCoarsening is 
>>> MGTransferBlockGlobalCoarsening (see  
>>> https://www.dealii.org/developer/doxygen/deal.II/classMGTransferBlockGlobalCoarsening.html
>>> ).
>>>
>>> I don't know the details of step-29. But what I would recommend you to 
>>> do is to take a look at 
>>> https://github.com/peterrum/dealii-spirk/blob/bb52df5d1023f9c41ca51a082be769bc0171bf75/include/operator.h#L616-L660.
>>>  
>>> Here, I am solving a complex-valued Helmholtz operator (in my case a sum of 
>>> mass and Laplace matrix, which arises from 

[deal.II] Re: How to specify finite element component in MatrixFree?

2022-11-02 Thread 'yy.wayne' via deal.II User Group
Thank you, Peter.

It looks awesome! I've got a few questions about it.

   1. In your github code, you actually assemble a block matrix system(in 
   MatrixFree) but didn't renumber dofs as done in most Stokes examples? 
   There's no multi dof_handlers. The solution in HeatProblem is a non-complex 
   one then(half size of src and dst processed in MatrixFree operator)?
   2. Can I implement p-multigrid precondition with given 
   MGTransferBlockGlobalCoarsening?
   3. It's a parallel version but vectors are not initialized with 
   solution(locally_owned_dofs, locally_relevant_dofs, mpi_communicator). So 
   matrix_free->initialize_dof_vector has same function done?

Best,
Wayne
在2022年11月3日星期四 UTC+8 04:44:53 写道:

> Hi
>
> The block version of MGTransferGlobalCoarsening is 
> MGTransferBlockGlobalCoarsening (see  
> https://www.dealii.org/developer/doxygen/deal.II/classMGTransferBlockGlobalCoarsening.html
> ).
>
> I don't know the details of step-29. But what I would recommend you to do 
> is to take a look at 
> https://github.com/peterrum/dealii-spirk/blob/bb52df5d1023f9c41ca51a082be769bc0171bf75/include/operator.h#L616-L660.
>  
> Here, I am solving a complex-valued Helmholtz operator (in my case a sum of 
> mass and Laplace matrix, which arises from complex implicit Runge-Kutta 
> formulation; see https://arxiv.org/abs/2209.06700). This complex code was 
> the motivation for writing MGTransferBlockGlobalCoarsening in the first 
> place^^
>
> I think the code should answer most of your questions.
>
> Regarding
>
> > FEEvaluation<...,1> real(matrix_free_data, 0) and FEEvaluation<...,1> 
> real(matrix_free_data, 1)
>
> I guess you have to replace the "1" by a "0" in the second FEEval.
>
> Hope this helps,
> Peter
>
> On Wednesday, 2 November 2022 at 17:55:44 UTC+1 yy.wayne wrote:
>
>> Well a sad thing I notice is a function used for p-multigrid in step-75: 
>> reinit() in MGTwoLevelTransfer, is imlpemented only for 
>> LinearAlgebra::distributed::Vector, not  
>> LinearAlgebra::distributed::BlockVector.
>> [image: 202211-3.png]
>>
>> I'm fine with writing some unimplemented functions myself, but a guidence 
>> is really helpful. While going through this problem I found different 
>> choices and not sure what's the best way having this done. (For example, 
>> LinearAlgebraTrilinos::MPI::BlockVector or 
>> LinearAlgebra::distributed::BlockVector? Whether we need 
>> locally_relevant_dofs when reinit solution vector?(It's not used here 
>> stokes_computation 
>> )
>>  
>> Or is the non-specialized MGTransferLevel Class works as well for p 
>> coarsening?)
>>
>> In short, I'm solving step-29(2 components) with matrix-free and 
>> p-multigrid technique. Is current dealii capable of solving it? If not 
>> what's the fastest way to implement it? Thanks!
>> 在2022年11月2日星期三 UTC+8 21:40:36 写道:
>>
>>> I tried to modify my code that FEEvaluation is constructed without 
>>> matrix_free_data as in the above .cc in dealii/test, but get following 
>>> error information:
>>> "FEEvaluation was initialized without a matrix-free object. 
>>> Integer indexing is not possible".
>>>
>>> Guess the only method to do MatrixFree problem with multi-components now 
>>> is through BlockVector in matrix_vector_Stokes test 
>>> ?
>>>  
>>> Since I never write dealii prokect with blocked dofs before I'm affraid 
>>> codes besides MatrixFree parts should be changed as well... Should I try 
>>> rearrange step-29 like problem with block vector? 
>>>
>>> 在2022年11月2日星期三 UTC+8 17:24:23 写道:
>>>
 In test/matrix-free/assemble_matrix_02.cc 
 
  a 
 Stokes matrix is assembled in MatrixFree form, and DoFs correspond to 
 velocity and pressure are called by FEEvaluation<...,dim> velocity(..., 0) 
 and  FEEvaluation<...,1> velocity(..., dim), respectively.

 I have a multi-comp system two as in step-29 with real and imaginary 
 parts. But initializing the 2 FEEvaluations by FEEvaluation<...,1> 
 real(matrix_free_data, 0) and FEEvaluation<...,1> real(matrix_free_data, 
 1) 
 gives error, because matrix_free_data.n_components() is 1 not 2. Maybe 
 it's 
 because I'm using fe_collection, not just FESystem in the 
 assemble_matrix_02 test.

 If I have a FEEvaluation<...,dim=2> integrator including real and 
 imaginary parts, is there a way to decouple them like in FEValues (for 
 example, fe.system_to_component_index, or fe_values_real = 
 fe_values[FEValuesExtractors])?

 Best,
 Wayne

>>>


[deal.II] Re: How to specify finite element component in MatrixFree?

2022-11-02 Thread 'yy.wayne' via deal.II User Group
I tried to modify my code that FEEvaluation is constructed without 
matrix_free_data as in the above .cc in dealii/test, but get following 
error information:
"FEEvaluation was initialized without a matrix-free object. Integer 
indexing is not possible".

I guess the only method to do a MatrixFree problem with multiple components now is 
through a BlockVector, as in the matrix_vector_Stokes test?

Since I never wrote a deal.II project with blocked DoFs before, I'm afraid 
code besides the MatrixFree parts would have to be changed as well... Should I try to 
rearrange the step-29-like problem with block vectors? 

在2022年11月2日星期三 UTC+8 17:24:23 写道:

> In test/matrix-free/assemble_matrix_02.cc 
> 
>  a 
> Stokes matrix is assembled in MatrixFree form, and DoFs correspond to 
> velocity and pressure are called by FEEvaluation<...,dim> velocity(..., 0) 
> and  FEEvaluation<...,1> velocity(..., dim), respectively.
>
> I have a multi-comp system two as in step-29 with real and imaginary 
> parts. But initializing the 2 FEEvaluations by FEEvaluation<...,1> 
> real(matrix_free_data, 0) and FEEvaluation<...,1> real(matrix_free_data, 1) 
> gives error, because matrix_free_data.n_components() is 1 not 2. Maybe it's 
> because I'm using fe_collection, not just FESystem in the 
> assemble_matrix_02 test.
>
> If I have a FEEvaluation<...,dim=2> integrator including real and 
> imaginary parts, is there a way to decouple them like in FEValues (for 
> example, fe.system_to_component_index, or fe_values_real = 
> fe_values[FEValuesExtractors])?
>
> Best,
> Wayne
>



[deal.II] How to specify finite element component in MatrixFree?

2022-11-02 Thread 'yy.wayne' via deal.II User Group
In test/matrix-free/assemble_matrix_02.cc, a 
Stokes matrix is assembled in MatrixFree form, and the DoFs corresponding to 
velocity and pressure are accessed by FEEvaluation<...,dim> velocity(..., 0) 
and FEEvaluation<...,1> pressure(..., dim), respectively.

I have a multi-component system too, as in step-29, with real and imaginary parts. 
But initializing the 2 FEEvaluations by FEEvaluation<...,1> 
real(matrix_free_data, 0) and FEEvaluation<...,1> imag(matrix_free_data, 1) 
gives an error, because matrix_free_data.n_components() is 1, not 2. Maybe it's 
because I'm using an fe_collection, not just an FESystem as in the 
assemble_matrix_02 test.

If I have a FEEvaluation<...,n_components=2> integrator including real and imaginary 
parts, is there a way to decouple them like in FEValues (for example, 
fe.system_to_component_index, or fe_values_real = 
fe_values[FEValuesExtractors])?

Best,
Wayne



[deal.II] Re: Apply boundary integral to diagonal in MatrixFree

2022-11-01 Thread 'yy.wayne' via deal.II User Group
Maybe I should describe my question again.

In short, to use the Chebyshev smoother and to solve the MG coarse grid directly, we 
need *MatrixFreeTools::compute_diagonal* and 
*MatrixFreeTools::compute_matrix*, respectively.

However, for problems involving a Robin boundary condition, where a boundary integral is 
required, those 2 functions will not work (I think). That's because 
FEFaceEvaluation would have to be included as well.

I think I will write new functions if there's no appropriate one. Any hint, 
especially on FEFaceEvaluation and the multi-component MatrixFree formulation, 
is welcome.
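
In the meantime, one brute-force way to obtain (or at least verify) the
diagonal of an operator whose vmult() is implemented via MatrixFree::loop()
with boundary terms is to apply the operator to unit basis vectors. This is
only a sketch for small test problems, since it costs one operator application
per DoF; system_operator, matrix_free, and dof_handler are placeholders:

```
// Extract the diagonal, including boundary-face contributions, by applying
// vmult() to unit vectors. Only meant to verify a faster implementation.
LinearAlgebra::distributed::Vector<double> diagonal, e_i, tmp;
matrix_free.initialize_dof_vector(diagonal);
matrix_free.initialize_dof_vector(e_i);
matrix_free.initialize_dof_vector(tmp);

for (types::global_dof_index i = 0; i < dof_handler.n_dofs(); ++i)
  {
    e_i = 0.;
    if (e_i.in_local_range(i))
      e_i(i) = 1.;
    system_operator.vmult(tmp, e_i); // internally uses loop() with face terms
    if (tmp.in_local_range(i))
      diagonal(i) = tmp(i);
  }
```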

在2022年10月27日星期四 UTC+8 21:50:39 写道:

> To write the boundary_integral in MatrixFree for step-29 is kind of 
> confusing as well...
> For FEFaceEvaluation with 2 components(real and imaginary parts), whether 
> a index correspond to src(alpha and beta) or basis function(phi_0 for phi 
> and phi_1 for psi) is a problem. I cannot test this code yet due to the 
> former question.
> [image: 20221027_1.png]
> [image: 20221027_2.png]
>
> 在2022年10月27日星期四 UTC+8 16:01:31 写道:
>
>> Hi there,
>>
>> I've been applying MatrixFree method to Step-29 where face integrals are 
>> involved, different from step-37, step-75 and so on. Firstly I think loop() 
>> instead of cell_loop() should be used. Secondly, similar to step-75, 
>> Chebyshev smoother and a coarsest level matrix are needed in my code. In 
>> step-75 we get diagonal vector by *MatrixFreeTools::compute_diagonal and 
>> MatrixFreeTools::compute_matrix*.
>>
>> However, the face integral makes them different. Both two functions only 
>> take cell_operations(FEEvaluation), not face_operations(FEFaceEvaluation). 
>> My question is how to compute diagonal and matrix when my weak form has 
>> face integrals terms (only boundary integral, no interface integral). 
>>
>



Re: [deal.II] Will locally_relevant_dofs be needed when reinit a vector?

2022-10-29 Thread 'yy.wayne' via deal.II User Group
Thank you!
在2022年10月28日星期五 UTC+8 22:50:37 写道:

> On 10/28/22 00:50, 'yy.wayne' via deal.II User Group wrote:
> > I have a question about whether ghost_indices should be passed to a vector's 
> reinit? 
> > In step-40 and line 89 of this test 
> > <https://github.com/dealii/dealii/blob/master/tests/mpi/solution_transfer_04.cc>, 
> > system_rhs and solution are reinit with reinit(local_dofs, 
> mpi_communicator) 
> > only. But in line 95 of that test, 
> > we need ghost_indices as an argument. The same holds for one of my codes, where 
> I 
> > found a bug because I didn't use locally_relevant_dofs when reinit-ing a 
> > system_rhs. To reduce mistakes, should vectors always be reinit-ed with 
> > ghost_indices then?
>
> No. The question is whether you need ghost elements in the vector or not, 
> and 
> that is typically a question on whether you need to *read* individual 
> entries. 
> When you build the right hand side and when you call a solver to compute 
> the 
> solution vector, we use "completely distributed vectors" that do not have 
> ghost elements. When you want to output the solution graphically, however, 
> you 
> need to be able to read individual vector elements also for ghost entries, 
> and 
> so then you need the locally relevant dofs. Vectors with ghost elements 
> are 
> read-only.
>
> Best
> W.
>
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
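
In code, the distinction described above might look like this (a sketch,
assuming LinearAlgebra::distributed::Vector and the usual index sets from
DoFTools; untested):

```
// Non-ghosted vectors for assembling and solving ...
LinearAlgebra::distributed::Vector<double> system_rhs(locally_owned_dofs,
                                                      mpi_communicator);
LinearAlgebra::distributed::Vector<double> solution(locally_owned_dofs,
                                                    mpi_communicator);
// ... assemble into system_rhs, solve into solution ...

// ... and a ghosted, read-only copy for anything that needs to read
// individual (possibly ghost) entries, e.g. graphical output.
LinearAlgebra::distributed::Vector<double> ghosted_solution(
  locally_owned_dofs, locally_relevant_dofs, mpi_communicator);
ghosted_solution = solution;            // copies the locally owned range
ghosted_solution.update_ghost_values(); // makes ghost entries readable
```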



[deal.II] Will locally_relevant_dofs be needed when reinit a vector?

2022-10-28 Thread 'yy.wayne' via deal.II User Group
I have a question about whether ghost_indices should be passed to a vector's 
reinit. In step-40 and line 89 of this test 
(tests/mpi/solution_transfer_04.cc), 
system_rhs and solution are reinit with reinit(local_dofs, 
mpi_communicator) only. But in line 95 of that test, 
we need ghost_indices as an argument. The same holds for one of my codes, where I 
found a bug because I didn't use locally_relevant_dofs when reinit-ing a 
system_rhs. To reduce mistakes, should vectors always be reinit-ed with 
ghost_indices then?

Thank you



[deal.II] Apply boundary integral to diagonal in MatrixFree

2022-10-27 Thread 'yy.wayne' via deal.II User Group
Hi there,

I've been applying the MatrixFree method to step-29, where face integrals are 
involved, different from step-37, step-75, and so on. Firstly, I think loop() 
instead of cell_loop() should be used. Secondly, similar to step-75, a 
Chebyshev smoother and a coarsest-level matrix are needed in my code. In 
step-75 we get the diagonal vector and the matrix by *MatrixFreeTools::compute_diagonal* and 
*MatrixFreeTools::compute_matrix*.

However, the face integral makes things different. Both functions only 
take cell_operations (FEEvaluation), not face_operations (FEFaceEvaluation). 
My question is how to compute the diagonal and the matrix when my weak form has 
face integral terms (only boundary integrals, no interface integrals). 



Re: [deal.II] Error reading a comsol mphtxt file

2022-10-24 Thread 'yy.wayne' via deal.II User Group
That's great. Thanks! :)

在2022年10月24日星期一 UTC+8 23:36:44 写道:

> On 10/16/22 01:45, 'yy.wayne' via deal.II User Group wrote:
> > *** Caution: EXTERNAL Sender ***
> > 
> > I'm currently using deal.II 9.4.0 version (virtual box 1.22). When I 
> > read a comsol mphtxt mesh(attached below, a 2D plane in 3D space), it 
> > throws an error on grid_in.cc line 1557 that s!="4 Mesh".
> > 
> > I trace back the error with whole_file.get(). It turns out after the 
> > line "0 0 1" I get char 32(space), 13(^M), 10(^J) and then 52(number 4). 
> > I guess it's a bug of deal.II since I never modified the comsol mphtxt 
> > file? [image: bug_read_mphtxt.png]
>
> yy.wayne:
> I took a look at your mesh last week and there are a few bugs in the 
> deal.II code that show up in your case. I put a patch for one set of 
> problems into
> https://github.com/dealii/dealii/pull/14374
> but this does not fix the problem in its entirety. I will look into the 
> remaining issues later this week or next. I mainly wanted to let you 
> know that the problem is not forgotten.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>



Re: [deal.II] Re: Run time analysis for step-75 with matrix-free or matrix-based method

2022-10-19 Thread 'yy.wayne' via deal.II User Group
Besides, the Trilinos direct solver applied was Amesos_Lapack (a mistake). 
Changing to KLU therefore saves more time.

在2022年10月19日星期三 UTC+8 20:30:53 写道:

> I run both matrix-based and matrix-free mode with release mode, both speed 
> up a lot. The matrix-free CG iteration speeds up 30 times compared to debug 
> mode. Coarse grid solver doesn't get speed up because I'm using Trilinos 
> interface.
> Matrix-based:
> [image: matrix-based_opt.png]
> 20.6 = mg_matrices
>
> Matrix-free:
> [image: matrix-free_opt.png]
>
> Thank you so much Martin.
>
> Best,
> Wayne
>
> 在2022年10月19日星期三 UTC+8 19:52:36 写道:
>
>> Dear Wayne,
>>
>> For performance it certainly matters, because some components of our 
>> codes have more low-level checks in debug mode than others, and because the 
>> compiler optimizations do not have the same effect on all parts of our 
>> code. Make sure to test the release mode and see if it makes more sense. 
>> We'd be happy to help from there.
>>
>> Best,
>> Martin
>> On 19.10.22 13:50, 'yy.wayne' via deal.II User Group wrote:
>>
>> Thanks Martin ! 
>>
>> I never considered about Debug or optimized mode before. Cmake result 
>> says I'm using Debug mode.
>>
>> Some more information: The computaiton is done in deal.ii 9.4.0 oracle 
>> virtualBox, with 1 mpi process in qtcreator, and CPU is intel 10600kf. I 
>> didn't change the CMakeLists and just copy from examples, so I think by 
>> default it's debug mode.
>>
>> Best,
>> Wayne
>>
>> 在2022年10月19日星期三 UTC+8 19:40:01 写道:
>>
>>> Dear Wayne,
>>>
>>> I am a bit surprised by your numbers and find them rather high, at least 
>>> with the chosen problem sizes. I would expect the matrix-free solver to run 
>>> in less than a second for 111,000 unknowns on typical computers, not almost 
>>> 10 seconds. I need to honestly say that I do not have a good explanation at 
>>> this point. I did not write this tutorial program, but I know more or less 
>>> what should happen. Let me ask a basic question first: Did you record the 
>>> timings with release mode? The numbers would make more sense if they are 
>>> based on the debug mode.
>>>
>>> Best,
>>> Martin
>>>
>>>
>>> On 19.10.22 12:08, 'yy.wayne' via deal.II User Group wrote:
>>>
>>> Thanks for your reply Peter, 
>>>
>>> The matrix-free run is basic same as in step-75 except I substitute 
>>> coarse grid solver. For fe_degree=6 without GMG and fe_degree in each level 
>>> decrease by 1 for pMG, the solve_system() function runtime is 24.1s. It's 
>>> decomposed to *MatrixFree MG operators construction*(1.36s), MatrixFree 
>>> MG transfers(2.73s),  KLU coarse grid solver(5.7s), *setting 
>>> smoother_data and compute_inverse_diagonal for level matrices*(3.4s) CG 
>>> iteration(9.8s).
>>>
>>> The two bold texts cost a lot more(133s and 62s, respectively) in 
>>> matrix-based multigrid case. I noticed just as in step-16, the finest level 
>>> matrix is assembled twice(one for system_matrix and one for 
>>> mg_matrices[maxlevel]) so assembling time cost more.
>>>
>>> Best,
>>> Wayne
>>>
>>> 在2022年10月19日星期三 UTC+8 17:10:27 写道:
>>>
>>>> Hi Wayne, 
>>>>
>>>> your numbers make totally sense. Don't forget that you are running for 
>>>> high order: degree=6! The number of non-zeroes per element-stiffness 
>>>> matrix 
>>>> is ((degree + 1)^dim)^2 and the cost of computing the element stiffness 
>>>> matrix is even ((degree + 1)^dim)^3 if I am not mistaken (3 nested loop: 
>>>> i, 
>>>> j and q). Higher orders are definitely made for matrix-free algorithms!
>>>>
>>>> Out of curiosity: how large is the setup cost of MG in the case of the 
>>>> matrix-free run? As a comment: don't be surprised that the setup costs are 
>>>> relatively high compared to the solution process: you are probably setting 
>>>> up a new Triangulation-, DoFHander-, MatrixFree-, ... -object per level. 
>>>> In 
>>>> many simulations, you can reuse these objects, since you don't perform AMR 
>>>> every time step. 
>>>>
>>>> Peter
>>>>
>>>> On Wednesday, 19 October 2022 at 10:38:34 UTC+2 yy.wayne wrote:
>>>>
>>>>> Hello everyone, 
>>>>>
>>>>> I modified step-75 a little bit and try to test it's runtime. However 
&

Re: [deal.II] Re: Run time analysis for step-75 with matrix-free or matrix-based method

2022-10-19 Thread 'yy.wayne' via deal.II User Group
Thanks Martin!

I never thought about Debug or optimized mode before. The CMake output says 
I'm using Debug mode.

Some more information: the computation is done in deal.II 9.4.0 in an Oracle 
VirtualBox, with 1 MPI process in Qt Creator, and the CPU is an Intel 10600KF. 
I didn't change the CMakeLists and just copied it from the examples, so I think 
it's Debug mode by default.

Best,
Wayne

在2022年10月19日星期三 UTC+8 19:40:01 写道:

> Dear Wayne,
>
> I am a bit surprised by your numbers and find them rather high, at least 
> with the chosen problem sizes. I would expect the matrix-free solver to run 
> in less than a second for 111,000 unknowns on typical computers, not almost 
> 10 seconds. I need to honestly say that I do not have a good explanation at 
> this point. I did not write this tutorial program, but I know more or less 
> what should happen. Let me ask a basic question first: Did you record the 
> timings with release mode? The numbers would make more sense if they are 
> based on the debug mode.
>
> Best,
> Martin
>
>
> On 19.10.22 12:08, 'yy.wayne' via deal.II User Group wrote:
>
> Thanks for your reply Peter, 
>
> The matrix-free run is basic same as in step-75 except I substitute coarse 
> grid solver. For fe_degree=6 without GMG and fe_degree in each level 
> decrease by 1 for pMG, the solve_system() function runtime is 24.1s. It's 
> decomposed to *MatrixFree MG operators construction*(1.36s), MatrixFree 
> MG transfers(2.73s),  KLU coarse grid solver(5.7s), *setting 
> smoother_data and compute_inverse_diagonal for level matrices*(3.4s) CG 
> iteration(9.8s).
>
> The two bold texts cost a lot more(133s and 62s, respectively) in 
> matrix-based multigrid case. I noticed just as in step-16, the finest level 
> matrix is assembled twice(one for system_matrix and one for 
> mg_matrices[maxlevel]) so assembling time cost more.
>
> Best,
> Wayne
>
> 在2022年10月19日星期三 UTC+8 17:10:27 写道:
>
>> Hi Wayne, 
>>
>> your numbers make totally sense. Don't forget that you are running for 
>> high order: degree=6! The number of non-zeroes per element-stiffness matrix 
>> is ((degree + 1)^dim)^2 and the cost of computing the element stiffness 
>> matrix is even ((degree + 1)^dim)^3 if I am not mistaken (3 nested loop: i, 
>> j and q). Higher orders are definitely made for matrix-free algorithms!
>>
>> Out of curiosity: how large is the setup cost of MG in the case of the 
>> matrix-free run? As a comment: don't be surprised that the setup costs are 
>> relatively high compared to the solution process: you are probably setting 
>> up a new Triangulation-, DoFHander-, MatrixFree-, ... -object per level. In 
>> many simulations, you can reuse these objects, since you don't perform AMR 
>> every time step. 
>>
>> Peter
>>
>> On Wednesday, 19 October 2022 at 10:38:34 UTC+2 yy.wayne wrote:
>>
>>> Hello everyone, 
>>>
>>> I modified step-75 a little bit and try to test it's runtime. However 
>>> the result is kind of inexplainable from my point of view, especially on 
>>> *disproportionate 
>>> assemble time and solve time*. Here are some changes:
>>> 1. a matrix-based version of step75 is contructed to compare with 
>>> matrix-free one.
>>> 2. no mesh refinement and no GMG, and fe_degree is constant across all 
>>> cells within every cycle. Fe_degree adds one after each cycle. I make this 
>>> setting to compare runtime due to fe_degree.
>>> 3. a direct solver on coareset grid. I think it won't affect runtime 
>>> since coarest grid never change
>>>
>>> For final cycle it has fe_degree=6 and DoFs=111,361.
>>> For matrix-based method, overall runtime is 301s where setup system(84s) 
>>> and solve system(214s) take up most. In step-75 solve system actually did 
>>> both multigrid matrices assembling, smoother construction, and CG solving. 
>>> Runtime of this case is shown:
>>> [image: matrix-based.png]
>>> On each level I print time assembling level matrix. *The solve system 
>>> is mostly decomposed to MG matrices assembling(83.9+33.6+...=133s), 
>>> smoother set up(65s), coarse grid solve(6s) and CG solve(2.56).* My 
>>> doubt is why actual CG solve only takes 2.56 out of 301 seconds for this 
>>> problem? The time spent on assembling and smoother construction account too 
>>> much that they seems a burden.
>>>
>>> For matrix-free method however, runtime is much smaller without 
>>> assembling matrices. Besides, CG solve cost more because of more 
>>> computation required by matrix-free I guess. But *smoother construction 
>>> time reduces 

[deal.II] Re: Run time analysis for step-75 with matrix-free or matrix-based method

2022-10-19 Thread 'yy.wayne' via deal.II User Group
Thanks for your reply Peter,

The matrix-free run is basically the same as in step-75, except that I 
substitute the coarse grid solver. For fe_degree=6 without GMG, and fe_degree 
decreasing by 1 on each level for pMG, the solve_system() function runtime is 
24.1s. It decomposes into *MatrixFree MG operators construction* (1.36s), 
MatrixFree MG transfers (2.73s), KLU coarse grid solver (5.7s), *setting 
smoother_data and compute_inverse_diagonal for level matrices* (3.4s), and CG 
iteration (9.8s).

The two bold items cost a lot more (133s and 62s, respectively) in the 
matrix-based multigrid case. I noticed that, just as in step-16, the finest 
level matrix is assembled twice (once for system_matrix and once for 
mg_matrices[maxlevel]), so the assembling time costs more.

Best,
Wayne

在2022年10月19日星期三 UTC+8 17:10:27 写道:

> Hi Wayne,
>
> your numbers make totally sense. Don't forget that you are running for 
> high order: degree=6! The number of non-zeroes per element-stiffness matrix 
> is ((degree + 1)^dim)^2 and the cost of computing the element stiffness 
> matrix is even ((degree + 1)^dim)^3 if I am not mistaken (3 nested loop: i, 
> j and q). Higher orders are definitely made for matrix-free algorithms!
>
> Out of curiosity: how large is the setup cost of MG in the case of the 
> matrix-free run? As a comment: don't be surprised that the setup costs are 
> relatively high compared to the solution process: you are probably setting 
> up a new Triangulation-, DoFHander-, MatrixFree-, ... -object per level. In 
> many simulations, you can reuse these objects, since you don't perform AMR 
> every time step. 
>
> Peter
>
> On Wednesday, 19 October 2022 at 10:38:34 UTC+2 yy.wayne wrote:
>
>> Hello everyone,
>>
>> I modified step-75 a little bit and try to test it's runtime. However the 
>> result is kind of inexplainable from my point of view, especially on 
>> *disproportionate 
>> assemble time and solve time*. Here are some changes:
>> 1. a matrix-based version of step75 is contructed to compare with 
>> matrix-free one.
>> 2. no mesh refinement and no GMG, and fe_degree is constant across all 
>> cells within every cycle. Fe_degree adds one after each cycle. I make this 
>> setting to compare runtime due to fe_degree.
>> 3. a direct solver on coareset grid. I think it won't affect runtime 
>> since coarest grid never change
>>
>> For final cycle it has fe_degree=6 and DoFs=111,361.
>> For matrix-based method, overall runtime is 301s where setup system(84s) 
>> and solve system(214s) take up most. In step-75 solve system actually did 
>> both multigrid matrices assembling, smoother construction, and CG solving. 
>> Runtime of this case is shown:
>> [image: matrix-based.png]
>> On each level I print time assembling level matrix. *The solve system is 
>> mostly decomposed to MG matrices assembling(83.9+33.6+...=133s), smoother 
>> set up(65s), coarse grid solve(6s) and CG solve(2.56).* My doubt is why 
>> actual CG solve only takes 2.56 out of 301 seconds for this problem? The 
>> time spent on assembling and smoother construction account too much that 
>> they seems a burden.
>>
>> For matrix-free method however, runtime is much smaller without 
>> assembling matrices. Besides, CG solve cost more because of more 
>> computation required by matrix-free I guess. But *smoother construction 
>> time reduces significantly* as well is out of my expectation.
>> [image: matrix-free.png]
>>
>> Matrix-free framework saves assembling time but it seems too efficient to 
>> be real. The text in bold are my main confusion. May someone share some 
>> experience on matrix-free and multigrid methods' time consumption?
>>
>> Best,
>> Wayne
>>
>>



Re: [deal.II] step-54 extends to 3d geometry

2022-10-17 Thread 'yy.wayne' via deal.II User Group
Thanks

Let me simplify my question here so someone can answer it later.

In short, I want to import a mesh and a CAD file, attach the CAD geometry as 
the mesh's manifold, and refine the mesh according to the CAD geometry like 
step-54, but in three dimensions. For example, a spherical shell.

   1. What's the best CAD and mesh software to generate the files for 
   deal.II to read and *process*? I hope boundary tags on internal faces 
   are kept. COMSOL, GMSH, Cubit, etc.
   2. How do I adapt the code in step-54? My IGES file gives more than one 
   shell and wire.


在2022年10月17日星期一 UTC+8 21:49:46 写道:

> On 10/17/22 00:09, 'yy.wayne' via deal.II User Group wrote:
> > 
> > So for a imported geometry one has to manually set these properties to 
> > corresponding faces and lines? What's the suggested CAD software to view 
> > IGES's TODO_Shape? In my case, reading the sphere shell into deal.II 
> gives 9 
> > shell, but from GMSH or COMSOL I cannot view those 9 shells.
> > 
> > Secondlly, which mesh format keeps boundary information & solid 
> information 
> > when imported to deal.ii? I read the .msh file from GMSH but boundary 
> tags are 
> > discarded by deal.ii. It's ideal if internal boundary tags are kept.
>
> This exceeds my knowledge of meshing and geometry software. Maybe others 
> here 
> can help with those questions?
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] step-54 extends to 3d geometry

2022-10-17 Thread 'yy.wayne' via deal.II User Group
I think my mistake is that I only set one wire TopoDS_Shape and one shell 
TopoDS_Shape on the projector...

So for an imported geometry one has to manually set these properties on the 
corresponding faces and lines? What's the suggested CAD software to view an 
IGES file's TopoDS_Shapes? In my case, reading the spherical shell into deal.II 
gives 9 shells, but in GMSH or COMSOL I cannot view those 9 shells.

Secondly, which mesh format keeps boundary information & solid information 
when imported into deal.II? I read the .msh file from GMSH but the boundary 
tags are discarded by deal.II. It would be ideal if internal boundary tags 
were kept.
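
To make the first point concrete, here is a rough sketch (placeholder names 
and tolerances, assuming a step-54-style surface mesh, i.e. Triangulation<2, 3>) 
of giving every CAD face and every CAD edge/wire its own projector instead of 
a single one for the whole shape:

#include <deal.II/grid/tria.h>
#include <deal.II/opencascade/manifold_lib.h>
#include <deal.II/opencascade/utilities.h>

#include <vector>

using namespace dealii;

void attach_cad_manifolds(Triangulation<2, 3> &tria, const TopoDS_Shape &shape)
{
  std::vector<TopoDS_Face>   faces;
  std::vector<TopoDS_Edge>   edges;
  std::vector<TopoDS_Vertex> vertices;
  OpenCASCADE::extract_geometrical_shapes(shape, faces, edges, vertices);

  // One surface projector per CAD face ...
  for (unsigned int f = 0; f < faces.size(); ++f)
    tria.set_manifold(1 + f,
                      OpenCASCADE::NormalToMeshProjectionManifold<2, 3>(
                        faces[f], 1e-3));

  // ... and one curve projector per CAD edge/wire, with distinct ids.
  for (unsigned int e = 0; e < edges.size(); ++e)
    tria.set_manifold(1 + faces.size() + e,
                      OpenCASCADE::ArclengthProjectionLineManifold<2, 3>(
                        edges[e], 1e-3));

  // The matching manifold ids still have to be assigned to the mesh entities,
  // e.g. cell->set_all_manifold_ids(...) for faces and
  // cell->face(f)->line(l)->set_manifold_id(...) for edges; which entity gets
  // which id is problem specific.
}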

Best,
Wayne

在2022年10月17日星期一 UTC+8 12:37:47 写道:

> Thanks it seems manifold ids are set correctly now.
>
> But I still get "point is not on manifold" error now even with big 
> tolerance.I think my coarse mesh conform to the geometry similar to the 
> bow_surface in step-54. The IGES geometry is generate by COMSOL, and mesh 
> file transformed from COMSOL to msh by gmsh. 
>
> Best,
> Wayne
> 在2022年10月17日星期一 UTC+8 12:16:49 写道:
>
>> On 10/16/22 19:21, 'yy.wayne' via deal.II User Group wrote: 
>> > 
>> > But I don't find a iterator for edge in CellAccessor. There are cell, 
>> face, 
>> > and vertex iterators, but lArclengthProjectionLineManifold requires a 
>> 1d 
>> > topology. Which part of  mesh should I set manifold? 
>>
>> cell->line(l) or cell->face(f)->line(l) gives you what you want. 
>>
>> Best 
>> W. 
>>
>> -- 
>>  
>> Wolfgang Bangerth email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>>



Re: [deal.II] step-54 extends to 3d geometry

2022-10-16 Thread 'yy.wayne' via deal.II User Group
Thanks, it seems the manifold ids are set correctly now.

But I still get the "point is not on manifold" error, even with a big 
tolerance. I think my coarse mesh conforms to the geometry, similar to the 
bow_surface in step-54. The IGES geometry is generated by COMSOL, and the mesh 
file is converted from COMSOL to .msh by gmsh. 

Best,
Wayne
在2022年10月17日星期一 UTC+8 12:16:49 写道:

> On 10/16/22 19:21, 'yy.wayne' via deal.II User Group wrote:
> > 
> > But I don't find a iterator for edge in CellAccessor. There are cell, 
> face, 
> > and vertex iterators, but lArclengthProjectionLineManifold requires a 1d 
> > topology. Which part of  mesh should I set manifold?
>
> cell->line(l) or cell->face(f)->line(l) gives you what you want.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] step-54 extends to 3d geometry

2022-10-16 Thread 'yy.wayne' via deal.II User Group
There are only a few typedefs of line_iterator, and a private function 
begin_line. I don't think they work here.

在2022年10月17日星期一 UTC+8 09:21:08 写道:

> Thanks for your reply.
>
> But I don't find a iterator for edge in CellAccessor. There are cell, 
> face, and vertex iterators, but lArclengthProjectionLineManifold requires a 
> 1d topology. Which part of  mesh should I set manifold?
>
> 在2022年10月17日星期一 UTC+8 06:49:45 写道:
>
>> On 10/16/22 09:00, 'yy.wayne' via deal.II User Group wrote: 
>> > 
>> > Then I met trouble for 3d case. Now "edge of mesh" is 1d and "face of 
>> mesh" is 
>> > 2d. My understanding is now only surface projection is required and 
>> line 
>> > projection is meaningless. However I keep run into error that "point is 
>> not on 
>> > manifold" even tolerance is 10 times of original iges tolerance. 
>>
>> In 3d, we first refine the edges and then the faces. You need manifolds 
>> for 
>> both 1d and 2d objects. 
>>
>> Best 
>> W. 
>>
>> -- 
>>  
>> Wolfgang Bangerth email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>>



Re: [deal.II] step-54 extends to 3d geometry

2022-10-16 Thread 'yy.wayne' via deal.II User Group
Thanks for your reply.

But I don't find an iterator for edges in CellAccessor. There are cell, face, 
and vertex iterators, but ArclengthProjectionLineManifold requires a 1d 
topology. On which part of the mesh should I set the manifold?

在2022年10月17日星期一 UTC+8 06:49:45 写道:

> On 10/16/22 09:00, 'yy.wayne' via deal.II User Group wrote:
> > 
> > Then I met trouble for 3d case. Now "edge of mesh" is 1d and "face of 
> mesh" is 
> > 2d. My understanding is now only surface projection is required and line 
> > projection is meaningless. However I keep run into error that "point is 
> not on 
> > manifold" even tolerance is 10 times of original iges tolerance.
>
> In 3d, we first refine the edges and then the faces. You need manifolds 
> for 
> both 1d and 2d objects.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



[deal.II] step-75 interpolate BC inside multigrid matrices construction

2022-10-14 Thread 'yy.wayne' via deal.II User Group
Hello, everyone.

I've got a question about step-75 for the use of 
VectorTools::interpolate_boundary_values inside solve_with_gmg() function, 
where mg_matrices are built.

The question is that interpolate_boundary_values is called here with a 
ZeroFunction, while the real PDE in this tutorial has a non-zero Dirichlet BC 
on all boundaries. Why set it to ZeroFunction, and why is the result not 
impacted here?

I came across this problem while combining step-29 and step-75: the former 
requires an absorbing boundary condition (ABC), so it's not a Dirichlet BC 
everywhere. And it seems the ABC is not applied at all; the matrix entries 
corresponding to boundary DoFs are non-zero only on the diagonal. This implies 
they are treated as Dirichlet BCs. 

Thank you



[deal.II] Re: Segmentation fault in Trilinos direct solver

2022-10-11 Thread 'yy.wayne' via deal.II User Group
... sadly, after some modifications the errors show up again (I adapted 
step-29 into this framework, which involves FESystem, with matrix-based 
p-multigrid). I think the error is not in the direct solver, since the coarse 
matrix is initialized successfully in the KLU direct solver. However, the 
worst thing is that *no error information is returned*, making it really 
difficult to debug. Maybe the problem is in the transfer between MG levels? 

I'm running on the 1.22 version deal.II VirtualBox, so the environment should 
be fine. 

Best,
Wayne
在2022年10月12日星期三 UTC+8 09:16:17 写道:

> Thank for your reply,
>  
> Actually I just kill this error by writing the code again from scratch. I 
> think the problem might be "system_matrix.clear()" is missed befor 
> system_matrix.reinit().
>
> 在2022年10月12日星期三 UTC+8 04:05:32 写道:
>
>> Hello,
>>
>> I can't identify the problem right away, but here are some tips in 
>> debugging it:
>>
>>- Does the problem persist if you disable both h- and p-refinement? 
>>If so, than an issue might be that your data structures haven't been 
>>reinitialized after you updated your discretization.
>>- Do you run the code in Debug mode?
>>- Do you use the latest version of Trilinos?
>>
>> Marc
>>
>> On Sunday, October 9, 2022 at 7:02:26 AM UTC-6 yy.wayne wrote:
>>
>>> In step-75 I try to replace the coarse mg solver with a direct solver, 
>>> which is TrilinosWrappers::SolverDirect. However I get Segmentation error 
>>> calling the direct solver, at the last step of SolverDirect::solve():
>>>
>>> solver_control.check(0,0);
>>>
>>>  Where could be the mistake?  (The deal.ii version is 9.4.0)
>>>
>>



[deal.II] Re: Segmentation fault in Trilinos direct solver

2022-10-11 Thread 'yy.wayne' via deal.II User Group
Thanks for your reply,
 
Actually I just killed this error by writing the code again from scratch. I 
think the problem might be that "system_matrix.clear()" was missing before 
system_matrix.reinit().
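
For what it's worth, a minimal sketch of that order of operations (placeholder 
names, following the usual step-40-style Trilinos setup rather than the exact 
code in question):

#include <deal.II/base/index_set.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>

void reset_system_matrix(dealii::TrilinosWrappers::SparseMatrix &system_matrix,
                         const dealii::IndexSet                 &locally_owned_dofs,
                         const dealii::DynamicSparsityPattern   &dsp,
                         const MPI_Comm                          mpi_communicator)
{
  // Release the previously wrapped matrix before re-initializing with the new
  // sparsity pattern; skipping this call is what the message above suspects
  // caused the earlier segmentation fault.
  system_matrix.clear();
  system_matrix.reinit(locally_owned_dofs,
                       locally_owned_dofs,
                       dsp,
                       mpi_communicator);
}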

在2022年10月12日星期三 UTC+8 04:05:32 写道:

> Hello,
>
> I can't identify the problem right away, but here are some tips in 
> debugging it:
>
>- Does the problem persist if you disable both h- and p-refinement? If 
>so, than an issue might be that your data structures haven't been 
>reinitialized after you updated your discretization.
>- Do you run the code in Debug mode?
>- Do you use the latest version of Trilinos?
>
> Marc
>
> On Sunday, October 9, 2022 at 7:02:26 AM UTC-6 yy.wayne wrote:
>
>> In step-75 I try to replace the coarse mg solver with a direct solver, 
>> which is TrilinosWrappers::SolverDirect. However I get Segmentation error 
>> calling the direct solver, at the last step of SolverDirect::solve():
>>
>> solver_control.check(0,0);
>>
>>  Where could be the mistake?  (The deal.ii version is 9.4.0)
>>
>



[deal.II] Segmentation fault in Trilinos direct solver

2022-10-09 Thread 'yy.wayne' via deal.II User Group
In step-75 I try to replace the coarse mg solver with a direct solver, 
which is TrilinosWrappers::SolverDirect. However I get Segmentation error 
calling the direct solver, at the last step of SolverDirect::solve():

solver_control.check(0,0);

 Where could be the mistake?  (The deal.ii version is 9.4.0)

/* -
 *
 * Copyright (C) 2021 by the deal.II authors
 *
 * This file is part of the deal.II library.
 *
 * The deal.II library is free software; you can use it, redistribute
 * it, and/or modify it under the terms of the GNU Lesser General
 * Public License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * The full text of the license can be found in the file LICENSE.md at
 * the top level directory of deal.II.
 *
 * -

 *
 * Author: Marc Fehling, Colorado State University, 2021
 * Peter Munch, Technical University of Munich and Helmholtz-Zentrum
 *  hereon, 2021
 * Wolfgang Bangerth, Colorado State University, 2021
 */


// @sect3{Include files}
//
// The following include files have been used and discussed in previous tutorial
// programs, especially in step-27 and step-40.
#include 
#include 
#include 
#include 
#include 

#include 
#include 

#include 
#include 

#include 

#include 
#include 

#include 
#include 

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 

#include 
#include 
#include 

// For load balancing we will assign individual weights on cells, and for that
// we will use the class parallel::CellWeights.
#include 

// The solution function requires a transformation from Cartesian to polar
// coordinates. The GeometricUtilities::Coordinates namespace provides the
// necessary tools.
#include 
#include 

// The following include files will enable the MatrixFree functionality.
#include 
#include 
#include 

// We will use LinearAlgebra::distributed::Vector for linear algebra operations.
#include 

// We are left to include the files needed by the multigrid solver.
#include 
#include 
#include 
#include 
#include 
#include 
#include 

//just for reading .h files
#include 
namespace Step75
{
  using namespace dealii;

  ConditionalOStream &
  getPCOut()
  {
static ConditionalOStream pcout(
  std::cout, (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0));
return pcout;
  }

  TimerOutput &
  getTimer()
  {
static TimerOutput computing_timer(MPI_COMM_WORLD,
   getPCOut(),
   TimerOutput::never,
   TimerOutput::wall_times);
return computing_timer;
  }

  // @sect3{The Solution class template}

  // We have an analytic solution to the scenario at our disposal. We will use
  // this solution to impose boundary conditions for the numerical solution of
  // the problem. The formulation of the solution requires a transformation to
  // polar coordinates. To transform from Cartesian to spherical coordinates, we
  // will use a helper function from the GeometricUtilities::Coordinates
  // namespace. The first two coordinates of this transformation correspond to
  // polar coordinates in the x-y-plane.
  template <int dim>
  class Solution : public Function<dim>
  {
  public:
    Solution()
      : Function<dim>()
    {}

    virtual double value(const Point<dim> &p,
                         const unsigned int /*component*/) const override
    {
      const std::array<double, dim> p_sphere =
        GeometricUtilities::Coordinates::to_spherical(p);

      constexpr const double alpha = 2. / 3.;
      return std::pow(p_sphere[0], alpha) * std::sin(alpha * p_sphere[1]);
    }
  };



  // @sect3{Parameters}

  // For this tutorial, we will use a simplified set of parameters. It is also
  // possible to use a ParameterHandler class here, but to keep this tutorial
  // short we decided on using simple structs. The actual intention of all these
  // parameters will be described in the upcoming classes at their respective
  // location where they are used.
  //
  // The following parameter set controls the coarse-grid solver, the smoothers,
  // and the inter-grid transfer scheme of the multigrid mechanism.
  // We populate it with default parameters.
  struct MultigridParameters
  

Re: [deal.II] Re: Matrix-based p-multigrid preconditioning

2022-10-08 Thread 'yy.wayne' via deal.II User Group
I found the problem: FEValuesBase is not derived from Subscriptor in 9.3.0, 
while it is in 9.4.0. So it's the deal.II version that causes this error.
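
A small hedged illustration of what that means: SmartPointer<T> calls 
T::subscribe()/unsubscribe(), so T has to derive from Subscriptor, which (as 
observed above) hp::FEValues only does from 9.4.0 on.

#include <deal.II/base/smartpointer.h>
#include <deal.II/base/subscriptor.h>

// Any class tracked by SmartPointer must provide subscribe()/unsubscribe(),
// which it gets by deriving from Subscriptor.
struct Tracked : public dealii::Subscriptor
{};

int main()
{
  Tracked                       obj;
  dealii::SmartPointer<Tracked> ptr(&obj); // compiles: Tracked is a Subscriptor

  // With a type that does not derive from Subscriptor, the same line fails
  // with exactly the "no member named 'unsubscribe'" error quoted above.
  return 0;
}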

在2022年10月8日星期六 UTC+8 21:40:01 写道:

> I modify step-75 into a matrix-based, basiclly combining the hpbox repo in 
> github. I came across a error while building:
>
> /home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:710:
>  
> error: ‘class dealii::hp::FEValues<2, 2>’ has no member named ‘unsubscribe’
> In file included from 
> /home/ubuntu/deal.II/installed/include/deal.II/distributed/tria.h:22,
>  from 
> /home/ubuntu/deal.II/installed/include/deal.II/distributed/grid_refinement.h:24,
>  from 
> /home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:34:
> /home/ubuntu/deal.II/installed/include/deal.II/base/smartpointer.h: In 
> instantiation of ‘dealii::SmartPointer::~SmartPointer() [with T = 
> dealii::hp::FEValues<2, 2>; P = void]’:
> /home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:220:9:
>  
>   required from ‘void std::default_delete<_Tp>::operator()(_Tp*) const 
> [with _Tp = Step75::MatrixBasedOperator<2, double>]’
> /usr/include/c++/9/bits/unique_ptr.h:292:17:   required from 
> ‘std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = 
> Step75::MatrixBasedOperator<2, double>; _Dp = 
> std::default_delete >]’
> /home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:710:9:
>  
>   required from here
> /home/ubuntu/deal.II/installed/include/deal.II/base/smartpointer.h:277:8: 
> error: ‘class dealii::hp::FEValues<2, 2>’ has no member named ‘unsubscribe’
>   277 | t->unsubscribe(_to_object_is_alive, id);
>   | ~~~^~~
>
> I cannot figure out why this is wrong since it's same as hpbox. Could this 
> be due to different version (mine is 9.3.0)?
> 在2022年10月7日星期五 UTC+8 13:45:41 写道:
>
>> Yes.
>>
>> Peter
>> On 07.10.22 07:44, 'yy.wayne' via deal.II User Group wrote:
>>
>> I see. So that's why we create a new dof_handlers in solve_with_gmg() 
>> function instead of continue using dof_handler as other tutorials. If I 
>> want to build mg_matrices and their SparsityPattern, they should based on 
>> dof_handlers as well, is it correct? 
>>
>> Thank you
>>
>> 在2022年10月7日星期五 UTC+8 13:35:16 写道:
>>
>>> Hej, 
>>>
>>> > One thing I'm confused is that why we use 
>>> "make_hanging_node_constraints" and "interpolate_boundary_values" for a new 
>>> constraints in solve_with_gmg in step-75, instead of the 
>>> boundary_constraints as done in previous MG tutorials.
>>>
>>> Your observation is correct. It is related to the fact that p-multigrid 
>>> in deal.II uses the new global-coarsening infrastructure (it was introduced 
>>> in release 7.3). In the other tutorials, the local-smoothing infrastructure 
>>> is used. 
>>>
>>> In a nutshell, the difference is that local smoothing is working on 
>>> refinement levels, while global coarsening works on the complete active 
>>> levels of multiple Triangulation/DoFHandler objects. In the case of 
>>> h-multigrid, one works on multiple meshes, which are created by globally 
>>> coarsening the finest mesh (this is the reason for the name "global 
>>> coarsening"); in the case of p-multigrid, one would work with the same 
>>> Triangulation but would distribute different FEs on different DoFHandler 
>>> objects. Since you work on the active level, you can simple use the 
>>> functions you would use if you don't work on refinement levels.
>>>
>>> For more details see our publication: https://arxiv.org/abs/2203.12292
>>>
>>> Hope this help,
>>> Peter
>>>
>>> On Wednesday, 5 October 2022 at 14:17:03 UTC+2 yy.wayne wrote:
>>>
>>>> Hello sir, 
>>>>
>>>> One thing I'm confused is that why we use 
>>>> "make_hanging_node_constraints" and "interpolate_boundary_values" for a 
>>>> new 
>>>> constraints in solve_with_gmg in step-75, instead of the 
>>>> boundary_constraints as done in previous MG tutorials.
>>>>
>>>> 在2022年10月4日星期二 UTC+8 00:11:49 写道:
>>>>
>>>>> Hi!
>>>>>
>>>>> The "Possibilities for extensions" section of step-75 gives you a few 
>>>>> hints on how to achieve p-multigrid preconditioning with matrix-based 
>>>>> methods:
>>>>>
>>>&g

Re: [deal.II] Re: Matrix-based p-multigrid preconditioning

2022-10-08 Thread 'yy.wayne' via deal.II User Group
I modified step-75 into a matrix-based version, basically combining it with 
the hpbox repo on GitHub. I came across an error while building:

/home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:710: 
error: ‘class dealii::hp::FEValues<2, 2>’ has no member named ‘unsubscribe’
In file included from 
/home/ubuntu/deal.II/installed/include/deal.II/distributed/tria.h:22,
 from 
/home/ubuntu/deal.II/installed/include/deal.II/distributed/grid_refinement.h:24,
 from 
/home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:34:
/home/ubuntu/deal.II/installed/include/deal.II/base/smartpointer.h: In 
instantiation of ‘dealii::SmartPointer::~SmartPointer() [with T = 
dealii::hp::FEValues<2, 2>; P = void]’:
/home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:220:9:
 
  required from ‘void std::default_delete<_Tp>::operator()(_Tp*) const 
[with _Tp = Step75::MatrixBasedOperator<2, double>]’
/usr/include/c++/9/bits/unique_ptr.h:292:17:   required from 
‘std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = 
Step75::MatrixBasedOperator<2, double>; _Dp = 
std::default_delete >]’
/home/ubuntu/Desktop/my_projects/step-75-matrixbased/source/step-75-mb.cc:710:9:
 
  required from here
/home/ubuntu/deal.II/installed/include/deal.II/base/smartpointer.h:277:8: 
error: ‘class dealii::hp::FEValues<2, 2>’ has no member named ‘unsubscribe’
  277 | t->unsubscribe(_to_object_is_alive, id);
  | ~~~^~~

I cannot figure out why this is wrong since it's the same as hpbox. Could this 
be due to a different version (mine is 9.3.0)?
在2022年10月7日星期五 UTC+8 13:45:41 写道:

> Yes.
>
> Peter
> On 07.10.22 07:44, 'yy.wayne' via deal.II User Group wrote:
>
> I see. So that's why we create a new dof_handlers in solve_with_gmg() 
> function instead of continue using dof_handler as other tutorials. If I 
> want to build mg_matrices and their SparsityPattern, they should based on 
> dof_handlers as well, is it correct? 
>
> Thank you
>
> 在2022年10月7日星期五 UTC+8 13:35:16 写道:
>
>> Hej, 
>>
>> > One thing I'm confused is that why we use 
>> "make_hanging_node_constraints" and "interpolate_boundary_values" for a new 
>> constraints in solve_with_gmg in step-75, instead of the 
>> boundary_constraints as done in previous MG tutorials.
>>
>> Your observation is correct. It is related to the fact that p-multigrid 
>> in deal.II uses the new global-coarsening infrastructure (it was introduced 
>> in release 7.3). In the other tutorials, the local-smoothing infrastructure 
>> is used. 
>>
>> In a nutshell, the difference is that local smoothing is working on 
>> refinement levels, while global coarsening works on the complete active 
>> levels of multiple Triangulation/DoFHandler objects. In the case of 
>> h-multigrid, one works on multiple meshes, which are created by globally 
>> coarsening the finest mesh (this is the reason for the name "global 
>> coarsening"); in the case of p-multigrid, one would work with the same 
>> Triangulation but would distribute different FEs on different DoFHandler 
>> objects. Since you work on the active level, you can simple use the 
>> functions you would use if you don't work on refinement levels.
>>
>> For more details see our publication: https://arxiv.org/abs/2203.12292
>>
>> Hope this help,
>> Peter
>>
>> On Wednesday, 5 October 2022 at 14:17:03 UTC+2 yy.wayne wrote:
>>
>>> Hello sir, 
>>>
>>> One thing I'm confused is that why we use 
>>> "make_hanging_node_constraints" and "interpolate_boundary_values" for a new 
>>> constraints in solve_with_gmg in step-75, instead of the 
>>> boundary_constraints as done in previous MG tutorials.
>>>
>>> 在2022年10月4日星期二 UTC+8 00:11:49 写道:
>>>
>>>> Hi!
>>>>
>>>> The "Possibilities for extensions" section of step-75 gives you a few 
>>>> hints on how to achieve p-multigrid preconditioning with matrix-based 
>>>> methods:
>>>>
>>>> https://www.dealii.org/current/doxygen/deal.II/step_75.html#Solvewithmatrixbasedmethods
>>>>
>>>> If you look for an example implementation, the "hpbox" tool 
>>>> demonstrates both matrix-based and matrix-free implementations of the 
>>>> p-multigrid method.
>>>> https://zenodo.org/record/6425947
>>>> https://github.com/marcfehling/hpbox/
>>>>
>>>> Best,
>>>> Marc
>>>>
>>>> On Monday, October 3, 2022 at 3:39:51 AM UTC-6 yy.wayne wrote:
>>>>

[deal.II] Re: Matrix-based p-multigrid preconditioning

2022-10-06 Thread 'yy.wayne' via deal.II User Group
I see. So that's why we create new dof_handlers in the solve_with_gmg() 
function instead of continuing to use dof_handler as in the other tutorials. 
If I want to build mg_matrices and their SparsityPattern, they should be based 
on dof_handlers as well, is that correct?
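
(If it helps, here is a rough sketch of what that could look like, with 
placeholder container names and deal.II's serial SparseMatrix instead of the 
Trilinos types used in hpbox: every level builds its constraints, sparsity 
pattern, and matrix from its own DoFHandler, just like the active-level 
system.)

#include <deal.II/base/function.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/sparsity_pattern.h>
#include <deal.II/multigrid/mg_level_object.h>
#include <deal.II/numerics/vector_tools.h>

using namespace dealii;

template <int dim>
void build_level_matrices(const MGLevelObject<DoFHandler<dim>>     &dof_handlers,
                          MGLevelObject<AffineConstraints<double>> &constraints,
                          MGLevelObject<SparsityPattern>           &sparsity_patterns,
                          MGLevelObject<SparseMatrix<double>>      &mg_matrices)
{
  for (unsigned int level = dof_handlers.min_level();
       level <= dof_handlers.max_level();
       ++level)
    {
      const DoFHandler<dim> &dof_handler = dof_handlers[level];

      // Per-level constraints: hanging nodes plus homogeneous boundary
      // values, as in step-75's solve_with_gmg().
      constraints[level].clear();
      DoFTools::make_hanging_node_constraints(dof_handler, constraints[level]);
      VectorTools::interpolate_boundary_values(
        dof_handler, 0, Functions::ZeroFunction<dim>(), constraints[level]);
      constraints[level].close();

      // Per-level sparsity pattern and matrix, built from the same DoFHandler;
      // the usual cell loop then assembles into mg_matrices[level].
      DynamicSparsityPattern dsp(dof_handler.n_dofs(), dof_handler.n_dofs());
      DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints[level]);
      sparsity_patterns[level].copy_from(dsp);
      mg_matrices[level].reinit(sparsity_patterns[level]);
    }
}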

Thank you

在2022年10月7日星期五 UTC+8 13:35:16 写道:

> Hej,
>
> > One thing I'm confused is that why we use 
> "make_hanging_node_constraints" and "interpolate_boundary_values" for a new 
> constraints in solve_with_gmg in step-75, instead of the 
> boundary_constraints as done in previous MG tutorials.
>
> Your observation is correct. It is related to the fact that p-multigrid in 
> deal.II uses the new global-coarsening infrastructure (it was introduced in 
> release 7.3). In the other tutorials, the local-smoothing infrastructure is 
> used. 
>
> In a nutshell, the difference is that local smoothing is working on 
> refinement levels, while global coarsening works on the complete active 
> levels of multiple Triangulation/DoFHandler objects. In the case of 
> h-multigrid, one works on multiple meshes, which are created by globally 
> coarsening the finest mesh (this is the reason for the name "global 
> coarsening"); in the case of p-multigrid, one would work with the same 
> Triangulation but would distribute different FEs on different DoFHandler 
> objects. Since you work on the active level, you can simple use the 
> functions you would use if you don't work on refinement levels.
>
> For more details see our publication: https://arxiv.org/abs/2203.12292
>
> Hope this help,
> Peter
>
> On Wednesday, 5 October 2022 at 14:17:03 UTC+2 yy.wayne wrote:
>
>> Hello sir,
>>
>> One thing I'm confused is that why we use "make_hanging_node_constraints" 
>> and "interpolate_boundary_values" for a new constraints in solve_with_gmg 
>> in step-75, instead of the boundary_constraints as done in previous MG 
>> tutorials.
>>
>> 在2022年10月4日星期二 UTC+8 00:11:49 写道:
>>
>>> Hi!
>>>
>>> The "Possibilities for extensions" section of step-75 gives you a few 
>>> hints on how to achieve p-multigrid preconditioning with matrix-based 
>>> methods:
>>>
>>> https://www.dealii.org/current/doxygen/deal.II/step_75.html#Solvewithmatrixbasedmethods
>>>
>>> If you look for an example implementation, the "hpbox" tool demonstrates 
>>> both matrix-based and matrix-free implementations of the p-multigrid method.
>>> https://zenodo.org/record/6425947
>>> https://github.com/marcfehling/hpbox/
>>>
>>> Best,
>>> Marc
>>>
>>> On Monday, October 3, 2022 at 3:39:51 AM UTC-6 yy.wayne wrote:
>>>
 Hello,

 I plan to test the converge of polynomial multigrid preconditioning on 
 a problem, with matrix-based framework. However, step-75 is a bit 
 confusing 
 since I cannot clearly separate classses contributed by matrix-free and 
 p-multigrid. 

 Specifically, in step-16 (with MeshWorker) and step-56 (assemble 
 cell_matrix by hand) there are assemble_system() and assemble_multigrid(), 
 while in step-75 there are solve_system() which calls solve_with_gmg and 
 mg_solve. It is due to matrix-free or p-multigrid? By the way, is there a 
 code gallery for matrix-based polynomial-multigrid ? 

 Thanks

>>>



[deal.II] Re: Matrix-based p-multigrid preconditioning

2022-10-05 Thread 'yy.wayne' via deal.II User Group
Hello sir,

One thing I'm confused about is why we use "make_hanging_node_constraints" 
and "interpolate_boundary_values" for a new constraints object in 
solve_with_gmg in step-75, instead of the boundary_constraints as done in 
previous MG tutorials.

在2022年10月4日星期二 UTC+8 00:11:49 写道:

> Hi!
>
> The "Possibilities for extensions" section of step-75 gives you a few 
> hints on how to achieve p-multigrid preconditioning with matrix-based 
> methods:
>
> https://www.dealii.org/current/doxygen/deal.II/step_75.html#Solvewithmatrixbasedmethods
>
> If you look for an example implementation, the "hpbox" tool demonstrates 
> both matrix-based and matrix-free implementations of the p-multigrid method.
> https://zenodo.org/record/6425947
> https://github.com/marcfehling/hpbox/
>
> Best,
> Marc
>
> On Monday, October 3, 2022 at 3:39:51 AM UTC-6 yy.wayne wrote:
>
>> Hello,
>>
>> I plan to test the converge of polynomial multigrid preconditioning on a 
>> problem, with matrix-based framework. However, step-75 is a bit confusing 
>> since I cannot clearly separate classses contributed by matrix-free and 
>> p-multigrid. 
>>
>> Specifically, in step-16 (with MeshWorker) and step-56 (assemble 
>> cell_matrix by hand) there are assemble_system() and assemble_multigrid(), 
>> while in step-75 there are solve_system() which calls solve_with_gmg and 
>> mg_solve. It is due to matrix-free or p-multigrid? By the way, is there a 
>> code gallery for matrix-based polynomial-multigrid ? 
>>
>> Thanks
>>
>



[deal.II] Re: Matrix-based p-multigrid preconditioning

2022-10-04 Thread 'yy.wayne' via deal.II User Group
Thank you, I'll read the implementation in "hpbox" then.

在2022年10月4日星期二 UTC+8 00:11:49 写道:

> Hi!
>
> The "Possibilities for extensions" section of step-75 gives you a few 
> hints on how to achieve p-multigrid preconditioning with matrix-based 
> methods:
>
> https://www.dealii.org/current/doxygen/deal.II/step_75.html#Solvewithmatrixbasedmethods
>
> If you look for an example implementation, the "hpbox" tool demonstrates 
> both matrix-based and matrix-free implementations of the p-multigrid method.
> https://zenodo.org/record/6425947
> https://github.com/marcfehling/hpbox/
>
> Best,
> Marc
>
> On Monday, October 3, 2022 at 3:39:51 AM UTC-6 yy.wayne wrote:
>
>> Hello,
>>
>> I plan to test the converge of polynomial multigrid preconditioning on a 
>> problem, with matrix-based framework. However, step-75 is a bit confusing 
>> since I cannot clearly separate classses contributed by matrix-free and 
>> p-multigrid. 
>>
>> Specifically, in step-16 (with MeshWorker) and step-56 (assemble 
>> cell_matrix by hand) there are assemble_system() and assemble_multigrid(), 
>> while in step-75 there are solve_system() which calls solve_with_gmg and 
>> mg_solve. It is due to matrix-free or p-multigrid? By the way, is there a 
>> code gallery for matrix-based polynomial-multigrid ? 
>>
>> Thanks
>>
>



[deal.II] Matrix-based p-multigrid preconditioning

2022-10-03 Thread 'yy.wayne' via deal.II User Group
Hello,

I plan to test the convergence of polynomial multigrid preconditioning on a 
problem, with a matrix-based framework. However, step-75 is a bit confusing 
since I cannot clearly separate the classes contributed by matrix-free from 
those contributed by p-multigrid. 

Specifically, in step-16 (with MeshWorker) and step-56 (assembling the 
cell_matrix by hand) there are assemble_system() and assemble_multigrid(), 
while in step-75 there is solve_system(), which calls solve_with_gmg and 
mg_solve. Is this due to matrix-free or p-multigrid? By the way, is there a 
code gallery example for matrix-based polynomial multigrid? 

Thanks



Re: [deal.II] Fast solver for multigrid coarse level

2022-09-29 Thread 'yy.wayne' via deal.II User Group
Thank you Heister,

AMG will not fit my problem (it does not converge), but I think 
SparseDirectUMFPACK can be applied in the same way.
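
For reference, a minimal sketch of that idea in the spirit of the test linked 
above (placeholder names; in real code these objects would be class members so 
they outlive the Multigrid object):

#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/sparse_direct.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>
#include <deal.II/multigrid/mg_coarse.h>

using namespace dealii;

void make_coarse_solver(const SparseMatrix<double> &coarse_matrix)
{
  // Factorize the coarsest-level matrix once with UMFPACK ...
  SparseDirectUMFPACK umfpack;
  umfpack.initialize(coarse_matrix);

  // ... and use it as the preconditioner of a Krylov solver, so that the
  // coarse solve converges in essentially one iteration.
  ReductionControl         coarse_control(100, 1e-16, 1e-8);
  SolverCG<Vector<double>> coarse_cg(coarse_control);

  MGCoarseGridIterativeSolver<Vector<double>,
                              SolverCG<Vector<double>>,
                              SparseMatrix<double>,
                              SparseDirectUMFPACK>
    mg_coarse(coarse_cg, coarse_matrix, umfpack);

  // mg_coarse is then passed to the Multigrid object as usual.
}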

在2022年9月29日星期四 UTC+8 21:44:02 写道:

> We have a number of tests that do something more sophisticated like an
> AMG (which is probably what you want to do if you are running in
> parallel with many coarse cells). See for example
>
> https://github.com/dealii/dealii/blob/915ca70fa7c3c2cfa68b632aa162226145b1907f/tests/multigrid/mg_coarse_01.cc#L434
>
> You should be able to get SparseDirectUMFPACK to work in the same way.
>
> On Thu, Sep 29, 2022 at 3:40 AM 'yy.wayne' via deal.II User Group
>  wrote:
> >
> > Hello everyone!
> >
> > In tutorials the coarest level multigrid is solved either by householder 
> direct sovler for FullMatrix or by a iterative solver based on 
> MGCoarseGridHouseholder and MGCoarseGridIterativeSolver, respectively. 
> However I want to solve the coarest level with a sparse direct solver. Both 
> 2 MGCoarseGrid class access MGCoarseGridBase, so SparseDirectUMFPACK cannot 
> be used.
> >
> > Take step-16 for example. The only way possible I my mind is using 
> PETScMUMPS and put it in MGCoarseGridIterativeSolver, and matrix & vector 
> type need to be changed to PETSc accordingly. Is this gonna work?
> >
> > --
> > The deal.II project is located at http://www.dealii.org/
> > For mailing list/forum options, see 
> https://groups.google.com/d/forum/dealii?hl=en
> > ---
> > You received this message because you are subscribed to the Google 
> Groups "deal.II User Group" group.
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to dealii+un...@googlegroups.com.
> > To view this discussion on the web visit 
> https://groups.google.com/d/msgid/dealii/103ebcb1-ecfc-4bc8-bc06-a0ed07968834n%40googlegroups.com
> .
>
>
>
> -- 
> Timo Heister
> http://www.math.clemson.edu/~heister/
>



[deal.II] Fast solver for multigrid coarse level

2022-09-29 Thread 'yy.wayne' via deal.II User Group
Hello everyone!

In the tutorials, the coarsest multigrid level is solved either by a 
Householder direct solver for a FullMatrix or by an iterative solver, based on 
MGCoarseGridHouseholder and MGCoarseGridIterativeSolver, respectively. *However, 
I want to solve the coarsest level with a sparse direct solver.* Both 
MGCoarseGrid classes derive from MGCoarseGridBase, so SparseDirectUMFPACK 
cannot be used directly.

Take step-16 for example. The only way possible in my mind is *using 
PETSc MUMPS and putting it in MGCoarseGridIterativeSolver,* and the matrix & 
vector types need to be changed to PETSc accordingly. Is this gonna work?



Re: [deal.II] Error in complex GMRES

2022-09-27 Thread 'yy.wayne' via deal.II User Group
Yeah, I think it makes sense. Thank you again.

在2022年9月28日星期三 UTC+8 11:08:17 写道:

> On 9/27/22 21:02, 'yy.wayne' via deal.II User Group wrote:
> > 
> > Just one last question. So if I have the same problem coded in 
> real+imaginary 
> > or complex and solve with SparseDirect solver or Krylov subspace 
> methods. Will 
> > there be difference in time consumption (due to larger matirx / or 
> harder to 
> > solve complex arithmetic embedded in those solver)?
>
> For Krylov subspace methods, everything depends on the preconditioner and 
> it 
> is not possible to make general predictions without knowing what you want 
> to 
> use for the preconditioner.
>
> For SparseDirectUMFPACK, I suspect that using complex arithmetic is faster 
> than using separate real/imaginary components. By how much is something I 
> have 
> never measured -- you probably need to measure this yourself. It would not 
> greatly surprise me if it was faster by a factor of 4 or 6, but I don't 
> think 
> it should be a factor of 20 for example.
>
> Best
> Wolfgang
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Error in complex GMRES

2022-09-27 Thread 'yy.wayne' via deal.II User Group
Thank you sir.

Just one last question. If I have the same problem coded either in 
real+imaginary form or in complex form, and solve it with a sparse direct 
solver or Krylov subspace methods, will there be a difference in time 
consumption (due to the larger matrix, or to the harder-to-solve complex 
arithmetic embedded in those solvers)?

在2022年9月28日星期三 UTC+8 10:57:13 写道:

> On 9/27/22 20:52, 'yy.wayne' via deal.II User Group wrote:
> > I'm using the Oracle VirtualBox environment by default, so I guess it's 
> not 
> > complied with PETSc complex arithmetic.
>
> Yes, that's probably true. You will have to install things from scratch on 
> your system if you want PETSc with complex arithmetic. It is a pity that 
> PETSc 
> doesn't make that easy :-(
>
>
> > By the way, is using complex coding approved or separate to real and 
> imaginary 
> > parts? I read another discussion in gouglegroup said complex coding 
> might lead 
> > to error for   integral. But separate a problem into real and 
> > imaginary doubles the matirx size and DoFs, isn't it?
>
> Well yes, but then you do need to store real and imaginary parts somewhere 
> anyway. So this isn't likely going to be a big bottleneck.
>
> The post you refer to simply points out that it is easy to make mistakes 
> if 
> you work in complex, rather than separate real/imaginary, arithmetic. But 
> it 
> can be done correctly, as shown in step-58.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Error in complex GMRES

2022-09-27 Thread 'yy.wayne' via deal.II User Group
Thank you.
I'm using the Oracle VirtualBox environment by default, so I guess it's not 
compiled with PETSc complex arithmetic.
By the way, is coding with complex values recommended, or should one separate 
the real and imaginary parts? I read another discussion in this Google group 
which said complex coding might lead to errors for the   integral. But 
separating a problem into real and imaginary parts doubles the matrix size and 
the number of DoFs, doesn't it?

在2022年9月28日星期三 UTC+8 10:47:51 写道:

> On 9/27/22 00:25, 'yy.wayne' via deal.II User Group wrote:
> > As I changing built-in solver to PETScWrappers, I failed to converse a 
> > "PETSc::Wrappers:Vector" variable to a "Vector>" 
> one, as 
> > shown in line 464 of my code. I tried
> > "Vector> a = b; (where b is a PETSc Vector)"
> > or
> > "Vector> a(b)"
> > but both failed. In step-17 however, both is good. Is it because the 
> complex 
> > type or other mistake inmy code?
> > 
>
> I don't know. You may want to show the compiler's error message.
>
> Did you compile PETSc with the flag necessary to support complex 
> arithmetic?
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>



Re: [deal.II] Error in complex GMRES

2022-09-27 Thread 'yy.wayne' via deal.II User Group
Thank you Wolfgang.
While changing the built-in solver to PETScWrappers, I failed to convert a 
PETScWrappers vector variable to a "Vector<std::complex<double>>" one, 
as shown in line 464 of my code. I tried 
"Vector<std::complex<double>> a = b; (where b is a PETSc vector)"
or
"Vector<std::complex<double>> a(b)"
but both failed. In step-17, however, both work. Is it because of the 
complex type, or some other mistake in my code?

Best
Wayne

On Monday, September 26, 2022 at 23:32:40 UTC+8, ... wrote:

> On 9/26/22 09:22, 'yy.wayne' via deal.II User Group wrote:
> > I'm trying to apply a Krylov solver to a complex-valued problem. Basically, 
> > I have modified step-16 into a complex-valued problem (without separating 
> > the real and imaginary parts) and solved it with a direct solver 
> > successfully. However, it fails with the GMRES solver. Does the deal.II 
> > built-in GMRES solver only support the double type?
>
> Yes. You probably want to take a look at step-58 as well, along with the 
> discussion in the results section there.
>
> Best
> W.
>
> -- 
> 
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>

// (The #include directives of the attached file were stripped by the list
//  archive, which removed the header names between the angle brackets.)



// complex valued version of Step29
// adaptive mesh
namespace Step29
{
  using namespace dealii;


  // Complex-valued Dirichlet data: a single scalar component returning 1 + 0i.
  template <int dim>
  class DirichletBoundaryValues : public Function<dim, std::complex<double>>
  {
  public:
    DirichletBoundaryValues()
      : Function<dim, std::complex<double>>(1)
    {}

    virtual std::complex<double>
    value(const Point<dim> &p, const unsigned int component = 0) const override;
  };

  template <int dim>
  std::complex<double>
  DirichletBoundaryValues<dim>::value(const Point<dim> &,
                                      const unsigned int component) const
  {
    (void)component;
    Assert(component == 0, ExcIndexRange(component, 0, 1));
    return std::complex<double>(1, 0);
  }


  class ParameterReader : public Subscriptor
  {
  public:
    ParameterReader(ParameterHandler &);
    void read_parameters(const std::string &);

  private:
    void declare_parameters();
    ParameterHandler &prm;
  };

  ParameterReader::ParameterReader(ParameterHandler &paramhandler)
    : prm(paramhandler)
  {}


  void ParameterReader::declare_parameters()
  {
prm.enter_subsection("Mesh & geometry parameters");
{
  prm.declare_entry("Number of refinements",
"6",
Patterns::Integer(0),
"Number of global mesh refinement steps "
"applied to initial coarse grid");

  prm.declare_entry("Focal distance",
"0.3",
Patterns::Double(0),
"Distance of the focal point of the lens "
"to the x-axis");
}
prm.leave_subsection();

prm.enter_subsection("Physical constants");
{
  prm.declare_entry("c", "1.5e5", Patterns::Double(0), "Wave speed");

  prm.declare_entry("omega", "5.0e7", Patterns::Double(0), "Frequency");
}
prm.leave_subsection();


prm.enter_subsection("Output parameters");
{
  prm.declare_entry("Output filename",
"solution",
Patterns::Anything(),
"Name of the output file (without extension)");

  DataOutInterface<1>::declare_parameters(prm);
}
prm.leave_subsection();
  }


  void ParameterReader::read_parameters(const std::string &parameter_file)
  {
    declare_parameters();

    prm.parse_input(parameter_file);
  }






  // Postprocessor computing the magnitude |u| of the complex solution,
  // following step-29.
  template <int dim>
  class ComputeIntensity : public DataPostprocessorScalar<dim>
  {
  public:
    ComputeIntensity();

    virtual void evaluate_vector_field(
      const DataPostprocessorInputs::Vector<dim> &inputs,
      std::vector<Vector<double>> &computed_quantities) const override;
  };

  template <int dim>
  ComputeIntensity<dim>::ComputeIntensity()
    : DataPostprocessorScalar<dim>("Intensity", update_values)
  {}
  // (the attached file is truncated at this point in the list archive)

Re: [deal.II] C++ language in step16

2022-09-26 Thread 'yy.wayne' via deal.II User Group
Thank you Daniel.


On Monday, September 26, 2022 at 20:42:54 UTC+8, ... wrote:

> cell_worker and copier are lambda functions. You can think of them as 
> in-place structs with a call operator (operator()) with the specified 
> signature. We define a new cell_worker object because the place we are 
> passing it into expects the object to have a call operator that does what 
> this->cell_worker implements.
>
> Best,
> Daniel
>
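
A minimal, self-contained sketch of the pattern Daniel describes (illustrative 
names only, not the actual step-16 code): a non-static member function cannot 
be handed to a generic loop by itself, but a capturing lambda can wrap it and 
expose exactly the call signature the loop expects.

#include <iostream>

struct Assembler
{
  // The member function that does the real per-cell work.
  void cell_worker(const int cell)
  {
    std::cout << "assembling cell " << cell << '\n';
  }

  void assemble()
  {
    // `this->cell_worker` alone is not a callable object: a non-static member
    // function needs an object to be invoked on. The lambda captures `this`
    // (through [&]) and forwards to it, yielding a plain function object with
    // exactly the signature the "loop" below (think MeshWorker::mesh_loop)
    // expects.
    const auto cell_worker = [&](const int cell) { this->cell_worker(cell); };

    for (int cell = 0; cell < 3; ++cell)
      cell_worker(cell);
  }
};

int main()
{
  Assembler assembler;
  assembler.assemble();
}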
> On Sun, Sep 25, 2022 at 11:02 PM 'yy.wayne' via deal.II User Group <
> dea...@googlegroups.com> wrote:
>
>> In the step-16 functions assemble_system() and assemble_multigrid(), there 
>> are two variables/functions defined as:
>> auto cell_worker =
>>   [&](const typename DoFHandler<dim>::active_cell_iterator &cell,
>>       ScratchData<dim> &scratch_data,
>>       CopyData &copy_data)
>>   { this->cell_worker(cell, scratch_data, copy_data); };
>> auto copier =
>>   [&](const CopyData &cd)
>>   { this->constraints.distribute_local_to_global(cd.cell_matrix,
>>                                                  cd.cell_rhs,
>>                                                  cd.local_dof_indices,
>>                                                  system_matrix,
>>                                                  system_rhs); };
>> I'm not an expert in C++ and this code seems confusing to me. Are 
>> *cell_worker* and *copier* as defined here variables or functions? And why 
>> do we define a new cell_worker here (is it because we cannot directly pass 
>> this->cell_worker to MeshWorker::mesh_loop)? Sorry if this is a silly 
>> question. If I knew the terminology for this "technique" I could google it 
>> myself.
>> Thank you.
>>
>>
>



[deal.II] C++ language in step16

2022-09-25 Thread 'yy.wayne' via deal.II User Group
In the step-16 functions assemble_system() and assemble_multigrid(), there are 
two variables/functions defined as:
auto cell_worker =
  [&](const typename DoFHandler<dim>::active_cell_iterator &cell,
      ScratchData<dim> &scratch_data,
      CopyData &copy_data)
  { this->cell_worker(cell, scratch_data, copy_data); };
auto copier =
  [&](const CopyData &cd)
  { this->constraints.distribute_local_to_global(cd.cell_matrix,
                                                 cd.cell_rhs,
                                                 cd.local_dof_indices,
                                                 system_matrix,
                                                 system_rhs); };
I'm not an expert in C++ and this code seems confusing to me. Are 
*cell_worker* and *copier* as defined here variables or functions? And why do 
we define a new cell_worker here (is it because we cannot directly pass 
this->cell_worker to MeshWorker::mesh_loop)? Sorry if this is a silly question. 
If I knew the terminology for this "technique" I could google it myself.
Thank you.



[deal.II] Re: Error when modify Step 29 into complex-valued problem

2022-09-24 Thread 'yy.wayne' via deal.II User Group
It should be interpolate_boundary_values, not interpolate_boundary_condition; that was a typo in my previous message.

On Sunday, September 25, 2022 at 10:59:27 UTC+8, ... wrote:

> Hi my friends,
> I'm trying to modify step-29 into a complex-valued problem as my project 
> for deal.II, one where you do not separate the real and imaginary parts 
> like the original tutorial does.
> However, an error shows up in the interpolate_boundary_condition function. 
> Here is the error output, but I cannot locate where the dimensions do not fit.
> [image: Snipaste_2022-09-25_10-58-31.png]
> and the C++ code file:
>
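
For completeness, a self-contained sketch of interpolating complex-valued 
Dirichlet boundary values with a recent deal.II (illustrative setup only, not 
the attached program): the essential points are that the boundary function 
returns std::complex<double> and has as many components as the finite element, 
and that the constraints object is AffineConstraints<std::complex<double>>.

#include <deal.II/base/function.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/numerics/vector_tools.h>

#include <complex>

using namespace dealii;

int main()
{
  constexpr int dim = 2;

  // A simple mesh and a single complex-valued scalar field.
  Triangulation<dim> triangulation;
  GridGenerator::hyper_cube(triangulation, -1., 1.);
  triangulation.refine_global(3);

  FE_Q<dim>       fe(1);
  DoFHandler<dim> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  // The boundary function returns std::complex<double> values and has the
  // same number of components as the finite element (here: one).
  Functions::ConstantFunction<dim, std::complex<double>> boundary_values(
    std::complex<double>(1.0, 0.0));

  AffineConstraints<std::complex<double>> constraints;
  DoFTools::make_hanging_node_constraints(dof_handler, constraints);
  VectorTools::interpolate_boundary_values(dof_handler,
                                           0, // boundary id
                                           boundary_values,
                                           constraints);
  constraints.close();
}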



Re: [deal.II] Re: matrix-free FEM for electromagnetic problem

2022-09-18 Thread 'yy.wayne' via deal.II User Group
Thank you, Mr. Kronbichler. I am looking forward to deal.II's development of 
the matrix-free method.

On Sunday, September 18, 2022 at 15:07:17 UTC+8, ... wrote:

> At this moment, there is no support yet for Nedelec-type elements in the 
> matrix-free framework. However, I would expect these to get ready within 
> the next 2-3 months: We have recently added support for Raviart-Thomas 
> elements, which have similar demands in terms of algorithms, therefore we 
> have most of the necessary preparations. If you would be interested in this 
> development, we could share what we have at this point and see how we would 
> proceed.
>
> Regarding your second question, it is correct that both structured and 
> unstructured meshes are supported by the matrix-free framework. As you say, 
> what differs is the level of compression in the metric terms: on affine 
> mesh elements (including Cartesian ones), we only need a single dim x dim 
> tensor per mesh cell, whereas the general (curved) case has different terms 
> at every point. As a result, the memory access and the achievable 
> performance differ, but both cases are supported.
>
> The big topic, as you say, is the question of multigrid methods. There is 
> some reference on a problem I believe might be related to yours, 
> https://doi.org/10.1002/nla.2348 - that solver builds on auxiliary space 
> methods. Unfortunately, we do not have support for these kinds of methods 
> in deal.II at this point, but I believe that the investment to get them 
> running would not be huge.
>
> Best regards,
> Martin
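
A rough sketch of the compression point above, stated as formulas (the usual 
reference-to-real cell mapping is assumed):

  affine cell K:  x = A_K \hat{x} + b_K  =>  J_K(\hat{x}) = A_K,
                  i.e. a single constant dim x dim tensor per cell;

  curved cell K:  x = F_K(\hat{x})       =>  J_K(\hat{x}_q) = \nabla_{\hat{x}} F_K(\hat{x}_q),
                  i.e. one tensor per quadrature point \hat{x}_q.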
> On 18.09.22 04:21, 'yy.wayne' via deal.II User Group wrote:
>
> Answers from any angle are welcome. Since it's just a yes-or-no question 
> about deal.II's capabilities, I guess it won't take much time. 
>
> On Friday, September 16, 2022 at 23:08:08 UTC+8, ... wrote:
>
>> The 2019 discussion "interpolate FE_Nelelec" 
>> <https://groups.google.com/g/dealii/c/wG3j4yD1rpw/m/3dWJVsFlAQAJ> 
>> is really similar to mine, both in physical background and in its concerns. 
>> I've tried the Domain Decomposition method as well, since it is one of the 
>> few good preconditioners for indefinite Maxwell problems with complex 3D 
>> geometries. However, here I want to apply the matrix-free method, since it 
>> is appealing for large-scale computation, even though it may not converge 
>> in this case, even combined with multigrid methods.
>>
>> On Friday, September 16, 2022 at 21:33:23 UTC+8, ... wrote:
>>
>>> Hello everyone!
>>> I'm new to deal.II but find it really exciting.
>>> The problem I plan to solve is the frequency-domain Maxwell wave 
>>> equation in the form (curl curl + k^2) E = f. One of the advantages of 
>>> deal.II is its matrix-free method for iterative solvers, so I'd like to 
>>> apply it to the frequency-domain Maxwell problem. 
>>>
>>> However, step-37 says the current matrix-free method only supports a 
>>> portion of the basis functions. *My first question: are H(curl)-conforming 
>>> basis functions such as FE_NedelecSz and FE_Nedelec supported by the 
>>> matrix-free method in deal.II?*
>>>
>>> Besides, I want to confirm that I understand the matrix-free method. *My 
>>> second question: is it correct that "matrix-free works for both structured 
>>> (octree-like) and unstructured hex meshes, but differs in that the Jacobian 
>>> matrix for each cell has to be stored for unstructured hex meshes"?*
>>>
>>> In the end, I'm looking for some vector indefinite-Helmholtz-like programs 
>>> implemented with deal.II but can hardly find any. *Is there any reference 
>>> for similar deal.II projects on EM problems?*
>>>
>>> Thank you!
>>>
>>>
>>> -- 
>
>



[deal.II] Re: matrix-free FEM for electromagnetic problem

2022-09-17 Thread 'yy.wayne' via deal.II User Group
Answers from any angle are welcome. Since it's just a yes-or-no question 
about deal.II's capabilities, I guess it won't take much time. 

On Friday, September 16, 2022 at 23:08:08 UTC+8, ... wrote:

> The 2019 discussion "interpolate FE_Nelelec" 
> is really similar to mine, both in physical background and in its concerns. 
> I've tried the Domain Decomposition method as well, since it is one of the 
> few good preconditioners for indefinite Maxwell problems with complex 3D 
> geometries. However, here I want to apply the matrix-free method, since it 
> is appealing for large-scale computation, even though it may not converge 
> in this case, even combined with multigrid methods.
>
> On Friday, September 16, 2022 at 21:33:23 UTC+8, ... wrote:
>
>> Hello everyone!
>> I'm new to deal.II but find it really exciting.
>> The problem I plan to solve is the frequency-domain Maxwell wave equation 
>> in the form (curl curl + k^2) E = f. One of the advantages of deal.II is 
>> its matrix-free method for iterative solvers, so I'd like to apply it to 
>> the frequency-domain Maxwell problem. 
>>
>> However, step-37 says the current matrix-free method only supports a 
>> portion of the basis functions. *My first question: are H(curl)-conforming 
>> basis functions such as FE_NedelecSz and FE_Nedelec supported by the 
>> matrix-free method in deal.II?*
>>
>> Besides, I want to confirm that I understand the matrix-free method. *My 
>> second question: is it correct that "matrix-free works for both structured 
>> (octree-like) and unstructured hex meshes, but differs in that the Jacobian 
>> matrix for each cell has to be stored for unstructured hex meshes"?*
>>
>> In the end, I'm looking for some vector indefinite-Helmholtz-like programs 
>> implemented with deal.II but can hardly find any. *Is there any reference 
>> for similar deal.II projects on EM problems?*
>>
>> Thank you!
>>
>>
>>



[deal.II] Re: matrix-free FEM for electromagnetic problem

2022-09-16 Thread 'yy.wayne' via deal.II User Group
The 2019 discussion "interpolate FE_Nelelec" 
is really similar to mine, both in physical background and in its concerns. 
I've tried the Domain Decomposition method as well, since it is one of the 
few good preconditioners for indefinite Maxwell problems with complex 3D 
geometries. However, here I want to apply the matrix-free method, since it 
is appealing for large-scale computation, even though it may not converge 
in this case, even combined with multigrid methods.

On Friday, September 16, 2022 at 21:33:23 UTC+8, ... wrote:

> Hello everyone!
> I'm new to deal.II but find it really exciting.
> The problem I plan to solve is the frequency-domain Maxwell wave equation 
> in the form (curl curl + k^2) E = f. One of the advantages of deal.II is 
> its matrix-free method for iterative solvers, so I'd like to apply it to 
> the frequency-domain Maxwell problem. 
>
> However, step-37 says the current matrix-free method only supports a portion 
> of the basis functions. *My first question: are H(curl)-conforming basis 
> functions such as FE_NedelecSz and FE_Nedelec supported by the matrix-free 
> method in deal.II?*
>
> Besides, I want to confirm that I understand the matrix-free method. *My 
> second question: is it correct that "matrix-free works for both structured 
> (octree-like) and unstructured hex meshes, but differs in that the Jacobian 
> matrix for each cell has to be stored for unstructured hex meshes"?*
>
> In the end, I'm looking for some vector indefinite-Helmholtz-like programs 
> implemented with deal.II but can hardly find any. *Is there any reference 
> for similar deal.II projects on EM problems?*
>
> Thank you!
>
>
>



[deal.II] matrix-free FEM for electromagnetic problem

2022-09-16 Thread 'yy.wayne' via deal.II User Group
Hello everyone!
I'm new to deal.II but find it really exciting.
The problem I plan to solve is the frequency-domain Maxwell wave equation 
in the form (curl curl + k^2) E = f. One of the advantages of deal.II is its 
matrix-free method for iterative solvers, so I'd like to apply it to the 
frequency-domain Maxwell problem. 

However, step-37 says the current matrix-free method only supports a portion 
of the basis functions. *My first question: are H(curl)-conforming basis 
functions such as FE_NedelecSz and FE_Nedelec supported by the matrix-free 
method in deal.II?*

Besides, I want to confirm that I understand the matrix-free method. *My 
second question: is it correct that "matrix-free works for both structured 
(octree-like) and unstructured hex meshes, but differs in that the Jacobian 
matrix for each cell has to be stored for unstructured hex meshes"?*

In the end, I'm looking for some vector indefinite-Helmholtz-like programs 
implemented with deal.II but can hardly find any. *Is there any reference for 
similar deal.II projects on EM problems?*

Thank you!

