Thank you, Daniel.
That's great, my mistake.

Best, 
Wayne

On Monday, May 15, 2023 at 21:24:58 UTC+8, [email protected] wrote:

> Wayne,
>
> > I have another suggestion: mg_constrained_dofs::make_zero_boundary() 
> > seems to accept only one boundary_ids set + ComponentMask combination. 
> > So if there are 2 Dirichlet BCs that mask different components in 
> > vector problems, additional make_zero_boundary functions might be 
> > needed. Not a big problem though.
>
> The documentation (
> https://www.dealii.org/current/doxygen/deal.II/classMGConstrainedDoFs.html#ad9f0da6bd4cb834c8ea3798d8ddacb92)
> says:
>
> [...]
> This function can be called multiple times to allow considering different 
> sets of boundary_ids for different components.
> [...]
>
> Best,
> Daniel
>
> On Mon, May 15, 2023 at 8:57 AM 'yy.wayne' via deal.II User Group <
> [email protected]> wrote:
>
>> The modification is easy:
>> 1) on line 518 of include/deal.II/multigrid/mg_transfer_matrix_free.h 
>> <https://github.com/dealii/dealii/blob/master/include/deal.II/multigrid/mg_transfer_matrix_free.h>, 
>> change the first build() to:
>>
>> void
>> build(const DoFHandler<dim, dim> &dof_handler,
>>       const std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>
>>         &external_partitioners =
>>           std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>());
>> 2) on line 768 of source/multigrid/mg_transfer_matrix_free.cc 
>> <https://github.com/dealii/dealii/blob/master/source/multigrid/mg_transfer_matrix_free.cc>, 
>> change the first build() to:
>>
>> template <int dim, typename Number>
>> void
>> MGTransferBlockMatrixFree<dim, Number>::build(
>>   const DoFHandler<dim, dim> &dof_handler,
>>   const std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>
>>     &external_partitioners)
>> {
>>   AssertDimension(this->matrix_free_transfer_vector.size(), 1);
>>   this->matrix_free_transfer_vector[0].build(dof_handler,
>>                                              external_partitioners);
>> }
>>
>> I'm not familiar with GitHub pull requests, so I'm just listing the 
>> modification here. I tested the build() functions both with and without 
>> external_partitioners, and neither shows an error. By introducing 
>> external_partitioners, the former bug disappears.
>>
>> I have another suggestion: mg_constrained_dofs::make_zero_boundary() 
>> seems to accept only one boundary_ids set + ComponentMask combination. 
>> So if there are 2 Dirichlet BCs that mask different components in 
>> vector problems, additional make_zero_boundary functions might be 
>> needed. Not a big problem though.
>>
>> Best,
>> Wayne
>>
>>
>> On Sunday, May 7, 2023 at 15:37:24 UTC+8, [email protected] wrote:
>>
>>> Thanks!
>>> Peter
>>>
>>> On Sunday, 7 May 2023 at 09:27:34 UTC+2 yy.wayne wrote:
>>>
>>>> I'll try adding a build() function to MGTransferBlockMatrixFree then.
>>>> Thank you.
>>>>
>>>> Best,
>>>> Wayne
>>>>
>>>> On Sunday, May 7, 2023 at 15:23:12 UTC+8, [email protected] wrote:
>>>>
>>>>> > I wonder will the unmatched partitioners occur even coding is 
>>>>> correct?
>>>>>
>>>>> Yes. The requirements of MG and the level operators are not 
>>>>> identical. If you don't do anything special, the vectors are set up 
>>>>> for MG and not for the level operators. With the help of external 
>>>>> partitioners you can fix this "problem".
>>>>>
>>>>> I guess you could introduce a new 
>>>>> function MGTransferBlockMatrixFree::build(). Shouldn't be too much 
>>>>> work, since all you need to do is delegate the task 
>>>>> to MGTransferMatrixFree::build(). Happy to accept a patch!
>>>>>
>>>>> Peter
>>>>>
>>>>> On Sunday, 7 May 2023 at 08:18:37 UTC+2 yy.wayne wrote:
>>>>>
>>>>>> Thank you Peter,
>>>>>>
>>>>>> Currently I'm using MGTransferBlockMatrixFree, where build() with 
>>>>>> external partitioners is not supported.
>>>>>> But before modifying the Block mg transfers, I wonder: will the 
>>>>>> unmatched partitioners occur even if the coding is correct?
>>>>>> My problem is preconditioned with global-coarsening h-MG, with 6 
>>>>>> dofs per node for real and imaginary vectors in 3d.
>>>>>> BlockVectors are used everywhere except for the Trilinos direct 
>>>>>> solver on the coarsest level, where only Vector rather than 
>>>>>> BlockVector is supported. The transfer between Vector and 
>>>>>> BlockVector is a potential risk though.
>>>>>>
>>>>>> Best,
>>>>>> Wayne
>>>>>>
>>>>>> On Saturday, May 6, 2023 at 16:41:40 UTC+8, [email protected] wrote:
>>>>>>
>>>>>>> Hi Wayne,
>>>>>>>
>>>>>>> my best guess is that the partitioners in the multigrid operator 
>>>>>>> and the matrix-free operators do not match, with the result that a 
>>>>>>> vector with wrong ghosting coming from the multigrid operator is 
>>>>>>> used as src/dst in the matrix-free loop, resulting in the assert 
>>>>>>> you show. Previously, one would need to fix the ghosting in the 
>>>>>>> operator (see 
>>>>>>> https://github.com/dealii/dealii/blob/a091f25c5ff77687fbd8a65f5b336b235f201100/include/deal.II/matrix_free/operators.h#L489-L498), 
>>>>>>> but nowadays one can pass external partitioners to MG: see, e.g., 
>>>>>>> https://github.com/peterrum/dealii-multigrid/blob/c50581883c0dbe35c83132699e6de40da9b1b255/multigrid_throughput.cc#L1802. 
>>>>>>> Similarly, one can do this in the context of global-coarsening MG.
>>>>>>>
>>>>>>> Hope this helps!
>>>>>>>
>>>>>>> Peter
>>>>>>>
>>>>>>> On Saturday, 6 May 2023 at 10:22:58 UTC+2 yy.wayne wrote:
>>>>>>>
>>>>>>>> It might fail for 1st order basis too ...
>>>>>>>> But all failures occur when the number of mpi processes is > 2. 
>>>>>>>> If I run with only 2 mpi processes, no error shows.
>>>>>>>>
>>>>>>>> On Saturday, May 6, 2023 at 15:22:23 UTC+8, yy.wayne wrote:
>>>>>>>>
>>>>>>>>> Hello my friends,
>>>>>>>>>
>>>>>>>>> I'm testing a matrix-free code preconditioned by a geometric 
>>>>>>>>> multigrid.
>>>>>>>>> The code is not stable: it fails for some fe_degree + 
>>>>>>>>> coarse_level + total_mesh_levels combinations.
>>>>>>>>> For example, any n_mpi_process + 1st order basis elements runs 
>>>>>>>>> successfully.
>>>>>>>>> But for 2nd order basis, min_level=0 + total_levels=3 (*0*, 1 
>>>>>>>>> and 2) or min_level=1 + total_levels=5 (0, *1*, 2, 3 and 4) 
>>>>>>>>> fails *during SolverGMRES.solve()*.
>>>>>>>>> The error in debug mode is:
>>>>>>>>>
>>>>>>>>> An error occurred in line <3403> of file 
>>>>>>>>> </home/ubuntu/deal.II/installed/include/deal.II/matrix_free/matrix_free.h> 
>>>>>>>>> in function
>>>>>>>>>     unsigned int dealii::internal::VectorDataExchange<dim, Number, 
>>>>>>>>> VectorizedArrayType>::find_vector_in_mf(const VectorType&, bool) 
>>>>>>>>> const [with VectorType = 
>>>>>>>>> dealii::LinearAlgebra::distributed::Vector<double>; int dim = 3; 
>>>>>>>>> Number = double; VectorizedArrayType = 
>>>>>>>>> dealii::VectorizedArray<double, 2>]
>>>>>>>>> The violated condition was:
>>>>>>>>>     false
>>>>>>>>> Additional information:
>>>>>>>>>     Could not find partitioner that fits vector
>>>>>>>>>
>>>>>>>>> It might be my own coding mistake. I'm using 9.3.0; might 
>>>>>>>>> upgrading dealii to 9.4.0 help?
>>>>>>>>>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/88c895ca-16e0-4c22-87aa-94f864f72268n%40googlegroups.com.
