Dear all,

It has been a very nice experience over the summer beginning to learn 
deal.II and *mis*using it for my finite volume calculations.

In the meantime, I have a distributed code for the sparse matrix assembly 
using MPI. 
So the next challenge :) is to find an efficient/scalable way of solving 
this large sparse system.

Since my system is really the same as the one in 
https://www.sciencedirect.com/science/article/pii/S004578251930146X 
(equation 34, i.e. the isothermal case), I would ideally like to use the 
same solution/preconditioning approach as they used there.

However, when I think about how to implement this using deal.II and 
Trilinos, I don't yet have a concrete plan for how one would do this. 
Some of my questions are:

   - How can I globally re-order the DoFs (using cell_wise?) according to 
   material id (e = electrolyte, a = anode, c = cathode), with ne, na, nc 
   being the number of cells carrying the respective material id (this gives 
   the block structure of eqn. 34), and according to solution component (u_1 
   and u_2; this might not be necessary but could also help to treat the 
   different variables separately), such that the solution/right-hand-side 
   vector would look, for example, as follows? (I use FE_DGQ(0), so one 
   value per component per cell)
   u_{1,e,0} = first solution component of the first (arbitrary) cell in 
   domain e
   u_{2,c,nc-1} = second solution component of the last (arbitrary) cell in 
   domain c
   
   RHS = [u_{1,e,0}, ..., u_{1,e,ne-1}, u_{2,e,0}, ..., u_{2,e,ne-1}, 
   u_{1,a,0}, ..., u_{1,a,na-1}, u_{2,a,0}, ..., u_{2,a,na-1}, 
   u_{1,c,0}, ..., u_{1,c,nc-1}, u_{2,c,0}, ..., u_{2,c,nc-1}]
   
   - For a single processor, I was able to write a function that computes the 
   new cell ordering and then pass it to DoFRenumbering::cell_wise(). However, 
   in parallel, cell_wise() only operates on the locally owned cells, so I 
   don't see how I could produce the desired global reordering. 
   
   - I do remember the comment by Prof. Bangerth (
   https://groups.google.com/g/dealii/c/Mseg4sKKylw/m/kwxiWAaxAQAJ) 
   suggesting the use of "different vector conditions (-> step-46)." If you 
   think this would be the best way to extract the individual blocks of the 
   fully-coupled matrix, I can go back and try to re-write the assembly using 
   different FESystems for the different domains.
   
   - Also, I have a conceptual question related to distributed 
   preconditioning (as an example, the Block Gauss-Seidel (BGS) preconditioner 
   described in section *3.3. BGS preconditioner* of the paper above). Of 
   course, its combination with AMG is even better, but starting with the 
   plain BGS preconditioner and a direct solver for the individual blocks, 
   just to see how it works, would already be super nice.
   Now the question: 
   Assuming I managed to extract all the relevant blocks for the BGS 
   preconditioner (maybe with extract_submatrix_from() after reordering?), 
   and the corresponding right-hand-side pieces, as 
   TrilinosWrappers::SparseMatrix and TrilinosWrappers::MPI::Vector 
   objects respectively, 
   could I then *just* use the TrilinosWrappers::SparseMatrix::vmult() 
   function to perform the necessary matrix operations for the BGS routine, 
   and this would be done in parallel behind the scenes?

Again, I know these are many questions and I appreciate all help, feedback 
and thoughts very much.

Thank you very much,
Nik

  

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en