[deal.II] Re: Questions on parallel linear solver

2020-06-22 Thread Zelu Xu
On Mon, Jun 22, 2020 at 2:33 PM Zelu Xu  wrote:

> Hello Prof. Bangerth
> Thank you very much for your response!
>
>
>> Jane,
>> let me take this to the deal.II mailing list, since that is the place
>> where these kinds of questions should be asked:
>>
>> > I recently finished an SUPG-based solver for the incompressible,
>> > unsteady Navier-Stokes equations. The test case is flow over a
>> > cylinder. I parallelized the code. Everything works well with the
>> > serial code, but when I switch to the parallel version with an
>> > iterative linear solver (SolverFGMRES), it runs about 10x slower than
>> > the direct solver (SparseDirectUMFPACK) I used in the serial code. The
>> > iterative solver typically takes 5-6 GMRES iterations to solve the
>> > system.
>>
>> Out of curiosity, what is your preconditioner?
>>
>
> I used the inverse of the diagonal of K: in the auxiliary (Schur complement)
> problem (G^T K^-1 G + C) delP = rhs, I replace K^-1 by diag(K)^-1.
> Below is the code; I am wondering if deal.II has a parallel version of
> precondition_Jacobi?
>
> system_matrix->block(0,1).vmult(tmp1, src);   // multiply with the top right block: G
>
> // multiply with diag(K)^-1; in the serial code this was
> //   system_matrix->block(0,0).precondition_Jacobi(tmp2, tmp1);
> const auto &M      = system_matrix->block(0, 0);
> const auto  Mrange = M.local_range();   // locally owned row range [first, second)
> for (auto i = Mrange.first; i < Mrange.second; ++i)
>   tmp2(i) = tmp1(i) / M.diag_element(i);
>
> system_matrix->block(1,0).vmult(tmpm, tmp2);  // multiply with the bottom left block: G^T
>
> system_matrix->block(1,1).vmult(tmpc, src);   // multiply with the bottom right block: C
>
> dst = tmpc;
> dst -= tmpm;
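>
> Would something like the following be the intended replacement for the
> hand-written loop above? (Just a sketch, not tested; jacobi_K is a
> placeholder name.)
>
> #include <deal.II/lac/petsc_precondition.h>
>
> // Build a Jacobi (diagonal) preconditioner for the (0,0) block K once,
> // then apply it wherever a product with diag(K)^-1 is needed.
> PETScWrappers::PreconditionJacobi jacobi_K;
> jacobi_K.initialize(system_matrix->block(0, 0));
> jacobi_K.vmult(tmp2, tmp1);   // tmp2 = diag(K)^-1 * tmp1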
>
>> > I tried to use the parallel direct solver SparseDirectMUMPS, but it was
>> > not able to solve the system.
>>
>> What exactly happens?
>
>
> I lost the code and was not able to reproduce the error. When I tried to
> rewrite it for a dealii::PETScWrappers::MPI::BlockSparseMatrix, I was not
> able to use the solver, whose solve() expects a
> dealii::PETScWrappers::MatrixBase. For
>
>   solver.solve(system_matrix, newton_update, system_rhs);
>
> where system_matrix is a BlockSparseMatrix, the compiler reports:
>
>   candidate constructor not viable: no known conversion from
>   'LA::MPI::BlockSparseMatrix' (aka
>   'dealii::PETScWrappers::MPI::BlockSparseMatrix') to 'const
>   dealii::PETScWrappers::MatrixBase &' for 1st argument
>     MatrixBase(const MatrixBase &) = delete;
>
> I cannot remember what I did to avoid this. Do you have any suggestions?
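>
> Would assembling the whole system into a single monolithic
> PETScWrappers::MPI::SparseMatrix (instead of a BlockSparseMatrix) be a
> reasonable way around the type mismatch, since SparseDirectMUMPS::solve()
> takes a PETScWrappers::MatrixBase? A rough sketch of what I mean (not
> tested; monolithic_matrix and mpi_communicator are placeholder names):
>
> #include <deal.II/lac/petsc_solver.h>
> #include <deal.II/lac/petsc_sparse_matrix.h>
> #include <deal.II/lac/solver_control.h>
>
> // monolithic_matrix: the whole (velocity + pressure) system assembled into
> // one parallel PETScWrappers::MPI::SparseMatrix rather than a block matrix;
> // newton_update and system_rhs are the same vectors as before.
> SolverControl solver_control;
> PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator);
> solver.solve(monolithic_matrix, newton_update, system_rhs);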
>
>>
>>
>
>> > I am wondering if this is expected, given that the test case is small
>> > (a 2D problem with about 10,000 DoFs)? And is there any other parallel
>> > direct solver I can try?
>>
>> As a rule of thumb, direct solvers are typically faster than iterative
>> solvers for problems with less than around 100,000 unknowns. Your problem
>> is so small that it's not worth bothering with iterative solvers. It's
>> probably also so small that it's not worth bothering with parallelization:
>> parallelization only works well if you have more than around 50,000
>> unknowns per processor. You're still far away from that.
>>
> I see. I used a very coarse mesh and was planning to switch to finer
> meshes (8x, 16x) later. But as you point out, this may still not be worth
> the effort.
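>
> (Rough arithmetic: the DoF count should scale roughly with the refinement
> factor, so 8x or 16x finer than the current ~10,000 DoFs is only about
> 80,000-160,000 DoFs, i.e. around the ~100,000-unknown break-even point you
> mention, and with ~50,000 unknowns per processor that would justify at
> most 2-3 MPI ranks.)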
>
>> Best
>>   W.
>>
>> --
>> 
>> Wolfgang Bangerth  email: bange...@colostate.edu
>> www: http://www.math.colostate.edu/~bangerth/
>>
>>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/CAAAchuWgmhfV4_EyC_-p4vjg_Bzie6F8DCNbb%2BMZL9aqJmxt8Q%40mail.gmail.com.


[deal.II] Re: Questions on parallel linear solver

2020-06-20 Thread Wolfgang Bangerth

Jane,
let me take this to the deal.II mailing list, since that is the place where 
these kinds of questions should be asked:



I recently finished an SUPG-based solver for the incompressible, unsteady
Navier-Stokes equations. The test case is flow over a cylinder. I
parallelized the code. Everything works well with the serial code, but when
I switch to the parallel version with an iterative linear solver
(SolverFGMRES), it runs about 10x slower than the direct solver
(SparseDirectUMFPACK) I used in the serial code. The iterative solver
typically takes 5-6 GMRES iterations to solve the system.


Out of curiosity, what is your preconditioner?


I tried to use the parallel direct solver SparseDirectMUMPS, but it was not 
able to solve the system.


What exactly happens?


I am wondering if this is expected, given that the test case is small
(a 2D problem with about 10,000 DoFs)? And is there any other parallel
direct solver I can try?


As a rule of thumb, direct solvers are typically faster than iterative solvers 
for problems with less than around 100,000 unknowns. Your problem is so small 
that it's not worth bothering with iterative solvers. It's probably also so 
small that it's not worth bothering with parallelization: parallelization only 
works well if you have more than around 50,000 unknowns per processor. You're 
still far away from that.


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.

To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/ab5915b7-d5a2-1c55-1efe-846247ffc0bc%40colostate.edu.