Hi Weixiong,

I did consider this problem, which is why I wanted to avoid a "fake" ILU such as 
BlockJacobi.
In my experiments, using BlockJacobi caused more iterations and more computation 
time. PILUT is different, however: it should produce the same factors in 
parallel as in serial.
The algorithm used by PILUT comes from this article:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.35.4684&rep=rep1&type=pdf

Their paper reports an attractive parallel speedup. In my case, the number of 
iterations did not change much either (sometimes it even decreased).
One thing I am curious about is that my parallel performance is strongly 
related to the time step size I choose, or, more specifically, to the 
properties (perhaps the condition number) of the matrix. When a small time 
step is chosen, the speedup improves noticeably.
So I would agree with your opinion: it is probably something inside the 
preconditioner, not my implementation.
By the way, I use PILUT to solve the Schur complement, which is expected to be 
very stiff.
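For context, here is a minimal sketch of how hypre's PILUT can be selected at 
runtime through PETSc's options database. The `schur_` options prefix is a 
hypothetical example (it must match whatever prefix the Schur-complement KSP 
is given in the code); the `pc_hypre_pilut_*` option names are PETSc's, and 
the values shown are illustrative, not tuned:

```shell
# Select hypre's PILUT as the preconditioner for the Schur-complement solve.
# "schur_" is a hypothetical prefix; replace it with the options prefix of
# the KSP that actually solves the Schur complement.
-schur_pc_type hypre
-schur_pc_hypre_type pilut
-schur_pc_hypre_pilut_maxiter 1          # iterations used in the factorization
-schur_pc_hypre_pilut_tol 1e-4           # drop tolerance
-schur_pc_hypre_pilut_factorrowsize 20   # max nonzeros kept per row of the factors
```

Tightening the drop tolerance or enlarging the row size makes the factorization 
closer to a full LU (fewer outer iterations, more setup cost), which is one knob 
to probe whether the preconditioner quality is what changes with the time step.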

Thanks!
Feimi


On Saturday, April 7, 2018 at 1:15:58 AM UTC-4, Weixiong Zheng wrote:
>
> I would guess this is rather a degradation of ILUT in parallel than a 
> problem with the
> implementation.
>
> If I am not wrong, ILUT in parallel is not an ILU-like preconditioner 
> anymore, right? I remember
> the inter-process part is implemented in a block-Jacobi fashion.
>
> I've never used ILUT, but I have used ILU a lot. In my past experience, it 
> was quite common for certain problems that serial ILU outperforms
> ILU in parallel. I saw this, for example, with elliptic 
> equations in subdomains
> connected by hyperbolic-type (upwinding) interface conditions.
>
> On Thursday, April 5, 2018 at 4:42:17 PM UTC-7, Wolfgang Bangerth wrote:
>>
>> On 04/04/2018 08:38 AM, Feimi Yu wrote: 
>> > I wish I only had to deal with small problems, but this is only a test 
>> > problem, and 
>> > I guess we need to use this code to compute much larger 3-D cases. Right 
>> > now it 
>> > is too slow, so I cannot test any cases larger than 400k DoFs. I tried 
>> > 400k DoFs; 
>> > still, 2 cores are slower than 1 core. 
>> > 
>> > The weirdest thing is that the number of iterations does not change much. 
>> > When using block 
>> > Jacobi it would change, but not significantly. Sometimes with PILUT 
>> > the number 
>> > of iterations even decreased, but the time increased. 
>>
>> It's hard to tell what exactly the problem is without seeing the code 
>> and/or more details. How many outer iterations do you need? How many 
>> iterations do you need to solve for the Schur complement? 
>>
>> I would say that a good preconditioner for a problem with constant 
>> coefficient should generally not require more than, say, 50-100 outer 
>> iterations, and at most a similar number of inner iterations. 
>>
>> Maybe explain what your matrix block structure looks like, and what your 
>> block solver/block preconditioner look like? 
>>
>> Best 
>>   W. 
>>
>> -- 
>> ------------------------------------------------------------------------ 
>> Wolfgang Bangerth          email:                 bang...@colostate.edu 
>>                             www: http://www.math.colostate.edu/~bangerth/ 
>>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en