Hi, 

I'm using PETSc with a finite element incompressible two-phase Navier-Stokes 
code. In 3D I'm currently struggling with the parallel performance. The domain 
partitioning is done with ParMETIS, and PETSc is used for matrix assembly and 
solving. Unfortunately, for this problem the number of processors has a large 
influence on the number of iterations, which leads to poor scaling. I have 
already tested many solver-preconditioner combinations; LGMRES with ASM 
preconditioning seems to perform best. Applying -sub_pc_type lu helped a lot in 
2D, but in 3D, apart from reducing the number of iterations, it makes the whole 
solution take more than 10 times longer. I have attached the -log_summary 
output for a problem with about 240,000 unknowns (1 time step) using 4, 8 and 
16 Intel Xeon E5450 processors (InfiniBand-connected). As far as I can see, the 
growing iteration count is the main issue here, or am I missing something? I 
would appreciate any suggestions on what I could try to improve the scaling.
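
For reference, this is roughly the option set I'm running with at the moment 
(the executable name, the -mesh option and the tolerance are placeholders for 
my own setup, not something PETSc provides):

    mpiexec -n 16 ./my_ns3d -mesh cube.msh \
        -ksp_type lgmres -pc_type asm -sub_pc_type lu \
        -ksp_rtol 1e-8 -ksp_monitor -log_summary

With the default -sub_pc_type ilu the iteration count grows noticeably as I go 
from 4 to 16 processors; with -sub_pc_type lu the count stays lower, but in 3D 
each iteration becomes so expensive that the total time is much worse.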


Thanks
Henning


Attached -log_summary outputs:
  4pe.txt:  <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20110512/87d2acf2/attachment-0003.txt>
  8pe.txt:  <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20110512/87d2acf2/attachment-0004.txt>
  16pe.txt: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20110512/87d2acf2/attachment-0005.txt>