[petsc-users] Parallel efficiency of the gmres solver with ASM

2015-06-25 Thread Lei Shi
Hello, I'm trying to improve the parallel efficiency of the gmres solve in my CFD solver, where PETSc gmres is used to solve the linear systems generated by Newton's method. To test its efficiency, I started with a very simple inviscid subsonic 3D flow as the first test case. The parallel
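[A minimal petsc4py sketch of the solver configuration discussed in this thread, GMRES with an ASM preconditioner, assuming a matrix A and vectors b, x that are assembled elsewhere; the poster's actual code is not shown in the thread, so this is illustrative only.]

    from petsc4py import PETSc

    def make_gmres_asm_solver(A):
        # Krylov solver: GMRES with additive-Schwarz preconditioning,
        # one subdomain per MPI process by default.
        ksp = PETSc.KSP().create(A.getComm())
        ksp.setOperators(A)
        ksp.setType(PETSc.KSP.Type.GMRES)
        ksp.getPC().setType(PETSc.PC.Type.ASM)
        # pick up run-time options such as -sub_ksp_type and -sub_pc_type
        ksp.setFromOptions()
        return ksp

    # usage (A, b, x assumed assembled elsewhere):
    # ksp = make_gmres_asm_solver(A)
    # ksp.solve(b, x)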

Re: [petsc-users] Parallel efficiency of the gmres solver with ASM

2015-06-25 Thread Lei Shi
Hi Matt, Thanks for your suggestions. Here is the output from the STREAM test on one node, which has 20 cores. I ran it with up to 20 processes. Attached is the dumped output with your suggested options. Really appreciate your help!!! Number of MPI processes 1 Function Rate (MB/s) Copy: 13816.9372
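[Because sparse iterative solvers are largely memory-bandwidth bound, the STREAM rates above roughly cap the speedup one can hope for. A small illustrative Python sketch; only the single-process rate is taken from the message, the other rates are hypothetical placeholders.]

    # STREAM Copy rates in MB/s; the p=1 value is from the message above,
    # the others are hypothetical.
    stream_rate = {1: 13816.9, 2: 25000.0, 4: 40000.0}

    def bandwidth_speedup_bound(p):
        # for a memory-bandwidth-bound solver, speedup is roughly capped by
        # the ratio of aggregate bandwidth to single-process bandwidth
        return stream_rate[p] / stream_rate[1]

    for p in sorted(stream_rate):
        print(f"{p} processes: expected speedup <= {bandwidth_speedup_bound(p):.2f}")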

Re: [petsc-users] Set PC PythonContext within NPC

2015-06-25 Thread Lawrence Mitchell
On 24 Jun 2015, at 17:02, Asbjørn Nilsen Riseth ris...@maths.ox.ac.uk wrote: Hi all, I'm currently trying to set up a nonlinear solver that uses NGMRES with NEWTONLS as a right preconditioner. The NEWTONLS has a custom linear preconditioner. Everything is accessed through petsc4py.
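[A petsc4py sketch of the setup described above: NGMRES outer solver, NEWTONLS as the nonlinear preconditioner, and a Python preconditioner context attached to the inner KSP. The MyPC class is a hypothetical placeholder, not code from the thread.]

    from petsc4py import PETSc

    class MyPC:
        # hypothetical Python preconditioner context
        def apply(self, pc, x, y):
            x.copy(y)   # placeholder: identity preconditioner

    snes = PETSc.SNES().create()
    snes.setType(PETSc.SNES.Type.NGMRES)

    npc = snes.getNPC()                     # inner nonlinear solver used as preconditioner
    npc.setType(PETSc.SNES.Type.NEWTONLS)

    ksp = npc.getKSP()
    pc = ksp.getPC()
    pc.setType(PETSc.PC.Type.PYTHON)
    pc.setPythonContext(MyPC())             # attach the custom linear preconditioner

    snes.setFromOptions()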

Re: [petsc-users] Varying TAO optimization solve iterations using BLMVM

2015-06-25 Thread Jason Sarich
Hi Justin, I don't see anything obviously wrong that would be causing this variation in iterations due to the number of processors. Is it at all feasible to send me an example code that reproduces the problem (perhaps a smaller version)? I'm still guessing the problem lies in numerical precision, it

Re: [petsc-users] Varying TAO optimization solve iterations using BLMVM

2015-06-25 Thread Justin Chang
Jason, I was experimenting with some smaller steady-state problems and I still get the same issue: every time I run the same problem on the same number of processors, the number of iterations differs but the solutions remain the same. I think this is the root of why I get such erratic behavior in
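[For reference, a minimal petsc4py sketch of a bound-constrained BLMVM solve; the objective and bounds here are hypothetical stand-ins, not the problem from this thread, and -tao_monitor can be used to watch the iteration counts being compared above.]

    from petsc4py import PETSc

    def objgrad(tao, x, g):
        # hypothetical objective f(x) = 0.5*||x||^2 with gradient g = x
        x.copy(g)
        return 0.5 * x.dot(x)

    n = 100
    x = PETSc.Vec().createMPI(n)
    lb = x.duplicate(); lb.set(0.0)              # lower bounds
    ub = x.duplicate(); ub.set(PETSc.INFINITY)   # no upper bounds

    tao = PETSc.TAO().create()
    tao.setType(PETSc.TAO.Type.BLMVM)
    tao.setObjectiveGradient(objgrad)
    tao.setVariableBounds((lb, ub))
    tao.setFromOptions()     # e.g. run with -tao_monitor
    tao.solve(x)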

Re: [petsc-users] Parallel efficiency of the gmres solver with ASM

2015-06-25 Thread Barry Smith
On Jun 25, 2015, at 3:48 PM, Lei Shi stonesz...@gmail.com wrote: Hi Justin, Thanks for your suggestion. I will test it asap. Another thing confusing me is that the wall clock time with 2 cores is almost the same as the serial run when I use asm with sub_ksp_type gmres and ilu0 on
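[The sub-solver options mentioned above can be set on the command line or programmatically; a hedged petsc4py sketch of the equivalent of -pc_type asm -sub_ksp_type gmres -sub_pc_type ilu follows.]

    from petsc4py import PETSc

    # equivalent of: -pc_type asm -sub_ksp_type gmres -sub_pc_type ilu
    opts = PETSc.Options()
    opts['pc_type'] = 'asm'
    opts['sub_ksp_type'] = 'gmres'   # GMRES on each subdomain
    opts['sub_pc_type'] = 'ilu'      # ILU(0): zero levels of fill by default

    ksp = PETSc.KSP().create()
    ksp.setFromOptions()             # picks up the options set above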

Re: [petsc-users] Parallel efficiency of the gmres solver with ASM

2015-06-25 Thread Barry Smith
On Jun 25, 2015, at 9:25 PM, Lei Shi stonesz...@gmail.com wrote: Barry, Thanks a lot for your reply. Your explanation helps me understand my test results. So in this case, to compute the speedup for a strong scalability test, I should use the wall clock time with multiple cores as
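[A tiny illustration of the speedup and parallel-efficiency calculation being discussed, using hypothetical wall-clock times rather than numbers from the thread.]

    # strong scaling: same problem size, increasing core counts
    times = {1: 100.0, 2: 98.0, 4: 55.0}   # wall clock seconds, hypothetical

    for p in sorted(times):
        speedup = times[1] / times[p]
        efficiency = speedup / p
        print(f"{p} cores: speedup {speedup:.2f}, efficiency {efficiency:.0%}")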

Re: [petsc-users] TSSolve problems

2015-06-25 Thread Barry Smith
Run with -ts_view -log_summary and send the output. This will tell us the current solvers and where the time is being spent. Barry On Jun 25, 2015, at 6:37 PM, Li, Xinya xinya...@pnnl.gov wrote: Dear Sir, I am using the ts solver to solve a set of ODEs and DAEs. The Jacobian matrix is
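[A minimal petsc4py sketch of a TS configured with an implicit function and Jacobian, as for an ODE/DAE system; the residual here is a placeholder (du/dt = u), not the problem from the thread, and the run-time flags Barry suggests are picked up by setFromOptions.]

    from petsc4py import PETSc

    def ifunction(ts, t, u, udot, F):
        # implicit residual F(t, u, u') = u' - u   (placeholder ODE du/dt = u)
        udot.copy(F)
        F.axpy(-1.0, u)

    def ijacobian(ts, t, u, udot, shift, J, P):
        # shifted Jacobian: dF/du + shift*dF/du' = (shift - 1) * I
        P.zeroEntries()
        diag = u.duplicate()
        diag.set(shift - 1.0)
        P.setDiagonal(diag)
        P.assemble()            # J and P are the same matrix in this sketch

    n = 10
    u = PETSc.Vec().createMPI(n); u.set(1.0)
    F = u.duplicate()
    J = PETSc.Mat().createAIJ([n, n]); J.setUp()

    ts = PETSc.TS().create()
    ts.setType(PETSc.TS.Type.BEULER)   # implicit integrator
    ts.setIFunction(ifunction, F)
    ts.setIJacobian(ijacobian, J)
    ts.setTimeStep(0.01)
    ts.setMaxTime(1.0)
    ts.setFromOptions()                # run with -ts_view -log_summary as suggested
    ts.solve(u)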