Thank you very much for your answer!
But when I try to monitor convergence for my problem, I do see 3 calls of
SampleShellPCApply at each iteration step (simply by printing something out
inside this function). But looking at
http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/impls/bcgs/bcgs.c,
I see that there should be two for each iteration of the do loop. Can this
third call happen due to the monitoring routines? I actually run the code
with -ksp_monitor_max, -ksp_monitor_true_residual and -ksp_monitor, plus an
additional monitor routine which calls KSPBuildResidual() and then computes
its norm in NORM_1.

Best regards,

Kirill Voronin

>
>   The source code can be found at src/ksp/ksp/impls/bcgs/bcgs.c or nicely
> formatted at
> http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/impls/bcgs/bcgs.c.html#KSPBCGS
> You can easily find it by typing KSPBCGS into Google, clicking on the
> first link, and then clicking on where it says Location:
>
>   There are two KSP_PCApplyBAorAB() applies within the do loop. Depending
> on left or right preconditioning (it supports both) there is an additional
> apply before or after the do loop.
>
>   Barry
>
>
>> On Jan 25, 2015, at 11:02 PM, Kirill Voronin <[email protected]> wrote:
>>
>> Hello!
>>
>> I'm solving a system with PETSc's BiCGStab and a user-defined
>> preconditioner.
>>
>> The question is - how exactly does preconditioned BiCGStab look (as an
>> algorithm) in PETSc? It can use right or left or some other type of
>> preconditioning.
>>
>> The output of my code shows that during each iteration PCApply (matvec
>> of the preconditioner) is called 3 times, and looking at Saad's version
>> of the unpreconditioned BiCGStab algorithm it is not obvious to me
>> where these 3 calls occur.
>>
>> Saad's version:
>>
>> 1.  Compute r0 = b - A x0, r0* - arbitrary.
>> 2.  p0 := r0
>> 3.  for j = 0, 1, ... till convergence, do
>> 4.     alpha_j := (r_j, r0*) / (A p_j, r0*)
>> 5.     s_j := r_j - alpha_j A p_j
>> 6.     omega_j := (A s_j, s_j) / (A s_j, A s_j)
>> 7.     x_{j+1} := x_j + alpha_j p_j + omega_j s_j
>> 8.     r_{j+1} := s_j - omega_j A s_j
>> 9.     beta_j := (r_{j+1}, r0*) / (r_j, r0*) * alpha_j / omega_j
>> 10.    p_{j+1} := r_{j+1} + beta_j (p_j - omega_j A p_j)
>> 11. end do
>>
>> (from Saad Y., "Iterative Methods for Sparse Linear Systems")
>>
>> Thank you in advance!
>>
>> --
>>
>> Best regards,
>>
>> Kirill Voronin
>>
>
> --
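
P.S. For reference, below is a minimal sketch of the kind of monitor routine
I mean (the name MyResidualMonitor and the exact print format are only
illustrative, not my actual code):

#include <petscksp.h>

/* User monitor: build the current residual and report its 1-norm.
   Register with KSPMonitorSet() before KSPSolve(). */
static PetscErrorCode MyResidualMonitor(KSP ksp, PetscInt it, PetscReal rnorm, void *ctx)
{
  Vec            r;
  PetscReal      norm1;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* With NULL work vectors, KSPBuildResidual() creates the vector it returns,
     so it is destroyed below after use. Building the residual may itself
     involve an extra preconditioner application, depending on the
     preconditioning side. */
  ierr = KSPBuildResidual(ksp, NULL, NULL, &r);CHKERRQ(ierr);
  ierr = VecNorm(r, NORM_1, &norm1);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)ksp),
                     "%3D KSP residual 1-norm %14.12e\n", it, (double)norm1);CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* Registration, e.g. right after KSPSetFromOptions():
     ierr = KSPMonitorSet(ksp, MyResidualMonitor, NULL, NULL);CHKERRQ(ierr); */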

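P.P.S. To make Saad's listing above concrete, here is a rough sketch of the
unpreconditioned algorithm written with PETSc Vec/Mat operations. This is only
an illustration of the listing, not PETSc's KSPBCGS; the function name, the
choice r0* = r0 and the simple relative-residual stopping test are my own
assumptions, and the caller is supposed to provide A, b and an initial guess x.

#include <petscksp.h>

static PetscErrorCode BiCGStabSketch(Mat A, Vec b, Vec x, PetscReal rtol, PetscInt maxits)
{
  Vec            r, rstar, p, v, s, t;
  PetscScalar    rho, rho_old, alpha, omega, beta, d1, d2;
  PetscReal      rnorm, bnorm;
  PetscInt       j;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecDuplicate(b, &r);CHKERRQ(ierr);
  ierr = VecDuplicate(b, &rstar);CHKERRQ(ierr);
  ierr = VecDuplicate(b, &p);CHKERRQ(ierr);
  ierr = VecDuplicate(b, &v);CHKERRQ(ierr);
  ierr = VecDuplicate(b, &s);CHKERRQ(ierr);
  ierr = VecDuplicate(b, &t);CHKERRQ(ierr);

  /* 1.-2.  r0 = b - A x0,  r0* = r0,  p0 = r0 */
  ierr = MatMult(A, x, v);CHKERRQ(ierr);
  ierr = VecWAXPY(r, -1.0, v, b);CHKERRQ(ierr);      /* r = b - A x */
  ierr = VecCopy(r, rstar);CHKERRQ(ierr);
  ierr = VecCopy(r, p);CHKERRQ(ierr);
  ierr = VecNorm(b, NORM_2, &bnorm);CHKERRQ(ierr);
  ierr = VecDot(r, rstar, &rho_old);CHKERRQ(ierr);   /* (r_0, r0*) */

  for (j = 0; j < maxits; j++) {
    /* 4.  alpha_j = (r_j, r0*) / (A p_j, r0*) */
    ierr  = MatMult(A, p, v);CHKERRQ(ierr);
    ierr  = VecDot(v, rstar, &d1);CHKERRQ(ierr);
    alpha = rho_old / d1;
    /* 5.  s_j = r_j - alpha_j A p_j */
    ierr = VecWAXPY(s, -alpha, v, r);CHKERRQ(ierr);
    /* 6.  omega_j = (A s_j, s_j) / (A s_j, A s_j) */
    ierr  = MatMult(A, s, t);CHKERRQ(ierr);
    ierr  = VecDot(t, s, &d1);CHKERRQ(ierr);
    ierr  = VecDot(t, t, &d2);CHKERRQ(ierr);
    omega = d1 / d2;
    /* 7.  x_{j+1} = x_j + alpha_j p_j + omega_j s_j */
    ierr = VecAXPBYPCZ(x, alpha, omega, 1.0, p, s);CHKERRQ(ierr);
    /* 8.  r_{j+1} = s_j - omega_j A s_j */
    ierr = VecWAXPY(r, -omega, t, s);CHKERRQ(ierr);
    ierr = VecNorm(r, NORM_2, &rnorm);CHKERRQ(ierr);
    if (rnorm < rtol*bnorm) break;
    /* 9.-10.  beta_j and p_{j+1} = r_{j+1} + beta_j (p_j - omega_j A p_j) */
    ierr    = VecDot(r, rstar, &rho);CHKERRQ(ierr);
    beta    = (rho/rho_old)*(alpha/omega);
    rho_old = rho;
    ierr = VecAXPY(p, -omega, v);CHKERRQ(ierr);      /* p_j - omega_j A p_j */
    ierr = VecAYPX(p, beta, r);CHKERRQ(ierr);        /* p = r + beta * (...) */
  }
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  ierr = VecDestroy(&rstar);CHKERRQ(ierr);
  ierr = VecDestroy(&p);CHKERRQ(ierr);
  ierr = VecDestroy(&v);CHKERRQ(ierr);
  ierr = VecDestroy(&s);CHKERRQ(ierr);
  ierr = VecDestroy(&t);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}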