On Thu, Nov 17, 2011 at 11:17, Rongliang Chen <rongliang.chan at gmail.com> wrote:
> In my log_summary output, I found that nearly 80% of the total time is
> spent on KSPGMRESOrthog. I think this does not make sense (the
> log_summary output follows). Does anyone have an idea about this?

Reductions are very expensive relative to everything else on the coarse
level. You can try more levels or a different coarse-level solver. You can
also likely get away with solving the coarse problem inexactly.
Alternatively, you can try getting Chebychev to help you out: use
-ksp_chebychev_estimate_eigenvalues to tune Chebychev (possibly to target a
specific part of the spectrum).

http://www.mcs.anl.gov/petsc/snapshots/petsc-dev/docs/manualpages/KSP/KSPChebychevSetEstimateEigenvalues.html

> Another question: I am using the two-level ASM preconditioner. On the
> coarse level I use one-level ASM-preconditioned GMRES to solve the coarse
> problem, so both the fine-level solver and the coarse-level solver call
> KSPGMRESOrthog. The log_summary output only shows me the total time spent
> in KSPGMRESOrthog; how can I tell how much time is spent in the
> coarse-level KSPGMRESOrthog and how much in the fine-level one? Thanks.

I assume you are using PCMG for this, so you can add -pc_mg_log to profile
the time on each level independently. You seem to have many KSPGMRESOrthog
steps per fine-level PCApply, so I think most of the time is in the coarse
level.
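For concreteness, a minimal sketch of the options suggested above, assuming a
PCMG setup (./app is a placeholder for your application binary, and the
coarse-level tolerance is an illustrative value, not from this thread):

```shell
# Profile each multigrid level separately (-pc_mg_log) and solve the
# coarse problem inexactly with ASM-preconditioned GMRES at a loose
# relative tolerance. Adjust -mg_coarse_ksp_rtol to taste.
mpiexec -n 4 ./app \
  -pc_type mg -pc_mg_log \
  -mg_coarse_ksp_type gmres \
  -mg_coarse_pc_type asm \
  -mg_coarse_ksp_rtol 1e-2
```

With -pc_mg_log, the -log_summary output gains per-level stages, so the
coarse-level and fine-level KSPGMRESOrthog times appear separately.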
