Thanks, Matt. 

Following is the output with: -ksp_monitor_lg_residualnorm -ksp_log -ksp_view -ksp_monitor_true_residual -ksp_converged_reason

  0 KSP preconditioned resid norm            inf true resid norm 2.709083260443e+06 ||r(i)||/||b|| 1.000000000000e+00
Linear solve did not converge due to DIVERGED_NANORINF iterations 0
KSP Object: 12 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=1000
  tolerances:  relative=1e-10, absolute=1e-50, divergence=10000
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object: 12 MPI processes
  type: bjacobi
    block Jacobi: number of blocks = 12
    Local solve is same for all blocks, in the following KSP and PC objects:
  KSP Object:  (sub_)   1 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
    left preconditioning
    using NONE norm type for convergence test
  PC Object:  (sub_)   1 MPI processes
    type: ilu
      ILU: out-of-place factorization
      0 levels of fill
      tolerance for zero pivot 2.22045e-14
      using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
      matrix ordering: natural
      factor fill ratio given 1, needed 1
        Factored matrix follows:
          Mat Object:           1 MPI processes
            type: seqaij
            rows=667070, cols=667070
            package used to perform factorization: petsc
            total: nonzeros=4.6765e+07, allocated nonzeros=4.6765e+07
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 133414 nodes, limit used is 5
    linear system matrix = precond matrix:
    Mat Object:    ()     1 MPI processes
      type: seqaij
      rows=667070, cols=667070
      total: nonzeros=4.6765e+07, allocated nonzeros=5.473e+07
      total number of mallocs used during MatSetValues calls =0
        using I-node routines: found 133414 nodes, limit used is 5
  linear system matrix = precond matrix:
  Mat Object:  ()   12 MPI processes
    type: mpiaij
    rows=6723030, cols=6723030
    total: nonzeros=4.98852e+08, allocated nonzeros=5.38983e+08
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 133414 nodes, limit used is 5
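
For context, the solve itself is driven by a bare-bones KSP setup along the following lines. This is only a rough sketch with a toy 1-D matrix standing in for the real 6.7M-dof system (the actual assembly code is not shown here); the point is just that the options above are picked up through KSPSetFromOptions before KSPSolve:

  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    Mat      A;
    Vec      x, b;
    KSP      ksp;
    PetscInt i, n = 100, Istart, Iend;

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* toy 1-D Laplacian standing in for the actual system matrix */
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);
    MatGetOwnershipRange(A, &Istart, &Iend);
    for (i = Istart; i < Iend; i++) {
      MatSetValue(A, i, i, 2.0, INSERT_VALUES);
      if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
      if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    MatCreateVecs(A, &x, &b);
    VecSet(b, 1.0);

    /* command-line options (-ksp_view, -ksp_monitor_true_residual, ...)
       take effect through KSPSetFromOptions */
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp);
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp); MatDestroy(&A); VecDestroy(&x); VecDestroy(&b);
    PetscFinalize();
    return 0;
  }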


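As mentioned in my earlier message below, the L1 norms of the system matrix and the force vector come out finite. That check amounts to something like the following (again only a sketch; the function name is illustrative, and A and b stand for the assembled operator and right-hand side):

  #include <petscksp.h>

  /* sketch of the finiteness check: 1-norms of the operator and RHS */
  static PetscErrorCode CheckNorms(Mat A, Vec b)
  {
    PetscReal anorm, bnorm;

    MatNorm(A, NORM_1, &anorm);
    VecNorm(b, NORM_1, &bnorm);
    return PetscPrintf(PETSC_COMM_WORLD, "||A||_1 = %g  ||b||_1 = %g\n",
                       (double)anorm, (double)bnorm);
  }

Both norms come back as finite numbers.
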
  Does anything jump out at you as odd?

-Manav



> On Mar 26, 2015, at 9:34 AM, Matthew Knepley <[email protected]> wrote:
> 
> On Thu, Mar 26, 2015 at 9:21 AM, Manav Bhatia <[email protected]> wrote:
> Hi,
> 
>   I am using the KSP linear solver for my system of equations, without any 
> command line options at this point. I have checked that the L1 norms of my 
> system matrix and the force vector are finite values, but the KSP solver is 
> returning with an “inf” residual in the very first iteration.
> 
>   The problem has 6.7M dofs and I have tried this on multiple machines with 
> different numbers of nodes, with the same result.
> 
>    Is there a reason why the solver would return after the first iteration 
> with an inf?
> 
>    I am not sure where to start debugging this case, so I would appreciate 
> any pointers.
> 
> For all solver questions, we want to see the output of
> 
>   -ksp_view -ksp_monitor_true_residual -ksp_converged_reason
> 
> The problem here would be that there is an error, so we would never see the 
> output of -ksp_view and know what solver you are using. If you are using 
> something complex,
> can you try using
> 
>   -pc_type jacobi
> 
> and send the output from the options above? Then we can figure out why the 
> other solver gets an inf.
> 
>   Thanks,
> 
>      Matt
> 
> Thanks,
> Manav
> 
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
