[petsc-users] How to skip checking in SNES
Dear All,

I want to turn off convergence checking in SNES when the KSP does not converge due to a low number of subspaces. I did the following:

    PetscErrorCode solver::SNESConvTest( SNES snes, PetscInt it, PetscReal xn,
                                         PetscReal gn, PetscReal f,
                                         SNESConvergedReason* res, void* ctx )
    {
      *res = SNES_CONVERGED_ITERATING;
      PetscFunctionReturn( 0 );
    }

but I still receive the stopping alert:

    Linear solve did not converge due to DIVERGED_ITS iterations 30
    Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE

What is the proper way to do this?

Regards,
BehZad
[petsc-users] How to skip checking in SNES
On Mon, Apr 30, 2012 at 3:09 AM, behzad baghapour <behzad.baghapour at gmail.com> wrote:

> I want to turn off convergence checking in SNES when the KSP does not
> converge due to a low number of subspaces. [...] but I still receive the
> stopping alert:
>
>     Linear solve did not converge due to DIVERGED_ITS iterations 30
>     Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE
>
> What is the proper way to do this?

http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/SNES/SNESSetMaxLinearSolveFailures.html

   Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
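[Editorial note: a minimal sketch of how the routine Matt links to might be used. The wrapper function, its name, and the failure count of 10 are illustrative assumptions, not code from the thread; by default SNES stops after the first linear-solve failure, which matches the DIVERGED_LINEAR_SOLVE message above.]

```c
#include <petscsnes.h>

/* Sketch (assumption, not from the thread): let SNES tolerate several
 * failed inner linear solves instead of stopping the nonlinear iteration
 * with DIVERGED_LINEAR_SOLVE at the first one. */
static PetscErrorCode TolerateLinearFailures(SNES snes)
{
  PetscErrorCode ierr;
  /* Allow up to 10 failed KSP solves; 10 is an illustrative choice. */
  ierr = SNESSetMaxLinearSolveFailures(snes, 10);CHKERRQ(ierr);
  return 0;
}
```

This addresses the original question directly: rather than neutering the convergence test, one tells SNES how many inner-solver failures are acceptable.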
[petsc-users] How to skip checking in SNES
It did work. Thanks a lot. However, I am still confused about the convergence reasons used to control KSP and SNES. Is there a note or example I can follow to understand them?
[petsc-users] How to skip checking in SNES
On Mon, Apr 30, 2012 at 7:10 AM, behzad baghapour <behzad.baghapour at gmail.com> wrote:

> It did work. Thanks a lot. However, I am still confused about the
> convergence reasons used to control KSP and SNES. Is there a note or
> example I can follow to understand them?

http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/SNES/SNESConvergedReason.html

  Thanks,

     Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
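[Editorial note: a hedged illustration of using the reason codes from the linked page (not code from the thread). After SNESSolve() one can query why the solver stopped; positive values mean converged, negative values mean diverged.]

```c
#include <petscsnes.h>

/* Sketch (assumption): inspect the SNESConvergedReason after a solve.
 * The function name and printed messages are illustrative. */
static PetscErrorCode ReportConvergence(SNES snes)
{
  SNESConvergedReason reason;
  PetscErrorCode      ierr;

  ierr = SNESGetConvergedReason(snes, &reason);CHKERRQ(ierr);
  if (reason > 0) {
    /* e.g. SNES_CONVERGED_FNORM_RELATIVE */
    ierr = PetscPrintf(PETSC_COMM_WORLD, "Converged, reason %d\n", (int)reason);CHKERRQ(ierr);
  } else if (reason == SNES_DIVERGED_LINEAR_SOLVE) {
    /* the case from this thread: the inner KSP failed */
    ierr = PetscPrintf(PETSC_COMM_WORLD, "Inner linear solve failed\n");CHKERRQ(ierr);
  } else if (reason < 0) {
    ierr = PetscPrintf(PETSC_COMM_WORLD, "Diverged, reason %d\n", (int)reason);CHKERRQ(ierr);
  }
  return 0;
}
```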
[petsc-users] mumps solve with same nonzero pattern(Hong Zhang)
Wen:

This is weird. Try
1) increasing the work space with -mat_mumps_icntl_14 50 (the default is 20);
2) different matrix orderings with -mat_mumps_icntl_7 2 (or a number from 0 to 6).

Run your code with '-log_summary' and see which routine causes this huge difference. Why does your '-log_summary' give only

    KSPSolve  4 1.0 2.2645e+03 1.0 0.00e+00 0.0 3.9e+04 3.6e+02 5.4e+01 96  0 27  0  9  96  0 27  0  9    0
    PCSetUp   4 1.0 2.2633e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.4e+01 96  0  0  0  6  96  0  0  0  6    0
    PCApply   4 1.0 1.1641e+00 1.0 0.00e+00 0.0 3.9e+04 3.6e+02 2.0e+01  0  0 27  0  3   0  0 27  0  3    0

while for petsc-dev/src/ksp/ksp/examples/tutorials, running 'mpiexec -n 2 ./ex2 -pc_type lu -pc_factor_mat_solver_package mumps -log_summary' I get

    MatMult        2 1.0 1.6904e-04 1.0 4.44e+02 1.0 4.0e+00 5.6e+01 0.0e+00  0 47 25 13  0   0 47 33 13  0    5
    MatSolve       2 1.0 3.8259e-03 1.0 0.00e+00 0.0 8.0e+00 1.9e+02 6.0e+00 10  0 50 84  7  11  0 67 87  9    0
    MatLUFactorSym 1 1.0 2.9058e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  7  0  0  0  9   8  0  0  0 11    0
    MatLUFactorNum 1 1.0 2.0120e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  5  0  0  0  2   6  0  0  0  3    0
    ...

I would like to check these functions. In addition, have you tried other matrix orderings?

Hong

Hong,

I just tested the problem according to your suggestions. I set icntl_14 = 50 and icntl_7 = 5 (METIS). The problem persists: the first solve took 920 seconds and the second solve, with the same-nonzero-pattern PC setup, took 215 seconds. I have also attached the log_summary output file. Do you have any further suggestions? Thanks.

Regards,
Wen
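[Editorial note: the options Hong suggests, collected into one run line. The executable name ./ex2 and the process count are illustrative, taken from the ex2 tutorial mentioned above.]

```shell
# Sketch: combine the MUMPS diagnostics suggested in this thread.
# -mat_mumps_icntl_14 50 : raise the working-space percentage (default 20)
# -mat_mumps_icntl_7 2   : select the fill-reducing ordering (valid values 0..6)
# -log_summary           : per-routine timings, to locate the slow phase
mpiexec -n 2 ./ex2 -pc_type lu -pc_factor_mat_solver_package mumps \
        -mat_mumps_icntl_14 50 -mat_mumps_icntl_7 2 -log_summary
```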
[petsc-users] error in petsc-dev
Hi,

Just a quick question. Following petsc-dev/src/ksp/ksp/examples/tutorials/ex2.c, it seems that I need to call both MatMPIAIJSetPreallocation and MatSeqAIJSetPreallocation to be able to preallocate for both MPI and Seq matrices. Does PETSc automatically choose the relevant function when the code is run in serial and parallel? In other words, what is the effect of MatMPIAIJSetPreallocation (MatSeqAIJSetPreallocation) when the code is run in serial (parallel)?

I like how several functions are abstract and can be used both in serial and parallel (like MatCreate). Is there a similar way to just call a single MatSetPreallocation function?

Thanks,
Mohammad

On Wed, Apr 25, 2012 at 4:04 PM, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:

> Thanks Hong; that fixed the problem.
>
> On Wed, Apr 25, 2012 at 11:31 AM, Hong Zhang <hzhang at mcs.anl.gov> wrote:
>
>> Mohammad:
>>
>>>     MatCreate(comm, A);
>>>     MatSetSizes(A, localRowSize, localColumnSize, globalRowSize, globalColumnSize);
>>>     MatSetType(A, MATMPIAIJ);
>>>     MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
>>>     MatSetFromOptions(A);
>>>     MatGetOwnershipRange(A, rStart, rEnd);
>>>
>>> This (even without MatSetType(A, MATMPIAIJ);) works with 3.2-p6 but not
>>> dev. The only differences I can see are 1) the order of MatSetFromOptions
>>> and 2) that I do not call MatSeqAIJSetPreallocation, which I think I do not
>>> need anyway. Is there something I'm doing wrong?
>>
>> MatSetFromOptions() must be called before MatMPIAIJSetPreallocation().
>> If the user sets the mat type at runtime, MatSetFromOptions() picks it up
>> and sets the type accordingly; SetPreallocation() is then called after the
>> type is set.
>>
>> Hong
[petsc-users] Solving a very simple time step problem:
I want to solve a very simple equation:

    u_t = F(t) u

where F(t) = H_0 + a(t) H' (H_0 and H' are constant matrices, and a(t) is a time-dependent scalar). But I'm not sure how to go about doing this using the TS context. I don't have a Jacobian that I need to worry about, so should I be doing

    TSSetRHSFunction(ts, PETSC_NULL, myRHSFunction, appctx);
    TSSetRHSJacobian(ts, A, A, TSComputeRHSJacobianConstant, appctx);

where

    myRHSFunction(TS ts, PetscReal t, Vec u, Vec F, void *ctx)
    {
      // create a temporary matrix A = H_0 + a(t) H'
      // then compute F = A u
    }

or should I be doing something else? Thanks for the help; unfortunately, it looks like the documentation on TS in the manual isn't accurate.

-Andrew
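[Editorial note: one way the sketched RHS function could be written out. This is an illustrative sketch only, not the poster's code: the AppCtx struct, its field names, and the placeholder a(t) are assumptions. It also avoids building the temporary matrix A by using two mat-vecs and an AXPY instead.]

```c
#include <petscts.h>

/* Hypothetical application context; H0 and Hp (for H') are the two
 * constant matrices from the question. */
typedef struct {
  Mat H0, Hp;
} AppCtx;

/* Placeholder for the user's time-dependent scalar a(t). */
static PetscScalar a_of_t(PetscReal t) { return 1.0 + t; }

/* Form F = (H_0 + a(t) H') u without assembling a temporary matrix. */
static PetscErrorCode myRHSFunction(TS ts, PetscReal t, Vec u, Vec F, void *ctx)
{
  AppCtx        *app = (AppCtx*)ctx;
  Vec            tmp;
  PetscErrorCode ierr;

  ierr = VecDuplicate(F, &tmp);CHKERRQ(ierr);
  ierr = MatMult(app->H0, u, F);CHKERRQ(ierr);        /* F   = H_0 u    */
  ierr = MatMult(app->Hp, u, tmp);CHKERRQ(ierr);      /* tmp = H' u     */
  ierr = VecAXPY(F, a_of_t(t), tmp);CHKERRQ(ierr);    /* F  += a(t) tmp */
  ierr = VecDestroy(&tmp);CHKERRQ(ierr);
  return 0;
}
```

In production one would duplicate the work vector once during setup rather than inside the callback; it is created here to keep the sketch self-contained.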
[petsc-users] error in petsc-dev
On Mon, Apr 30, 2012 at 7:01 PM, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:

> Just a quick question. Following petsc-dev/src/ksp/ksp/examples/tutorials/ex2.c,
> it seems that I need to call both MatMPIAIJSetPreallocation and
> MatSeqAIJSetPreallocation to be able to preallocate for both MPI and Seq
> matrices. [...] Is there a similar way to just call a single
> MatSetPreallocation function?

http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatXAIJSetPreallocation.html

   Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
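[Editorial note: a sketch of the single-call preallocation Matt links to. The helper function, its name, and its arguments are illustrative assumptions; the point is that one MatXAIJSetPreallocation call covers both the sequential and the MPI AIJ cases, with PETSc dispatching to whichever subclass method applies.]

```c
#include <petscmat.h>

/* Sketch (assumption): create an n-local-rows square matrix and
 * preallocate it with one call that works in serial and in parallel.
 * dnnz/onnz are per-row diagonal/off-diagonal nonzero counts. */
static PetscErrorCode CreatePreallocated(MPI_Comm comm, PetscInt n,
                                         const PetscInt dnnz[],
                                         const PetscInt onnz[], Mat *A)
{
  PetscErrorCode ierr;

  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, n, n, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetFromOptions(*A);CHKERRQ(ierr);   /* fix the type first, as Hong noted */
  /* block size 1; the last two arguments (upper-triangular counts) are
     only used by symmetric formats, so NULL is fine for AIJ */
  ierr = MatXAIJSetPreallocation(*A, 1, dnnz, onnz, NULL, NULL);CHKERRQ(ierr);
  return 0;
}
```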
[petsc-users] error in petsc-dev
On Apr 30, 2012, at 6:57 PM, Matthew Knepley wrote:

> On Mon, Apr 30, 2012 at 7:01 PM, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:
>
>> Just a quick question. Following petsc-dev/src/ksp/ksp/examples/tutorials/ex2.c,
>> it seems that I need to call both MatMPIAIJSetPreallocation and
>> MatSeqAIJSetPreallocation to be able to preallocate for both MPI and Seq
>> matrices. Does PETSc automatically choose the relevant function when the
>> code is run in serial and parallel?

   Yes, it uses the relevant one and ignores any that are not relevant. This is a common trick in PETSc. You can think of the calls as methods specific to a particular subclass of the Mat class: PETSc automatically uses all the methods that are appropriate for the particular subclass and ignores all the other ones.

   Barry
[petsc-users] error in petsc-dev
Barry, Matt,

Thank you both.

Mohammad

On Mon, Apr 30, 2012 at 6:05 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:

> Yes, it uses the relevant one and ignores any that are not relevant. This
> is a common trick in PETSc. You can think of the calls as methods specific
> to a particular subclass of the Mat class: PETSc automatically uses all the
> methods that are appropriate for the particular subclass and ignores all
> the other ones.
>
>    Barry