Re: [petsc-users] A number of questions about DMDA with SNES and Quasi-Newton methods

2017-10-02 Thread zakaryah .
I'm still working on this. I've made some progress, and it looks like the issue is with the KSP, at least for now. The Jacobian may be ill-conditioned. Is it possible to use -snes_test_display during an intermediate step of the analysis? I would like to inspect the Jacobian after several
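
A minimal sketch (not part of the original message) of one way to inspect the assembled Jacobian at intermediate Newton iterations, using a user-registered SNES monitor; the function name is hypothetical and MatView to stdout is only practical for small problems:

    #include <petscsnes.h>

    /* Hypothetical monitor: print the current Jacobian after each Newton
       iteration so it can be inspected for ill-conditioning.  SNESGetJacobian()
       returns the Jacobian used in the most recent linear solve. */
    static PetscErrorCode MonitorJacobian(SNES snes, PetscInt it, PetscReal fnorm, void *ctx)
    {
      Mat            J;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = SNESGetJacobian(snes, &J, NULL, NULL, NULL);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD, "Jacobian at Newton iteration %D:\n", it);CHKERRQ(ierr);
      ierr = MatView(J, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); /* small problems only */
      PetscFunctionReturn(0);
    }

    /* Registration, somewhere after SNESCreate():
       ierr = SNESMonitorSet(snes, MonitorJacobian, NULL, NULL);CHKERRQ(ierr); */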

Re: [petsc-users] Mat/Vec with empty ranks

2017-10-02 Thread Florian Lindner
On 02.10.2017 at 21:04, Matthew Knepley wrote: > On Mon, Oct 2, 2017 at 6:21 AM, Florian Lindner > wrote: > > Hello, > > I have a matrix and vector that live on 4 ranks, but only ranks 2 and 3 > have values: > > Doing a simple

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Mark Adams
Well that is strange, the PETSc tests work. Wenbo, could you please: > git clone -b gamg-fix-eig-err https://bitbucket.org/petsc/petsc petsc2 > cd petsc2 and reconfigure, make, and then run your test without the -st_gamg_est_ksp_error_if_not_converged 0 fix, and see if this fixes the problem.

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Wenbo Zhao
On Tue, Oct 3, 2017 at 12:23 AM, Matthew Knepley wrote: > On Mon, Oct 2, 2017 at 11:56 AM, Wenbo Zhao > wrote: > >> >> >> On Mon, Oct 2, 2017 at 11:49 PM, Mark Adams wrote: >> >>> Wenbo, do you build your PETSc? >>> >>> Yes. >> My

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Matthew Knepley
On Mon, Oct 2, 2017 at 11:56 AM, Wenbo Zhao wrote: > > > On Mon, Oct 2, 2017 at 11:49 PM, Mark Adams wrote: > >> Wenbo, do you build your PETSc? >> >> Yes. > My configure options are listed below: > ./configure --with-mpi=1 --with-shared-libraries=1 \ >

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Wenbo Zhao
On Mon, Oct 2, 2017 at 11:49 PM, Mark Adams wrote: > Wenbo, do you build your PETSc? > > Yes. My configure options are listed below: ./configure --with-mpi=1 --with-shared-libraries=1 \ --with-64-bit-indices=1 --with-debugging=1 And I set PETSC_DIR, PETSC_ARCH and

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Mark Adams
Wenbo, do you build your PETSc? On Mon, Oct 2, 2017 at 11:45 AM, Mark Adams wrote: > This is normal: > > Linear st_gamg_est_ solve did not converge due to DIVERGED_ITS iterations > 10 > > It looks like ksp->errorifnotconverged got set somehow. If the default > changed in KSP

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Mark Adams
This is normal: Linear st_gamg_est_ solve did not converge due to DIVERGED_ITS iterations 10 It looks like ksp->errorifnotconverged got set somehow. If the default changed in KSP then (SAGG) GAMG would not ever work. I assume you don't have a .petscrc file with more (crazy) options in it ...

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Wenbo Zhao
On Mon, Oct 2, 2017 at 11:30 PM, Matthew Knepley wrote: > On Mon, Oct 2, 2017 at 11:15 AM, Mark Adams wrote: > >> non-smoothed aggregation is converging very fast. smoothed fails in the >> eigen estimator. >> >> Run this again with -st_gamg_est_ksp_view and

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Matthew Knepley
On Mon, Oct 2, 2017 at 11:15 AM, Mark Adams wrote: > non-smoothed aggregation is converging very fast. smoothed fails in the > eigen estimator. > > Run this again with -st_gamg_est_ksp_view and -st_gamg_est_ksp_monitor, > and see if you get more output (I'm not 100% sure about

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Mark Adams
It seems to solve fine but then fails on this:

  ierr = PetscObjectSAWsBlock((PetscObject)ksp);CHKERRQ(ierr);
  if (ksp->errorifnotconverged && ksp->reason < 0) SETERRQ(comm,PETSC_ERR_NOT_CONVERGED,"KSPSolve has not converged");

It looks like somehow ksp->errorifnotconverged got set. On Mon,
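
For reference, a minimal sketch (names illustrative, not from the thread) of how this flag is controlled through the public API; the inner eigenvalue-estimation KSP discussed here would normally be reached via its options prefix rather than directly:

    #include <petscksp.h>

    /* When KSPSetErrorIfNotConverged() is PETSC_TRUE, KSPSolve() raises
       PETSC_ERR_NOT_CONVERGED instead of just returning with a negative
       ksp->reason. */
    static PetscErrorCode ConfigureSolver(KSP ksp)
    {
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = KSPSetErrorIfNotConverged(ksp, PETSC_FALSE);CHKERRQ(ierr); /* do not error out */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);                      /* command-line options may still override */
      PetscFunctionReturn(0);
    }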

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Matthew Knepley
On Mon, Oct 2, 2017 at 11:21 AM, Mark Adams wrote: > Yea, it fails in the eigen estimator, but the Cheby eigen estimator works > in the solve that works: > > eigenvalue estimates used: min = 0.14, max = 1.10004 > eigenvalues estimate via gmres min 0.0118548,

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Wenbo Zhao
I get more output zhaowenbo@ubuntu:~/test_slepc/SPARK/spark$ make NCORE=1 runkr_smooth mpirun -n 1 ./step-41 \ -st_ksp_type gmres -st_ksp_view -st_ksp_monitor \ -st_pc_type gamg -st_pc_gamg_type agg -st_pc_gamg_agg_nsmooths 1 \ -mata AMAT.dat -matb BMAT.dat \ -st_gamg_est_ksp_view

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Mark Adams
Yea, it fails in the eigen estimator, but the Cheby eigen estimator works in the solve that works: eigenvalue estimates used: min = 0.14, max = 1.10004 eigenvalues estimate via gmres min 0.0118548, max 1.4 Why would it just give "KSPSolve has not converged". It is not

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Mark Adams
non-smoothed aggregation is converging very fast. smoothed fails in the eigen estimator. Run this again with -st_gamg_est_ksp_view and -st_gamg_est_ksp_monitor, and see if you get more output (I'm not 100% sure about these args). On Mon, Oct 2, 2017 at 11:06 AM, Wenbo Zhao

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Matthew Knepley
On Mon, Oct 2, 2017 at 11:06 AM, Wenbo Zhao wrote: > Matt, > Thanks Wenbo. > Test 1 nonsmooth > zhaowenbo@ubuntu:~/test_slepc/SPARK/spark$ make NCORE=1 runkr_nonsmooth > mpirun -n 1 ./step-41 \ >-st_ksp_type gmres -st_ksp_view -st_ksp_monitor \ >-st_pc_type

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Wenbo Zhao
Matt, Test 1 nonsmooth zhaowenbo@ubuntu:~/test_slepc/SPARK/spark$ make NCORE=1 runkr_nonsmooth mpirun -n 1 ./step-41 \ -st_ksp_type gmres -st_ksp_view -st_ksp_monitor \ -st_pc_type gamg -st_pc_gamg_type agg -st_pc_gamg_agg_nsmooths 0 \ -mata AMAT.dat -matb BMAT.dat \ -eps_nev 1
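
The runs above toggle smoothed versus non-smoothed aggregation on the command line with -st_pc_gamg_agg_nsmooths; below is a hedged sketch of the equivalent in-code configuration (function and variable names illustrative):

    #include <petscksp.h>

    /* Configure GAMG with non-smoothed aggregation, equivalent to
       -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 0. */
    static PetscErrorCode SetupGAMG(KSP ksp)
    {
      PC             pc;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);
      ierr = PCGAMGSetType(pc, PCGAMGAGG);CHKERRQ(ierr);
      ierr = PCGAMGSetNSmooths(pc, 0);CHKERRQ(ierr); /* 1 enables smoothed aggregation */
      PetscFunctionReturn(0);
    }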

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Matthew Knepley
On Mon, Oct 2, 2017 at 10:43 AM, Wenbo Zhao wrote: > Mark, > > Thanks for your reply. > > On Mon, Oct 2, 2017 at 9:51 PM, Mark Adams wrote: > >> Please send the output with -st_ksp_view and -st_ksp_monitor and we can >> start to debug it. >> >> Test 1

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Mark Adams
Please send the output with -st_ksp_view and -st_ksp_monitor and we can start to debug it. You mentioned that B is not symmetric. I assume it is elliptic (diffusion). Where does the asymmetry come from? On Mon, Oct 2, 2017 at 9:39 AM, Wenbo Zhao wrote: > Matt, >

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Wenbo Zhao
Matt, Thanks for your reply. Because the default options did not work at first (-st_ksp_type gmres -st_pc_type gamg -st_pc_gamg_type agg -st_pc_gamg_agg_nsmooths 1), I tried to test those options. Wenbo On Mon, Oct 2, 2017 at 9:08 PM, Matthew Knepley wrote: > On Mon, Oct 2, 2017 at

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Mark Adams
GAMG will coarsen the problem until it is small and fast to solve with a direct solver (LU). You can use preonly if you have a perfect preconditioner. On Mon, Oct 2, 2017 at 9:08 AM, Matthew Knepley wrote: > On Mon, Oct 2, 2017 at 8:30 AM, Wenbo Zhao
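
A minimal sketch (assuming the multigrid hierarchy has already been built, e.g. after KSPSetUp) of how the coarse-grid solver Mark describes can be inspected or overridden; names are illustrative:

    #include <petscksp.h>

    /* Access the coarsest-level KSP of a GAMG preconditioner and make it an
       exact direct solve: one application of PCLU via KSPPREONLY. */
    static PetscErrorCode TweakCoarseSolve(KSP ksp)
    {
      PC             pc, cpc;
      KSP            cksp;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCMGGetCoarseSolve(pc, &cksp);CHKERRQ(ierr); /* coarsest-level solver */
      ierr = KSPSetType(cksp, KSPPREONLY);CHKERRQ(ierr);  /* no Krylov iterations ... */
      ierr = KSPGetPC(cksp, &cpc);CHKERRQ(ierr);
      ierr = PCSetType(cpc, PCLU);CHKERRQ(ierr);          /* ... just a direct LU solve */
      PetscFunctionReturn(0);
    }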

Re: [petsc-users] Load distributed matrices from directory

2017-10-02 Thread Barry Smith
MPCs? If you have a collection of "overlapping matrices" on disk then you will be responsible for even providing the matrix-vector product for the operator which you absolutely need if you are going to use any Krylov based overlapping Schwarz method. How do you plan to perform the matrix
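
One standard way to provide that matrix-vector product is a shell matrix; the sketch below is illustrative only (the multiply routine is a placeholder, not the poster's actual subdomain assembly):

    #include <petscmat.h>

    typedef struct {
      PetscInt placeholder; /* real code: local sub-matrices, scatters, ... */
    } UserCtx;

    /* Placeholder y = A*x; a real implementation would scatter x to the
       subdomains, apply the local matrices, and sum the overlapping parts. */
    static PetscErrorCode UserMult(Mat A, Vec x, Vec y)
    {
      UserCtx       *ctx;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = MatShellGetContext(A, (void **)&ctx);CHKERRQ(ierr);
      ierr = VecCopy(x, y);CHKERRQ(ierr); /* placeholder action only */
      PetscFunctionReturn(0);
    }

    /* nlocal is the local row/column size owned by this rank. */
    static PetscErrorCode CreateOperator(MPI_Comm comm, PetscInt nlocal, UserCtx *ctx, Mat *A)
    {
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = MatCreateShell(comm, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE, ctx, A);CHKERRQ(ierr);
      ierr = MatShellSetOperation(*A, MATOP_MULT, (void (*)(void))UserMult);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }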

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Matthew Knepley
On Mon, Oct 2, 2017 at 8:30 AM, Wenbo Zhao wrote: > Matt > > Because I am not clear about what will happen using 'preonly' for large > scale problem. > The size of the problem has nothing to do with 'preonly'. All it means is to apply a preconditioner without a Krylov

Re: [petsc-users] Mat/Vec with empty ranks

2017-10-02 Thread Matthew Knepley
On Mon, Oct 2, 2017 at 6:21 AM, Florian Lindner wrote: > Hello, > > I have a matrix and vector that live on 4 ranks, but only ranks 2 and 3 > have values: > > e.g. > > Vec Object: 4 MPI processes > type: mpi > Process [0] > Process [1] > 1.1 > 2.5 > 3. > 4. > Process [2] >

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Wenbo Zhao
Matt, Because I am not clear about what will happen when using 'preonly' for a large-scale problem. It seems to use a direct solver, according to the page below: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPPREONLY.html Thanks! Wenbo On Mon, Oct 2, 2017 at 5:09 PM, Matthew Knepley

[petsc-users] Mat/Vec with empty ranks

2017-10-02 Thread Florian Lindner
Hello, I have a matrix and vector that live on 4 ranks, but only ranks 2 and 3 have values: e.g. Vec Object: 4 MPI processes type: mpi Process [0] Process [1] 1.1 2.5 3. 4. Process [2] 5. 6. 7. 8. Process [3] Doing a simple LSQR solve does not converge. However, when the values are
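
A minimal sketch (not from the original post) that reproduces the layout printed above — two ranks with no local entries and two ranks with four entries each — using placeholder values; run with mpirun -n 4:

    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      Vec            v;
      PetscMPIInt    rank;
      PetscInt       nlocal, rstart, rend, i;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);

      /* Two middle ranks own four entries each; the others own none. */
      nlocal = (rank == 1 || rank == 2) ? 4 : 0;
      ierr = VecCreateMPI(PETSC_COMM_WORLD, nlocal, PETSC_DETERMINE, &v);CHKERRQ(ierr);

      ierr = VecGetOwnershipRange(v, &rstart, &rend);CHKERRQ(ierr);
      for (i = rstart; i < rend; i++) {
        ierr = VecSetValue(v, i, (PetscScalar)(i + 1), INSERT_VALUES);CHKERRQ(ierr); /* placeholder values */
      }
      ierr = VecAssemblyBegin(v);CHKERRQ(ierr);
      ierr = VecAssemblyEnd(v);CHKERRQ(ierr);

      ierr = VecView(v, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
      ierr = VecDestroy(&v);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return 0;
    }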

Re: [petsc-users] Load distributed matrices from directory

2017-10-02 Thread Matthew Knepley
On Mon, Oct 2, 2017 at 4:12 AM, Matthieu Vitse wrote: > > On 29 Sept 2017, at 17:43, Barry Smith wrote: > > Or is your matrix generator code sequential and cannot generate the full > matrix so you want to generate chunks at a time and save to disk

Re: [petsc-users] Issue of mg_coarse_ksp not converge

2017-10-02 Thread Matthew Knepley
On Sun, Oct 1, 2017 at 9:53 PM, Wenbo Zhao wrote: > Matt, > Thanks for your reply. > It DOES make no sense for this problem. > But I am not clear about the 'preonly' option. Which solver is used in > preonly? I wonder if 'preonly' is suitable for large scale problem

Re: [petsc-users] Load distributed matrices from directory

2017-10-02 Thread Matthieu Vitse
> On 29 Sept 2017, at 17:43, Barry Smith wrote: > > Or is your matrix generator code sequential and cannot generate the full > matrix so you want to generate chunks at a time and save to disk then load > them? Better for you to refactor