Since you get a SEGV, I would suggest running the code in the debugger to check where it's crashing.
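For example, PETSc can launch the debugger for you via a runtime option (the executable name ./mysolver and the mpiexec launcher below are only placeholders for your own setup):

  mpiexec -n 1 ./mysolver -start_in_debugger

or use -on_error_attach_debugger to attach a debugger only when the error occurs.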
Also run with valgrind to see where problems start. Most likely the issue
would be a change in the prototypes of PETSc functions between releases.

Satish

On Wed, 23 Nov 2011, jean-frederic thebault wrote:

> Thanks for your response.
>
> Sorry about that: to reduce the size of the log file, unfortunately, I did
> take out the bad lines... In the out.log I've put in this email, I've made
> sure they are there...
>
> Actually, I don't use MatSetOption, but MatSetFromOption instead. However,
> when I called MatSetFromOption, the PETSC_COMM_WORLD was missing. But now,
> it's getting worse!! (as you can see in the out.log included in this
> email).
>
> On 23 November 2011 15:55, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>
> > On Wed, Nov 23, 2011 at 08:24, jean-frederic thebault <
> > jean-frederic at thebault-net.com> wrote:
> >
> >> I'm wondering what's wrong in my code. I'm using PETSc to solve a
> >> linear system and want to use a multi-processor computer. 9 years ago,
> >> I used petsc-2.1.3 with success. A few weeks ago, I updated PETSc to
> >> the 3.1-p8 version and made the necessary changes to work with it. No
> >> problem. And recently, I migrated to petsc-3.2-p5. Compilation is OK.
> >> But now, when I run a simulation, I get some PETSC ERROR messages in
> >> the log file, even when using only one processor (see the out.log file
> >> in this email).
> >
> > You are calling MatSetOption() with the wrong number of arguments. C
> > compilers tell you about this, but Fortran compilers do not.
> >
> > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetOption.html
> >
> >> However, I think I defined MatMPI and VecMPI correctly, according to
> >> the doc. The log file says that something is wrong with the nnz, which
> >> should not be greater than the row length (??).
> >
> > The log you sent does not say anything about nnz. Fix the call to
> > MatSetOption().
> >
> >> And also, with the previous version of PETSc I used, there was no
> >> problem using -pc_type bjacobi and -sub_pc_type sor, just to solve the
> >> linear system when doing parallel computations, because SOR is not
> >> parallelized. But now, when I use -pc_type bjacobi and -sub_pc_type sor
> >> with 3 ranks, I experience some convergence problems during my
> >> simulation.
> >
> > These options should do the same thing they used to do. Make sure you
> > are assembling correctly. If it's still confusing, run the old and new
> > code with
> >
> >   -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -pc_type
> >   bjacobi -sub_pc_type sor
> >
> > and send the output of both for us to look at.
> >
> > Also note that you can use -pc_type sor even in parallel. There are
> > options for local iterations and full iterations.
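For reference, a minimal Fortran sketch of the petsc-3.2-style MatSetOption()
call discussed above; the matrix A, the error code ierr, and the particular
option shown are only placeholders for whatever your code actually uses, and
the usual PETSc Fortran includes/declarations are assumed:

      ! petsc-3.2 expects the option value (PETSC_TRUE/PETSC_FALSE) as an
      ! explicit argument, plus the trailing error code in Fortran.
      call MatSetOption(A, MAT_SYMMETRIC, PETSC_TRUE, ierr)
      CHKERRQ(ierr)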
