You can save one (or multiple) linear system(s) with the additional options

  -ksp_view_mat binary:filename -ksp_view_rhs binary:filename

and then simply email the file named filename to [email protected], along with
the number of processes you are using and the solver options, and we can run
the same problem and thus hopefully reproduce the troublesome behavior.
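
For example (./myprog and system.bin here are only placeholders for your
executable and whatever file name you prefer):

  ./myprog <your usual options> -ksp_view_mat binary:system.bin -ksp_view_rhs binary:system.bin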


  Barry

> On Jul 13, 2015, at 3:23 PM, Mark Adams <[email protected]> wrote:
> 
> Please respond to all, to get the petsc mailing list.
> 
> The out file here looks fine.  It would help if you ran in debug mode (compiled
> with -g) so that gdb can give line numbers.
> 
> If the error is in the dot product, it looks like you are getting an Inf or
> NaN somewhere.  If the first evaluation of the norm is the problem, then run
> the bad code and add a VecView and MatView, plus a VecNorm and MatNorm, so
> that the error happens in your code, just to check.  If you had a number like
> 1e200 in the vector, then the 2-norm would give you this trap for overflow.
> You could test with the inf norm; if that works, then it is an overflow problem.
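> 
> A minimal sketch of such a check (assuming a Mat A and Vec b that you have
> already assembled and pass to KSP; the name CheckFinite is just a placeholder):
> 
>   #include <petscksp.h>
> 
>   /* Print some norms of the system right before KSPSolve.  The inf norms
>      cannot overflow the way a sum of squares can, so if they print fine but
>      the 2-norm traps, the problem is an overflow (or an Inf/NaN entry). */
>   static PetscErrorCode CheckFinite(Mat A, Vec b)
>   {
>     PetscReal      anorm, binf, b2;
>     PetscErrorCode ierr;
> 
>     PetscFunctionBeginUser;
>     ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
>     ierr = VecView(b, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
>     ierr = MatNorm(A, NORM_INFINITY, &anorm);CHKERRQ(ierr);
>     ierr = VecNorm(b, NORM_INFINITY, &binf);CHKERRQ(ierr);
>     ierr = PetscPrintf(PETSC_COMM_WORLD, "||A||_inf %g  ||b||_inf %g\n",
>                        (double)anorm, (double)binf);CHKERRQ(ierr);
>     /* the 2-norm is the call that traps in ddot_ if something is wrong */
>     ierr = VecNorm(b, NORM_2, &b2);CHKERRQ(ierr);
>     ierr = PetscPrintf(PETSC_COMM_WORLD, "||b||_2 %g\n", (double)b2);CHKERRQ(ierr);
>     PetscFunctionReturn(0);
>   }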
> 
> 
> 
> 
> ---------- Forwarded message ----------
> From: Greg Miller <[email protected]>
> Date: Mon, Jul 13, 2015 at 3:54 PM
> Subject: Re: Fwd: same petsc problem
> To: Mark Adams <[email protected]>
> Cc: "[email protected]" <[email protected]>
> 
> 
> thanks.  here it is.
> 
> I'm not getting the NaN problem with this code - not sure why.  NaN was
> showing up in KSPSolve in the first evaluation of the vector norm.
> In going from my original example to the minimal code I mailed today, the
> matrix coefficients got rounded through ASCII printing.
> That's the only difference I can think of.  The gdb stack from the old code
> is shown below.
> G
> 
> 
> 
> Program received signal SIGFPE, Arithmetic exception.
> 0x00007ffff6c4ed15 in ddot_ () from /usr/lib/libblas.so.3
> (gdb) where
> #0  0x00007ffff6c4ed15 in ddot_ () from /usr/lib/libblas.so.3
> #1  0x00000000008f87da in VecNorm_Seq (xin=0x22e7ca0, type=NORM_2, 
> z=0x7fffffffba30)
>     at /home/usr/local/src/petsc-3.5.3/src/vec/vec/impls/seq/bvec2.c:614
> #2  0x00000000008c9eea in VecNorm (x=0x22e7ca0, type=NORM_2, 
> val=0x7fffffffba30)
>     at /home/usr/local/src/petsc-3.5.3/src/vec/vec/interface/rvector.c:242
> #3  0x00000000008cab40 in VecNormalize (x=0x22e7ca0, val=0x7fffffffbaa0) at 
> /home/usr/local/src/petsc-3.5.3/src/vec/vec/interface/rvector.c:337
> #4  0x0000000000d44a9d in KSPGMRESCycle (itcount=0x7fffffffbb08, 
> ksp=0x2288ff0)
>     at /home/usr/local/src/petsc-3.5.3/src/ksp/ksp/impls/gmres/gmres.c:161
> #5  0x0000000000d453e3 in KSPSolve_GMRES (ksp=0x2288ff0) at 
> /home/usr/local/src/petsc-3.5.3/src/ksp/ksp/impls/gmres/gmres.c:235
> 
> 
> On 07/13/2015 11:54 AM, Mark Adams wrote:
> > Greg, I am forwarding this to the PETSc mailing list.
> >
> > Please send the entire output from this run.  As I recall you were getting 
> > a message that all values were not the same on all processors in GMRES.  I 
> > have seen this when I get NaNs in the system.
> >
> > While you are doing this you should use a simple solver, e.g. change:
> >
> > -pressure_pc_type gamg
> >
> > to
> >
> > -pressure_pc_type jacobi
> >
> >
> > And add:
> >
> > -pressure_ksp_monitor_true_residual
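> >
> > For example, the run might then look like (a sketch; ./myprog and the other
> > options stand in for your actual command line):
> >
> >   ./myprog <other options> -pressure_pc_type jacobi -pressure_ksp_monitor_true_residual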
> >
> > Mark
> >
> >
> > ---------- Forwarded message ----------
> > From: *Greg Miller* <[email protected] <mailto:[email protected]>>
> > Date: Mon, Jul 13, 2015 at 2:08 PM
> > Subject: same petsc problem
> > To: Mark Adams <[email protected] <mailto:[email protected]>>
> > Cc: David Trebotich <[email protected] <mailto:[email protected]>>
> >
> >
> > Hi Mark.  I'm still stuck on the same PETSc problem.  Would you please try
> > the attached minimal example and advise me?
> >
> > I'm running this without MPI:
> > make DIM=2 DEBUG=TRUE MPI=FALSE USE_PETSC=TRUE test
> >
> > There is no input file.
> >
> > Thank you,
> > Greg
> >
> > --
> > Greg Miller
> > Department of Chemical Engineering and Materials Science
> > University of California, Davis
> > One Shields Avenue
> > Davis, CA 95616
> > [email protected] <mailto:[email protected]>
> >
> 
> --
> Greg Miller
> Department of Chemical Engineering and Materials Science
> University of California, Davis
> One Shields Avenue
> Davis, CA 95616
> [email protected]
> 
> <out.txt>
