Hi Satish,

Thanks for your answer. In the attached program, I have declared the
following standard Fortran arrays:

    real(dp),dimension(:,:),allocatable :: D1X,D2X,D1Y,D2Y

Let's say these are real matrix derivatives that I need to insert into my
complex A matrix (they could also be some real baseflow from a similarity
solver). I have filled D1Y with 999.d0 just as a test (see line 169). I then
insert D1Y into the global (complex) matrix with

    call MatSetValues(A,ny,idxm2,ny,idxn2,D1Y,INSERT_VALUES,ierr)

I expected that in the matrix A this would automatically be converted to
999.0 + 0.0i, but when I view A, I see 999 + 999i or even
999 + 1.0326e-321i. Is there a way to insert D1Y as is and obtain the
proper behavior? How would you do it?
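
The only workaround I can think of so far is to copy the real array into a
PetscScalar work array before calling MatSetValues, along these lines
(untested sketch; D1Yc is just a hypothetical temporary):

    PetscScalar,dimension(:,:),allocatable :: D1Yc
    allocate(D1Yc(ny,ny))
    D1Yc = D1Y   ! real-to-complex assignment, imaginary parts set to 0.0
    call MatSetValues(A,ny,idxm2,ny,idxn2,D1Yc,INSERT_VALUES,ierr)

but I don't know whether that is the recommended approach.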

Thanks

Anthony

On Fri, Jun 19, 2015 at 11:08 AM, Satish Balay <[email protected]> wrote:

> On Fri, 19 Jun 2015, Anthony Haas wrote:
>
> > Hi,
> >
> > I have a Fortran90 program that solves a complex linear generalized
> eigenvalue
> > problem (GEVP) using standard fortran 90 programming:
> >
> > Subroutines, modules, allocatable arrays, real(8), int,...
> >
> > This program uses Lapack to solve the GEVP. The program is mainly made
> > of:
> >
> > 1) set dimensions of problem and initialize arrays,...
> > 2) compute the baseflow (for instance boundary layer flow)
> > 3) build the (stability) complex generalized eigenvalue problem ==> build
> > (dense) matrices A and B
> > 4) solve the GEVP with Lapack
> >
> > Now I want to use PETSc + SLEPc to use sparse matrices. Do I need to
> > rewrite/modify everything in terms of PETSc variables as follows:
> >
> > - int -> PetscInt
> > - real(8) -> PetscScalar
> perhaps you mean: PetscReal
>
> > - complex*16 -> PetscScalar
> >
> > or is it possible to reuse all that F90 code? For instance I have a
> similarity
> > solver that computes Blasius solution. If that similarity solver
> provides me
> > with u and v velocities in terms of standard fortran90 real(8)
> variables, how
> > should I use these variables to build my complex matrix? Should I
> > convert them to Petsc variables? How?
> >
>
> You can use the current datatypes used in your code - And always make
> sure the types match manually. [Fortran does not have typecheck anyway..]
>
> > what should I do with my Fortran90 allocatable arrays?
> >
> > real(dp),allocatable,dimension(:,:) :: u-->
> > PetscScalar,allocatable,dimension(:,:) :: u ????
>
> Either should work.
>
> You can add the following to your code:
>
> #if !defined(PETSC_USE_COMPLEX)
> #error "this code requires PETSc --with-scalar-type=complex build"
> #endif
> #if !defined(PETSC_USE_REAL_DOUBLE)
> #error "this code requires PETSc --with-precision=real build"
> #endif
>
> Satish
>
!
!  Description: Build complex matrices A and B in the context of a generalized
!  eigenvalue problem (GEVP)
!
!
!/*T
!  Concepts: Build complex matrices
!  Processors: n
!T*/
!
! -----------------------------------------------------------------------

      program main

      implicit none

! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
!                    Include files
! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
!
!  This program uses CPP for preprocessing, as indicated by the use of
!  PETSc include files in the directory petsc/include/finclude.  This
!  convention enables use of the CPP preprocessor, which allows the use
!  of the #include statements that define PETSc objects and variables.
!
!  Use of the conventional Fortran include statements is also supported.
!  In this case, the PETSc include files are located in the directory
!  petsc/include/foldinclude.
!
!  Since one must be very careful to include each file no more than once
!  in a Fortran routine, application programmers must explicitly list
!  each file needed for the various PETSc components within their
!  program (unlike the C/C++ interface).
!

#include <finclude/petscsys.h>
#include <finclude/petscvec.h>
#include <finclude/petscmat.h>
#include <finclude/petscmat.h90>
#include <finclude/petscpc.h>
#include <finclude/petscksp.h>

!
! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
!                   Variable declarations
! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
!
!  Variables:
!     ksp      - linear solver (Krylov subspace method) context
!     pc       - preconditioner context
!     x, b, u  - approx solution, right-hand-side, exact solution vectors
!     A        - matrix that defines linear system
!     its      - iterations for convergence
!     norm     - norm of error in solution
!     rctx     - random number generator context
!
!  Note that vectors are declared as PETSc "Vec" objects.  These vectors
!  are mathematical objects that contain more than just an array of
!  double precision numbers. I.e., vectors in PETSc are not just
!        double precision x(*).
!  However, local vector data can be easily accessed via VecGetArray().
!  See the Fortran section of the PETSc users manual for details.
!
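!  For example, a minimal (commented-out) sketch of such access, assuming the
!  F90 pointer interface from finclude/petscvec.h90 is also included:
!
!!$      PetscScalar, pointer :: x_array(:)
!!$      call VecGetArrayF90(x,x_array,ierr)       ! x_array aliases the local data of x
!!$      call VecRestoreArrayF90(x,x_array,ierr)   ! restore before using x again
!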

! NON PETSC VARIABLES

      integer, parameter :: i8b = selected_int_kind(15) !                     ==> cf. integer(8)
      integer, parameter :: i4b = selected_int_kind(9)  ! -10**9 < n < 10**9  ==> cf. integer(4)
      integer, parameter :: i2b = selected_int_kind(4)  ! -10**4 < n < 10**4  ==> cf. integer(2)
      integer, parameter :: i1b = selected_int_kind(2)  ! -10**2 < n < 10**2  ==> byte kind
      
      integer, parameter :: dp  = kind(1.0d0)
      integer, parameter :: dpc = kind((1.0d0,1.0d0))

      complex(dpc),dimension(:),allocatable     :: xwork1
      integer(i4b),dimension(:),allocatable  :: loc
      integer(i4b),dimension(:),allocatable  :: idxm1,idxn1
      integer(i4b),dimension(:),allocatable  :: idxm2,idxn2

      complex(dpc) :: ip,ibeta

      real(dp) :: beta

      real(dp) :: start, finish
      
      real(dp),dimension(:,:),allocatable :: D1X,D2X,D1Y,D2Y

! PETSC VARIABLES

      PetscReal      norm,tol
      PetscInt       i,j,k,II,JJ,its

      PetscInt       ct,jmin,jmax
      PetscInt       ione,ntimes
      PetscErrorCode ierr
      PetscBool      flg
      PetscScalar    v,one,neg_one,rhs
      !PetscScalar,dimension(:,:),allocatable :: D1X,D2X,D1Y,D2Y
      
      KSP            ksp
      PetscRandom    rctx

! MPI 
      PetscMPIInt    rank,size
      

! Matrices indices

      PetscInt       nx,ny,nxy
      PetscInt       IstartA,IendA,IstartB,IendB

! Vectors

      Vec            i_vec
      Vec            u_vec,v_vec,w_vec
      Vec            ux_vec,vx_vec,wx_vec
      Vec            uy_vec,vy_vec,wy_vec
      Vec            uz_vec,vz_vec,wz_vec

      Vec            u,x

! Matrices

      Mat            A,B

! Complex numbers

!      PetscScalar    ip
      
!  Note: Any user-defined Fortran routines (such as MyKSPMonitor)
!  MUST be declared as external.

      external MyKSPMonitor,MyKSPConverged

! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
!                 Beginning of program
! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)

#if !defined(PETSC_USE_COMPLEX)
#error "this code requires PETSc --with-scalar-type=complex build"
#endif
#if !defined(PETSC_USE_REAL_DOUBLE)
#error "this code requires PETSc --with-precision=real build"
#endif

      nx=3
      ny=5
      nxy=nx*ny

      beta = 0.1d0
      ip = (0.d0,1.d0)
      ibeta = ip*beta 

      !ip = PETSC_i

      allocate(D1X(nx,nx),D2X(nx,nx),D1Y(ny,ny),D2Y(ny,ny))

      do i=1,nx
         do j=1,nx
            D1X(i,j)=100+j+(i-1)*nx
         enddo
      enddo

      do i=1,ny
         do j=1,ny
            D1Y(i,j)=999.d0
         enddo
      enddo

      !D1Y=999.d0

      ione=1 ! integer 1

      call PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-nx',nx,flg,ierr) ! note: read after nx,ny were already used above
      call PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-ny',ny,flg,ierr)
      call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
      call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr)

      if (rank == 0)then

         allocate(xwork1(1:nxy))
         allocate(loc(1:nxy))
         
         do i=1,nxy
            loc(i)=i-1
            xwork1(i)=6.0
         enddo

         call VecCreateSeq(PETSC_COMM_SELF,nxy,i_vec,ierr) ! sequential vector

         call VecSetValues(i_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)

!  Assemble vector

         call VecAssemblyBegin(i_vec,ierr)
         call VecAssemblyEnd(i_vec,ierr)

         !display vector
         !call VecView(i_vec,PETSC_VIEWER_STDOUT_WORLD,ierr) ==> seems very slow
         
      endif

!  Create parallel matrix, specifying only its global dimensions.
!  When using MatCreate(), the matrix format can be specified at
!  runtime. Also, the parallel partitioning of the matrix is
!  determined by PETSc at runtime.
         
      !PARALLEL MATRICES

      !Create A

      call MatCreate(PETSC_COMM_WORLD,A,ierr)
      call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,4*nxy,4*nxy,ierr)
      call MatSetFromOptions(A,ierr)
      call MatSetUp(A,ierr)

      !Create B

      call MatCreate(PETSC_COMM_WORLD,B,ierr)
      call MatSetSizes(B,PETSC_DECIDE,PETSC_DECIDE,4*nxy,4*nxy,ierr)
      call MatSetFromOptions(B,ierr)
      call MatSetUp(B,ierr)

!  Currently, all PETSc parallel matrix formats are partitioned by
!  contiguous chunks of rows across the processors.  Determine which
!  rows of the matrix are locally owned.

      call MatGetOwnershipRange(A,IstartA,IendA,ierr)
      call MatGetOwnershipRange(B,IstartB,IendB,ierr)

      !write(*,*)'IstartA,IendA',IstartA,IendA
      !write(*,*)'IstartB,IendB',IstartB,IendB

!
!.... Build A sequentially from processor 0 for now
!

      call cpu_time(start)

      if (rank == 0)then

         do i=1,nxy        
            
            call MatSetValues(A,ione,i+2*nxy-1,ione,i+3*nxy-1,ibeta,INSERT_VALUES,ierr) ! iBeta (block 3,4)
            call MatSetValues(A,ione,i+3*nxy-1,ione,i+2*nxy-1,ibeta,INSERT_VALUES,ierr) ! iBeta (block 4,3)

            call MatSetValues(A,ione,i-1,ione,i+nxy-1,xwork1(i),INSERT_VALUES,ierr) ! Uy (block 1,2)  
            call MatSetValues(A,ione,i+nxy-1,ione,i-1,xwork1(i),INSERT_VALUES,ierr) ! Vx (block 2,1)

            call MatSetValues(A,ione,i+2*nxy-1,ione,i-1,2*xwork1(i),INSERT_VALUES,ierr) ! Wx (block 3,1)  
            call MatSetValues(A,ione,i+2*nxy-1,ione,i+nxy-1,2*xwork1(i),INSERT_VALUES,ierr) ! Wy (block 3,2)

         enddo

         allocate(idxm1(1:ny),idxn1(1:ny),idxm2(1:ny),idxn2(1:ny))
         idxm1=0
         idxn1=0
         idxm2=0
         idxn2=0

         do i=1,ny
            idxm1(i)=i-1
            idxn1(i)=i-1

            idxm2(i)=i-1
            idxn2(i)=i-1
         enddo
         
         !block(4,2)
         idxm1=idxm1+3*nxy
         idxn1=idxn1+1*nxy

         !block(2,4)
         idxm2=idxm2+1*nxy
         idxn2=idxn2+3*nxy

         do i=1,nx       
            
            call MatSetValues(A,ny,idxm1,ny,idxn1,D1Y,INSERT_VALUES,ierr) ! DY (block 4,2)
            call MatSetValues(A,ny,idxm2,ny,idxn2,D1Y,INSERT_VALUES,ierr) ! DY (block 2,4)

            idxm1=idxm1+ny
            idxn1=idxn1+ny
            idxm2=idxm2+ny
            idxn2=idxn2+ny

         enddo
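
         ! Note: D1Y above is real(dp) while MatSetValues expects PetscScalar
         ! (complex in this build); that is the mismatch asked about in the
         ! message above. Also, if I read the manual correctly, MatSetValues
         ! takes the ny x ny block of values in row-major order by default,
         ! so for a Fortran (column-major) array one would first call, e.g.,
!!$      call MatSetOption(A,MAT_ROW_ORIENTED,PETSC_FALSE,ierr)
         ! (irrelevant for this test since D1Y is filled with a constant).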


         ! DX (block 4,1)
         
         ct=0
         jj=1

         do i=1,nxy
            
            ii=1+floor((i-0.99)/ny)
            ct=ct+1
            
            if ( mod(ct-1,ny) == 0 ) then
               jmin=1
               jmax=nx*ny-(ny-1)
            endif
            
            do j=jmin,jmax,ny
               
               !D(i,j)=D12X(ii,jj)
               call MatSetValues(A,ione,i+3*nxy-1,ione,j-1,D1X(ii,jj),INSERT_VALUES,ierr)
               jj=jj+1
               
            enddo
            
            jj=1
            jmin=jmin+1
            jmax=jmax+1
            
         enddo

         ! DX (block 1,4)

         ct=0
         jj=1
!!$
         do i=1,nxy
            
            ii=1+floor((i-0.99)/ny)
            ct=ct+1
            
            if ( mod(ct-1,ny) == 0 ) then
               jmin=1
               jmax=nx*ny-(ny-1)
            endif
            
            do j=jmin,jmax,ny
               
               !D(i,j)=D12X(ii,jj)
               call MatSetValues(A,ione,i-1,ione,j+3*nxy-1,D1X(ii,jj),INSERT_VALUES,ierr)
               jj=jj+1
               
            enddo
            
            jj=1
            jmin=jmin+1
            jmax=jmax+1
            
         enddo
         
      endif

      call cpu_time(finish)
      write(*,*)
      print '("Time to build A = ",f21.3," seconds.")',finish-start

      !call MatAssemblyBegin(A,MAT_FLUSH_ASSEMBLY,ierr) ! necessary to switch between INSERT_VALUES and ADD_VALUES
      !call MatAssemblyEnd(A,MAT_FLUSH_ASSEMBLY,ierr)   ! necessary to switch between INSERT_VALUES and ADD_VALUES

      !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
      !!!!! NOTE STILL NEED TO DO BLOCKS A11, A22 and A33 !!!!!!!!
      !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

      call cpu_time(start)

      call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
      call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)

      call cpu_time(finish)
      write(*,*)
      print '("Time to assemble A = ",f21.3," seconds.")',finish-start

      call MatView(A,PETSC_VIEWER_STDOUT_WORLD,ierr)

!
!.... Build B in parallel
!
      call cpu_time(start)

      do i=IstartB,IendB-1 ! ownership range is [IstartB,IendB)

         if (i < 3*nxy) then
            call MatSetValues(B,ione,i,ione,i,ip,INSERT_VALUES,ierr)
         endif

      enddo

      call cpu_time(finish)
      write(*,*)
      print '("Time to build B = ",f21.3," seconds.")',finish-start

      call cpu_time(start)

      call MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY,ierr)
      call MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY,ierr)

      call cpu_time(finish)
      write(*,*)
      print '("Time to assemble B = ",f21.3," seconds.")',finish-start

      !call MatView(B,PETSC_VIEWER_STDOUT_WORLD,ierr)

      call PetscFinalize(ierr)

      call cpu_time(finish)
      write(*,*)
      print '("Time = ",f21.3," seconds.")',finish-start

      write(*,*)''
      write(*,*)'End of program'
      write(*,*)''      
    
      end program main









!!$         call VecSetValues(u_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)
!!$         call VecSetValues(v_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)
!!$         call VecSetValues(w_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)
!!$         call VecSetValues(ux_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)
!!$         call VecSetValues(vx_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)
!!$         call VecSetValues(wx_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)
!!$         call VecSetValues(uy_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)
!!$         call VecSetValues(vy_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)
!!$         call VecSetValues(wy_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)
!!$         call VecSetValues(uz_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)
!!$         call VecSetValues(vz_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)
!!$         call VecSetValues(wz_vec,nxy,loc,xwork1,INSERT_VALUES,ierr)


!!$         call VecDuplicate(i_vec,u_vec,ierr)
!!$         call VecDuplicate(i_vec,v_vec,ierr)
!!$         call VecDuplicate(i_vec,w_vec,ierr)
!!$         call VecDuplicate(i_vec,ux_vec,ierr)
!!$         call VecDuplicate(i_vec,vx_vec,ierr)
!!$         call VecDuplicate(i_vec,wx_vec,ierr)
!!$         call VecDuplicate(i_vec,uy_vec,ierr)
!!$         call VecDuplicate(i_vec,vy_vec,ierr)
!!$         call VecDuplicate(i_vec,wy_vec,ierr)
!!$         call VecDuplicate(i_vec,uz_vec,ierr)
!!$         call VecDuplicate(i_vec,vz_vec,ierr)
!!$         call VecDuplicate(i_vec,wz_vec,ierr)









!  Create parallel vectors.
!   - Here, the parallel partitioning of the vector is determined by
!     PETSc at runtime.  We could also specify the local dimensions
!     if desired -- or use the more general routine VecCreate().
!   - When solving a linear system, the vectors and matrices MUST
!     be partitioned accordingly.  PETSc automatically generates
!     appropriately partitioned matrices and vectors when MatCreate()
!     and VecCreate() are used with the same communicator.
!   - Note: We form 1 vector from scratch and then duplicate as needed.

      !call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,m*n,u,ierr)
      !call VecSetFromOptions(u,ierr)
      !call VecDuplicate(u,b,ierr)
      !call VecDuplicate(b,x,ierr)

!  Set exact solution; then compute right-hand-side vector.
!  By default we use an exact solution of a vector with all
!  elements of 1.0;  Alternatively, using the runtime option
!  -random_sol forms a solution vector with random components.

!!$      call PetscOptionsHasName(PETSC_NULL_CHARACTER,'-random_exact_sol',flg,ierr)
!!$      if (flg) then
!!$         call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr)
!!$         call PetscRandomSetFromOptions(rctx,ierr)
!!$         call VecSetRandom(u,rctx,ierr)
!!$         call PetscRandomDestroy(rctx,ierr)
!!$      else
!!$         call VecSet(u,one,ierr)
!!$      endif
!!$      call MatMult(A,u,b,ierr)


!  View the exact solution vector if desired

!!$      call PetscOptionsHasName(PETSC_NULL_CHARACTER,"-view_exact_sol",flg,ierr)
!!$      if (flg) then
!!$         call VecView(u,PETSC_VIEWER_STDOUT_WORLD,ierr)
!!$      endif

! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
!         Create the linear solver and set various options
! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

!  Create linear solver context

!!$      call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)


!  Set operators. Here the matrix that defines the linear system
!  also serves as the preconditioning matrix.

      !call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,ierr) !aha commented and replaced by next line
!!$      call KSPSetOperators(ksp,A,A,ierr)


!  Set linear solver defaults for this problem (optional).
!   - By extracting the KSP and PC contexts from the KSP context,
!     we can then directly call any KSP and PC routines
!     to set various options.
!   - The following four statements are optional; all of these
!     parameters could alternatively be specified at runtime via
!     KSPSetFromOptions(). All of these defaults can be
!     overridden at runtime, as indicated below.

!     We comment out this section of code since the Jacobi
!     preconditioner is not a good general default.

!      call KSPGetPC(ksp,pc,ierr)
!      ptype = PCJACOBI
!      call PCSetType(pc,ptype,ierr)
!      tol = 1.e-7
!      call KSPSetTolerances(ksp,tol,PETSC_DEFAULT_DOUBLE_PRECISION,
!     &     PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,ierr)

!  Set user-defined monitoring routine if desired

      !call PetscOptionsHasName(PETSC_NULL_CHARACTER,'-my_ksp_monitor',flg,ierr) ! flg set to true if that option has been specified
      !if (flg) then
      !  call KSPMonitorSet(ksp,MyKSPMonitor,PETSC_NULL_OBJECT,PETSC_NULL_FUNCTION,ierr) ! call MyKSPMonitor
      !endif

      !tol = 1.e-10
      !call KSPSetTolerances(ksp,tol,PETSC_DEFAULT_REAL,PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,ierr) ! set relative tolerance and use the default for absolute and divergence tol 
      !call KSPSetTolerances(ksp,tol,tol,PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,ierr) ! set relative and absolute tolerances and use the default for divergence tol 

!  Set runtime options, e.g.,
!      -ksp_type <type> -pc_type <type> -ksp_monitor -ksp_rtol <rtol>
!  These options will override those specified above as long as
!  KSPSetFromOptions() is called _after_ any other customization
!  routines.

      !call KSPSetFromOptions(ksp,ierr)


!  Set convergence test routine if desired

      !call PetscOptionsHasName(PETSC_NULL_CHARACTER,'-my_ksp_convergence',flg,ierr)
      !if (flg) then
      !  call KSPSetConvergenceTest(ksp,MyKSPConverged,PETSC_NULL_OBJECT,PETSC_NULL_FUNCTION,ierr)
      !endif



      !call KSPMonitorSet(ksp,KSPMonitorTrueResidualNorm,PETSC_NULL_OBJECT,PETSC_NULL_FUNCTION,ierr);


!
! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
!                      Solve the linear system
! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

      !call KSPSolve(ksp,b,x,ierr)


!
! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
!                      View solver info (could use -ksp_view instead)
! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

      !added by aha

!!$      write(*,*)''
!!$      write(*,*)'Start of KSPView'
!!$      write(*,*)'----------------'      
!!$      write(*,*)''
!!$      call KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD,ierr)
!!$      write(*,*)''
!!$      write(*,*)'End of KSPView'
!!$      write(*,*)'--------------' 
!!$      write(*,*)''


! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
!                     Check solution and clean up
! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

!  Check the error
!!$      call VecAXPY(x,neg_one,u,ierr)
!!$      call VecNorm(x,NORM_2,norm,ierr)
!!$      call KSPGetIterationNumber(ksp,its,ierr)
!!$      if (rank .eq. 0) then
!!$        if (norm .gt. 1.e-12) then
!!$           write(6,100) norm,its
!!$        else
!!$           write(6,110) its
!!$        endif
!!$      endif
!!$  100 format('Norm of error ',e11.4,' iterations ',i5)
!!$  110 format('Norm of error < 1.e-12,iterations ',i5)

!  Free work space.  All PETSc objects should be destroyed when they
!  are no longer needed.

!!$      call KSPDestroy(ksp,ierr)
!!$      call VecDestroy(u,ierr)
!!$      call VecDestroy(x,ierr)
!!$      call VecDestroy(b,ierr)
!!$      call MatDestroy(A,ierr)

!  Always call PetscFinalize() before exiting a program.  This routine
!    - finalizes the PETSc libraries as well as MPI
!    - provides summary and diagnostic information if certain runtime
!      options are chosen (e.g., -log_summary).  See PetscFinalize()
!      manpage for more information.


! --------------------------------------------------------------
!
!  MyKSPMonitor - This is a user-defined routine for monitoring
!  the KSP iterative solvers.
!
!  Input Parameters:
!    ksp   - iterative context
!    n     - iteration number
!    rnorm - 2-norm (preconditioned) residual value (may be estimated)
!    dummy - optional user-defined monitor context (unused here)
!
      subroutine MyKSPMonitor(ksp,n,rnorm,dummy,ierr)

      implicit none

#include <finclude/petscsys.h>
#include <finclude/petscvec.h>
#include <finclude/petscksp.h>

      KSP              ksp
      Vec              x
      PetscErrorCode ierr
      PetscInt n,dummy
      PetscMPIInt rank
      double precision rnorm

!  Build the solution vector

      call KSPBuildSolution(ksp,PETSC_NULL_OBJECT,x,ierr)

!  Write the solution vector and residual norm to stdout
!   - Note that the parallel viewer PETSC_VIEWER_STDOUT_WORLD
!     handles data from multiple processors so that the
!     output is not jumbled.

      call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
      if (rank .eq. 0) write(6,100) n
      call VecView(x,PETSC_VIEWER_STDOUT_WORLD,ierr)
      if (rank .eq. 0) write(6,200) n,rnorm

 100  format('iteration ',i5,' solution vector:')
 200  format('iteration ',i5,' residual norm ',e11.4)
      ierr = 0

      end subroutine MyKSPMonitor

! --------------------------------------------------------------
!
!  MyKSPConverged - This is a user-defined routine for testing
!  convergence of the KSP iterative solvers.
!
!  Input Parameters:
!    ksp   - iterative context
!    n     - iteration number
!    rnorm - 2-norm (preconditioned) residual value (may be estimated)
!    dummy - optional user-defined monitor context (unused here)
!
      subroutine MyKSPConverged(ksp,n,rnorm,flag,dummy,ierr)

      implicit none

#include <finclude/petscsys.h>
#include <finclude/petscvec.h>
#include <finclude/petscksp.h>

      KSP              ksp
      PetscErrorCode ierr
      PetscInt n,dummy
      KSPConvergedReason flag
      double precision rnorm

      if (rnorm .le. .05) then
        flag = 1
      else
        flag = 0
      endif
      ierr = 0

      end subroutine MyKSPConverged


