Thanks, Barry,
 
   I didn't run with -ksp_monitor_true_residual -ksp_converged_reason.
My own code was built on
petsc-current/src/ksp/ksp/examples/tutorials/ex2f.F.  Since line 248 there,
which calls KSPSetTolerances, is commented out, it seems I didn't set the
tolerances in my code either.
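
If I do need to set the tolerances explicitly, I guess the call would look
roughly like the sketch below.  This is only my reading of the
KSPSetTolerances manual page, not something I have tested; the numbers are
example values, with rtol = 1.e-12 taken from your suggestion for the fine
grid:

!     Sketch only (untested): set the relative, absolute and divergence
!     tolerances and the maximum iteration count before KSPSolve
      PetscReal rtol, abstol, dtol
      PetscInt  maxits

      rtol   = 1.d-12
      abstol = 1.d-50
      dtol   = 1.d5
      maxits = 10000
      call KSPSetTolerances(ksp,rtol,abstol,dtol,maxits,ierr)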
 
  If I need to run with the options -ksp_monitor_true_residual
-ksp_converged_reason, should I add some lines like
    call PetscOptionsHasName()
    call KSPGetConvergedReason()
Is that right?  (A rough sketch of what I have in mind is below, after the
code.)
 
To make the problem clear, I have attached a description of my problem.
Thanks a lot for any help.
 
Following is the portion of the code with the KSP solver.
!============
!     Create the Krylov solver, set the operators and solve A x = b
      call KSPCreate(MPI_COMM_WORLD,ksp,ierr)
      call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,ierr)
      call KSPSolve(ksp,b,x,ierr)
      call KSPGetIterationNumber(ksp,its,ierr)

!     norm is the 2-norm of the error; it is computed earlier in the
!     code (not shown here)
      if (myid .eq. 0) then
         if (norm .gt. 1.e-12) then
            write(6,100) norm,its
         else
            write(6,110) its
         endif
      endif
  100 format('Norm of error ',e10.4,' iterations ',i5)
  110 format('Norm of error < 1.e-12, iterations ',i5)
!=============
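
Below is a rough, untested sketch of where I am thinking of adding the calls,
around KSPSolve.  I am not sure whether KSPSetFromOptions is what is actually
needed for the command-line options to take effect, or whether
PetscOptionsHasName is required as well.
!============
!     Sketch only (untested)
      KSPConvergedReason reason

!     Let the command-line options (-ksp_monitor_true_residual,
!     -ksp_converged_reason, -ksp_rtol, ...) be picked up; this would go
!     after KSPSetOperators and before KSPSolve
      call KSPSetFromOptions(ksp,ierr)

      call KSPSolve(ksp,b,x,ierr)

!     Report why the solve stopped: reason > 0 means converged,
!     reason < 0 means diverged or hit the iteration limit
      call KSPGetConvergedReason(ksp,reason,ierr)
      if (myid .eq. 0) then
         write(6,*) 'KSP converged reason = ',reason
      endif
!============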
 
> From: bsmith at mcs.anl.gov
> Date: Sun, 6 Feb 2011 21:30:56 -0600
> To: petsc-users at mcs.anl.gov
> Subject: Re: [petsc-users] questions about the multigrid framework
> 
> 
> On Feb 6, 2011, at 5:00 PM, Peter Wang wrote:
> 
> > Hello, I have some concerns about the multigrid framework in PETSc.
> > 
> > We are trying to solve a two-dimensional problem with a large range of 
> > length scales.  The length of the computational domain is on the order of 
> > 1e3 m and the width is 1 m; moreover, there is a tiny object of 1e-3 m in 
> > a corner of the domain.
> > 
> > As a first attempt, we tried to solve the problem with a large number of 
> > uniform or non-uniform grid points.  However, the error of the numerical 
> > solution increases when the number of grid points is too large.  To test 
> > the effect of the grid size on the solution, we solved a domain with a 
> > regular scale of 1 m by 1 m.  We found that an extremely small grid size 
> > can lead to a large deviation from the exact solution.  For example, the 
> > exact solution is a linear distribution in the domain.  The numerical 
> > solution is linear, like the exact solution, when the grid is nx=1000 by 
> > ny=1000.  However, if the grid is nx=10000 by ny=10000, the numerical 
> > solution becomes a nonlinear distribution that agrees with the exact 
> > solution only at the boundary. 
> 
> Stop right here. 99.9% of the time what you describe should not happen, with 
> a finer grid your solution (for a problem with a known solution for example) 
> will be more accurate and won't suddenly get less accurate with a finer mesh.
> 
> Are you running with -ksp_monitor_true_residual -ksp_converged_reason to make 
> sure that it is converging? and using a smaller -ksp_rtol <tol> for more grid 
> points. For example with 10,000 grid points in each direction and no better 
> idea of what the discretization error is I would use a tol of 1.e-12
> 
> Barry
> 
> We'll deal with the multigrid questions after we've resolved the more basic 
> issues.
> 
> 
> > The solver I used is a KSP solver in PETSc, which is set by calling 
> > KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,ierr).  Is this solver 
> > unsuitable for a system with a very small grid size?  Or is a problem 
> > spanning 6 orders of magnitude in length scale solvable with only a 
> > single-level grid, provided there is enough memory for the large matrix? 
> > Since a single-level grid requires less coding work, it would be easier 
> > to implement the solver that way.
> > 
> > I did some research on the website and found the slides by Barry at
> > http://www.mcs.anl.gov/petsc/petsc-2/documentation/tutorials/Columbia04/DDandMultigrid.pdf
> > It seems that the multigrid framework in PETSc is a possible approach to 
> > our problem, and we are thinking of turning to it.  However, before we 
> > dig into it, there are some issues confusing us.  It would be great if we 
> > could get any suggestions from you:
> > 1. Can the multigrid framework handle a problem with a large range of 
> > length scales (up to 6 orders of magnitude)?  Is DMMG the best tool for 
> > our problem?
> > 
> > 2. The coefficient matrix A and the right-hand-side vector b were created 
> > for the finite difference scheme of the domain and solved with the KSP 
> > solver (call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,ierr)).  
> > Is it easy to migrate the existing matrix A and vector b to the multigrid 
> > framework?
> > 
> > 3. How many grid levels are needed to obtain a solution close enough to 
> > the exact solution for a problem with 6 orders of magnitude in length 
> > scale?
> > 
> 
                                          
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Problem_discription.pdf
Type: application/pdf
Size: 95775 bytes
Desc: not available
URL: 
<http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20110207/19f08dbb/attachment-0001.pdf>
