I know about diagonal scaling, and I will try it tomorrow.
Thanks to SLEPc, I can monitor the eigenvalues as an approximation of the
condition number.
The original problem has a condition number of about 1e20, which defeats any
iterative solver. I hope I can reduce it as much as possible.
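
For reference, this is roughly how I estimate it with SLEPc. A minimal sketch
only, assuming the assembled Jacobian sits in a Mat called A; error checking
and convergence tests are omitted:

#include <slepceps.h>

EPS         eps;
PetscScalar kr, ki;
PetscReal   lmax, lmin;

EPSCreate(PETSC_COMM_WORLD, &eps);
EPSSetOperators(eps, A, NULL);
EPSSetProblemType(eps, EPS_NHEP);                   /* non-Hermitian problem */
EPSSetFromOptions(eps);                             /* e.g. -st_type sinvert */

EPSSetWhichEigenpairs(eps, EPS_LARGEST_MAGNITUDE);
EPSSolve(eps);
EPSGetEigenvalue(eps, 0, &kr, &ki);
lmax = SlepcAbsEigenvalue(kr, ki);

EPSSetWhichEigenpairs(eps, EPS_SMALLEST_MAGNITUDE); /* often needs shift-and-invert */
EPSSolve(eps);
EPSGetEigenvalue(eps, 0, &kr, &ki);
lmin = SlepcAbsEigenvalue(kr, ki);

PetscPrintf(PETSC_COMM_WORLD, "estimated cond(A) ~ %g\n", (double)(lmax/lmin));
EPSDestroy(&eps);

The eigenvalue ratio is of course only an approximation of the condition
number for a nonsymmetric matrix; the singular values would give the exact
quantity.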

Furthermore, can I use MC64, which permutes and scales a sparse unsymmetric
matrix to put large entries on the diagonal?

 
>    The literature is unclear to me, but I don't think these scalings are done 
> in this way to improve the conditioning of the matrix. They are done to 
> change the relative importance of different entries in the vector to 
> determine stopping conditions and search directions in Newton's method. For 
> example, if you consider getting the first vector entry in the residual/error 
> small more important than the other entries you would use the scaling vector 
> like [bignumber 1 1 1 1 ....]. In some way the scaling vectors reflect 
> working with a different norm to measure the residual. Since PETSc does not 
> support providing these scaling vectors you can get the same effect if you 
> define a new function (and hence also a new Jacobian) that weights the 
> various entries the way you want based on their importance. In other words, 
> newF(x) = diagonalscaling1 * oldF(diagonalscaling2 * x); then if x* is the 
> solution to the new problem, y* = diagonalscaling2 * x* is the solution to 
> the original problem. In some cases this transformation can correspond to 
> working in "dimensionless coordinates" but all that language is over my head.
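
For concreteness, here is a minimal sketch of such a wrapper as I understand
the suggestion; ScaledCtx, scale_f, scale_x, work, and OldFormFunction() are
made-up names, and error checking is omitted:

#include <petscsnes.h>

/* newF(x) = Df * oldF(Dx * x), with the diagonals of Df and Dx kept as Vecs */
typedef struct {
  Vec scale_f;   /* diagonal entries of Df */
  Vec scale_x;   /* diagonal entries of Dx */
  Vec work;      /* holds Dx * x           */
} ScaledCtx;

PetscErrorCode ScaledFormFunction(SNES snes, Vec x, Vec f, void *ptr)
{
  ScaledCtx *ctx = (ScaledCtx*)ptr;

  PetscFunctionBeginUser;
  VecPointwiseMult(ctx->work, ctx->scale_x, x);  /* work = Dx * x          */
  OldFormFunction(snes, ctx->work, f, ptr);      /* f    = oldF(Dx * x)    */
  VecPointwiseMult(f, ctx->scale_f, f);          /* f    = Df * oldF(Dx*x) */
  PetscFunctionReturn(0);
}

By the chain rule the Jacobian of the new function is Df * Jold(Dx*x) * Dx, so
the Jacobian routine has to apply the same two scalings as well.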
> 
>    If you just want to scale the matrix to have ones on the diagonal before 
> forming the preconditioner (on the theory that it is better to solve problems 
> with a "well-scaled" matrix) you can use the run time options 
> -ksp_diagonal_scale -ksp_diagonal_scale_fix, or in the code with 
> KSPSetDiagonalScale() and KSPSetDiagonalScaleFix().
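
For the archive, the in-code version would look something like this, assuming
ksp is obtained from the SNES with SNESGetKSP():

KSPSetDiagonalScale(ksp, PETSC_TRUE);     /* scale matrix/right-hand side     */
KSPSetDiagonalScaleFix(ksp, PETSC_TRUE);  /* undo the scaling after the solve */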
> 
>    Barry
> 
> On Apr 4, 2011, at 10:52 PM, Barry Smith wrote:
> 
> >   If you are looking for something like this:
> > 
> > When solving F(x) = 0, I would like to be able to scale both the solution
> > vector x and the residual function vector F, simply by specifying scaling
> > vectors, sx and sf, say. (These vectors would be the diagonal entries of
> > scaling matrices Dx and Df.)
> > I realize this can be achieved, at least in part, within the user residual
> > function.
> > This is what I had been doing, until I looked at Dennis and Schnabel,
> > Brown and Saad, and the KINSOL user guide. It seems one has to take the
> > scaling matrices into account when computing various norms, when applying the
> > preconditioner, and when computing the step size, \sigma. No doubt there
> > are other things I have missed that also need to be done.
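
(As I read the KINSOL manual and Dennis & Schnabel, the point is that the
scalings enter the convergence tests themselves: the residual test becomes
something like ||D_F F(x)||_inf < ftol and the step test
||D_x (x_{k+1} - x_k)||_inf < steptol, whereas the default SNES test uses the
unscaled 2-norm of F.)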
> > 
> > http://www.mcs.anl.gov/petsc/petsc-as/developers/projects.html
> > 
> > we don't have support for this (nor do I understand it). Anyway, it has
> > been on the "projects to do list" for a very long time; I suspect it would
> > require a good amount of futzing around in the source code to add.
> > 
> >   Barry
> > 
> > On Apr 4, 2011, at 10:16 PM, Gong Ding wrote:
> > 
> >> Hi,
> >> I'd like to scale the Jacobian matrix so that its condition number is
> >> improved, that is, scale J as Dl*J*Dr, where the diagonal scaling matrices
> >> change at each nonlinear iteration.
> >> 
> >> Does SNES already have an interface to do this?
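
For anyone finding this in the archive later: the matrix scaling itself is
easy to apply with MatDiagonalScale(); a rough sketch, where ComputeScalings(),
left, and right are hypothetical placeholders:

/* Inside a user FormJacobian(), after the unscaled J has been assembled;
   ComputeScalings() is a made-up routine that fills the Vecs "left" and
   "right" with the diagonals of Dl and Dr for the current iterate x. */
ComputeScalings(snes, x, left, right);
MatDiagonalScale(J, left, right);   /* J <- diag(left) * J * diag(right) */

But as Barry explains above, scaling only the matrix is not enough: the
residual and the Newton update must be scaled consistently, which is why
wrapping the residual function (or using -ksp_diagonal_scale for the purely
diagonal case) is the cleaner route.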
