I probably should have said that I'm getting convergence with the
Jacobian-free method. I still haven't had any luck with Newton. It
appears that the default for Jacobian-free is right preconditioning; I
don't know if that's a MOOSE setting or PETSc's. Anyway, if I try left
preconditioning, I get comparable or slightly better performance with
the full set. Still no convergence with the smaller set.
I'll move forward with the full set, but I still want to understand why
I can't achieve convergence with the smaller set. I think I'm going to
review my linear algebra and do some reading on preconditioning and
iterative solvers so I have a better grasp of what's going on here.
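For my own notes while reviewing: the "Jacobian-free" part just means the Krylov solver only ever needs Jacobian-vector products, which JFNK approximates with a finite difference of the residual function instead of assembling J. A minimal sketch (the residual F and the point u below are made up purely for illustration):

```python
import numpy as np

# A made-up nonlinear residual F(u); any smooth function works here.
def F(u):
    return np.array([u[0]**2 + u[1], np.sin(u[1]) - u[0]])

def jfnk_matvec(F, u, v, eps=1e-7):
    """Approximate J(u) @ v without forming J, as JFNK does:
    J(u) v ~= (F(u + eps*v) - F(u)) / eps."""
    return (F(u + eps * v) - F(u)) / eps

u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])

# Analytic Jacobian of the toy F at u, for comparison:
# J = [[2*u0, 1], [-1, cos(u1)]]
J = np.array([[2 * u[0], 1.0], [-1.0, np.cos(u[1])]])

print(jfnk_matvec(F, u, v))  # close to J @ v
print(J @ v)
```

The differencing parameter eps is where Jed's precision comments bite: the matvec is only accurate to roughly half the digits of the residual evaluation.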
On 11/23/2015 04:51 PM, Barry Smith wrote:
So just keep the "full" variable set.
Note that without the full set, the true residual is not tracking the
preconditioned residual:
0 KSP unpreconditioned resid norm 1.250000000000e+03 true resid norm 1.250000000000e+03 ||r(i)||/||b|| 1.000000000000e+00
1 KSP unpreconditioned resid norm 1.250000000000e+03 true resid norm 1.250000000000e+03 ||r(i)||/||b|| 1.000000000000e+00
2 KSP unpreconditioned resid norm 2.182679427760e+02 true resid norm 7.819529716916e+08 ||r(i)||/||b|| 6.255623773533e+05
3 KSP unpreconditioned resid norm 1.011652745364e+02 true resid norm 4.461678857470e+01 ||r(i)||/||b|| 3.569343085976e-02
4 KSP unpreconditioned resid norm 8.125623676015e+00 true resid norm 8.053940519499e+08 ||r(i)||/||b|| 6.443152415599e+05
5 KSP unpreconditioned resid norm 5.805247155944e+00 true resid norm 8.054105876447e+08 ||r(i)||/||b|| 6.443284701157e+05
6 KSP unpreconditioned resid norm 4.756488143441e+00 true resid norm 8.054162537433e+08 ||r(i)||/||b|| 6.443330029946e+05
7 KSP unpreconditioned resid norm 4.126450902175e+00 true resid norm 8.054191165808e+08 ||r(i)||/||b|| 6.443352932646e+05
8 KSP unpreconditioned resid norm 3.694696196953e+00 true resid norm 8.054208439360e+08 ||r(i)||/||b|| 6.443366751488e+05
9 KSP unpreconditioned resid norm 3.375152117403e+00 true resid norm 8.054219995564e+08 ||r(i)||/||b|| 6.443375996451e+05
10 KSP unpreconditioned resid norm 3.126354693526e+00 true resid norm 8.054228269923e+08 ||r(i)||/||b|| 6.443382615939e+05
meaning that the linear solver is not making any progress. With the full set,
as the preconditioned residual gets smaller the true one does as well, to
some degree:
0 KSP unpreconditioned resid norm 1.250000000000e+03 true resid norm 1.250000000000e+03 ||r(i)||/||b|| 1.000000000000e+00
1 KSP unpreconditioned resid norm 1.247314056942e+03 true resid norm 1.247314056942e+03 ||r(i)||/||b|| 9.978512455538e-01
2 KSP unpreconditioned resid norm 4.860927105197e-02 true resid norm 4.776005315904e-02 ||r(i)||/||b|| 3.820804252723e-05
3 KSP unpreconditioned resid norm 4.787844580540e-02 true resid norm 4.752835483387e-02 ||r(i)||/||b|| 3.802268386709e-05
4 KSP unpreconditioned resid norm 4.437366678888e-02 true resid norm 2.756372625290e-01 ||r(i)||/||b|| 2.205098100232e-04
5 KSP unpreconditioned resid norm 1.696908986557e-05 true resid norm 5.105761919450e+00 ||r(i)||/||b|| 4.084609535560e-03
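The mismatch between the two residual columns has a simple linear-algebra explanation: with left preconditioning the solver drives down ||M^-1 (b - A x)||, and if M is nearly singular in some direction, the true residual ||b - A x|| can stay enormous while the preconditioned one looks converged. A toy illustration with made-up numbers:

```python
import numpy as np

# Suppose the current true residual r = b - A @ x points in a direction
# that a nearly singular preconditioner almost annihilates.
r = np.array([1.0e8, 0.0])          # true residual: enormous
Minv = np.diag([1.0e-8, 1.0])       # action of M^-1: kills that direction

r_prec = Minv @ r                   # preconditioned residual the solver sees

print(np.linalg.norm(r))       # 1e8 -- true residual: no progress at all
print(np.linalg.norm(r_prec))  # 1.0 -- looks fine to the linear solver
```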
Try using right preconditioning, where the preconditioned residual does not
appear (-ksp_pc_side right). What happens in both cases?
Barry
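To try Barry's suggestion: MOOSE-based applications forward unrecognized command-line options through to PETSc, so the flags can be appended directly to the run (the executable and input-file names below are placeholders, not from this thread). -ksp_monitor_true_residual prints both residual columns shown above.

```shell
# Placeholders: substitute your actual app binary and input file.
./my_app-opt -i my_input.i \
    -ksp_pc_side right \
    -ksp_monitor_true_residual
```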
On Nov 23, 2015, at 2:44 PM, Alex Lindsay <[email protected]> wrote:
I've found that with a "full" variable set, I can get convergence. However, if
I remove two of my variables (the other variables have no dependence on the variables
that I remove; the coupling is one-way), then I no longer get convergence. I've attached
logs of one time-step for both the converged and non-converged cases.
On 11/23/2015 01:29 PM, Alex Lindsay wrote:
On 11/20/2015 02:33 PM, Jed Brown wrote:
Alex Lindsay <[email protected]> writes:
I'm almost ashamed to share my condition number because I'm sure it must
be absurdly high. Without applying -ksp_diagonal_scale and
-ksp_diagonal_scale_fix, the condition number is around 1e25. When I do
apply those two parameters, the condition number is reduced to 1e17.
Even after scaling all my variable residuals so that they were all on
the order of unity (a suggestion from the MOOSE list), I still have a
condition number of 1e12.
Double precision provides 16 digits of accuracy in the best case. When
you finite difference, the accuracy is reduced to 8 digits if the
differencing parameter is chosen optimally. With the condition numbers
you're reporting, your matrix is singular up to available precision.
I have no experience with condition numbers, but knowing that a perfect
condition number is unity, 1e12 seems unacceptable. What's an
acceptable upper limit on the condition number? Is it problem-dependent?
Having already tried scaling the individual variable residuals, I'm not
exactly sure what my next step would be for trying to reduce the
condition number.
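One thing worth separating out: symmetric diagonal scaling (roughly what -ksp_diagonal_scale does) removes the part of the condition number that comes purely from mismatched units, but not conditioning intrinsic to the operator. A toy example with a badly scaled but otherwise benign matrix (the numbers are made up):

```python
import numpy as np

# Two unknowns in wildly different units: diagonal entries differ by 1e12.
A = np.array([[1.0e12, 1.0],
              [1.0,    1.0]])

d = np.sqrt(np.diag(A))
D_inv = np.diag(1.0 / d)
A_scaled = D_inv @ A @ D_inv   # symmetric diagonal (Jacobi-style) scaling

print(np.linalg.cond(A))        # ~1e12: looks nearly singular
print(np.linalg.cond(A_scaled)) # ~1: the bad conditioning was pure scaling
```

If the condition number stays large after scaling, as in my case, the remaining 1e12 is telling you something about the operator itself.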
Singular operators are often caused by incorrect boundary conditions.
You should try a small and simple version of your problem and find out
why it's producing a singular (or so close to singular we can't tell)
operator.
Could large variable values also create singular operators? I'm essentially
solving an advection-diffusion-reaction problem for several species where the
advection is driven by an electric field. The species concentrations are in a
logarithmic form such that the true concentration is given by exp(u). With my
current units (# of particles / m^3) exp(u) is anywhere from 1e13 to 1e20, and
thus the initial residuals are probably on the same order of magnitude. After
I've assembled the total residual for each variable and before the residual is
passed to the solver, I apply scaling to the residuals such that the sum of the
variable residuals is around 1e3. But perhaps I lose some accuracy during the
residual assembly process?
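On possibly losing accuracy during assembly: the danger isn't the large values by themselves but cancellation. If individual contributions are O(1e20) and they nearly cancel to O(1e3), representing the sum faithfully would take ~17 significant digits, more than a double holds. A two-line demonstration of the effect:

```python
# Double precision keeps ~16 significant digits. Near 1e20 the spacing
# between representable doubles is about 2e4, so a 1e3-sized contribution
# vanishes entirely when accumulated against a 1e20-sized term.
big = 1.0e20
small = 1.0e3

assembled = (big + small) - big   # mathematically equals 1e3
print(assembled)                  # 0.0 -- the small contribution is lost
```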
I'm equating "incorrect" boundary conditions to "unphysical" or "unrealistic"
boundary conditions. Hopefully that's fair.
<logFailedSnippet.txt><logGoldSnippet.txt>