Thank you, Matt.

Ling
On Tue, Dec 11, 2012 at 4:02 PM, Matthew Knepley <knepley at gmail.com> wrote:

> On Tue, Dec 11, 2012 at 2:59 PM, Zou (Non-US), Ling <ling.zou at inl.gov> wrote:
>
>> ok. I tried. It seems to have no effect.
>>
>> ./my-moose-project -i input.i -snes_type test -mat_mffd_umin 1.e-10 -snes_test_display > out
>>
>> Also, the webpage says:
>>
>>   -mat_mffd_unim <umin>
>>
>> I am not quite sure if 'unim' is a typo. I tried both 'umin' and 'unim' anyway.
>
> You can check what is coming in, right? But this is all academic: with
> that scaling, you will get almost no significant figures in the Jacobian
> for those unknowns, so why worry about it. Nondimensionalize.
>
>    Matt
>
>> Ling
>>
>> On Tue, Dec 11, 2012 at 3:50 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>
>>> On Tue, Dec 11, 2012 at 2:40 PM, Zou (Non-US), Ling <ling.zou at inl.gov> wrote:
>>>
>>>> Hmm... I have an 'approximated' analytical Jacobian to compare, and I did this:
>>>>
>>>> ./my-moose-project -i input.i -snes_type test -snes_test_display > out
>>>>
>>>> I actually found out that the PETSc-provided FD Jacobian gives 'nan'
>>>> values, while my approximated Jacobian does not give 'nan' at the same
>>>> positions.
>>>>
>>>> As we discussed in the previous emails, the perturbation on U0 is too
>>>> large, which makes 'nan' appear in the FD Jacobian. So I am trying to
>>>> use a smaller '-mat_mffd_err <number here>' to see if I can get an easy
>>>> fix for now, like this,
>>>
>>> I don't think 'err' has anything to do with it. If you read the page I
>>> mailed you, I believe umin can be made very small.
>>>
>>>    Matt
>>>
>>>> ./my-moose-project -i input.i -snes_type test -md_mffd_err 1.e-10 -snes_test_display > out
>>>>
>>>> but it seems not to work :-( No matter what number I give to
>>>> -md_mffd_err, the printed results do not seem to change.
>>>>
>>>> But of course, non-dimensionalization might be the ultimate solution.
>>>>
>>>> Ling
>>>>
>>>> On Tue, Dec 11, 2012 at 3:29 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>>>
>>>>> On Tue, Dec 11, 2012 at 2:19 PM, Zou (Non-US), Ling <ling.zou at inl.gov> wrote:
>>>>>
>>>>>> Matt, one more question.
>>>>>>
>>>>>> Can I combine the options
>>>>>>   -snes_type test
>>>>>> and
>>>>>>   -mat_mffd_err 1.e-10
>>>>>> to see the effect?
>>>>>
>>>>> I do not understand your question. test does compare the analytic and
>>>>> FD Jacobian actions, but I thought you did not have an analytic action.
>>>>>
>>>>>    Matt
>>>>>
>>>>>> Best,
>>>>>>
>>>>>> Ling
>>>>>>
>>>>>> On Tue, Dec 11, 2012 at 2:47 PM, Zou (Non-US), Ling <ling.zou at inl.gov> wrote:
>>>>>>
>>>>>>> thank you Matt. I will try to figure it out. Non-dimensionalization
>>>>>>> is certainly something worth trying.
>>>>>>>
>>>>>>> Best,
>>>>>>>
>>>>>>> Ling
>>>>>>>
>>>>>>> On Tue, Dec 11, 2012 at 2:41 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>>>>>>
>>>>>>>> On Tue, Dec 11, 2012 at 1:40 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> On Tue, Dec 11, 2012 at 1:34 PM, Zou (Non-US), Ling <ling.zou at inl.gov> wrote:
>>>>>>>>>
>>>>>>>>>> Dear All,
>>>>>>>>>>
>>>>>>>>>> I have recently had an issue using snes_mf_operator.
>>>>>>>>>> I've tried to figure it out from the PETSc manual and the PETSc
>>>>>>>>>> website but didn't have any luck, so I am submitting my question
>>>>>>>>>> here and hope someone can help me out.
>>>>>>>>>>
>>>>>>>>>> (1) =================================================================
>>>>>>>>>> A little bit of background: my problem has 7 variables, i.e.,
>>>>>>>>>>
>>>>>>>>>> U = [U0, U1, U2, U3, U4, U5, U6]
>>>>>>>>>>
>>>>>>>>>> U0 is on the order of 1.
>>>>>>>>>> U1, U2, U4 and U5 are on the order of 100.
>>>>>>>>>> U3 and U6 are on the order of 1.e8.
>>>>>>>>>>
>>>>>>>>>> I believe this should be quite common for most PETSc users.
>>>>>>>>>>
>>>>>>>>>> (2) =================================================================
>>>>>>>>>> My problem here is that U0, by its physical meaning, has to be
>>>>>>>>>> limited between 0 and 1. When PETSc starts to perturb the initial
>>>>>>>>>> solution U (which I believe is properly set) to approximate the
>>>>>>>>>> operation of J(dU), U0 gets a perturbation size on the order of
>>>>>>>>>> 100, which causes problems since U0 has to be smaller than 1.
>>>>>>>>>>
>>>>>>>>>> From my observation, this same perturbation size, say eps, is
>>>>>>>>>> applied to all of U0, U1, U2, etc. <=== Is this the default setting?
>>>>>>>>>> I also guess that this eps, on the order of 100, is determined
>>>>>>>>>> from my initial solution vector and other related PETSc
>>>>>>>>>> parameters. <=== Is my guess right?
>>>>>>>>>>
>>>>>>>>>> (3) =================================================================
>>>>>>>>>> My question: I'd like to avoid a perturbation size of ~100 on U0,
>>>>>>>>>> i.e., I have to limit it to ~0.01 (or some small number) to avoid
>>>>>>>>>> the U0 > 1 situation. Is there any way to control that?
>>>>>>>>>> Or, is there any advanced option to control the perturbation size
>>>>>>>>>> on different variables when using snes_mf_operator?
>>>>>>>>>
>>>>>>>>> Here is a description of the algorithm for calculating h. It seems
>>>>>>>>> to me a better way to do this is to non-dimensionalize first.
>>>>>>>>
>>>>>>>> I forgot the URL:
>>>>>>>>
>>>>>>>> http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatCreateMFFD.html#MatCreateMFFD
>>>>>>>>
>>>>>>>>    Matt
>>>>>>>>
>>>>>>>>>    Matt
>>>>>>>>>
>>>>>>>>>> Hope my explanation is clear. Please let me know if it is not.
>>>>>>>>>>
>>>>>>>>>> Best Regards,
>>>>>>>>>>
>>>>>>>>>> Ling
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
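For reference, the MatCreateMFFD page linked above describes, roughly, how the differencing parameter h in (F(u + h*a) - F(u))/h is chosen by the 'ds' (Dennis and Schnabel) rule, which is the one the umin parameter applies to; err is what -mat_mffd_err sets and umin is what -mat_mffd_umin sets. This is reconstructed from memory, so the exact form should be checked against that page:

  h = err * (u^T a) / ||a||_2^2                        if |u^T a| > umin * ||a||_1
  h = err * umin * sign(u^T a) * ||a||_1 / ||a||_2^2   otherwise

Here u is the whole current solution vector, a is the direction being differenced, and err defaults to roughly the square root of machine epsilon (about 1.e-8 in double precision). Because u contains the 1.e8-sized entries U3 and U6, they dominate u^T a, so the single scalar h is sized for the large components and the same h*a step is applied to U0 as well; that is consistent with the ~100 perturbation seen on U0, and with Matt's point that rescaling the unknowns, rather than tuning err or umin, is the real fix. It may also matter that -snes_type test builds its finite-difference Jacobian through a different code path than -snes_mf_operator, which could be why the -mat_mffd_* options appeared to have no effect in those runs.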
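On the nondimensionalization Matt recommends: the usual approach is to pick a characteristic value for each variable, solve for the scaled unknowns w_i = U_i / U_ref_i so that every entry is O(1), and convert back to physical units inside the residual. Below is a minimal sketch of what that can look like for a plain PETSc residual; it assumes an interlaced 7-dof ordering and the magnitudes quoted in this thread, and the names FormScaledResidual, FormPhysicalResidual, and u_ref are placeholders rather than MOOSE or PETSc API.

#include <petscsnes.h>

/* Placeholder reference scales for [U0, U1, U2, U3, U4, U5, U6]. */
static const PetscReal u_ref[7] = {1.0, 100.0, 100.0, 1.e8, 100.0, 100.0, 1.e8};

/* Hypothetical residual in physical units (stands in for the application's residual). */
extern PetscErrorCode FormPhysicalResidual(SNES, Vec, Vec, void *);

/* Residual seen by SNES: the unknowns w are dimensionless, w[i] = U[i]/u_ref[i%7]. */
PetscErrorCode FormScaledResidual(SNES snes, Vec w, Vec f, void *ctx)
{
  PetscErrorCode     ierr;
  Vec                u;
  const PetscScalar *ws;
  PetscScalar       *us;
  PetscInt           i, n;

  PetscFunctionBeginUser;
  ierr = VecDuplicate(w, &u);CHKERRQ(ierr);
  ierr = VecGetLocalSize(w, &n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(w, &ws);CHKERRQ(ierr);
  ierr = VecGetArray(u, &us);CHKERRQ(ierr);
  /* Assumes the 7 dofs are interlaced node by node. */
  for (i = 0; i < n; i++) us[i] = ws[i] * u_ref[i % 7];
  ierr = VecRestoreArrayRead(w, &ws);CHKERRQ(ierr);
  ierr = VecRestoreArray(u, &us);CHKERRQ(ierr);

  ierr = FormPhysicalResidual(snes, u, f, ctx);CHKERRQ(ierr);
  /* Ideally the residual entries would also be scaled here by characteristic
     equation sizes, so that f is O(1) as well. */
  ierr = VecDestroy(&u);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With every unknown O(1), the single h chosen by the finite differencing is a sensible perturbation for all components at once, and U0 then sees steps on the order of err rather than on the order of the large variables.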

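If the differencing parameters still need adjusting after that, they can also be set from code rather than from the command line. A rough sketch, assuming the matrix-free operator is created explicitly with MatCreateSNESMF() instead of through -snes_mf_operator, and with the exact setter names worth double-checking against the MatCreateMFFD page:

#include <petscsnes.h>

/* Sketch: create a matrix-free Jacobian for an existing SNES and set the
   differencing parameters programmatically; the values mirror the
   command-line experiments above and are only illustrative. */
PetscErrorCode SetupMatrixFreeJacobian(SNES snes, Mat *J)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreateSNESMF(snes, J);CHKERRQ(ierr);
  ierr = MatMFFDSetFunctionError(*J, 1.e-10);CHKERRQ(ierr); /* plays the role of -mat_mffd_err */
  ierr = MatMFFDDSSetUmin(*J, 1.e-10);CHKERRQ(ierr);        /* plays the role of -mat_mffd_umin ('ds' type only) */
  PetscFunctionReturn(0);
}

The resulting J would then be passed to SNESSetJacobian() as the operator, with the approximated analytical Jacobian from this thread as the preconditioning matrix; under -snes_mf_operator the same two options are simply read from the options database instead.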