Yes, to clarify, the problem with two scalar fields and two scalar 
“eigenvalues” is


-\Delta u_1 + f(u_1, u_2) = lam * u_1 - mu * du_2/dx
-\Delta u_2 + g(u_1, u_2) = lam * u_2 + mu * du_1/dx

(u_1, u_2) are defined on the mesh, and (lam, mu) are unknown scalars.  My 
actual problem has 4 degrees of freedom at each mesh point and 3 unknown 
scalars, but this makes the point.

-gideon

> On Aug 28, 2015, at 6:20 AM, Matthew Knepley <[email protected]> wrote:
> 
> On Thu, Aug 27, 2015 at 10:23 PM, Barry Smith <[email protected]> wrote:
> 
> > On Aug 27, 2015, at 10:15 PM, Gideon Simpson <[email protected]> wrote:
> >
> > That’s correct, I am not using SNESGetDM.  I suppose I could.  Keep in 
> > mind that I’m trying to solve, simultaneously, for the scalar parameters 
> > and the vector field.  I guess what I am unclear about is how DMRefine is 
> > to know that the unknowns associated with the scalar parameters can never 
> > be coarsened out, but must be retained at all iterations.
> 
>     Nothing ever gets coarsened in grid sequencing; it only gets refined.
> 
>     The reason it knows not to "refine" the scalars is that the scalars 
> are created with DMRedundant, and the DMRedundant object knows that refinement 
> means "leave as is, since there is no grid," while the DMDA knows it is a grid 
> and knows how to refine itself. So when it "interpolates" the DMRedundant 
> variables it just copies them (or multiplies them by the identity matrix, 
> which is just a copy).
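> 
>     Concretely, a minimal sketch (mirroring the setup later in this thread; 
> error checking omitted):
> 
>       DM packer, p_dm, Q_dm, fine;
>       DMCompositeCreate(PETSC_COMM_WORLD, &packer);
>       DMRedundantCreate(PETSC_COMM_WORLD, 0, 3, &p_dm);  /* 3 scalars, no grid */
>       DMCompositeAddDM(packer, p_dm);
>       DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_GHOSTED, nx, 4, 1, NULL, &Q_dm);
>       DMCompositeAddDM(packer, Q_dm);
>       /* refining the composite refines only the DMDA part; the
>          DMRedundant part is carried through unchanged */
>       DMRefine(packer, PETSC_COMM_WORLD, &fine);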
> 
> I think you might be misunderstanding the "scalars" part. He is solving a 
> nonlinear eigenproblem (which he did not write down) for some variables. Then 
> he uses those variables in the coupled diffusion equations he did write down. 
> He has wrapped the whole problem in a SNES with 2 parts: the nonlinear 
> eigenproblem and the diffusion equations. He uses DMComposite to deal with 
> all the unknowns.
> 
> I think Nonlinear Block Gauss-Seidel on the different problems would be a 
> useful starting point, but we do not have that.
> 
>   Thanks,
> 
>      Matt
>  
> >
> > Here is my form function.  I can send more code if needed.
> >
> 
>   Just change the user->packer that you use to be the DM obtained with 
> SNESGetDM().
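> 
>   That is, inside the form function, something like this sketch (error 
> checking omitted):
> 
>     DM packer, p_dm, Q_dm;
>     SNESGetDM(snes, &packer);                    /* current, possibly refined, DMComposite */
>     DMCompositeGetEntries(packer, &p_dm, &Q_dm); /* its DMRedundant and DMDA parts */
>     /* ... then use these below in place of user->packer, user->p_dm,
>        and user->Q_dm ... */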
> 
> 
> > /* Form the system of equations for computing a blowup solution */
> > PetscErrorCode form_function(SNES snes, Vec U, Vec F, void *ctx){
> >
> >  blowup_ctx *user = (blowup_ctx *) ctx;
> >  PetscInt i;
> >  PetscScalar dx, dx2, xmax,x;
> >  PetscScalar u, v, f,g, ux, vx, uxx, vxx, fx,gx, fxx, gxx;
> >  DMDALocalInfo info;
> >  Vec p_vec, Q_vec, Fp_vec, FQ_vec;
> >  PetscScalar *p_array, *Fp_array;
> >  Q *Qvals, *FQvals;
> >  PetscScalar Q2sig, W2sig;
> >  PetscScalar a,a2, b, u0, sigma;
> >
> >  dx = user->dx; dx2 = dx * dx;
> >  xmax = user->xmax;
> >  sigma = user->sigma;
> >
> >  /* PetscPrintf(PETSC_COMM_SELF, " dx = %g, sigma = %g\n", dx, sigma);  */
> >
> >  /* Extract raw arrays */
> >  DMCompositeGetLocalVectors(user->packer, &p_vec, &Q_vec);
> >  DMCompositeGetLocalVectors(user->packer, &Fp_vec, &FQ_vec);
> >
> >  DMCompositeScatter(user->packer, U, p_vec, Q_vec);
> >  /* VecView(Q_vec,    PETSC_VIEWER_STDOUT_SELF); */
> >
> >  VecGetArray(p_vec,&p_array);
> >  VecGetArray(Fp_vec,&Fp_array);
> >
> >  DMDAVecGetArray(user->Q_dm, Q_vec, &Qvals);
> >  DMDAVecGetArray(user->Q_dm, FQ_vec, &FQvals);
> >
> >  DMDAGetLocalInfo(user->Q_dm, &info);
> >
> >  a = p_array[0]; a2 = a*a;
> >  b = p_array[1];
> >  u0 = p_array[2];
> >
> >  /* Set boundary conditions at the origin */
> >  if(info.xs == 0){
> >    set_origin_bcs(u0, Qvals);
> >  }
> >  /* Set boundary conditions in the far field */
> >  if(info.xs + info.xm == info.mx){
> >    set_farfield_bcs(xmax, sigma, a, b, dx, Qvals, info.mx);
> >  }
> >
> >  /* Solve auxiliary equations */
> >  if(info.xs == 0){
> >    uxx = (2 * Qvals[0].u - 2 * u0)/dx2;
> >    vxx = (Qvals[0].v + Qvals[0].g)/dx2;
> >    vx = (Qvals[0].v - Qvals[0].g)/(2*dx);
> >    Fp_array[0] = Qvals[0].u - Qvals[0].f;
> >    Fp_array[1] = -vxx - (1/a) * (0.5/sigma) * u0;
> >    Fp_array[2] = -uxx + (1/a2) * u0
> >      + (1/a) * (-b * vx + PetscPowScalar(u0 * u0, sigma) * vx);
> >  }
> >
> >  /* Solve equations in the bulk */
> >  for(i = info.xs; i < info.xs + info.xm; i++){
> >
> >    u = Qvals[i].u;
> >    v = Qvals[i].v;
> >    f = Qvals[i].f;
> >    g = Qvals[i].g;
> >
> >    x = (i+1) * dx;
> >
> >    Q2sig = PetscPowScalar(u*u + v*v, sigma);
> >    W2sig = PetscPowScalar(f*f + g*g, sigma);
> >
> >    ux = (Qvals[i+1].u - Qvals[i-1].u)/(2*dx);
> >    vx = (Qvals[i+1].v - Qvals[i-1].v)/(2*dx);
> >    fx = (Qvals[i+1].f - Qvals[i-1].f)/(2*dx);
> >    gx = (Qvals[i+1].g - Qvals[i-1].g)/(2*dx);
> >
> >    uxx = (Qvals[i+1].u + Qvals[i-1].u - 2*u)/dx2;
> >    vxx = (Qvals[i+1].v + Qvals[i-1].v - 2*v)/dx2;
> >    fxx = (Qvals[i+1].f + Qvals[i-1].f - 2*f)/dx2;
> >    gxx = (Qvals[i+1].g + Qvals[i-1].g - 2*g)/dx2;
> >
> >    FQvals[i].u = -uxx + (1/a2) * u
> >      + (1/a) * ((0.5/sigma) * v + x * vx - b * vx + Q2sig * vx);
> >
> >    FQvals[i].v = -vxx + (1/a2) * v
> >      - (1/a) * ((0.5/sigma) * u + x * ux - b * ux + Q2sig * ux);
> >
> >    FQvals[i].f = -fxx + (1/a2) * f
> >      + (1/a) * ((0.5/sigma) * g + x * gx + b * gx - W2sig * gx);
> >
> >    FQvals[i].g = -gxx + (1/a2) * g
> >      - (1/a) * ((0.5/sigma) * f + x * fx + b * fx - W2sig * fx);
> >  }
> >
> >  /* Restore raw arrays */
> >  VecRestoreArray(p_vec, &p_array);
> >  VecRestoreArray(Fp_vec, &Fp_array);
> >
> >  DMDAVecRestoreArray(user->Q_dm, Q_vec, &Qvals);
> >  DMDAVecRestoreArray(user->Q_dm, FQ_vec, &FQvals);
> >
> >  DMCompositeGather(user->packer,F,INSERT_VALUES, Fp_vec, FQ_vec);
> >  DMCompositeRestoreLocalVectors(user->packer, &p_vec, &Q_vec);
> >  DMCompositeRestoreLocalVectors(user->packer, &Fp_vec, &FQ_vec);
> >
> >  return 0;
> > }
> >
> >
> > -gideon
> >
> >> On Aug 27, 2015, at 11:09 PM, Barry Smith <[email protected]> wrote:
> >>
> >>
> >> Can you send the code, that will be the easiest way to find the problem.
> >>
> >>  My guess is that you have hardwired into your function/Jacobian 
> >> computation the use of the original DM instead of using the current DM 
> >> (with refinement there will be a new DM on the second level, different 
> >> from your original DM). So what you need to do in writing your 
> >> FormFunction and FormJacobian is to call SNESGetDM() to get the current DM 
> >> and then use DMCompositeGet... to access the individual DMDA and 
> >> DMRedundant for the parts. I notice you have this user.Q_dm; I bet inside 
> >> your form functions you use this DM. You have to remove this logic.
> >>
> >> Barry
> >>
> >>> On Aug 27, 2015, at 9:42 PM, Gideon Simpson <[email protected]> wrote:
> >>>
> >>> I have it set up as:
> >>>
> >>>   DMCompositeCreate(PETSC_COMM_WORLD, &user.packer);
> >>>   DMRedundantCreate(PETSC_COMM_WORLD, 0, 3, &user.p_dm);
> >>>   DMCompositeAddDM(user.packer,user.p_dm);
> >>>   DMDACreate1d(PETSC_COMM_WORLD,DM_BOUNDARY_GHOSTED,
> >>>              nx, 4, 1, NULL, &user.Q_dm);
> >>>   DMCompositeAddDM(user.packer,user.Q_dm);
> >>>   DMCreateGlobalVector(user.packer,&U);
> >>>
> >>> where the user structure has
> >>>
> >>> DM packer;
> >>> DM p_dm, Q_dm;
> >>>
> >>> Q_dm holds the field variables and p_dm holds the scalar values (the 
> >>> nonlinear eigenvalues).
> >>>
> >>> Here are some of the errors that are generated:
> >>>
> >>> [0]PETSC ERROR: --------------------- Error Message 
> >>> --------------------------------------------------------------
> >>> [0]PETSC ERROR: Argument out of range
> >>> [0]PETSC ERROR: New nonzero at (0,3) caused a malloc
> >>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn 
> >>> off this check
> >>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html 
> >>> for trouble shooting.
> >>> [0]PETSC ERROR: Petsc Release Version 3.5.3, unknown
> >>> [0]PETSC ERROR: ./blowup_batch2 on a arch-macports named gs_air by gideon 
> >>> Thu Aug 27 22:40:54 2015
> >>> [0]PETSC ERROR: Configure options --prefix=/opt/local 
> >>> --prefix=/opt/local/lib/petsc --with-valgrind=0 --with-shared-libraries 
> >>> --with-debugging=0 --with-c2html-dir=/opt/local --with-x=0 
> >>> --with-blas-lapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate
> >>>  --with-hwloc-dir=/opt/local --with-suitesparse-dir=/opt/local 
> >>> --with-superlu-dir=/opt/local --with-metis-dir=/opt/local 
> >>> --with-parmetis-dir=/opt/local --with-scalapack-dir=/opt/local 
> >>> --with-mumps-dir=/opt/local --with-superlu_dist-dir=/opt/local 
> >>> CC=/opt/local/bin/mpicc-mpich-mp CXX=/opt/local/bin/mpicxx-mpich-mp 
> >>> FC=/opt/local/bin/mpif90-mpich-mp F77=/opt/local/bin/mpif90-mpich-mp 
> >>> F90=/opt/local/bin/mpif90-mpich-mp COPTFLAGS=-Os CXXOPTFLAGS=-Os 
> >>> FOPTFLAGS=-Os LDFLAGS="-L/opt/local/lib -Wl,-headerpad_max_install_names" 
> >>> CPPFLAGS=-I/opt/local/include CFLAGS="-Os -arch x86_64" CXXFLAGS=-Os 
> >>> FFLAGS=-Os FCFLAGS=-Os F90FLAGS=-Os PETSC_ARCH=arch-macports 
> >>> --with-mpiexec=mpiexec-mpich-mp
> >>> [0]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 530 in 
> >>> /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.3/src/mat/impls/aij/mpi/mpiaij.c
> >>> [0]PETSC ERROR: [1]PETSC ERROR: --------------------- Error Message 
> >>> --------------------------------------------------------------
> >>> [1]PETSC ERROR: Argument out of range
> >>> [1]PETSC ERROR: Inserting a new nonzero (40003, 0) into matrix
> >>> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html 
> >>> for trouble shooting.
> >>> [1]PETSC ERROR: Petsc Release Version 3.5.3, unknown
> >>> [1]PETSC ERROR: ./blowup_batch2 on a arch-macports named gs_air by gideon 
> >>> Thu Aug 27 22:40:54 2015
> >>> [1]PETSC ERROR: Configure options --prefix=/opt/local 
> >>> --prefix=/opt/local/lib/petsc --with-valgrind=0 --with-shared-libraries 
> >>> --with-debugging=0 --with-c2html-dir=/opt/local --with-x=0 
> >>> --with-blas-lapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate
> >>>  --with-hwloc-dir=/opt/local --with-suitesparse-dir=/opt/local 
> >>> --with-superlu-dir=/opt/local --with-metis-dir=/opt/local 
> >>> --with-parmetis-dir=/opt/local --with-scalapack-dir=/opt/local 
> >>> --with-mumps-dir=/opt/local --with-superlu_dist-dir=/opt/local 
> >>> CC=/opt/local/bin/mpicc-mpich-mp CXX=/opt/local/bin/mpicxx-mpich-mp 
> >>> FC=/opt/local/bin/mpif90-mpich-mp F77=/opt/local/bin/mpif90-mpich-mp 
> >>> F90=/opt/local/bin/mpif90-mpich-mp COPTFLAGS=-Os CXXOPTFLAGS=-Os 
> >>> FOPTFLAGS=-Os LDFLAGS="-L/opt/local/lib -Wl,-headerpad_max_install_names" 
> >>> CPPFLAGS=-I/opt/local/include CFLAGS="-Os -arch x86_64" CXXFLAGS=-Os 
> >>> FFLAGS=-Os FCFLAGS=-Os F90FLAGS=-Os PETSC_ARCH=arch-macports 
> >>> --with-mpiexec=mpiexec-mpich-mp
> >>> [1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 561 in 
> >>> /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.3/src/mat/impls/aij/mpi/mpiaij.c
> >>> [1]PETSC ERROR: #2 MatSetValues() line 1135 in 
> >>> /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_math_petsc/petsc/work/v3.5.3/src/mat/interface/matrix.c
> >>>
> >>>
> >>>
> >>> -gideon
> >>>
> >>>> On Aug 27, 2015, at 10:37 PM, Barry Smith <[email protected]> wrote:
> >>>>
> >>>>
> >>>> We need the full error message.
> >>>>
> >>>> But are you using a DMDA for the scalars?  You should not be; you should 
> >>>> be using a DMRedundant for the scalars.
> >>>>
> >>>> Barry
> >>>>
> >>>> Though you should not get this error even if you are using a DMDA there.
> >>>>
> >>>>> On Aug 27, 2015, at 9:32 PM, Gideon Simpson <[email protected]> wrote:
> >>>>>
> >>>>> I’m getting the following errors:
> >>>>>
> >>>>> [1]PETSC ERROR: Argument out of range
> >>>>> [1]PETSC ERROR: Inserting a new nonzero (40003, 0) into matrix
> >>>>>
> >>>>> Could this have to do with me using the DMComposite with one da holding 
> >>>>> the scalar parameters and the other holding the field variables?
> >>>>>
> >>>>> -gideon
> >>>>>
> >>>>>> On Aug 27, 2015, at 10:15 PM, Matthew Knepley <[email protected]> wrote:
> >>>>>>
> >>>>>> On Thu, Aug 27, 2015 at 9:11 PM, Gideon Simpson <[email protected]> wrote:
> >>>>>> Hi Barry,
> >>>>>>
> >>>>>> Nope, I’m not doing any grid sequencing. Clearly that makes a lot of 
> >>>>>> sense, to solve on a spatially coarse mesh for the field variables, 
> >>>>>> interpolate onto the finer mesh, and then solve again.  I’m not 
> >>>>>> entirely clear on the practical implementation.
> >>>>>>
> >>>>>> SNES should do this automatically using -snes_grid_sequence <k>.  If 
> >>>>>> this does not work, complain. Loudly.
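> >>>>>> 
> >>>>>> For example, with three rounds of refinement (executable name taken 
> >>>>>> from this thread):
> >>>>>> 
> >>>>>>   ./blowup_batch2 -snes_grid_sequence 3 -snes_monitor -snes_converged_reason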
> >>>>>>
> >>>>>> Matt
> >>>>>>
> >>>>>> -gideon
> >>>>>>
> >>>>>>> On Aug 27, 2015, at 10:02 PM, Barry Smith <[email protected]> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> Gideon,
> >>>>>>>
> >>>>>>> Are you using grid sequencing? Simply solve on a coarse grid, 
> >>>>>>> interpolate u1 and u2 to a once-refined version of the grid, and use 
> >>>>>>> that, plus the mu and lam, as the initial guess for the next level. 
> >>>>>>> Repeat to as fine a grid as you want. You can use DMRefine() and 
> >>>>>>> DMGetInterpolation() to get the interpolation needed to interpolate 
> >>>>>>> from the coarse to the finer mesh.
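> >>>>>>>
> >>>>>>> One step of that might look like the following sketch (PETSc 3.2+ 
> >>>>>>> renamed DMGetInterpolation() to DMCreateInterpolation(); "packer" is 
> >>>>>>> the coarse composite DM, Uc its converged solution; error checking 
> >>>>>>> omitted):
> >>>>>>>
> >>>>>>>   DM  fine;
> >>>>>>>   Mat interp;
> >>>>>>>   Vec Uf;
> >>>>>>>   DMRefine(packer, PETSC_COMM_WORLD, &fine);          /* refined DM */
> >>>>>>>   DMCreateInterpolation(packer, fine, &interp, NULL); /* coarse-to-fine */
> >>>>>>>   DMCreateGlobalVector(fine, &Uf);
> >>>>>>>   MatInterpolate(interp, Uc, Uf);  /* coarse solution -> fine guess */
> >>>>>>>   SNESSetDM(snes, fine);
> >>>>>>>   SNESSolve(snes, NULL, Uf);       /* re-solve on the finer grid */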
> >>>>>>>
> >>>>>>> Then and only then you can use multigrid (with or without fieldsplit) 
> >>>>>>> to solve the linear problems for finer meshes. Once you have the grid 
> >>>>>>> sequencing working we can help you with this.
> >>>>>>>
> >>>>>>> Barry
> >>>>>>>
> >>>>>>>> On Aug 27, 2015, at 7:00 PM, Gideon Simpson <[email protected]> wrote:
> >>>>>>>>
> >>>>>>>> I’m working on a problem which, morally, can be posed as a system of 
> >>>>>>>> coupled semilinear elliptic PDEs together with unknown nonlinear 
> >>>>>>>> eigenvalue parameters, loosely of the form
> >>>>>>>>
> >>>>>>>> -\Delta u_1 + f(u_1, u_2) = lam * u_1 - mu * du_2/dx
> >>>>>>>> -\Delta u_2 + g(u_1, u_2) = lam * u_2 + mu * du_1/dx
> >>>>>>>>
> >>>>>>>> Currently, I have it set up with a DMComposite with two sub da’s, 
> >>>>>>>> one for the parameters (lam, mu), and one for the vector field (u_1, 
> >>>>>>>> u_2) on the mesh.  I have had success in solving this as a fully 
> >>>>>>>> coupled system with SNES + sparse direct solvers (MUMPS, SuperLU).
> >>>>>>>>
> >>>>>>>> Lately, I am finding that, when the mesh resolution gets fine enough 
> >>>>>>>> (i.e.  10^6-10^8 lattice points), my SNES gets stuck with the 
> >>>>>>>> function norm = O(10^{-4}),  eventually returning reason -6 (failed 
> >>>>>>>> line search).
> >>>>>>>>
> >>>>>>>> Perhaps there is another way around the above problem, but one thing 
> >>>>>>>> I was thinking of trying would be to get away from direct solvers, 
> >>>>>>>> and I was hoping to use field split for this.  However, it’s a bit 
> >>>>>>>> beyond what I’ve seen examples for because it has 2 types of 
> >>>>>>>> variables: scalar parameters which appear globally in the system and 
> >>>>>>>> vector-valued field variables.  Any suggestions on how to get 
> >>>>>>>> started?
> >>>>>>>>
> >>>>>>>> -gideon
> >>>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> --
> >>>>>> What most experimenters take for granted before they begin their 
> >>>>>> experiments is infinitely more interesting than any results to which 
> >>>>>> their experiments lead.
> >>>>>> -- Norbert Wiener
> >>>>>
> >>>>
> >>>
> >>
> >
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
