We need the full error message.

   But are you using a DMDA for the scalars?  You should not be; you should be 
using a DMRedundant for the scalars.

  Barry

  Though you should not get this error even if you are using a DMDA there.
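
  For concreteness, a minimal sketch of that setup (the names and sizes here are 
illustrative, not taken from your code; error checking with CHKERRQ omitted):

    #include <petscdmredundant.h>
    #include <petscdmcomposite.h>
    #include <petscdmda.h>

    int main(int argc, char **argv)
    {
      DM packer, dm_scalars, da_fields;

      PetscInitialize(&argc, &argv, NULL, NULL);
      /* two redundant scalars (lam, mu), owned by rank 0 and shared by all ranks */
      DMRedundantCreate(PETSC_COMM_WORLD, 0, 2, &dm_scalars);
      /* 1d DMDA with 2 dof per node for (u_1, u_2); the grid size is arbitrary here */
      DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, 1000, 2, 1, NULL, &da_fields);
      DMSetFromOptions(da_fields);
      DMSetUp(da_fields);
      /* pack the scalars and the field into one composite DM for SNES */
      DMCompositeCreate(PETSC_COMM_WORLD, &packer);
      DMCompositeAddDM(packer, dm_scalars);
      DMCompositeAddDM(packer, da_fields);
      /* ... SNESSetDM(snes, packer), set residual/Jacobian, SNESSolve(), ... */
      DMDestroy(&da_fields);
      DMDestroy(&dm_scalars);
      DMDestroy(&packer);
      PetscFinalize();
      return 0;
    }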

> On Aug 27, 2015, at 9:32 PM, Gideon Simpson <[email protected]> wrote:
> 
> I’m getting the following errors:
> 
> [1]PETSC ERROR: Argument out of range
> [1]PETSC ERROR: Inserting a new nonzero (40003, 0) into matrix
> 
> Could this have to do with my using the DMComposite, with one DA holding the 
> scalar parameters and the other holding the field variables?
> 
> -gideon
> 
>> On Aug 27, 2015, at 10:15 PM, Matthew Knepley <[email protected]> wrote:
>> 
>> On Thu, Aug 27, 2015 at 9:11 PM, Gideon Simpson <[email protected]> 
>> wrote:
>> HI Barry,
>> 
>> Nope, I’m not doing any grid sequencing. Clearly that makes a lot of sense: 
>> solve on a spatially coarse mesh for the field variables, interpolate onto the 
>> finer mesh, and then solve again.  I’m not entirely clear on the practical 
>> implementation.
>> 
>> SNES should do this automatically using -snes_grid_sequence <k>.  If this 
>> does not work, complain. Loudly.
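>> 
>> For instance, a minimal sketch of turning it on from code (assumes your SNES is 
>> "snes" and already has the DM attached via SNESSetDM(); 3 refinement levels is 
>> just an example):
>> 
>>   SNESSetGridSequence(snes, 3);   /* same as -snes_grid_sequence 3 on the command line */
>> 
>> This only helps if the attached DM knows how to refine and interpolate itself.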
>> 
>>    Matt
>> 
>> -gideon
>> 
>>> On Aug 27, 2015, at 10:02 PM, Barry Smith <[email protected]> wrote:
>>> 
>>> 
>>>   Gideon,
>>> 
>>>    Are you using grid sequencing? Simply solve on a coarse grid, 
>>> interpolate u1 and u2 to a once-refined version of the grid, and use that, 
>>> plus mu and lam, as the initial guess for the next level. Repeat to as fine a 
>>> grid as you want. You can use DMRefine() and DMGetInterpolation() to get 
>>> the interpolation needed to go from the coarse mesh to the finer one.
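>>> 
>>> For the field part, one refinement step looks roughly like the following (a 
>>> sketch, not your code: it assumes a coarse DMDA da_c holding (u1, u2) with 
>>> coarse solution vector u_c; the scalars lam, mu just carry over unchanged; 
>>> error checking omitted; DMCreateInterpolation() is the current name for 
>>> DMGetInterpolation()):
>>> 
>>>   DM  da_f;
>>>   Mat interp;
>>>   Vec u_f;
>>>   DMRefine(da_c, PETSC_COMM_WORLD, &da_f);           /* once-refined grid */
>>>   DMCreateInterpolation(da_c, da_f, &interp, NULL);  /* coarse-to-fine interpolation matrix */
>>>   DMCreateGlobalVector(da_f, &u_f);
>>>   MatInterpolate(interp, u_c, u_f);                  /* u_f = interpolated initial guess */
>>>   /* rebuild the DMComposite around da_f, load u_f plus the old lam, mu into
>>>      the composite vector as the initial guess, SNESSolve() again, repeat */
>>>   MatDestroy(&interp);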
>>> 
>>>    Then and only then can you use multigrid (with or without fieldsplit) to 
>>> solve the linear problems on the finer meshes. Once you have the grid 
>>> sequencing working, we can help you with this.
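>>> 
>>> If you later try fieldsplit, one way to define the scalar/field splits directly 
>>> from the DMComposite is roughly the following (a sketch; assumes the composite 
>>> DM "packer" was built with the scalar sub-DM added first and "pc" is the KSP's 
>>> preconditioner; error checking omitted):
>>> 
>>>   IS *is;
>>>   DMCompositeGetGlobalISs(packer, &is);
>>>   PCSetType(pc, PCFIELDSPLIT);
>>>   PCFieldSplitSetIS(pc, "scalars", is[0]);
>>>   PCFieldSplitSetIS(pc, "fields", is[1]);
>>>   ISDestroy(&is[0]); ISDestroy(&is[1]); PetscFree(is);
>>> 
>>> If the SNES has the DMComposite attached, -pc_type fieldsplit can often build 
>>> the splits from the DM automatically as well.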
>>> 
>>>   Barry
>>> 
>>>> On Aug 27, 2015, at 7:00 PM, Gideon Simpson <[email protected]> 
>>>> wrote:
>>>> 
>>>> I’m working on a problem which, morally, can be posed as a system of 
>>>> coupled semilinear elliptic PDEs together with unknown nonlinear 
>>>> eigenvalue parameters, loosely of the form
>>>> 
>>>> -\Delta u_1 + f(u_1, u_2) = lam * u_1 - mu * du_2/dx 
>>>> -\Delta u_2 + g(u_1, u_2) = lam * u_2 + mu * du_1/dx 
>>>> 
>>>> Currently, I have it set up with a DMComposite with two sub-DAs, one for 
>>>> the parameters (lam, mu) and one for the vector field (u_1, u_2) on the 
>>>> mesh.  I have had success solving this as a fully coupled system with 
>>>> SNES + sparse direct solvers (MUMPS, SuperLU).
>>>> 
>>>> Lately, I am finding that, when the mesh resolution gets fine enough (i.e. 
>>>> 10^6-10^8 lattice points), my SNES gets stuck with the function norm = 
>>>> O(10^{-4}), eventually returning convergence reason -6 
>>>> (SNES_DIVERGED_LINE_SEARCH, a failed line search).
>>>> 
>>>> Perhaps there is another way around the above problem, but one thing I was 
>>>> thinking of trying is to get away from direct solvers, and I was hoping to 
>>>> use fieldsplit for this.  However, it’s a bit beyond the examples I’ve seen, 
>>>> because it has two types of variables: scalar parameters, which appear 
>>>> globally in the system, and vector-valued field variables.  Any suggestions 
>>>> on how to get started?
>>>> 
>>>> -gideon
>>>> 
>>> 
>> 
>> 
>> 
>> 
>> -- 
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
> 
