Re: [petsc-users] Using DMCOMPOSITE with TS

2019-02-23 Thread Jed Brown via petsc-users
It shouldn't have any effect.  This will need to be debugged.  There's
no chance I'll have time for at least a week; hopefully one of the other
TS contributors can look sooner.

"Ellen M. Price via petsc-users"  writes:

> Quick update: I found that changing TS_EXACTFINALTIME_INTERPOLATE to
> TS_EXACTFINALTIME_MATCHSTEP fixes the problem; is there a reason I can't
> use the interpolation when using a DMCOMPOSITE?
>
> Ellen Price
>
>
> On 2/20/19 1:42 PM, Ellen Price wrote:
>> Hi fellow PETSc users!
>> 
>> I am attempting to use a DMCOMPOSITE alongside TS and have run into some
>> trouble. I'm attaching a MWE demonstrating the problem. The goal is to
>> combine a DMDA3d for spatial data and a DMREDUNDANT for non-spatial,
>> time-dependent fields. In the attached example, this additional field is
>> temperature.
>> 
>> The DMDA data is currently just temperature-dependent, and the
>> temperature is supposed to increase linearly. Unfortunately, no matter
>> how I integrate (explicitly, using Euler or RK, or implicitly, using
>> backward Euler or BDF), I get the wrong final temperature. The
>> integration proceeds for 100 timesteps and then stops (verified by
>> -ts_monitor). At a heating rate of 1, and an initial temperature of 150,
>> I should get a final temperature of 250 very easily. However, I get
>> something closer to 200 (not exact, which makes me think this isn't a
>> simple missing-a-factor-of-2 error).
>> 
>> I'm currently using TSSetDM with the composite DM to let PETSc compute
>> the Jacobian. Having checked all the inputs and outputs as well as I
>> can, this is the only step that seems like it may be causing a problem;
>> yet even explicit integration doesn't work, and that shouldn't need the
>> Jacobian at all. I'm at a loss.
>> 
>> Run using:
>> ./mwe -implicit yes -ts_type beuler -ts_monitor OR ./mwe -implicit no
>> -ts_type euler -ts_monitor
>> 
>> Thanks in advance for any help,
>> Ellen Price


Re: [petsc-users] Using DMCOMPOSITE with TS

2019-02-23 Thread Ellen M. Price via petsc-users
Quick update: I found that changing TS_EXACTFINALTIME_INTERPOLATE to
TS_EXACTFINALTIME_MATCHSTEP fixes the problem; is there a reason I can't
use the interpolation when using a DMCOMPOSITE?

Ellen Price
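
For reference, a minimal sketch of the two settings involved (assuming a TS
object named ts; the function, enum, and option names are standard PETSc):

/* Workaround reported above: truncate the last step so TS lands exactly on
   the final time, instead of stepping past it and interpolating back. */
ierr = TSSetExactFinalTime(ts, TS_EXACTFINALTIME_MATCHSTEP);CHKERRQ(ierr);
/* The behavior that gave the wrong result corresponded to
   TSSetExactFinalTime(ts, TS_EXACTFINALTIME_INTERPOLATE);
   on the command line: -ts_exact_final_time matchstep vs. interpolate. */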


On 2/20/19 1:42 PM, Ellen Price wrote:
> Hi fellow PETSc users!
> 
> I am attempting to use a DMCOMPOSITE alongside TS and have run into some
> trouble. I'm attaching a MWE demonstrating the problem. The goal is to
> combine a DMDA3d for spatial data and a DMREDUNDANT for non-spatial,
> time-dependent fields. In the attached example, this additional field is
> temperature.
> 
> The DMDA data is currently just temperature-dependent, and the
> temperature is supposed to increase linearly. Unfortunately, no matter
> how I integrate (explicitly, using Euler or RK, or implicitly, using
> backward Euler or BDF), I get the wrong final temperature. The
> integration proceeds for 100 timesteps and then stops (verified by
> -ts_monitor). At a heating rate of 1, and an initial temperature of 150,
> I should get a final temperature of 250 very easily. However, I get
> something closer to 200 (not exact, which makes me think this isn't a
> simple missing-a-factor-of-2 error).
> 
> I'm currently using TSSetDM with the composite DM to let PETSc compute
> the Jacobian. Having checked all the inputs and outputs as well as I
> can, this is the only step that seems like it may be causing a problem;
> yet even explicit integration doesn't work, and that shouldn't need the
> Jacobian at all. I'm at a loss.
> 
> Run using:
> ./mwe -implicit yes -ts_type beuler -ts_monitor OR ./mwe -implicit no
> -ts_type euler -ts_monitor
> 
> Thanks in advance for any help,
> Ellen Price
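
For readers without the attachment, a minimal sketch of this kind of setup.
This is not the attached MWE: the grid size and time step are made up, the
spatial field is left static, and only explicit Euler is wired up; the heating
rate of 1 and initial temperature of 150 follow the description above.

#include <petscts.h>
#include <petscdmda.h>
#include <petscdmredundant.h>
#include <petscdmcomposite.h>

/* RHS for the packed system: the spatial (DMDA) field does not evolve here,
   and the redundant temperature obeys dT/dt = 1 (the heating rate). */
static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
{
  DM             pack;
  Vec            f_da, f_red;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = TSGetDM(ts, &pack);CHKERRQ(ierr);
  ierr = DMCompositeGetAccess(pack, F, &f_da, &f_red);CHKERRQ(ierr);
  ierr = VecSet(f_da, 0.0);CHKERRQ(ierr);   /* no spatial dynamics in this sketch */
  ierr = VecSet(f_red, 1.0);CHKERRQ(ierr);  /* dT/dt = heating rate = 1 */
  ierr = DMCompositeRestoreAccess(pack, F, &f_da, &f_red);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  TS             ts;
  DM             da, red, pack;
  Vec            U;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  /* Spatial field: a small 3D DMDA with one dof per node */
  ierr = DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR, 8, 8, 8, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                      1, 1, NULL, NULL, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);

  /* Non-spatial field: one redundant scalar (the temperature), owned by rank 0 */
  ierr = DMRedundantCreate(PETSC_COMM_WORLD, 0, 1, &red);CHKERRQ(ierr);

  ierr = DMCompositeCreate(PETSC_COMM_WORLD, &pack);CHKERRQ(ierr);
  ierr = DMCompositeAddDM(pack, da);CHKERRQ(ierr);
  ierr = DMCompositeAddDM(pack, red);CHKERRQ(ierr);

  ierr = DMCreateGlobalVector(pack, &U);CHKERRQ(ierr);
  ierr = VecSet(U, 150.0);CHKERRQ(ierr);    /* initial temperature 150 everywhere */

  ierr = TSCreate(PETSC_COMM_WORLD, &ts);CHKERRQ(ierr);
  ierr = TSSetDM(ts, pack);CHKERRQ(ierr);
  ierr = TSSetRHSFunction(ts, NULL, RHSFunction, NULL);CHKERRQ(ierr);
  ierr = TSSetType(ts, TSEULER);CHKERRQ(ierr);
  ierr = TSSetTimeStep(ts, 1.0);CHKERRQ(ierr);
  ierr = TSSetMaxTime(ts, 100.0);CHKERRQ(ierr);
  ierr = TSSetExactFinalTime(ts, TS_EXACTFINALTIME_MATCHSTEP);CHKERRQ(ierr);
  ierr = TSSetFromOptions(ts);CHKERRQ(ierr);
  ierr = TSSolve(ts, U);CHKERRQ(ierr);      /* final temperature should be 150 + 100 = 250 */

  ierr = VecDestroy(&U);CHKERRQ(ierr);
  ierr = TSDestroy(&ts);CHKERRQ(ierr);
  ierr = DMDestroy(&pack);CHKERRQ(ierr);
  ierr = DMDestroy(&red);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}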


[petsc-users] Re: Question with fieldsplit in PETSc

2019-02-23 Thread Zhu, Qiming via petsc-users
Dear Knepley,


Thank you for your detailed answer and help with this problem. I understand your 
point now, and I agree it would be better to just use an MPIAIJ matrix to 
construct this problem for fieldsplit. Thank you for all your help. Have a nice 
day.


Yours sincerely,

Qiming Zhu



Re: [petsc-users] Question with fieldsplit in PETSc

2019-02-23 Thread Matthew Knepley via petsc-users
On Thu, Feb 21, 2019 at 3:45 PM Zhu, Qiming via petsc-users <
petsc-users@mcs.anl.gov> wrote:

>
> Dear all,
>
>
> Sorry to disturb you. I am a PETSc user trying to use Fieldsplit in PETSc
> to do preconditioning for a Navier-Stokes problem. I have some problems
> when trying to use the Fieldsplit function. I define the nest matrix
> first, then get the ISes from the nested matrix. But I find that my code
> only works on one core. When I switch to the parallel case, I get only a
> zero solution. Are there any special requirements for the IS definition
> in Fieldsplit? I include one code here. If you have any idea, I hope you
> reply soon. Thank you for your help. Thank you very much.
>
I cleaned up the code a little so I could see what was going on. I attach
my version here. If you run on 1 proc, you get what you expect:

master *:~/Downloads/tmp/blaise$ $PETSC_DIR/../bin/mpiexec -n 1 ./ex5 -ksp_monitor_true_residual -ksp_view_mat -sol_view -rhs_view -sys_view

A00 block print here.
Mat Object: 1 MPI processes
  type: mpiaij
row 0: (0, 1.)
row 1: (1, 2.)
row 2: (2, 3.)
row 3: (3, 4.)
A01 block print here.
Mat Object: 1 MPI processes
  type: mpiaij
row 0:
row 1:
row 2:
row 3:
A10 block print here.
Mat Object: 1 MPI processes
  type: mpiaij
row 0:
row 1:
row 2:
row 3:
A11 block print here.
Mat Object: 1 MPI processes
  type: mpiaij
row 0: (0, 5.)
row 1: (1, 6.)
row 2: (2, 7.)
row 3: (3, 8.)
IS Object: 1 MPI processes
  type: stride
Index set is permutation
Number of indices in (stride) set 4
0 0
1 1
2 2
3 3
IS Object: 1 MPI processes
  type: stride
Number of indices in (stride) set 4
0 4
1 5
2 6
3 7
  0 KSP preconditioned resid norm 2.828427124746e+00 true resid norm 1.428285685709e+01 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 4.154074181055e-16 true resid norm 3.475547814546e-15 ||r(i)||/||b|| 2.433370192898e-16
Mat Object: 1 MPI processes
  type: nest
  Matrix object:
type=nest, rows=2, cols=2
MatNest structure:
(0,0) : prefix="fieldsplit_0_", type=mpiaij, rows=4, cols=4
(0,1) : type=mpiaij, rows=4, cols=4
(1,0) : type=mpiaij, rows=4, cols=4
(1,1) : prefix="fieldsplit_1_", type=mpiaij, rows=4, cols=4
Vec Object: Rhs 1 MPI processes
  type: mpi
Process [0]
1.
2.
3.
4.
5.
6.
7.
8.
Vec Object: Sol 1 MPI processes
  type: mpi
Process [0]
1.
1.
1.
1.
1.
1.
1.
1.
Mat Object: System Matrix 1 MPI processes
  type: seqaij
row 0: (0, 1.)
row 1: (1, 2.)
row 2: (2, 3.)
row 3: (3, 4.)
row 4: (4, 5.)
row 5: (5, 6.)
row 6: (6, 7.)
row 7: (7, 8.)

If you run on 2 procs, you get the "wrong" answer. This is because your
matrix is not in the order you think it is. I show this by converting to
AIJ and printing it. This happens because you are sticking together
_parallel_ matrices with Nest, so the local parts become contiguous:

master *:~/Downloads/tmp/blaise$ $PETSC_DIR/../bin/mpiexec -n 2 ./ex5 -ksp_monitor_true_residual -ksp_view_mat -sol_view -rhs_view -sys_view

A00 block print here.
Mat Object: 2 MPI processes
  type: mpiaij
row 0: (0, 1.)
row 1: (1, 2.)
row 2: (2, 3.)
row 3: (3, 4.)
A01 block print here.
Mat Object: 2 MPI processes
  type: mpiaij
row 0:
row 1:
row 2:
row 3:
A10 block print here.
Mat Object: 2 MPI processes
  type: mpiaij
row 0:
row 1:
row 2:
row 3:
A11 block print here.
Mat Object: 2 MPI processes
  type: mpiaij
row 0: (0, 5.)
row 1: (1, 6.)
row 2: (2, 7.)
row 3: (3, 8.)
IS Object: 2 MPI processes
  type: stride
[0] Index set is permutation
[0] Number of indices in (stride) set 2
[0] 0 0
[0] 1 1
[1] Number of indices in (stride) set 2
[1] 0 4
[1] 1 5
IS Object: 2 MPI processes
  type: stride
[0] Number of indices in (stride) set 2
[0] 0 2
[0] 1 3
[1] Number of indices in (stride) set 2
[1] 0 6
[1] 1 7
  0 KSP preconditioned resid norm 3.135637450698e+00 true resid norm 1.428285685709e+01 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm -0.e+00 true resid norm 1.620317160370e-15 ||r(i)||/||b|| 1.134448924737e-16
Mat Object: 2 MPI processes
  type: nest
  Matrix object:
type=nest, rows=2, cols=2
MatNest structure:
(0,0) : prefix="fieldsplit_0_", type=mpiaij, rows=4, cols=4
(0,1) : type=mpiaij, rows=4, cols=4
(1,0) : type=mpiaij, rows=4, cols=4
(1,1) : prefix="fieldsplit_1_", type=mpiaij, rows=4, cols=4
Vec Object: Rhs 2 MPI processes
  type: mpi
Process [0]
1.
2.
3.
4.
Process [1]
5.
6.
7.
8.
Vec Object: Sol 2 MPI processes
  type: mpi
Process [0]
1.
1.
0.6
0.67
Process [1]
1.7
1.5
1.
1.
Mat Object: System Matrix 2 MPI processes
  type: mpiaij
row 0: (0, 1.)
row 1: (1, 2.)
row 2: (2, 5.)
row 3: (3, 6.)
row 4: (4, 3.)
row 5: (5, 4.)
row 6: (6, 7.)
row 7: (7, 8.)

In general, I think it's a bad idea to use Nest. Just build an AIJ matrix
the way you want and make some ISes.
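
A minimal sketch of that suggestion (the sizes and split names "0"/"1" are
made up; the diagonal entries mirror the toy example above): assemble one
parallel AIJ matrix in the intended global ordering, then give PCFIELDSPLIT
one IS per field, where each rank contributes only the indices it owns.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A;
  KSP            ksp;
  PC             pc;
  IS             is0, is1;
  Vec            b, x;
  PetscInt       rstart, rend, i, lo0, hi0, lo1, hi1;
  const PetscInt n0 = 4, n1 = 4;    /* sizes of the two fields (assumed) */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  /* One AIJ matrix in the intended global ordering:
     field 0 = rows 0..n0-1, field 1 = rows n0..n0+n1-1 */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n0 + n1, n0 + n1);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; ++i) { /* diagonal entries 1..8, as in the example */
    ierr = MatSetValue(A, i, i, (PetscScalar)(i + 1), INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* Each rank supplies only the locally owned part of each field */
  lo0 = PetscMax(rstart, 0);  hi0 = PetscMin(rend, n0);
  lo1 = PetscMax(rstart, n0); hi1 = PetscMin(rend, n0 + n1);
  ierr = ISCreateStride(PETSC_COMM_WORLD, PetscMax(hi0 - lo0, 0), lo0, 1, &is0);CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_WORLD, PetscMax(hi1 - lo1, 0), lo1, 1, &is1);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "0", is0);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "1", is1);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);

  ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = ISDestroy(&is0);CHKERRQ(ierr);
  ierr = ISDestroy(&is1);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}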

  Thanks,

 Matt

> Yours sincerely,

[petsc-users] Wrong Global Vector size when default section is built after dmdistribute?

2019-02-23 Thread Blaise A Bourdin via petsc-users
Hi,

My student Alex and I are trying to build an example combining natural-to-global 
ordering and constraints.
Constraint information does not seem to be distributed, so the most rational 
thing to do seems to be to create the default section after distribution, then 
figure out a way to reconstruct the natural-to-global mapping.

In the example attached, we do this, then create a global vector, but for some 
reason that I do not understand, the global vector size is 0 (should be 18) 
when run on more than 1 CPU. If we make 2 calls to DMGetGlobalVector, we get a 
vector of the proper size. What are we doing wrong?
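
For comparison, a minimal sketch of the sequence in question. This is not the
attached main.c: it assumes an existing (interpolated) DMPlex named dm with one
unknown per vertex, and on older PETSc releases DMSetLocalSection is spelled
DMSetSection or DMSetDefaultSection.

#include <petscdmplex.h>

/* Sketch: distribute first, then build the section, then create the global
   vector.  The resulting vector should have one entry per globally owned
   vertex. */
static PetscErrorCode SetupSectionAfterDistribution(DM *dm, Vec *g)
{
  DM             dmDist = NULL;
  PetscSection   s;
  PetscInt       pStart, pEnd, vStart, vEnd, p;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexDistribute(*dm, 0, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) { ierr = DMDestroy(dm);CHKERRQ(ierr); *dm = dmDist; }

  ierr = PetscSectionCreate(PetscObjectComm((PetscObject)*dm), &s);CHKERRQ(ierr);
  ierr = DMPlexGetChart(*dm, &pStart, &pEnd);CHKERRQ(ierr);
  ierr = PetscSectionSetChart(s, pStart, pEnd);CHKERRQ(ierr);
  ierr = DMPlexGetDepthStratum(*dm, 0, &vStart, &vEnd);CHKERRQ(ierr); /* vertices */
  for (p = vStart; p < vEnd; ++p) {
    ierr = PetscSectionSetDof(s, p, 1);CHKERRQ(ierr);                 /* one dof per vertex */
  }
  ierr = PetscSectionSetUp(s);CHKERRQ(ierr);
  ierr = DMSetLocalSection(*dm, s);CHKERRQ(ierr);
  ierr = PetscSectionDestroy(&s);CHKERRQ(ierr);

  ierr = DMCreateGlobalVector(*dm, g);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}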

Regards,
Blaise


--
Department of Mathematics and Center for Computation & Technology
Louisiana State University, Baton Rouge, LA 70803, USA
Tel. +1 (225) 578 1612, Fax  +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin

main.c
Description: main.c


twobytwo.exo
Description: twobytwo.exo