Re: [petsc-users] TAO: Finite Difference vs Continuous Adjoint gradient issues

2017-11-23 Thread Julian Andrej
It was indeed a mass scaling issue. We have to project the CADJ-derived
gradient onto the corresponding FE space again.
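
A minimal sketch of that kind of mass-matrix projection, written here in
PETSc C for concreteness (assuming the hand-coded gradient is available as
an assembled dual vector b and M is the FE mass matrix; the helper name is
made up):

#include <petscksp.h>

static PetscErrorCode ProjectGradientToFESpace(Mat M, Vec b, Vec g)
{
  KSP            ksp;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PetscObjectComm((PetscObject)M), &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, M, M);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, g);CHKERRQ(ierr); /* g = M^{-1} b, the representative in the FE space */
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Whether M or M^{-1} is the factor needed depends on which representation
the rest of the code expects; the point is simply that the two gradients
differ by exactly a mass-matrix scaling.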


Testing hand-coded gradient (hc) against finite difference gradient (fd), if the ratio ||fd - hc|| / ||hc|| is
0 (1.e-8), the hand-coded gradient is probably correct.
Run with -tao_test_display to show difference between hand-coded and finite difference gradient.
||fd|| 0.000150841, ||hc|| = 0.000150841, angle cosine = (fd'hc)/||fdhc|| = 1.
2-norm ||fd-hc||/max(||hc||,||fd||) = 4.48554e-06, difference ||fd-hc|| = 6.76604e-10
max-norm ||fd-hc||/max(||hc||,||fd||) = 4.99792e-06, difference ||fd-hc|| = 1.88044e-10
||fd|| 0.000386312, ||hc|| = 0.000386312, angle cosine = (fd'hc)/||fdhc|| = 1.
2-norm ||fd-hc||/max(||hc||,||fd||) = 1.14682e-05, difference ||fd-hc|| = 4.4303e-09
max-norm ||fd-hc||/max(||hc||,||fd||) = 1.56645e-05, difference ||fd-hc|| = 1.49275e-09
||fd|| 8.46797e-05, ||hc|| = 8.46797e-05, angle cosine = (fd'hc)/||fdhc|| = 1.
2-norm ||fd-hc||/max(||hc||,||fd||) = 2.63488e-06, difference ||fd-hc|| = 2.2312e-10
max-norm ||fd-hc||/max(||hc||,||fd||) = 2.7873e-06, difference ||fd-hc|| = 5.58718e-11


Thank you all for the quick responses and input again!

On 2017-11-23 09:29, Julian Andrej wrote:

On 2017-11-22 16:27, Emil Constantinescu wrote:

On 11/22/17 3:48 AM, Julian Andrej wrote:

Hello,

We prepared a small example which computes the gradient via the
continuous adjoint method of a heating problem with a cost
functional.


We implemented the textbook example and tested the gradient via a
Taylor remainder test (which works fine). Now we wanted to solve the
optimization problem with TAO, checked the gradient against the finite
difference gradient, and ran into problems.


Testing hand-coded gradient (hc) against finite difference gradient (fd), if the ratio ||fd - hc|| / ||hc|| is
0 (1.e-8), the hand-coded gradient is probably correct.
Run with -tao_test_display to show difference between hand-coded and finite difference gradient.
||fd|| 0.000147076, ||hc|| = 0.00988136, angle cosine = (fd'hc)/||fdhc|| = 0.99768
2-norm ||fd-hc||/max(||hc||,||fd||) = 0.985151, difference ||fd-hc|| = 0.00973464
max-norm ||fd-hc||/max(||hc||,||fd||) = 0.985149, difference ||fd-hc|| = 0.00243363
||fd|| 0.000382547, ||hc|| = 0.0257001, angle cosine = (fd'hc)/||fdhc|| = 0.997609
2-norm ||fd-hc||/max(||hc||,||fd||) = 0.985151, difference ||fd-hc|| = 0.0253185
max-norm ||fd-hc||/max(||hc||,||fd||) = 0.985117, difference ||fd-hc|| = 0.00624562
||fd|| 8.84429e-05, ||hc|| = 0.00594196, angle cosine = (fd'hc)/||fdhc|| = 0.997338
2-norm ||fd-hc||/max(||hc||,||fd||) = 0.985156, difference ||fd-hc|| = 0.00585376
max-norm ||fd-hc||/max(||hc||,||fd||) = 0.985006, difference ||fd-hc|| = 0.00137836


Despite these differences we achieve convergence with our hand-coded
gradient, but we have to use -tao_ls_type unit.


Both give similar (assume descent) directions, but seem to be scaled
differently. It could be a bad scaling by the mass matrix somewhere in
the continuous adjoint. This could be seen if you plot them side by
side as a quick diagnostic.
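
A quick way to run that diagnostic without plotting, sketched in C (gfd
and ghc are hypothetical names for the finite difference and hand-coded
gradient Vecs; entries where ghc is near zero should be excluded in
practice): if the entrywise ratio is roughly constant, a missing
mass-matrix scaling is the likely culprit.

#include <petscvec.h>

static PetscErrorCode CompareGradientScaling(Vec gfd, Vec ghc)
{
  Vec            ratio;
  PetscReal      rmin, rmax;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecDuplicate(ghc, &ratio);CHKERRQ(ierr);
  ierr = VecPointwiseDivide(ratio, gfd, ghc);CHKERRQ(ierr); /* ratio_i = gfd_i / ghc_i */
  ierr = VecMin(ratio, NULL, &rmin);CHKERRQ(ierr);
  ierr = VecMax(ratio, NULL, &rmax);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)ghc), "entrywise fd/hc ratio in [%g, %g]\n", (double)rmin, (double)rmax);CHKERRQ(ierr);
  ierr = VecDestroy(&ratio);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}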



I visualized and attached the two gradients. The CADJ is hand-coded and
the DADJ is from pyadjoint, which is the same as the finite difference
gradient from TAO.

If the attachment gets lost in the mailing list, here is a direct
link [1]


[1] https://cloud.tf.uni-kiel.de/index.php/s/nmiNOoI213dx1L1


Emil

$ python heat_adj.py -tao_type blmvm -tao_view -tao_monitor 
-tao_gatol 1e-7 -tao_ls_type unit

iter =   0, Function value: 0.000316722,  Residual: 0.00126285
iter =   1, Function value: 3.82272e-05,  Residual: 0.000438094
iter =   2, Function value: 1.26011e-07,  Residual: 8.4194e-08
Tao Object: 1 MPI processes
   type: blmvm
   Gradient steps: 0
   TaoLineSearch Object: 1 MPI processes
     type: unit
   Active Set subset type: subvec
   convergence tolerances: gatol=1e-07,   steptol=0.,   gttol=0.
   Residual in Function/Gradient:=8.4194e-08
   Objective value=1.26011e-07
   total number of iterations=2,  (max: 2000)
   total number of function/gradient evaluations=3,  (max: 4000)
   Solution converged:    ||g(X)|| <= gatol

$ python heat_adj.py -tao_type blmvm -tao_view -tao_monitor 
-tao_fd_gradient

iter =   0, Function value: 0.000316722,  Residual: 4.87343e-06
iter =   1, Function value: 0.000195676,  Residual: 3.83011e-06
iter =   2, Function value: 1.26394e-07,  Residual: 1.60262e-09
Tao Object: 1 MPI processes
   type: blmvm
   Gradient steps: 0
   TaoLineSearch Object: 1 MPI processes
     type: more-thuente
   Active Set subset type: subvec
   convergence tolerances: gatol=1e-08,   steptol=0.,   gttol=0.
   Residual in Function/Gradient:=1.60262e-09
   Objective value=1.26394e-07
   total number of iterations=2,  (max: 2000)
   total number of function/gradient evaluations=3474,  (max: 4000)

   Solution converged:    ||g(X)|| <= gatol

[petsc-users] TAO: Finite Difference vs Continuous Adjoint gradient issues

2017-11-22 Thread Julian Andrej

Hello,

We prepared a small example which computes the gradient via the
continuous adjoint method of a heating problem with a cost functional.


We implemented the textbook example and tested the gradient via a
Taylor remainder test (which works fine). Now we wanted to solve the
optimization problem with TAO, checked the gradient against the finite
difference gradient, and ran into problems.


Testing hand-coded gradient (hc) against finite difference gradient (fd), if the ratio ||fd - hc|| / ||hc|| is
0 (1.e-8), the hand-coded gradient is probably correct.
Run with -tao_test_display to show difference between hand-coded and finite difference gradient.
||fd|| 0.000147076, ||hc|| = 0.00988136, angle cosine = (fd'hc)/||fdhc|| = 0.99768
2-norm ||fd-hc||/max(||hc||,||fd||) = 0.985151, difference ||fd-hc|| = 0.00973464
max-norm ||fd-hc||/max(||hc||,||fd||) = 0.985149, difference ||fd-hc|| = 0.00243363
||fd|| 0.000382547, ||hc|| = 0.0257001, angle cosine = (fd'hc)/||fdhc|| = 0.997609
2-norm ||fd-hc||/max(||hc||,||fd||) = 0.985151, difference ||fd-hc|| = 0.0253185
max-norm ||fd-hc||/max(||hc||,||fd||) = 0.985117, difference ||fd-hc|| = 0.00624562
||fd|| 8.84429e-05, ||hc|| = 0.00594196, angle cosine = (fd'hc)/||fdhc|| = 0.997338
2-norm ||fd-hc||/max(||hc||,||fd||) = 0.985156, difference ||fd-hc|| = 0.00585376
max-norm ||fd-hc||/max(||hc||,||fd||) = 0.985006, difference ||fd-hc|| = 0.00137836


Despite these differences we achieve convergence with our hand-coded
gradient, but we have to use -tao_ls_type unit.


$ python heat_adj.py -tao_type blmvm -tao_view -tao_monitor -tao_gatol 
1e-7 -tao_ls_type unit

iter =   0, Function value: 0.000316722,  Residual: 0.00126285
iter =   1, Function value: 3.82272e-05,  Residual: 0.000438094
iter =   2, Function value: 1.26011e-07,  Residual: 8.4194e-08
Tao Object: 1 MPI processes
  type: blmvm
  Gradient steps: 0
  TaoLineSearch Object: 1 MPI processes
type: unit
  Active Set subset type: subvec
  convergence tolerances: gatol=1e-07,   steptol=0.,   gttol=0.
  Residual in Function/Gradient:=8.4194e-08
  Objective value=1.26011e-07
  total number of iterations=2,  (max: 2000)
  total number of function/gradient evaluations=3,  (max: 4000)
  Solution converged:||g(X)|| <= gatol

$ python heat_adj.py -tao_type blmvm -tao_view -tao_monitor 
-tao_fd_gradient

iter =   0, Function value: 0.000316722,  Residual: 4.87343e-06
iter =   1, Function value: 0.000195676,  Residual: 3.83011e-06
iter =   2, Function value: 1.26394e-07,  Residual: 1.60262e-09
Tao Object: 1 MPI processes
  type: blmvm
  Gradient steps: 0
  TaoLineSearch Object: 1 MPI processes
type: more-thuente
  Active Set subset type: subvec
  convergence tolerances: gatol=1e-08,   steptol=0.,   gttol=0.
  Residual in Function/Gradient:=1.60262e-09
  Objective value=1.26394e-07
  total number of iterations=2,  (max: 2000)
  total number of function/gradient evaluations=3474,  (max: 4000)
  Solution converged:||g(X)|| <= gatol


We think that the finite difference gradient should be in line with our
hand-coded gradient for such a simple example.


We appreciate any hints on debugging this issue. It is implemented in
Python (Firedrake) and I can provide the code if needed.


Regards
Julian


Re: [petsc-users] PetscFE and TS

2016-11-14 Thread Julian Andrej
You're absolutely right, I missed the y-direction component of the
momentum equation in the mass matrix integration.

I guess I have most of the parts I need from here to investigate the
rest on my own!

Thank you so much!

On Mon, 14 Nov 2016 07:53:45 -0600
Matthew Knepley <knep...@gmail.com> wrote:

> On Mon, Nov 14, 2016 at 7:35 AM, Matthew Knepley <knep...@gmail.com>
> wrote:
> 
> > On Mon, Nov 14, 2016 at 3:24 AM, Julian Andrej <j...@tf.uni-kiel.de>
> > wrote:
> >  
> >> I think i'm understanding the concept now. Although i have another
> >> set of questions. If i am formulating a DAE (like in the linear
> >> stokes equation for example) it seems i am missing something to
> >> formulate the correct jacobian. The residual formulation with
> >> snes_mf or snes_fd works fine in terms of nonlinear convergence.
> >> If i use the hand coded jacobian the convergence is way slower,
> >> which is a reason for an incorrect jacobian formulation from my
> >> side (at least what i think right now). I should be able to test
> >> my jacobian with -snes_type test, even if i am using TS, right?
> >> This gives me a huge difference. I guess i am missing something in
> >> the formulation and thus in the implementation of the callbacks
> >> here.
> >>
> >> There is a small example attached.
> >>  
> >
> > There are a few problems here, but first here are my options for
> > the run
> >
> > -vel_petscspace_order 2 -vel_petscspace_poly_tensor 1
> > -pres_petscspace_order 1 -pres_petscspace_poly_tensor 1
> > -ts_dt 0.0001 -ts_max_steps 5
> > -pc_type fieldsplit -pc_fieldsplit_type schur
> > -pc_fieldsplit_schur_precondition full
> >   -fieldsplit_velocity_pc_type lu
> >   -fieldsplit_pressure_ksp_rtol 1.0e-9 -fieldsplit_pressure_pc_type
> > svd -ts_monitor -snes_monitor -snes_view -snes_converged_reason
> > -ksp_converged_reason -ksp_monitor
> >
> > 1) Your g00 term was wrong since it did not handle all velocity
> > components
> >
> > void g0_uu(PetscInt dim, PetscInt Nf, PetscInt NfAux,
> >   const PetscInt uOff[], const PetscInt uOff_x[], const PetscScalar
> > u[], const PetscScalar u_t[], const PetscScalar u_x[],
> >   const PetscInt aOff[], const PetscInt aOff_x[], const PetscScalar
> > a[], const PetscScalar a_t[], const PetscScalar a_x[],
> >   PetscReal t, PetscReal u_tShift, const PetscReal x[], PetscScalar
> > g0[]) {
> >   PetscInt d;
> >   for (d = 0; d < dim; ++d) g0[d*dim+d] = u_tShift*1.0;
> > }
> >
> > 2) You do not handle the null space for the pressure solve right
> > now. The easy way is to use SVD for the PC, however this
> >  is not scalable. Eventually, you want to specify the constant
> > null space for that solve. I do it in ex62.
> >
> > 3) I still do not know what is wrong with the exact solution...
> >  
> 
> I found it. You are not making the initial condition. Uncomment the
> line above TSSolve().
> 
>   Thanks,
> 
> Matt
> 
> 
> >   Thanks,
> >
> >  Matt
> >
> >  
> >> I appreciate your time and help.
> >>
> >> On Thu, Nov 10, 2016 at 4:45 PM, Matthew Knepley
> >> <knep...@gmail.com> wrote:  
> >> > On Thu, Nov 10, 2016 at 9:38 AM, Jed Brown <j...@jedbrown.org>
> >> > wrote:  
> >> >>
> >> >> Matthew Knepley <knep...@gmail.com> writes:
> >> >>  
> >> >> > On Thu, Nov 10, 2016 at 1:31 AM, Julian Andrej
> >> >> > <j...@tf.uni-kiel.de> wrote:
> >> >> >  
> >> >> >> I'm getting the correct solution now, thanks! There are
> >> >> >> still a few open questions.
> >> >> >>
> >> >> >> 1. The mass term is necessary _and_ using the u_tShift value
> >> >> >> if i provide the jacobian. After reading the manual
> >> >> >> countless times, i still don't get what the u_tShift value
> >> >> >> tells me in this context.  
> >> Are  
> >> >> >> there any resources which you could point me to?
> >> >> >>  
> >> >> >
> >> >> > The manual is always a lagging resource because we are trying
> >> >> > things out.  
> >> >>
> >> >> This has been explained in the manual similarly to your email
> >> >> for several years.  If anyone has a suggestion for how to make
> >> >> it

Re: [petsc-users] PetscFE and TS

2016-11-14 Thread Julian Andrej
Sorry, I'll make it a habit to include my runtime options ;)

./fluid -dm_refine 2 -vel_petscspace_poly_tensor -vel_petscspace_order 2
-pres_petscspace_poly_tensor -pres_petscspace_order 1 -ts_monitor
-ts_max_steps 10 -ts_dt 0.01 -ts_final_time 2 -snes_monitor
-ts_fd_color

0 TS dt 0.01 time 0.
0 SNES Function norm 1.843877547310e+01 
1 SNES Function norm 1.284864233819e-02 
2 SNES Function norm 7.195271345724e-08 
1 TS dt 0.01 time 0.01 ...

works fine, for example. Running with my hand-coded Jacobian with

./fluid -dm_refine 2 -vel_petscspace_poly_tensor -vel_petscspace_order 2
-pres_petscspace_poly_tensor -pres_petscspace_order 1 -ts_monitor
-ts_max_steps 10 -ts_dt 0.01 -ts_final_time 2 -snes_monitor

0 TS dt 0.01 time 0.
0 SNES Function norm 1.843877547310e+01 
...
46 SNES Function norm 2.462286148582e-07 
   47 SNES Function norm 1.792848027835e-07 
1 TS dt 0.01 time 0.01 ...

And finally running

./fluid -dm_refine 2 -vel_petscspace_poly_tensor -vel_petscspace_order
2 -pres_petscspace_poly_tensor -pres_petscspace_order 1 -ts_monitor
-ts_max_steps 10 -ts_dt 0.01 -ts_final_time 2 -snes_type test

...
Norm of matrix ratio 0.0203667, difference 3.06481 (user-defined state)
Norm of matrix ratio 0.0203667, difference 3.06481 (constant state -1.0)
Norm of matrix ratio 0.0203667, difference 3.06481 (constant state 1.0)

Thanks again

On Mon, 14 Nov 2016 07:01:09 -0600 Matthew Knepley
<knep...@gmail.com> wrote:

> On Mon, Nov 14, 2016 at 3:24 AM, Julian Andrej <j...@tf.uni-kiel.de>
> wrote:
> 
> > I think i'm understanding the concept now. Although i have another
> > set of questions. If i am formulating a DAE (like in the linear
> > stokes equation for example) it seems i am missing something to
> > formulate the correct jacobian. The residual formulation with
> > snes_mf or snes_fd works fine in terms of nonlinear convergence. If
> > i use the hand coded jacobian the convergence is way slower, which
> > is a reason for an incorrect jacobian formulation from my side (at
> > least what i think right now). I should be able to test my jacobian
> > with -snes_type test, even if i am using TS, right? This gives me a
> > huge difference. I guess i am missing something in the formulation
> > and thus in the implementation of the callbacks here.
> >
> > There is a small example attached.
> >  
> 
> I will look at it. It would be really helpful for me if you would
> include the options you
> run with, and perhaps a little output with the report.
> 
>   Thanks,
> 
>  Matt
> 
> 
> > I appreciate your time and help.
> >
> > On Thu, Nov 10, 2016 at 4:45 PM, Matthew Knepley <knep...@gmail.com>
> > wrote:  
> > > On Thu, Nov 10, 2016 at 9:38 AM, Jed Brown <j...@jedbrown.org>
> > > wrote:  
> > >>
> > >> Matthew Knepley <knep...@gmail.com> writes:
> > >>  
> > >> > On Thu, Nov 10, 2016 at 1:31 AM, Julian Andrej
> > >> > <j...@tf.uni-kiel.de> wrote:
> > >> >  
> > >> >> I'm getting the correct solution now, thanks! There are still
> > >> >> a few open questions.
> > >> >>
> > >> >> 1. The mass term is necessary _and_ using the u_tShift value
> > >> >> if i provide the jacobian. After reading the manual countless
> > >> >> times, i still don't get what the u_tShift value tells me in
> > >> >> this context. Are there any resources which you could point
> > >> >> me to? 
> > >> >
> > >> > The manual is always a lagging resource because we are trying
> > >> > things out.  
> > >>
> > >> This has been explained in the manual similarly to your email for
> > >> several years.  If anyone has a suggestion for how to make it
> > >> better, we'd like to hear.  The typical syndrome is that once
> > >> you learn it, the description suddenly makes sense and you
> > >> wonder why you were ever confused.  It takes feedback from
> > >> people just learning the material to make the explanation more
> > >> clear. 
> > >> > I learned about this by reading examples. The idea is the
> > >> > following. We have an implicit definition of our timestepping
> > >> > method
> > >> >
> > >> >   F(u, grad u, u_t, x, t) = 0  
> > >>
> > >> It doesn't make sense to write grad u as a separate argument
> > >> here.  
> > >
> > >
> > > Sorry, that is just how I think of it, not the actual interface.
> > >  
> > >>
> 

Re: [petsc-users] PetscFE and TS

2016-11-14 Thread Julian Andrej
I think I'm understanding the concept now, although I have another set
of questions. If I am formulating a DAE (like the linear Stokes
equations, for example), it seems I am missing something when
formulating the correct Jacobian. The residual formulation with
snes_mf or snes_fd works fine in terms of nonlinear convergence. If I
use the hand-coded Jacobian, the convergence is much slower, which
points to an incorrect Jacobian formulation on my side (at least that
is what I think right now). I should be able to test my Jacobian with
-snes_type test, even if I am using TS, right? This gives me a huge
difference. I guess I am missing something in the formulation and thus
in the implementation of the callbacks here.

There is a small example attached.

I appreciate your time and help.

On Thu, Nov 10, 2016 at 4:45 PM, Matthew Knepley <knep...@gmail.com> wrote:
> On Thu, Nov 10, 2016 at 9:38 AM, Jed Brown <j...@jedbrown.org> wrote:
>>
>> Matthew Knepley <knep...@gmail.com> writes:
>>
>> > On Thu, Nov 10, 2016 at 1:31 AM, Julian Andrej <j...@tf.uni-kiel.de>
>> > wrote:
>> >
>> >> I'm getting the correct solution now, thanks! There are still a few
>> >> open questions.
>> >>
>> >> 1. The mass term is necessary _and_ using the u_tShift value if i
>> >> provide the jacobian. After reading the manual countless times, i
>> >> still don't get what the u_tShift value tells me in this context. Are
>> >> there any resources which you could point me to?
>> >>
>> >
>> > The manual is always a lagging resource because we are trying things
>> > out.
>>
>> This has been explained in the manual similarly to your email for
>> several years.  If anyone has a suggestion for how to make it better,
>> we'd like to hear.  The typical syndrome is that once you learn it, the
>> description suddenly makes sense and you wonder why you were ever
>> confused.  It takes feedback from people just learning the material to
>> make the explanation more clear.
>>
>> > I learned about this by reading examples. The idea is the
>> > following. We have an implicit definition of our timestepping method
>> >
>> >   F(u, grad u, u_t, x, t) = 0
>>
>> It doesn't make sense to write grad u as a separate argument here.
>
>
> Sorry, that is just how I think of it, not the actual interface.
>
>>
>> Also, is `x` meant to be coordinates or some other statically prescribed
>
>
> Coordinates, as distinct from u. Again this is my fault.
>
>>
>> data?  It can't be actively changing if you expect a generic TS to be
>> convergent.  (It's not an argument to the function.)  So really, you
>> have:
>>
>>   F(u, u_t, t) = 0
>>
>> > which is not unlike a Lagrangian description of a mechanical system. The
>> > Jacobian can be considered
>> > formally to have two parts,
>> >
>> >   dF/du and dF/du_t
>> >
>> > just as in the Lagrangian setting. The u_tShift variable is the
>> > multiplier
>> > for dF/du_t so that you actually
>> > form
>> >
>> >   dF/du + u_tShift dF/du_t
>>
>> We're taking the total derivative of F with respect to u where u_t has
>> been _defined_ as an affine function of u, i.e., shift*u+affine.
>> u_tShift comes from the chain rule.  When computing the total
>> derivative, we have
>>
>>   dF/du + dF/du_t du_t/du.
>>
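[A worked instance of that chain rule, assuming backward Euler with step
size dt: the method defines u_t = (u - u_n)/dt, so du_t/du = 1/dt and the
pointwise functions receive u_tShift = 1/dt. For an IFunction of the form
F(u, u_t) = M u_t + K u - f, the assembled Jacobian is
dF/du + u_tShift dF/du_t = K + (1/dt) M.]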
>>
>> >> 2. Is DMProjectFunction necessary _before_ TSSolve? This acts like an
>> >> initial condition if i'm not mistaken.
>> >>
>> >
>> > Yes, this is the initial condition.
>> >
>> >
>> >> 3. A more in depth question i guess. Where is the "mass action"
>> >> formed? As i could see by stepping through the debugger, there is just
>> >> a formed action for the residual equation. It seems way over my
>> >> understanding how this formulation works out. I'd also appreciate
>> >> resources on that if thats possible.
>> >>
>> >
>> > Jed is the only one who understands this completely.
>>
>> That's a stretch -- several other people have developed sophisticated TS
>> implementations.
>
>
> Matt does not understand this completely.
>
>>
>>
>> > However, I guess by "mass action" you mean the dF/du_t term. In the
>> > explicit methods, you just have u_t + ..., so
>> >
>> >   dF/du_t = M
>> >
>> > for finite element methods. So you are putting that there i

Re: [petsc-users] PetscFE and TS

2016-11-09 Thread Julian Andrej
I'm getting the correct solution now, thanks! There are still a few
open questions.

1. The mass term is necessary, _and_ so is using the u_tShift value, if I
provide the Jacobian. After reading the manual countless times, I
still don't get what the u_tShift value tells me in this context. Are
there any resources which you could point me to?

2. Is DMProjectFunction necessary _before_ TSSolve? This acts like an
initial condition if I'm not mistaken.

3. A more in-depth question, I guess: where is the "mass action"
formed? As far as I could see by stepping through the debugger, there is
just a formed action for the residual equation. It seems beyond my
understanding how this formulation works out. I'd also appreciate
resources on that if that's possible.

Thanks again.

Julian

On Wed, Nov 9, 2016 at 9:25 PM, Matthew Knepley <knep...@gmail.com> wrote:
> On Wed, Nov 9, 2016 at 8:14 AM, Matthew Knepley <knep...@gmail.com> wrote:
>>
>> On Wed, Nov 9, 2016 at 7:52 AM, Julian Andrej <j...@tf.uni-kiel.de> wrote:
>>>
>>> I tried the example with different signs for the forcing, so this
>>> might be a mistake by me, yes. it doesn't change anything though. I
>>> also have to correct myself, the time for the callback in
>>> PetscDSAddBoundary changes correctly! Although u_t in all other
>>> callbacks seems to remain zero.
>>>
>>> I am out of ideas on my side though :)
>>
>>
>> The bug is that the boundary conditions are not being properly enforced. I
>> am fixing it now.
>> Thanks for the simple example demonstrating this. It made it much much
>> easier to debug.
>
>
> I think this works. Let me know if you agree.
>
> There is the business of u_t on the boundary, which is not handled right
> now. I am putting that
> in, but the homogeneous values for that is still alright here.
>
>   Thanks,
>
>  Matt
>
>>
>> Matt
>>
>>>
>>> On Wed, Nov 9, 2016 at 2:33 PM, Matthew Knepley <knep...@gmail.com>
>>> wrote:
>>> > On Wed, Nov 9, 2016 at 12:32 AM, Julian Andrej <j...@tf.uni-kiel.de>
>>> > wrote:
>>> >>
>>> >> I tested a few other things and it turned out that the function added
>>> >> as a dirichlet boundary condition via PetscDSAddBoundary doesn't
>>> >> receive the time values correctly (PetscReal time is always 0.0).
>>> >
>>> >
>>> > I will check out your example and fix this. However, I wonder if there
>>> > is a
>>> > bug in your implementation:
>>> >
>>> > void f0_temp(PetscInt dim, PetscInt Nf, PetscInt NfAux,
>>> >   const PetscInt uOff[], const PetscInt uOff_x[], const
>>> > PetscScalar
>>> > u[], const PetscScalar u_t[], const PetscScalar u_x[],
>>> >   const PetscInt aOff[], const PetscInt aOff_x[], const
>>> > PetscScalar
>>> > a[], const PetscScalar a_t[], const PetscScalar a_x[],
>>> >   PetscReal t, const PetscReal x[], PetscScalar f0[])
>>> > {
>>> >   f0[0] = u_t[0] - 2.0;
>>> > }
>>> >
>>> > I get
>>> >
>>> >   du/dt - \Delta u = -2
>>> >
>>> > so that if u = x^2 + y^2 + 2t, then
>>> >
>>> >   2 - (2 + 2) = -2
>>> >
>>> > so you would want u_t + 2.
>>> >
>>> > If you give the Laplacian the opposite sign, this is an unstable
>>> > formulation.
>>> >
>>> >   Thanks,
>>> >
>>> >  Matt
>>> >
>>> >>
>>> >> I also see no effect adding DMTSSetIJacobianLocal vs. not setting the
>>> >> jacobian function. The analytic solution i used in my attached example
>>> >> is the same as in the src/ts/examples/tutorials/ex32.c.
>>> >>
>>> >> Am i doing anything obviously wrong here? My next step would be to try
>>> >> if assembling the matrices myself and writing a custom
>>> >> IFunction/IJacobian which assembles the different parts of the matrix
>>> >> like M and J for the stiff ODE with nontrivial mass matrix (see manual
>>> >> section of TS -> F = M u' - f) but i think this should be obsolete
>>> >> right?
>>> >>
>>> >> Appreciate your help.
>>> >>
>>> >> Regards
>>> >> Julian
>>> >>
>>> >> On Mon, Nov 7, 2016 at 10:49 AM, Julian Andrej <j...@tf.uni-kiel.de>

Re: [petsc-users] PetscFE and TS

2016-11-09 Thread Julian Andrej
I tried the example with different signs for the forcing, so this
might be a mistake by me, yes. It doesn't change anything though. I
also have to correct myself: the time for the callback in
PetscDSAddBoundary changes correctly! However, u_t in all other
callbacks seems to remain zero.

I am out of ideas on my side though :)

On Wed, Nov 9, 2016 at 2:33 PM, Matthew Knepley <knep...@gmail.com> wrote:
> On Wed, Nov 9, 2016 at 12:32 AM, Julian Andrej <j...@tf.uni-kiel.de> wrote:
>>
>> I tested a few other things and it turned out that the function added
>> as a dirichlet boundary condition via PetscDSAddBoundary doesn't
>> receive the time values correctly (PetscReal time is always 0.0).
>
>
> I will check out your example and fix this. However, I wonder if there is a
> bug in your implementation:
>
> void f0_temp(PetscInt dim, PetscInt Nf, PetscInt NfAux,
>   const PetscInt uOff[], const PetscInt uOff_x[], const PetscScalar
> u[], const PetscScalar u_t[], const PetscScalar u_x[],
>   const PetscInt aOff[], const PetscInt aOff_x[], const PetscScalar
> a[], const PetscScalar a_t[], const PetscScalar a_x[],
>   PetscReal t, const PetscReal x[], PetscScalar f0[])
> {
>   f0[0] = u_t[0] - 2.0;
> }
>
> I get
>
>   du/dt - \Delta u = -2
>
> so that if u = x^2 + y^2 + 2t, then
>
>   2 - (2 + 2) = -2
>
> so you would want u_t + 2.
>
> If you give the Laplacian the opposite sign, this is an unstable
> formulation.
>
>   Thanks,
>
>  Matt
>
>>
>> I also see no effect adding DMTSSetIJacobianLocal vs. not setting the
>> jacobian function. The analytic solution i used in my attached example
>> is the same as in the src/ts/examples/tutorials/ex32.c.
>>
>> Am i doing anything obviously wrong here? My next step would be to try
>> if assembling the matrices myself and writing a custom
>> IFunction/IJacobian which assembles the different parts of the matrix
>> like M and J for the stiff ODE with nontrivial mass matrix (see manual
>> section of TS -> F = M u' - f) but i think this should be obsolete
>> right?
>>
>> Appreciate your help.
>>
>> Regards
>> Julian
>>
>> On Mon, Nov 7, 2016 at 10:49 AM, Julian Andrej <j...@tf.uni-kiel.de>
>> wrote:
>> > Hello,
>> >
>> > i'm using PetscFE in combination with SNES and a hand written backward
>> > euler solver right now and like to hand that over to TS. I have been
>> > fiddling quite a while getting TS to work with PetscFE and have
>> > encountered a few unclarities.
>> >
>> > I have looked at examples which are outdated i guess
>> > (src/ts/examples/tutorials/ex32.c) and i am confused on how i have to
>> > formulate the discretization of the jacobian and the residual. It
>> > seems obvious to use the DMTSSetIFunctionLocal and
>> > DMTSSetIJacobianLocal functions because there are prepared wrappers
>> > from DMPlexTSComputeIFunctionFEM and DMPlexTSComputeIJacobianFEM.
>> >
>> > I need a Mass matrix for my problem, is that formed from the function
>> > space information or do i have to form it myself? Is there any working
>> > example which uses PetscFE and TS to form the DMTSSetIFunctionLocal
>> > and DMTSSetIJacobianLocal? (grepping the tree tells me "no"
>> > unfortunately).
>> >
>> > Regards
>> > Julian
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments
> is infinitely more interesting than any results to which their experiments
> lead.
> -- Norbert Wiener


Re: [petsc-users] PetscFE and TS

2016-11-08 Thread Julian Andrej
I tested a few other things and it turned out that the function added
as a Dirichlet boundary condition via PetscDSAddBoundary doesn't
receive the time values correctly (PetscReal time is always 0.0).

I also see no effect from adding DMTSSetIJacobianLocal vs. not setting the
Jacobian function. The analytic solution I used in my attached example
is the same as in src/ts/examples/tutorials/ex32.c.

Am I doing anything obviously wrong here? My next step would be to try
assembling the matrices myself and writing a custom
IFunction/IJacobian which assembles the different parts of the matrix,
like M and J for the stiff ODE with a nontrivial mass matrix (see the
manual section on TS -> F = M u' - f), but I think this should be
unnecessary, right?

Appreciate your help.

Regards
Julian

On Mon, Nov 7, 2016 at 10:49 AM, Julian Andrej <j...@tf.uni-kiel.de> wrote:
> Hello,
>
> i'm using PetscFE in combination with SNES and a hand written backward
> euler solver right now and like to hand that over to TS. I have been
> fiddling quite a while getting TS to work with PetscFE and have
> encountered a few unclarities.
>
> I have looked at examples which are outdated i guess
> (src/ts/examples/tutorials/ex32.c) and i am confused on how i have to
> formulate the discretization of the jacobian and the residual. It
> seems obvious to use the DMTSSetIFunctionLocal and
> DMTSSetIJacobianLocal functions because there are prepared wrappers
> from DMPlexTSComputeIFunctionFEM and DMPlexTSComputeIJacobianFEM.
>
> I need a Mass matrix for my problem, is that formed from the function
> space information or do i have to form it myself? Is there any working
> example which uses PetscFE and TS to form the DMTSSetIFunctionLocal
> and DMTSSetIJacobianLocal? (grepping the tree tells me "no"
> unfortunately).
>
> Regards
> Julian
#include <petscdm.h>
#include <petscdmplex.h>
#include <petscfe.h>
#include <petscds.h>
#include <petscts.h>

typedef struct {
  int debug;
  int dim;
  PetscBool interpolate;
  PetscReal refinementLimit;
  PetscErrorCode (**exactFuncs)(PetscInt dim, PetscReal time, const PetscReal x[], PetscInt Nf, PetscScalar *u, void *ctx);
} AppCtx;

PetscErrorCode CreateMesh(MPI_Comm, DM*, AppCtx*);
PetscErrorCode SetupDiscretization(DM, AppCtx*);

PetscErrorCode zero(PetscInt dim, PetscReal time, const PetscReal x[], PetscInt Nf, PetscScalar *u, void *ctx)
{
  u[0] = 0.0;
  return 0;
}

PetscErrorCode analytic_temp_2d(PetscInt dim, PetscReal time, const PetscReal x[], PetscInt Nf, PetscScalar *u, void *ctx)
{
  *u = 2.0*time + x[0]*x[0] + x[1]*x[1];
  /*
if (x[0] == 0) {
*u = time * 1.0;
} else {
*u = 0.0;
}
  */
  return 0;
}

void f0_temp(PetscInt dim, PetscInt Nf, PetscInt NfAux,
  const PetscInt uOff[], const PetscInt uOff_x[], const PetscScalar u[], const PetscScalar u_t[], const PetscScalar u_x[],
  const PetscInt aOff[], const PetscInt aOff_x[], const PetscScalar a[], const PetscScalar a_t[], const PetscScalar a_x[],
  PetscReal t, const PetscReal x[], PetscScalar f0[])
{
  f0[0] = u_t[0] - 2.0;
}

void f1_temp(PetscInt dim, PetscInt Nf, PetscInt NfAux,
	 const PetscInt uOff[], const PetscInt uOff_x[], const PetscScalar u[], const PetscScalar u_t[], const PetscScalar u_x[],
	 const PetscInt aOff[], const PetscInt aOff_x[], const PetscScalar a[], const PetscScalar a_t[], const PetscScalar a_x[],
	 PetscReal t, const PetscReal x[], PetscScalar f1[])
{
  PetscInt d;
  for (d = 0; d < dim; ++d) {
f1[d] = u_x[d];
  }
}

void g3_temp(PetscInt dim, PetscInt Nf, PetscInt NfAux,
	 const PetscInt uOff[], const PetscInt uOff_x[], const PetscScalar u[], const PetscScalar u_t[], const PetscScalar u_x[],
	 const PetscInt aOff[], const PetscInt aOff_x[], const PetscScalar a[], const PetscScalar a_t[], const PetscScalar a_x[],
	 PetscReal t, PetscReal u_tShift, const PetscReal x[], PetscScalar g3[])
{
  PetscInt d;
  for (d = 0; d < dim; ++d) {
g3[d*dim+d] = 1.0;
  }
}

void g0_temp(PetscInt dim, PetscInt Nf, PetscInt NfAux,
	 const PetscInt uOff[], const PetscInt uOff_x[], const PetscScalar u[], const PetscScalar u_t[], const PetscScalar u_x[],
	 const PetscInt aOff[], const PetscInt aOff_x[], const PetscScalar a[], const PetscScalar a_t[], const PetscScalar a_x[],
	 PetscReal t, PetscReal u_tShift, const PetscReal x[], PetscScalar g0[])
{
  g0[0] = 1.0;
}

#undef __FUNCT__
#define __FUNCT__ "main"
int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  AppCtx ctx;
  DM dm;
  TS ts;
  Vec u, r;
  PetscReal t = 0.0;
  PetscReal l2error = 0.0; /* L_2 error in the solution */
  
  ierr = PetscInitialize(&argc, &argv, NULL, NULL);
  if (ierr) {
return ierr;
  }

  /* Default context initialization */
  ctx.debug = 0;
  ctx.dim = 2;
  ctx.interpolate = PETSC_TRUE;
  ctx.refinementLimit = 0.0;

  ierr = CreateMesh(PETSC_COMM_WORLD, &dm, &ctx);CHKERRQ(ierr);
  ierr = DMSetApplicationContext(dm, &ctx);CHKERRQ(ierr);
  ierr = PetscMalloc

[petsc-users] PetscFE and TS

2016-11-07 Thread Julian Andrej
Hello,

I'm using PetscFE in combination with SNES and a hand-written backward
Euler solver right now and would like to hand that over to TS. I have
been fiddling quite a while getting TS to work with PetscFE and have
run into a few points that are unclear to me.

I have looked at examples which are outdated, I guess
(src/ts/examples/tutorials/ex32.c), and I am confused about how I have
to formulate the discretization of the Jacobian and the residual. It
seems obvious to use the DMTSSetIFunctionLocal and
DMTSSetIJacobianLocal functions because there are prepared wrappers,
DMPlexTSComputeIFunctionFEM and DMPlexTSComputeIJacobianFEM.

I need a mass matrix for my problem; is that formed from the function
space information or do I have to form it myself? Is there any working
example which uses PetscFE and TS through DMTSSetIFunctionLocal
and DMTSSetIJacobianLocal? (Grepping the tree tells me "no",
unfortunately.)

Regards
Julian


Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Julian Andrej
We have an implementation which models a real physical application but
we don't have the manpower to implement different preconditioning schemes
(like with fieldsplit) or to try out different time-stepping schemes
(which is very easy with TS). Specifically, we create a bunch of
operators (boundary and face integrators) which model
actuator/observer domains on a predefined mesh.

On Tue, Oct 25, 2016 at 7:19 PM, Stefano Zampini
<stefano.zamp...@gmail.com> wrote:
> I have a working conversion from HypreParCSR to PETSc MPIAIJ format.
> I could add this code to PETSc, maybe in the contrib folder. Barry, what do 
> you think?
>
>
>> We tried a similar approach to get MFEM objects to PETSc and the real
>> problem is that all other convenient functions like creating
>> gridfunctions and projections, you have to convert them every time
>> which is basically nothing more than a bad workaround.
>
> So far, my interface covers matrices and Krylov solvers (PCFieldSplit and 
> PCBDDC are explicitly supported).
>
> Can you tell me how would you like to use these objects with PETSc? What you 
> would like to achieve?
>
> So far, my work on the PETSc interface to MFEM originated from a wishlist for 
> solvers, but I could expand it.
>
>
>> Stefano Zampini
>> replied to the issue on github and was stating that there is some
>> intention to get MFEM working with PETSc but there is no specific
>> timeframe.
>>
>
> There’s currently an open pull request for PETSc solvers inside the private 
> MFEM repo.
> However, I don’t know when the code, if merged, will be released.
>
>
>> Regards
>> Julian Andrej
>>
>> On Tue, Oct 25, 2016 at 6:30 PM, Abdullah Ali Sivas
>> <abdullahasi...@gmail.com> wrote:
>>> I will check that. I am preallocating but it may be that I am not allocating
>>> big enough. I still have to figure out nuance differences between these
>>> formats to solve the bugs. I appreciate your answer and hope Satish knows
>>> it.
>>>
>>> Thank you,
>>> Abdullah Ali Sivas
>>>
>>>
>>> On 2016-10-25 12:15 PM, Matthew Knepley wrote:
>>>
>>> On Tue, Oct 25, 2016 at 10:54 AM, Abdullah Ali Sivas
>>> <abdullahasi...@gmail.com> wrote:
>>>>
>>>> Hello,
>>>>
>>>> I want to use PETSc with mfem and I know that mfem people will figure out
>>>> a way to do it in few months. But for now as a temporary solution I just
>>>> thought of converting hypre PARCSR matrices (that is what mfem uses as
>>>> linear solver package) into PETSc MPIAIJ matrices and I have a semi-working
>>>> code with some bugs. Also my code is dauntingly slow and seems like not
>>>> scaling. I have used MatHYPRE_IJMatrixCopy from myhp.c of PETSc and
>>>> hypre_ParCSRMatrixPrintIJ from par_csr_matrix.c of hypre as starting 
>>>> points.
>>>> Before starting I checked whether there was anything done similar to this, 
>>>> I
>>>> could not find anything.
>>>>
>>>> My question is, are you aware of such a conversion code (i.e. something
>>>> like hypre_ParCSRtoPETScMPIAIJ( hypre_ParCSRMatrix *matrix, Mat *A)?
>>>
>>> No, but maybe Satish knows. Slow running times most likely come from lack of
>>> preallocation for the target matrix.
>>>
>>>  Thanks,
>>>
>>> Matt
>>>>
>>>> Thanks in advance,
>>>> Abdullah Ali Sivas
>>>
>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their experiments
>>> is infinitely more interesting than any results to which their experiments
>>> lead.
>>> -- Norbert Wiener
>>>
>>>
>


Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Julian Andrej
We tried a similar approach to get MFEM objects into PETSc, and the real
problem is that for all the other convenient functions, like creating
grid functions and projections, you have to convert the objects every
time, which is basically nothing more than a bad workaround. Stefano
Zampini replied to the issue on GitHub and stated that there is some
intention to get MFEM working with PETSc, but there is no specific
timeframe.

Regards
Julian Andrej

On Tue, Oct 25, 2016 at 6:30 PM, Abdullah Ali Sivas
<abdullahasi...@gmail.com> wrote:
> I will check that. I am preallocating but it may be that I am not allocating
> big enough. I still have to figure out nuance differences between these
> formats to solve the bugs. I appreciate your answer and hope Satish knows
> it.
>
> Thank you,
> Abdullah Ali Sivas
>
>
> On 2016-10-25 12:15 PM, Matthew Knepley wrote:
>
> On Tue, Oct 25, 2016 at 10:54 AM, Abdullah Ali Sivas
> <abdullahasi...@gmail.com> wrote:
>>
>> Hello,
>>
>> I want to use PETSc with mfem and I know that mfem people will figure out
>> a way to do it in few months. But for now as a temporary solution I just
>> thought of converting hypre PARCSR matrices (that is what mfem uses as
>> linear solver package) into PETSc MPIAIJ matrices and I have a semi-working
>> code with some bugs. Also my code is dauntingly slow and seems like not
>> scaling. I have used MatHYPRE_IJMatrixCopy from myhp.c of PETSc and
>> hypre_ParCSRMatrixPrintIJ from par_csr_matrix.c of hypre as starting points.
>> Before starting I checked whether there was anything done similar to this, I
>> could not find anything.
>>
>> My question is, are you aware of such a conversion code (i.e. something
>> like hypre_ParCSRtoPETScMPIAIJ( hypre_ParCSRMatrix *matrix, Mat *A)?
>
> No, but maybe Satish knows. Slow running times most likely come from lack of
> preallocation for the target matrix.
>
>   Thanks,
>
>  Matt
>>
>> Thanks in advance,
>> Abdullah Ali Sivas
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments
> is infinitely more interesting than any results to which their experiments
> lead.
> -- Norbert Wiener
>
>
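
A minimal sketch of the preallocation Matt refers to above, assuming the
per-row nonzero counts d_nnz (diagonal block) and o_nnz (off-diagonal
block) have already been extracted from the hypre ParCSR matrix (names
here are made up):

  Mat A;
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, mlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);
  /* MatSetValues() during the copy now fills preallocated space instead of
     triggering repeated mallocs, the usual cause of very slow assembly */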


Re: [petsc-users] PetscFE questions

2016-10-21 Thread Julian Andrej
Yeah, thanks for pointing out my mistake. Next time I'm going to think
one more time before writing ;)

On Fri, Oct 21, 2016 at 12:17 PM, Lawrence Mitchell
<lawrence.mitch...@imperial.ac.uk> wrote:
>
>> On 21 Oct 2016, at 08:26, Julian Andrej <j...@tf.uni-kiel.de> wrote:
>>
>> On Thu, Oct 20, 2016 at 5:18 PM, Matthew Knepley <knep...@gmail.com> wrote:
>>> On Thu, Oct 20, 2016 at 9:42 AM, Julian Andrej <j...@tf.uni-kiel.de> wrote:
>>>>
>>>> Thanks for the suggestion. I guess DMCreateSubDM can work, but is
>>>> cumbersome to handle for the normal solution process since the mass
>>>> matrix for example is not a seperate field.
>>>
>>>
>>> I did not understand what you meant by "parts of the physics". If you just
>>> want to make a different operator, then swap out the PetscDS from the DM.
>>> That holds the pointwise functions and discretizations.
>>>
>>
>> Yes, its basically a different operator! Thats a really smart design,
>> i can just create different PetscDS objects and stick them in to
>> assemble the operator.
>>
>>  /* Assemble mass operator */
>>  DMSetDS(dm, ds_mass);
>>  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->M, ctx->M, NULL);
>>  /* Assemble laplacian operator */
>>  DMSetDS(dm, ds_laplacian);
>>  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->J, ctx->J, NULL);
>>
>> There is one thing that bothers me just a bit. Everytime you call
>> DMSetDS the old PetscDS object is destroyed and you have to reacreate
>> the object in case you want to reassemble that operator.
>>
>> src/dm/interface/dm.c:3889:  ierr = PetscDSDestroy(&dm->prob);CHKERRQ(ierr);
>
> All objects in PETSc are refcounted.  So this just drops the reference that 
> the DM is holding to the DS.  As long as you're still holding a reference in 
> your code (you haven't called PetscDSDestroy) then this does not actually 
> deallocate the DS, just decrements the refcount.
>
> Lawrence


Re: [petsc-users] PetscFE questions

2016-10-21 Thread Julian Andrej
On Thu, Oct 20, 2016 at 5:18 PM, Matthew Knepley <knep...@gmail.com> wrote:
> On Thu, Oct 20, 2016 at 9:42 AM, Julian Andrej <j...@tf.uni-kiel.de> wrote:
>>
>> Thanks for the suggestion. I guess DMCreateSubDM can work, but is
>> cumbersome to handle for the normal solution process since the mass
>> matrix for example is not a seperate field.
>
>
> I did not understand what you meant by "parts of the physics". If you just
> want to make a different operator, then swap out the PetscDS from the DM.
> That holds the pointwise functions and discretizations.
>

Yes, it's basically a different operator! That's a really smart design;
I can just create different PetscDS objects and stick them in to
assemble the operator.

  /* Assemble mass operator */
  DMSetDS(dm, ds_mass);
  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->M, ctx->M, NULL);
  /* Assemble laplacian operator */
  DMSetDS(dm, ds_laplacian);
  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->J, ctx->J, NULL);

There is one thing that bothers me just a bit. Every time you call
DMSetDS the old PetscDS object is destroyed and you have to recreate
the object in case you want to reassemble that operator.

src/dm/interface/dm.c:3889:  ierr = PetscDSDestroy(&dm->prob);CHKERRQ(ierr);

Maybe it is just my specific use case but something to think about.
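
(As Lawrence points out in the reply above, that destroy call only drops
the DM's reference. A sketch of reusing the two operators repeatedly,
assuming ds_mass and ds_laplacian were created with PetscDSCreate() and
are still held by the caller:)

  /* Assemble mass operator */
  DMSetDS(dm, ds_mass);               /* DM references ds_mass, drops its old DS */
  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->M, ctx->M, NULL);
  /* Assemble laplacian operator; ds_mass stays alive because we still hold it */
  DMSetDS(dm, ds_laplacian);
  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->J, ctx->J, NULL);
  /* Reassemble the mass operator later without recreating anything */
  DMSetDS(dm, ds_mass);
  DMPlexSNESComputeJacobianFEM(dm, dummy, ctx->M, ctx->M, NULL);
  /* Only at the very end give up our own references */
  PetscDSDestroy(&ds_mass);
  PetscDSDestroy(&ds_laplacian);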

>>
>> src/snes/examples/tutorials/ex77 handles a seperate field for the
>> nullspace, if anyone is interested in that.
>>
>> An intuitive way was just copying the DM and describing a new problem on
>> it.
>>
>>   DM dm_mass;
>>   PetscDS ds_mass;
>>   Vec dummy;
>>   PetscInt id = 1;
>>   petsc_call(DMCreateGlobalVector(dm, ));
>>   petsc_call(DMClone(ctx->dm, _mass));
>>   petsc_call(DMGetDS(dm_mass, _mass));
>>   petsc_call(PetscDSSetDiscretization(ds_mass, 0, (PetscObject)fe));
>>   petsc_call(PetscDSSetJacobian(ds_mass, 0, 0, mass_kernel, NULL, NULL,
>> NULL));
>>   petsc_call(PetscDSAddBoundary(ds_mass, PETSC_TRUE, "wall", "marker",
>> 0, 0, NULL, (void (*)())ctx->exact_funcs[0], 1, , ctx));
>>   petsc_call(DMCreateMatrix(dm_mass, >M));
>>   petsc_call(DMPlexSNESComputeJacobianFEM(dm_mass, dummy, ctx->M,
>> ctx->M, NULL));
>>
>> is this an intended way to assemble a jacobian based on a weak form?
>> The memory overhead for a DM copy isn't huge on the first sight.
>
>
> Its O(1).
>
>>
>> And a much more important question. Is there any mathematical
>> description how exactly you handle dirichlet boundary conditions here?
>
>
> Right now, you can do two things:
>
>   1) Handle it yourself
>
> or
>
>   2) eliminate particular dofs
>
> If you use 2), these dofs are eliminated from the global vector. They remain
> in the
> local vector, and boundary values are inserted before local vectors are
> passed to
> assembly routines.
>
>Matt
>

Thank you again for your help and suggestions.

Regards
Julian

>>
>> On first sight it looks like condensing the nodes only to
>> non-essential nodes and then projecting them back in the solution
>> vector. If thats teh case I don't understand how you "augment" the
>> solution with the boundary nodes.
>>
>> Regards
>> Julian
>>
>>
>> On Wed, Oct 19, 2016 at 11:51 AM, Matthew Knepley <knep...@gmail.com>
>> wrote:
>> > On Tue, Oct 18, 2016 at 7:38 AM, Julian Andrej <j...@tf.uni-kiel.de>
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> i have general question about PetscFE. When i want to assemble certain
>> >> parts of physics separately, how can i do that? I basically want to
>> >> assemble matrices/vectors from the weak forms on the same DM (and
>> >> avoid copying the DM) and use them afterwards. Is there a convenient
>> >> way for doing that?
>> >>
>> >> The "workflow" i'm approaching is something like:
>> >>
>> >> - Setup the DM
>> >> - Setup discretization (spaces and quadrature) for each weak form i
>> >> want to compute
>> >> - Compute just the weak form i want right now for a specific
>> >> discretization and field.
>> >>
>> >> The reason is i need certain parts of the "complete" Jacobian for
>> >> computations of eigenproblems and like to avoid computing those more
>> >> often than needed.
>> >
>> >
>> > The way I envision this working is to use DMCreateSubDM(). It should
>> > extract
>> > everything correctly for the subset of fields you select. However, I
>> > have
>> > not
>> > extensively tested, so if something is wrong let me know.
>> >
>> >   Thanks,
>> >
>> >  Matt
>> >
>> >>
>> >> Regards
>> >> Julian
>> >
>> >
>> >
>> >
>> > --
>> > What most experimenters take for granted before they begin their
>> > experiments
>> > is infinitely more interesting than any results to which their
>> > experiments
>> > lead.
>> > -- Norbert Wiener
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments
> is infinitely more interesting than any results to which their experiments
> lead.
> -- Norbert Wiener


Re: [petsc-users] PetscFE questions

2016-10-20 Thread Julian Andrej
Thanks for the suggestion. I guess DMCreateSubDM can work, but it is
cumbersome to handle for the normal solution process since the mass
matrix, for example, is not a separate field.
src/snes/examples/tutorials/ex77 handles a separate field for the
nullspace, if anyone is interested in that.

An intuitive way was just copying the DM and describing a new problem on it.

  DM dm_mass;
  PetscDS ds_mass;
  Vec dummy;
  PetscInt id = 1;
  petsc_call(DMCreateGlobalVector(dm, &dummy));
  petsc_call(DMClone(ctx->dm, &dm_mass));
  petsc_call(DMGetDS(dm_mass, &ds_mass));
  petsc_call(PetscDSSetDiscretization(ds_mass, 0, (PetscObject)fe));
  petsc_call(PetscDSSetJacobian(ds_mass, 0, 0, mass_kernel, NULL, NULL, NULL));
  petsc_call(PetscDSAddBoundary(ds_mass, PETSC_TRUE, "wall", "marker",
0, 0, NULL, (void (*)())ctx->exact_funcs[0], 1, &id, ctx));
  petsc_call(DMCreateMatrix(dm_mass, &ctx->M));
  petsc_call(DMPlexSNESComputeJacobianFEM(dm_mass, dummy, ctx->M,
ctx->M, NULL));

Is this an intended way to assemble a Jacobian based on a weak form?
The memory overhead of a DM copy isn't huge at first sight.

And a much more important question: is there any mathematical
description of how exactly you handle Dirichlet boundary conditions here?
At first sight it looks like condensing the system to the
non-essential nodes and then projecting them back into the solution
vector. If that's the case, I don't understand how you "augment" the
solution with the boundary nodes.

Regards
Julian


On Wed, Oct 19, 2016 at 11:51 AM, Matthew Knepley <knep...@gmail.com> wrote:
> On Tue, Oct 18, 2016 at 7:38 AM, Julian Andrej <j...@tf.uni-kiel.de> wrote:
>>
>> Hi,
>>
>> i have general question about PetscFE. When i want to assemble certain
>> parts of physics separately, how can i do that? I basically want to
>> assemble matrices/vectors from the weak forms on the same DM (and
>> avoid copying the DM) and use them afterwards. Is there a convenient
>> way for doing that?
>>
>> The "workflow" i'm approaching is something like:
>>
>> - Setup the DM
>> - Setup discretization (spaces and quadrature) for each weak form i
>> want to compute
>> - Compute just the weak form i want right now for a specific
>> discretization and field.
>>
>> The reason is i need certain parts of the "complete" Jacobian for
>> computations of eigenproblems and like to avoid computing those more
>> often than needed.
>
>
> The way I envision this working is to use DMCreateSubDM(). It should extract
> everything correctly for the subset of fields you select. However, I have
> not
> extensively tested, so if something is wrong let me know.
>
>   Thanks,
>
>  Matt
>
>>
>> Regards
>> Julian
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments
> is infinitely more interesting than any results to which their experiments
> lead.
> -- Norbert Wiener
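
For reference, a minimal sketch of the DMCreateSubDM() route Matt
suggests above (the field numbering is illustrative):

  PetscInt fields[1] = {0};  /* the field(s) whose operator is wanted */
  IS       is;
  DM       subdm;
  ierr = DMCreateSubDM(dm, 1, fields, &is, &subdm);CHKERRQ(ierr);
  /* assemble on subdm as usual; the IS maps sub indices back to the full problem */
  ierr = DMDestroy(&subdm);CHKERRQ(ierr);
  ierr = ISDestroy(&is);CHKERRQ(ierr);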


[petsc-users] PetscFE questions

2016-10-18 Thread Julian Andrej
Hi,

I have a general question about PetscFE. When I want to assemble certain
parts of the physics separately, how can I do that? I basically want to
assemble matrices/vectors from the weak forms on the same DM (and
avoid copying the DM) and use them afterwards. Is there a convenient
way of doing that?

The "workflow" I'm approaching is something like:

- Set up the DM
- Set up the discretization (spaces and quadrature) for each weak form I
want to compute
- Compute just the weak form I want right now for a specific
discretization and field.

The reason is that I need certain parts of the "complete" Jacobian for
eigenproblem computations and would like to avoid computing those more
often than needed.

Regards
Julian


Re: [petsc-users] SLEPc: Convergence Problems

2016-10-17 Thread Julian Andrej
Do you precondition your eigenvalue problem? If not, you should. Let
us know what structure your matrix has and which blocks (if there are
any) include which physics.
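
In SLEPc terms, "preconditioning" usually means wrapping the problem in a
spectral transformation so that the Krylov solver targets the interesting
part of the spectrum. A minimal sketch, assuming a generalized Hermitian
problem A x = lambda B x with the matrices already assembled (error
checking omitted):

  EPS eps;
  ST  st;
  KSP ksp;
  PC  pc;

  EPSCreate(PETSC_COMM_WORLD, &eps);
  EPSSetOperators(eps, A, B);
  EPSSetProblemType(eps, EPS_GHEP);
  EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);
  EPSSetTarget(eps, 0.0);            /* eigenvalues closest to the target */
  EPSGetST(eps, &st);
  STSetType(st, STSINVERT);          /* shift-and-invert around the target */
  STGetKSP(st, &ksp);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCLU);               /* or a PC matched to the block structure */
  EPSSetFromOptions(eps);
  EPSSolve(eps);

The same setup can be selected from the command line with something like
-st_type sinvert -eps_target 0 -st_ksp_type preonly -st_pc_type lu.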

Regards
Julian

On Mon, Oct 17, 2016 at 5:30 PM, Christopher Pierce  wrote:
> I've implemented my application using MatGetSubMatrix and the solvers
> appear to be converging correctly now, just slowly.  I assume that this
> is due to the clustering of eigenvalues inherent to the problem that I'm
> using, however.  I think that this should be enough to get me on track
> to solving problems with it.
>
> Thanks,
>
> Chris
>
>
> On 10/14/16 01:43, Christopher Pierce wrote:
>> Thank You,
>>
>> That looks like what I need to do if the highly degenerate eigenpairs
>> are my problem.  I'll try that out this week and see if that helps.
>>
>> Chris
>>
>>
>>
>>
>> On 10/13/16 20:01, Barry Smith wrote:
>>>   I would use MatGetSubMatrix() to pull out the part of the matrix you care 
>>> about and hand that matrix off to SLEPc.
>>>
>>>   Others prefer to remove the Dirichlet boundary value locations while 
>>> doing the finite element assembly, this way those locations never appear in 
>>> the matrix.
>>>
>>>The end result is the same, you have the slightly smaller matrix of 
>>> interest to compute the eigenvalues from.
>>>
>>>
>>> Barry
>>>
 On Oct 13, 2016, at 5:48 PM, Christopher Pierce  wrote:

 Hello All,

 As there isn't a SLEPc specific list, it was recommended that I bring my
 question here.  I am using SLEPc to solve a generalized eigenvalue
 problem generated as part of the Finite Element Method, but am having
 difficulty getting the diagonalizer to converge.  I am worried that the
 method used to set boundary conditions in the matrix is creating the
 problem and am looking for other people's input.

 In order to set the boundary conditions, I find the list of IDs that
 should be zero in the resulting eigenvectors and then use
 MatZeroRowsColumns to zero the rows and columns and in the matrix A
 insert a large value such as 1E10 on each diagonal element that was
 zeroed and likewise for the B matrix except with the value 1.0.  That
 way the eigenvalues resulting from those solutions are on the order of
 1E10 and are outside of the region of interest for my problem.

 When I tried to diagonal the matrices I could only get converged
 solutions from the rqcg method which I have found to not scale well with
 my problem.  When using any other method, the approximate error of the
 eigenpairs hovers around 1E00 and 1E01 until it reaches the max number
 of iterations.  Could having so many identical eigenvalues (~1,000) in
 the spectrum be causing this to happen even if they are far outside of
 the range of interest?

 Thank,

 Chris Pierce
 WPI Center for Computation Nano-Science


>>
>
>


Re: [petsc-users] SLEPc: Convergence Problems

2016-10-13 Thread Julian Andrej
See this description from Jed
http://scicomp.stackexchange.com/questions/3298/appropriate-space-for-weak-solutions-to-an-elliptical-pde-with-mixed-inhomogeneo/3300#3300.

In a simpler way, you could just scale your diagonal entries, which are
1 at the moment, by a value that is outside your range of interest, so
that those eigenvalues do not appear in the solution.
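
A sketch of the mechanism being discussed (essentially what Chris already
describes, with the diagonal value chosen well outside the wanted
spectrum; rows/nrows are the constrained dof indices and are made-up
names):

  PetscScalar alpha = 1.0e10;  /* anything far outside the spectrum of interest */
  MatZeroRowsColumns(A, nrows, rows, alpha, NULL, NULL);
  MatZeroRowsColumns(B, nrows, rows, 1.0,   NULL, NULL);
  /* the constrained dofs now contribute eigenvalues at alpha, which can be
     ignored by targeting only the region of interest */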

On Fri, Oct 14, 2016 at 2:01 AM, Barry Smith  wrote:
>
>   I would use MatGetSubMatrix() to pull out the part of the matrix you care 
> about and hand that matrix off to SLEPc.
>
>   Others prefer to remove the Dirichlet boundary value locations while doing 
> the finite element assembly, this way those locations never appear in the 
> matrix.
>
>The end result is the same, you have the slightly smaller matrix of 
> interest to compute the eigenvalues from.
>
>
> Barry
>
>> On Oct 13, 2016, at 5:48 PM, Christopher Pierce  wrote:
>>
>> Hello All,
>>
>> As there isn't a SLEPc specific list, it was recommended that I bring my
>> question here.  I am using SLEPc to solve a generalized eigenvalue
>> problem generated as part of the Finite Element Method, but am having
>> difficulty getting the diagonalizer to converge.  I am worried that the
>> method used to set boundary conditions in the matrix is creating the
>> problem and am looking for other people's input.
>>
>> In order to set the boundary conditions, I find the list of IDs that
>> should be zero in the resulting eigenvectors and then use
>> MatZeroRowsColumns to zero the rows and columns and in the matrix A
>> insert a large value such as 1E10 on each diagonal element that was
>> zeroed and likewise for the B matrix except with the value 1.0.  That
>> way the eigenvalues resulting from those solutions are on the order of
>> 1E10 and are outside of the region of interest for my problem.
>>
>> When I tried to diagonal the matrices I could only get converged
>> solutions from the rqcg method which I have found to not scale well with
>> my problem.  When using any other method, the approximate error of the
>> eigenpairs hovers around 1E00 and 1E01 until it reaches the max number
>> of iterations.  Could having so many identical eigenvalues (~1,000) in
>> the spectrum be causing this to happen even if they are far outside of
>> the range of interest?
>>
>> Thank,
>>
>> Chris Pierce
>> WPI Center for Computation Nano-Science
>>
>>
>


Re: [petsc-users] Autoconf tests

2016-10-12 Thread Julian Andrej
libMesh provides an m4 test for finding the PETSc library. You can find it here:

https://github.com/libMesh/libmesh/blob/master/m4/petsc.m4
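
A tiny configure-time probe along the lines of what Barry suggests
further down: compile it with the candidate PETSc include flags and, if
compilation succeeds, the installation is a debug build (purely
illustrative):

/* conftest.c */
#include <petscconf.h>
#if !defined(PETSC_USE_DEBUG)
#error "this PETSc build is not a debugging build"
#endif
int main(void) { return 0; }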

On Wed, Oct 12, 2016 at 3:18 PM, jeremy theler  wrote:
> I once made a quick hack, maybe you can start your dirty work from here
>
> https://bitbucket.org/wasora/wasora/src/5a88abbac1a846f2a6ed0b4e585f6f3c0fedf2e7/m4/petsc.m4?at=default=file-view-default
>
> --
> jeremy
>
> On Tue, Oct 11, 2016 at 5:31 PM Barry Smith  wrote:
>>
>>
>>You don't want to get the debug mode from PETSC_ARCH since there may
>> not be a PETSC_ARCH (for PETSc --prefix installs) or because the user did
>> not put the string in it.  You can check for the PETSC_USE_DEBUG symbol in
>> the petscconf.h file by linking a C program against  and #if
>> defined(PETSC_USE_DEBUG).
>>
>>
>>Barry
>>
>> > On Oct 11, 2016, at 3:12 PM, CLAUS HELLMUTH WARNER HETZER
>> >  wrote:
>> >
>> > Hi everybody-
>> >
>> > Figured I’d ask this here before I go reinventing the wheel.
>> >
>> > I’m writing an autoconf installer (the standard Linux configure/make
>> > package) for an acoustic wave propagation modeling package that builds 
>> > PETSc
>> > and SLEPc as part of the installation process.  I’d like to be able to test
>> > for instances of PETSc already being installed on the user’s machine and, 
>> > if
>> > possible, whether they’re the debug version.  I know I can check for the
>> > existence of the PETSC_DIR environmental variable, and parse the PETSC_ARCH
>> > variable for “debug”, and I’ll do that as a first pass, but has anybody
>> > written any M4 tests that are more reliable than those (i.e. actually
>> > attempting to link to the libraries)?  I had one user who had the libraries
>> > installed in /usr/local/bin but didn’t have the environmental variables set
>> > in their profile, so the linker was confused and it took a while to figure
>> > out what was going weird with the install.
>> >
>> > If not, I guess I’ll be putting on my Autoconf gloves and getting my
>> > hands dirty.
>> >
>> > Thanks
>> > -Claus Hetzer
>> >
>> > --
>> > Claus Hetzer
>> > Senior Research and Development Engineer
>> > National Center for Physical Acoustics
>> > The University of Mississippi
>> > 145 Hill Drive
>> > PO Box 1848
>> > University, MS 38677
>> > cl...@olemiss.edu
>> >
>> >
>> >
>> >
>> >
>>
>
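
Barry's PETSC_USE_DEBUG check boils down to compiling a tiny C program against the
petscconf.h of the installation being probed; a minimal sketch of such a test program
(the AC_COMPILE_IFELSE/m4 plumbing around it and the include path are assumptions left
to the macro author):

  /* configure-time probe (sketch): compiles only if PETSc was built with debugging */
  #include <petscconf.h>

  #if !defined(PETSC_USE_DEBUG)
  #error "PETSc was not configured with --with-debugging"
  #endif

  int main(void)
  {
    return 0;
  }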


Re: [petsc-users] Segmentation faults: Derived types

2016-08-21 Thread Julian Andrej
You need to generate the TAGS file first using

make alletags

in ${PETSC_DIR}. Then visit the tags table from within Emacs
(M-x visit-tags-table, pointing it at ${PETSC_DIR}/TAGS); after that,
M-. jumps to the definition of the symbol at point.

On Sun, Aug 21, 2016 at 12:09 PM, Santiago Ospina De Los Rios
 wrote:
> wow, what a dumb mistake. thank you very much!
>
> Since you've noticed me such error, I am taking the time to learn all of
> your advice about management and debugging code. Then I'm learning emacs and
> so... the matter is that you recommend to use PETSc tags which it's supposed
> to be in ${PETSC_DIR}/TAGS, but it seems to be changed or something because
> I wasn't able to find it.
>
> So, how can I use the tags you are referring to?
>
>
>
> 2016-08-19 14:40 GMT-05:00 Barry Smith :
>>
>>
>>Your code has a bug right at the top:
>>
>>CALL PetscInitialize(PETSC_COMM_WORLD,ierr)
>>
>> should be
>>
>>CALL PetscInitialize(PETSC_NULL_CHARACTER,ierr)
>>
>> you were just lucky previously that the stack frame was different enough
>> that it did not previously crash. Once I corrected the code the new version
>> ran without crashing. I found the bug very easily by simply running the new
>> version directly in the debugger
>>
>> lldb ./ANSIFLOW
>>
>> and seeing it crashed with a crazy stack in petscinitialize_
>>
>>   Barry
>>
>>
>> > On Aug 5, 2016, at 11:34 PM, Santiago Ospina De Los Rios
>> >  wrote:
>> >
>> > Dear Barry,
>> >
>> > I tried to build a simple code with the same things I mentioned to you
>> > on last e-mail but it worked, which is more strange to me. So I built two
>> > branches on my git code to show you the problem:
>> >
>> > git code: https://github.com/SoilRos/ANISOFLOWPACK
>> >
>> > Branch:  PETSc_debug_boolean_0
>> > The first one is a simple code which is working for what was designed.
>> > Forget sample problems, just compile and run the ANISOFLOW executable on 
>> > src
>> > folder, if there are some verbose messages then the program is working.
>> >
>> > Branch:  PETSc_debug_boolean_1
>> > The second one is just a modification of the first one adding the two
>> > booleans mentioned above in 01_Types.F90 file. I tried it in mac El Capitan
>> > and Ubuntu (with Valgrind) and PETSc 3.7.3 and 3.7.2 respectively, both 
>> > with
>> > the same segmentation fault.
>> >
>> > PD: Although I already fixed it compressing the three booleans into one
>> > integer, I think is better if we try to figure out why there is a
>> > segmentation fault because I had similar problems before.
>> > PD2: Please obviate the variable description because are pretty out of
>> > date. I'm trying to change it, so it can be confusing.
>> >
>> > Best wishes,
>> > Santiago Ospina
>> >
>> > 2016-08-05 15:54 GMT-05:00 Barry Smith :
>> >
>> > > On Aug 1, 2016, at 4:41 PM, Santiago Ospina De Los Rios
>> > >  wrote:
>> > >
>> > > Hello there,
>> > >
>> > > I'm having problems defining some variables into derived types in
>> > > Fortran. Before, I had a similar problems with an allocatable array
>> > > "PetsInt" but I solved it just doing a non-collective Petsc Vec. Today 
>> > > I'm
>> > > having troubles with "PetscBool" or "Logical":
>> > >
>> > > In a module which define the variables, I have the following:
>> > >
>> > > MODULE ANISOFLOW_Types
>> > >
>> > > IMPLICIT NONE
>> > >
>> > > #include 
>> > > #include 
>> > >
>> > > ...
>> > >
>> > >  TYPE ConductivityField
>> > >  PetscBool :: DefinedByCvtZones=.FALSE. ! It produces the segmentation fault.
>> > >  PetscBool :: DefinedByPptZones=.FALSE. ! It produces the segmentation fault.
>> > >  PetscBool :: DefinedByCell=.FALSE.
>> > >  ! Conductivity defined by zones (Local):
>> > >  Vec :: ZoneID
>> > >  TYPE(Tensor),ALLOCATABLE:: Zone(:)
>> > >  ! Conductivity defined on every cell (Local):
>> > >  Vec :: Cell
>> > >  END TYPE ConductivityField
>> > >
>> > >
>> > > TYPE SpecificStorageField
>> > > PetscBool :: DefinedByStoZones=.FALSE. ! It produces the segmentation fault.
>> > > PetscBool :: DefinedByPptZones=.FALSE. ! It produces the segmentation fault.
>> > > PetscBool :: DefinedByCell=.FALSE.
>> > > ! Specific Storage defined by zones (Local):
>> > > Vec :: ZoneID
>> > > Vec :: Zone
>> > > ! Specific Storage defined on every cell (Global).:
>> > > Vec :: Cell
>> > > END TYPE SpecificStorageField
>> > >
>> > > TYPE PropertiesField
>> > > TYPE(ConductivityField) :: Cvt
>> 

Re: [petsc-users] A question about DMPlexDistribute

2016-08-16 Thread Julian Andrej
load-netcdf --with-hdf5-dir=/home/leejearl/Install/hdf5-1.8.14
>> --download-metis=yes".
>> Is there some problem? Can you show me your command for configuring PETSc?
>>
>>
>> Thanks
>>
>> leejearl
>>
>>
>>
>>
>>
>> On 2016年08月13日 01:10, Matthew Knepley wrote:
>>
>> On Thu, Aug 11, 2016 at 8:00 PM, leejearl <leeje...@126.com> wrote:
>>>
>>> Thank you for your reply. I have attached the code, grid and the error
>>> message.
>>>
>>> cavity.c is the code file, cavity.exo is the grid, and error.dat is the
>>> error message.
>>>
>> The command is "mpirun -n 2 ./cavity".
>>
>>
>> Can you verify that you are running the master branch? I just ran this and
>> got
>>
>> DM Object: 2 MPI processes
>>   type: plex
>> DM_0x8404_0 in 2 dimensions:
>>   0-cells: 5253 5252
>>   1-cells: 10352 10350
>>   2-cells: 5298 (198) 5297 (198)
>> Labels:
>>   ghost: 2 strata of sizes (199, 400)
>>   vtk: 1 strata of sizes (4901)
>>   Cell Sets: 1 strata of sizes (5100)
>>   Face Sets: 3 strata of sizes (53, 99, 50)
>>   depth: 3 strata of sizes (5253, 10352, 5298)
>>
>>   Thanks,
>>
>>  Matt
>>
>>>
>>> On 2016年08月11日 23:29, Matthew Knepley wrote:
>>>
>>> On Thu, Aug 11, 2016 at 3:14 AM, leejearl <leeje...@126.com> wrote:
>>>>
>>>> Hi,
>>>> Thank you for your reply. It help me very much.
>>>> But, for "/petsc-3.7.2/src/ts/examples/tutorials/ex11.c", when I set
>>>> the overlap to 2 levels, the command is
>>>> "mpirun -n 3 ./ex11 -f annulus-20.exo -ufv_mesh_overlap 2 -physics sw",
>>>> it suffers an error.
>>>> It seems to me that setting overlap to 2 is very common. Are there
>>>> issues that I have not take into consideration?
>>>> Any help are appreciated.
>>>
>>> I will check this out. I have not tested an overlap of 2 here since I
>>> generally use nearest neighbor FV methods for
>>> unstructured stuff. I have test examples that run fine for overlap > 1.
>>> Can you send the entire error message?
>>>
>>> If the error is not in the distribution, but rather in the analytics,
>>> that is understandable because this example is only
>>> intended to be run using a nearest neighbor FV method, and thus might be
>>> confused if we give it two layers of ghost
>>> cells.
>>>
>>>Matt
>>>
>>>>
>>>>
>>>> leejearl
>>>>
>>>>
>>>> On 2016年08月11日 14:57, Julian Andrej wrote:
>>>>
>>>> Hi,
>>>>
>>>> take a look at slide 10 of [1], there is visually explained what the
>>>> overlap between partitions is.
>>>>
>>>> [1]
>>>> https://www.archer.ac.uk/training/virtual/files/2015/06-PETSc/slides.pdf
>>>>
>>>> On Thu, Aug 11, 2016 at 8:48 AM, leejearl <leeje...@126.com> wrote:
>>>>>
>>>>> Hi, all:
>>>>> I want to use PETSc to build my FVM code. Now, I have a question
>>>>> about
>>>>> the function  DMPlexDistribute(DM dm, PetscInt overlap, PetscSF *sf, DM
>>>>> *dmOverlap) .
>>>>>
>>>>> In the example "/petsc-3.7.2/src/ts/examples/tutorials/ex11.c".
>>>>> When I set the overlap
>>>>> as 0 or 1, it works well. But, if I set the overlap as 2, it suffers a
>>>>> problem.
>>>>> I am confused about the value of overlap. Can it be set as 2? What
>>>>> is the meaning of
>>>>> the parameter overlap?
>>>>> Any helps are appreciated!
>>>>>
>>>>> leejearl
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>>
>>
>>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments
> is infinitely more interesting than any results to which their experiments
> lead.
> -- Norbert Wiener
>
>


Re: [petsc-users] A question about DMPlexDistribute

2016-08-11 Thread Julian Andrej
Hi,

take a look at slide 10 of [1], there is visually explained what the
overlap between partitions is.

[1] https://www.archer.ac.uk/training/virtual/files/2015/06-PETSc/slides.pdf
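
In code, the overlap is just the second argument of the distribution call; a minimal
sketch (PETSc 3.7-era C API, assuming dm is a serial DMPlex built elsewhere):

  DM      dmDist = NULL;
  PetscSF sf     = NULL;

  /* overlap = 1: one layer of ghost cells shared between neighbouring partitions */
  DMPlexDistribute(dm, 1, &sf, &dmDist);
  if (dmDist) {              /* dmDist is NULL if nothing was distributed */
    DMDestroy(&dm);
    dm = dmDist;
  }
  PetscSFDestroy(&sf);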

On Thu, Aug 11, 2016 at 8:48 AM, leejearl  wrote:

> Hi, all:
> I want to use PETSc to build my FVM code. Now, I have a question about
> the function  DMPlexDistribute(DM dm, PetscInt overlap, PetscSF *sf, DM
> *dmOverlap) .
>
> In the example "/petsc-3.7.2/src/ts/examples/tutorials/ex11.c". When
> I set the overlap
> as 0 or 1, it works well. But, if I set the overlap as 2, it suffers a
> problem.
> I am confused about the value of overlap. Can it be set as 2? What is
> the meaning of
> the parameter overlap?
> Any helps are appreciated!
>
> leejearl
>
>
>
>


Re: [petsc-users] [RFC] Docs: TeX -> HTML

2016-07-23 Thread Julian Andrej
Small suggestion (it also came up at the Meeting)

What is the opinion on a "main" documentation in markdown/restructured
text or something like that? The conversion from one of these formats
into pdf or any other format like html is handled by a variety of
tools pretty well.

On Sat, Jul 23, 2016 at 8:23 PM, Barry Smith  wrote:
>
>Marco,
>
> Every rendering I've seen of nontrivial latex documents to HTML looks dang 
> ugly in HTML and is extra work to maintain (despite the poor quality). We've 
> tried a couple of times with PETSc to keep an HTML version going and gave up 
> both times.
>
> I don't like the idea of having two copies of the same thing, we'd never 
> keep them in sync nor do I like the idea of ugly HTML pages.
>
> The one drawback of just having a PDF manual IMHO is that we cannot 
> currently link directly to bookmarks inside the users manual from, say, a 
> manual page html file. (Bookmarks inside the manual.pdf to other places 
> inside the manual.pdf do work fine). The solution seems to be to use Adobe 
> #nameddest=destination instead of bookmarks. These can be added in latex with 
> \hypertarget{} for example \hypertarget{ch_performance} and then in the 
> browser 
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf#nameddest=ch_performance
>  will jump to the correct place. This works with the current chrome but does 
> not work with the current Apple Safari (arg) and if you google nameddest 
> doesn't work you find that often browsers seem to have this broken. If it 
> wasn't so broken I would have (automated) adding all the hypertargets to the 
> manual and augmented all the manual pages to have links to them. Still 
> tempted but it seems they won't work except with Chrome (maybe firefox if 
> properly configured). The problem of badly supported #nameddest goes back 10 
> years
>
>
>   Barry
>
>
>
>
>
>
>> On Jul 23, 2016, at 4:25 AM, Marco Zocca  wrote:
>>
>> Dear all,
>>
>>  following the discussion at PETSc'16, I have tried to render the
>> TeX-based manual into HTML with latex2html [1] and pandoc [2] .
>>
>> Neither attempt was successful, because of the presence of certain
>> external TeX packages used for rendering various custom aspects of the
>> manual.
>>
>> There is no 1:1 way of converting such a document. However there are a
>> number of templates for rendering static websites that use LaTeX math
>> and verbatim source code (e.g. readthedocs [3] for manual-type
>> documents, which also supports MathJax [4] and re-renders at every
>> repository push).
>>
>> At any rate, the conversion requires copying blocks of text and code
>> to the web-based version, i.e. removing all the LaTeX markup,
>> therefore effectively committing to maintaining 2 versions of the
>> manual up to date and in sync with each other.
>>
>>
>> Before committing to any approach, I would like your input on this:
>>
>> 1) Do you have any preference for web rendering/site hosting solution?
>>
>> 2) Are you OK with the idea of essentially forking the manual into PDF
>> output and web output ? It is not huge work (an afternoon of tweaking
>> initially and a couple minutes at every new release) but we should be
>> sure about the approach in the first place.
>>
>> Any and all feedback is welcome;
>>
>> Thank you and kind regards,
>> Marco
>>
>>
>> [1] https://www.ctan.org/tex-archive/support/latex2html/
>> [2] http://pandoc.org/
>> [3] https://readthedocs.org/
>> [4] http://mathjax.readthedocs.io/en/latest/tex.html
>


Re: [petsc-users] Mass matrix with PetscFE

2016-07-21 Thread Julian Andrej
Hey,

yes, this issue was resolved a few weeks after the mail. I just tried
again after some DMPlex commits ;).

Thanks!

On Thu, Jul 21, 2016 at 3:04 PM, Matthew Knepley <knep...@gmail.com> wrote:
> On Mon, Mar 7, 2016 at 6:21 PM, Julian Andrej <j...@tf.uni-kiel.de> wrote:
>>
>> Any news about this? I've seen you merged the dmforest branch into next.
>
>
> I am going through my mail, and see that this might have been dropped. Has
> your
> problem been solved?
>
>   Sorry for the delay,
>
>  Matt
>
>>
>>
>> On 2016-02-26 01:22, Matthew Knepley wrote:
>>>
>>> I am sorry about the delay. I have your example working but it exposed
>>> a bug in Plex so I need to push the fix first. I should have
>>> everything for you early next week.
>>>
>>>   Thanks
>>>
>>>  Matt
>>>
>>> On Feb 25, 2016 2:04 AM, "Julian Andrej" <j...@tf.uni-kiel.de> wrote:
>>>
>>>> After a bit of rethinking the problem, the discrepancy between the
>>>> size of matrix A and the mass matrix M arises because of the
>>>> Dirichlet boundary conditions. So why aren't the BCs imposed on
>>>> the mass matrix? Do I need to handle Dirichlet BCs differently in
>>>> this context (like zeroing rows and putting one on the diagonal)?
>>>>
>>>> On 24.02.2016 20:54, juan wrote:
>>>> I attached another example which creates the correct mass matrix
>>>> but also overwrites the DM for the SNES solve. Somehow i cannot
>>>> manage
>>>> to really copy the DM to dm_mass and use that. If i try to do that
>>>> with
>>>> DMClone(dm, &dm_mass) i get a smaller mass matrix (which is not of
>>>> size A).
>>>>
>>>> Maybe this helps in the discussion.
>>>>
>>>> Relevant code starts at line 455.
>>>>
>>>> On 2016-02-24 15:03, Julian Andrej wrote:
>>>> Thanks Matt,
>>>>
>>>> I attached the modified example.
>>>>
>>>> the corresponding code (and only changes to ex12) is starting at
>>>> line
>>>> 832.
>>>>
>>>> It also seems that the mass matrix is of size 169x169 and the
>>>> stiffness matrix is of dimension 225x225. I'd assume that if i
>>>> multiply test and trial function i'd get a matrix of same size (if
>>>> the
>>>> space/quadrature is the same for the stiffness matrix)
>>>>
>>>> On 24.02.2016 14:56, Matthew Knepley wrote:
>>>> On Wed, Feb 24, 2016 at 7:47 AM, Julian Andrej <j...@tf.uni-kiel.de
>>>> <mailto:j...@tf.uni-kiel.de>> wrote:
>>>>
>>>> I'm now using the petsc git master branch.
>>>>
>>>> I tried adding my code to the ex12
>>>>
>>>> DM dm_mass;
>>>> PetscDS prob_mass;
>>>> PetscFE fe;
>>>> Mat M;
>>>> PetscFECreateDefault(dm, user.dim, 1, PETSC_TRUE, NULL, -1, &fe);
>>>>
>>>> DMClone(dm, &dm_mass);
>>>> DMGetDS(dm_mass, &prob_mass);
>>>> PetscDSSetDiscretization(prob_mass, 0, (PetscObject) fe);
>>>> PetscDSSetJacobian(prob_mass, 0, 0, mass_kernel, NULL, NULL, NULL);
>>>> DMCreateMatrix(dm_mass, &M);
>>>>
>>>> MatSetOptionsPrefix(M, "M_");
>>>>
>>>> and receive the error on running
>>>> ./exe -interpolate -refinement_limit 0.0125 -petscspace_order 2
>>>> -M_mat_view binary
>>>>
>>>> WARNING! There are options you set that were not used!
>>>> WARNING! could be spelling mistake, etc!
>>>> Option left: name:-M_mat_view value: binary
>>>>
>>>> I don't know if the matrix is actually there and assembled or if
>>>> the
>>>> option is omitted because something is wrong.
>>>>
>>>> It's difficult to know when I cannot see the whole code. You can
>>>> always
>>>> insert
>>>>
>>>> MatViewFromOptions(M, NULL, "-mat_view");
>>>>
>>>> Using
>>>> MatView(M, PETSC_VIEWER_STDOUT_WORLD);
>>>>
>>>> gives me a reasonable output to stdout.
>>>>
>>>> Good.
>>>>
>>>> But saving the matrix and analysing it in matlab, results in an
>>>> all
>>>> zero matrix.
>>>>
>>>> PetscViewerBinaryOpen(PETSC_COMM_WORLD, "Mou

Re: [petsc-users] Mass matrix with PetscFE

2016-03-07 Thread Julian Andrej

Any news about this? I've seen you merged the dmforest branch into next.

On 2016-02-26 01:22, Matthew Knepley wrote:

I am sorry about the delay. I have your example working but it exposed
a bug in Plex so I need to push the fix first. I should have
everything for you early next week.

  Thanks

 Matt

On Feb 25, 2016 2:04 AM, "Julian Andrej" <j...@tf.uni-kiel.de> wrote:


After a bit of rethinking the problem, the discrepancy between the
size of matrix A and the mass matrix M arises because of the
Dirichlet boundary conditions. So why aren't the BCs imposed on
the mass matrix? Do I need to handle Dirichlet BCs differently in
this context (like zeroing rows and putting one on the diagonal)?

On 24.02.2016 20:54, juan wrote:
I attached another example which creates the correct mass matrix
but also overwrites the DM for the SNES solve. Somehow i cannot
manage
to really copy the DM to dm_mass and use that. If i try to do that
with
DMClone(dm, &dm_mass) i get a smaller mass matrix (which is not of
size A).

Maybe this helps in the discussion.

Relevant code starts at line 455.

On 2016-02-24 15:03, Julian Andrej wrote:
Thanks Matt,

I attached the modified example.

the corresponding code (and only changes to ex12) is starting at
line
832.

It also seems that the mass matrix is of size 169x169 and the
stiffness matrix is of dimension 225x225. I'd assume that if i
multiply test and trial function i'd get a matrix of same size (if
the
space/quadrature is the same for the stiffness matrix)

On 24.02.2016 14:56, Matthew Knepley wrote:
On Wed, Feb 24, 2016 at 7:47 AM, Julian Andrej <j...@tf.uni-kiel.de
<mailto:j...@tf.uni-kiel.de>> wrote:

I'm now using the petsc git master branch.

I tried adding my code to the ex12

DM dm_mass;
PetscDS prob_mass;
PetscFE fe;
Mat M;
PetscFECreateDefault(dm, user.dim, 1, PETSC_TRUE, NULL, -1, &fe);

DMClone(dm, &dm_mass);
DMGetDS(dm_mass, &prob_mass);
PetscDSSetDiscretization(prob_mass, 0, (PetscObject) fe);
PetscDSSetJacobian(prob_mass, 0, 0, mass_kernel, NULL, NULL, NULL);
DMCreateMatrix(dm_mass, &M);

MatSetOptionsPrefix(M, "M_");

and receive the error on running
./exe -interpolate -refinement_limit 0.0125 -petscspace_order 2
-M_mat_view binary

WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
Option left: name:-M_mat_view value: binary

I don't know if the matrix is actually there and assembled or if
the
option is omitted because something is wrong.

It's difficult to know when I cannot see the whole code. You can
always
insert

MatViewFromOptions(M, NULL, "-mat_view");

Using
MatView(M, PETSC_VIEWER_STDOUT_WORLD);

gives me a reasonable output to stdout.

Good.

But saving the matrix and analysing it in matlab, results in an
all
zero matrix.

PetscViewerBinaryOpen(PETSC_COMM_WORLD, "Mout", FILE_MODE_WRITE, &viewer);
MatView(M, viewer);

I cannot explain this, but it has to be something like you are
viewing
the matrix before it is
actually assembled. Feel free to send the code. It sounds like it is
mostly working.

Matt

Any hints?

On 24.02.2016 13:58, Matthew Knepley
wrote:

On Wed, Feb 24, 2016 at 6:47 AM, Julian Andrej
<j...@tf.uni-kiel.de <mailto:j...@tf.uni-kiel.de>
<mailto:j...@tf.uni-kiel.de <mailto:j...@tf.uni-kiel.de>>>
wrote:

Hi,

i'm trying to assemble a mass matrix with the
PetscFE/DMPlex
interface. I found something in the examples of TAO



https://bitbucket.org/petsc/petsc/src/da8116b0e8d067e39fd79740a8a864b0fe207998/src/tao/examples/tutorials/ex3.c?at=master=file-view-default


but using the lines

DMClone(dm, &dm_mass);
DMSetNumFields(dm_mass, 1);
DMPlexCopyCoordinates(dm, dm_mass);
DMGetDS(dm_mass, &prob_mass);
PetscDSSetJacobian(prob_mass, 0, 0, mass_kernel, NULL, NULL, NULL);
PetscDSSetDiscretization(prob_mass, 0, (PetscObject) fe);
DMPlexSNESComputeJacobianFEM(dm_mass, u, M, M, NULL);
DMCreateMatrix(dm_mass, &M);

leads to errors in DMPlexSNESComputeJacobianFEM (u is a
global vector).

I can't quite understand the necessary commands until
DMPlexSNESComputeJacobianFEM. What does it do and why
is it
necessary? (especially why does the naming involve
SNES?)

Is there another/easier/better way to create a mass
matrix (the
inner product of the function space and the test
space)?

1) That example needs updating. First, look at SNES ex12
which
is up to
date.

2) I assume you are using 3.6. If you use the development
version, you
can remove DMPlexCopyCoordinates().

3) You need to create the matrix BEFORE calling the assembly

4) Always always always send the entire error message

Matt

Regards
Julian Andrej

--
What most experimenters take for granted before they begin
their
experiments is infinitely more interesting than any results
to which
their experiments lead.
-- Norbert Wiener

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

Re: [petsc-users] Mass matrix with PetscFE

2016-02-25 Thread Julian Andrej
After a bit of rethinking the problem, the discrepancy between the size 
of matrix A and the mass matrix M arises because of the Dirichlet 
boundary conditions. So why aren't the BCs imposed on the mass 
matrix? Do I need to handle Dirichlet BCs differently in this context 
(like zeroing rows and putting one on the diagonal)?
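
One common way to do exactly that, sketched in C (bcIS, an IS holding the Dirichlet
dof indices, is assumed to exist; whether this is what ex12 itself ends up doing is a
separate question):

  /* zero the constrained rows and columns of the mass matrix and put 1.0 on the
     diagonal, so M keeps the same size and the same constrained dofs as A */
  MatZeroRowsColumnsIS(M, bcIS, 1.0, NULL, NULL);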


On 24.02.2016 20:54, juan wrote:

I attached another example which creates the correct mass matrix
but also overwrites the DM for the SNES solve. Somehow i cannot manage
to really copy the DM to dm_mass and use that. If i try to do that with
DMClone(dm, &dm_mass) i get a smaller mass matrix (which is not of size A).

Maybe this helps in the discussion.

Relevant code starts at line 455.

On 2016-02-24 15:03, Julian Andrej wrote:

Thanks Matt,

I attached the modified example.

the corresponding code (and only changes to ex12) is starting at line
832.

It also seems that the mass matrix is of size 169x169 and the
stiffness matrix is of dimension 225x225. I'd assume that if i
multiply test and trial function i'd get a matrix of same size (if the
space/quadrature is the same for the stiffness matrix)

On 24.02.2016 14:56, Matthew Knepley wrote:

On Wed, Feb 24, 2016 at 7:47 AM, Julian Andrej <j...@tf.uni-kiel.de
<mailto:j...@tf.uni-kiel.de>> wrote:

I'm now using the petsc git master branch.

I tried adding my code to the ex12

   DM dm_mass;
   PetscDS prob_mass;
   PetscFE fe;
   Mat M;
   PetscFECreateDefault(dm, user.dim, 1, PETSC_TRUE, NULL, -1, &fe);

   DMClone(dm, &dm_mass);
   DMGetDS(dm_mass, &prob_mass);
   PetscDSSetDiscretization(prob_mass, 0, (PetscObject) fe);
   PetscDSSetJacobian(prob_mass, 0, 0, mass_kernel, NULL, NULL, NULL);
   DMCreateMatrix(dm_mass, &M);

   MatSetOptionsPrefix(M, "M_");

and receive the error on running
./exe -interpolate -refinement_limit 0.0125 -petscspace_order 2
-M_mat_view binary

WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
Option left: name:-M_mat_view value: binary

I don't know if the matrix is actually there and assembled or if the
option is omitted because something is wrong.


It's difficult to know when I cannot see the whole code. You can always
insert

   MatViewFromOptions(M, NULL, "-mat_view");

Using
MatView(M, PETSC_VIEWER_STDOUT_WORLD);

gives me a reasonable output to stdout.


Good.

But saving the matrix and analysing it in matlab, results in an all
zero matrix.

PetscViewerBinaryOpen(PETSC_COMM_WORLD, "Mout", FILE_MODE_WRITE, &viewer);
MatView(M, viewer);


I cannot explain this, but it has to be something like you are viewing
the matrix before it is
actually assembled. Feel free to send the code. It sounds like it is
mostly working.

   Matt

Any hints?


On 24.02.2016 13:58, Matthew Knepley wrote:

    On Wed, Feb 24, 2016 at 6:47 AM, Julian Andrej
<j...@tf.uni-kiel.de <mailto:j...@tf.uni-kiel.de>
<mailto:j...@tf.uni-kiel.de <mailto:j...@tf.uni-kiel.de>>>
wrote:

 Hi,

 i'm trying to assemble a mass matrix with the
PetscFE/DMPlex
 interface. I found something in the examples of TAO

https://bitbucket.org/petsc/petsc/src/da8116b0e8d067e39fd79740a8a864b0fe207998/src/tao/examples/tutorials/ex3.c?at=master=file-view-default


 but using the lines

 DMClone(dm, &dm_mass);
 DMSetNumFields(dm_mass, 1);
 DMPlexCopyCoordinates(dm, dm_mass);
 DMGetDS(dm_mass, &prob_mass);
 PetscDSSetJacobian(prob_mass, 0, 0, mass_kernel, NULL, NULL, NULL);
 PetscDSSetDiscretization(prob_mass, 0, (PetscObject) fe);
 DMPlexSNESComputeJacobianFEM(dm_mass, u, M, M, NULL);
 DMCreateMatrix(dm_mass, &M);

 leads to errors in DMPlexSNESComputeJacobianFEM (u is a
global vector).

 I can't quite understand the necessary commands until
 DMPlexSNESComputeJacobianFEM. What does it do and why is it
 necessary? (especially why does the naming involve SNES?)

 Is there another/easier/better way to create a mass
matrix (the
 inner product of the function space and the test space)?


1) That example needs updating. First, look at SNES ex12 which
is up to
date.

2) I assume you are using 3.6. If you use the development
version, you
can remove DMPlexCopyCoordinates().

3) You need to create the matrix BEFORE calling the assembly

4) You need to create the matrix BEFORE calling the assembly; and always send the entire error message

Matt

 Regards
 Julian Andrej




--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results
to which
their experiments lead.
-- Norbert Wiener

Re: [petsc-users] Mass matrix with PetscFE

2016-02-24 Thread Julian Andrej

I'm now using the petsc git master branch.

I tried adding my code to the ex12

  DM dm_mass;
  PetscDS prob_mass;
  PetscFE fe;
  Mat M;
  PetscFECreateDefault(dm, user.dim, 1, PETSC_TRUE, NULL, -1, &fe);

  DMClone(dm, &dm_mass);
  DMGetDS(dm_mass, &prob_mass);
  PetscDSSetDiscretization(prob_mass, 0, (PetscObject) fe);
  PetscDSSetJacobian(prob_mass, 0, 0, mass_kernel, NULL, NULL, NULL);
  DMCreateMatrix(dm_mass, &M);

  MatSetOptionsPrefix(M, "M_");

and receive the error on running
./exe -interpolate -refinement_limit 0.0125 -petscspace_order 2 
-M_mat_view binary


WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
Option left: name:-M_mat_view value: binary

I don't know if the matrix is actually there and assembled or if the 
option is omitted because something is wrong.


Using
MatView(M, PETSC_VIEWER_STDOUT_WORLD);

gives me a reasonable output to stdout.

But saving the matrix and analysing it in matlab, results in an all zero 
matrix.


PetscViewerBinaryOpen(PETSC_COMM_WORLD, "Mout", FILE_MODE_WRITE, &viewer);
MatView(M, viewer);

Any hints?


On 24.02.2016 13:58, Matthew Knepley wrote:

On Wed, Feb 24, 2016 at 6:47 AM, Julian Andrej <j...@tf.uni-kiel.de
<mailto:j...@tf.uni-kiel.de>> wrote:

Hi,

i'm trying to assemble a mass matrix with the PetscFE/DMPlex
interface. I found something in the examples of TAO


https://bitbucket.org/petsc/petsc/src/da8116b0e8d067e39fd79740a8a864b0fe207998/src/tao/examples/tutorials/ex3.c?at=master=file-view-default

but using the lines

DMClone(dm, &dm_mass);
DMSetNumFields(dm_mass, 1);
DMPlexCopyCoordinates(dm, dm_mass);
DMGetDS(dm_mass, &prob_mass);
PetscDSSetJacobian(prob_mass, 0, 0, mass_kernel, NULL, NULL, NULL);
PetscDSSetDiscretization(prob_mass, 0, (PetscObject) fe);
DMPlexSNESComputeJacobianFEM(dm_mass, u, M, M, NULL);
DMCreateMatrix(dm_mass, &M);

leads to errors in DMPlexSNESComputeJacobianFEM (u is a global vector).

I can't quite understand the necessary commands until
DMPlexSNESComputeJacobianFEM. What does it do and why is it
necessary? (especially why does the naming involve SNES?)

Is there another/easier/better way to create a mass matrix (the
inner product of the function space and the test space)?


1) That example needs updating. First, look at SNES ex12 which is up to
date.

2) I assume you are using 3.6. If you use the development version, you
can remove DMPlexCopyCoordinates().

3) You need to create the matrix BEFORE calling the assembly

4) Always always always send the entire error message

   Matt

Regards
Julian Andrej




--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener


[petsc-users] Mass matrix with PetscFE

2016-02-24 Thread Julian Andrej

Hi,

i'm trying to assemble a mass matrix with the PetscFE/DMPlex interface. 
I found something in the examples of TAO


https://bitbucket.org/petsc/petsc/src/da8116b0e8d067e39fd79740a8a864b0fe207998/src/tao/examples/tutorials/ex3.c?at=master=file-view-default

but using the lines

DMClone(dm, &dm_mass);
DMSetNumFields(dm_mass, 1);
DMPlexCopyCoordinates(dm, dm_mass);
DMGetDS(dm_mass, &prob_mass);
PetscDSSetJacobian(prob_mass, 0, 0, mass_kernel, NULL, NULL, NULL);
PetscDSSetDiscretization(prob_mass, 0, (PetscObject) fe);
DMPlexSNESComputeJacobianFEM(dm_mass, u, M, M, NULL);
DMCreateMatrix(dm_mass, &M);

leads to errors in DMPlexSNESComputeJacobianFEM (u is a global vector).

I can't quite understand the necessary commands until 
DMPlexSNESComputeJacobianFEM. What does it do and why is it necessary? 
(especially why does the naming involve SNES?)


Is there another/easier/better way to create a mass matrix (the inner 
product of the function space and the test space)?


Regards
Julian Andrej
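
Putting together the advice from the replies above (clone the DM, set the
discretization and the mass kernel, create the matrix before assembling), the
sequence might look roughly like this; a sketch against the 3.6/3.7-era API, with
dm, fe, mass_kernel and the global vector u assumed to exist:

  DM      dm_mass;
  PetscDS prob_mass;
  Mat     M;

  DMClone(dm, &dm_mass);
  DMGetDS(dm_mass, &prob_mass);
  PetscDSSetDiscretization(prob_mass, 0, (PetscObject) fe);
  PetscDSSetJacobian(prob_mass, 0, 0, mass_kernel, NULL, NULL, NULL);
  DMCreateMatrix(dm_mass, &M);                     /* create the matrix BEFORE assembling */
  DMPlexSNESComputeJacobianFEM(dm_mass, u, M, M, NULL);
  MatViewFromOptions(M, NULL, "-mass_mat_view");   /* inspect with -mass_mat_view [binary] */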


Re: [petsc-users] Parallel ODE solver

2015-10-20 Thread Julian Andrej
There are a variety of algorithms for ODE/DAEs in the PetscTS
implementation. For examples see e.g.
http://www.mcs.anl.gov/petsc/petsc-current/src/ts/examples/tutorials/index.html

Sundials is also capable of parallel solving, even from the Matlab
sundialsTB interface, as far as I know.
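
A minimal TS sketch in C, just to show the moving parts (the toy right-hand side
du/dt = -u and the vector size are made up for illustration, not taken from the
question):

  #include <petscts.h>

  /* F(t,u) = -u, i.e. du/dt = -u */
  static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
  {
    VecCopy(U, F);
    VecScale(F, -1.0);
    return 0;
  }

  int main(int argc, char **argv)
  {
    TS  ts;
    Vec u;

    PetscInitialize(&argc, &argv, NULL, NULL);
    VecCreate(PETSC_COMM_WORLD, &u);
    VecSetSizes(u, PETSC_DECIDE, 5);            /* unknowns distributed over the ranks */
    VecSetFromOptions(u);
    VecSet(u, 1.0);                             /* initial condition */

    TSCreate(PETSC_COMM_WORLD, &ts);
    TSSetProblemType(ts, TS_NONLINEAR);
    TSSetRHSFunction(ts, NULL, RHSFunction, NULL);
    TSSetType(ts, TSRK);                        /* or pick an integrator with -ts_type */
    TSSetDuration(ts, 1000, 1.0);               /* max steps, final time (3.7-era call) */
    TSSetExactFinalTime(ts, TS_EXACTFINALTIME_STEPOVER);
    TSSetFromOptions(ts);
    TSSolve(ts, u);

    VecDestroy(&u);
    TSDestroy(&ts);
    PetscFinalize();
    return 0;
  }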

On Tue, Oct 20, 2015 at 9:10 AM, Sahai, Amal  wrote:
> I am trying to solve a very large system of ODEs for modeling the
> time-varying composition of a homogeneous (no spatial variation) reaction
> mixture composed of 5 species. I was wondering if there is parallel ode
> solver included in PETSc? I have been using the sundials package for solving
> a smaller system in serial and would like to move on to parallel framework
> to speed up my calculations.
>
> Regards
> Amal


Re: [petsc-users] working-with-submatrix

2015-10-07 Thread Julian Andrej
Especially in eigenproblems, I would suggest using Matthew's approach, or
applying Dirichlet boundary conditions via the penalty method (penalized
robin for example). This would avoid spurious eigenvalues which occur when
you use MatZeroRows().
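
For reference, Matt's option (3) in the quoted reply below (pulling out the submatrix
of free dofs) might look like this; a sketch, assuming isFree is an IS with the
indices of the unconstrained dofs (in later PETSc releases the call is named
MatCreateSubMatrix):

  IS  isFree;        /* indices of the non-Dirichlet dofs, built elsewhere */
  Mat Ared, Bred;

  MatGetSubMatrix(A, isFree, isFree, MAT_INITIAL_MATRIX, &Ared);
  MatGetSubMatrix(B, isFree, isFree, MAT_INITIAL_MATRIX, &Bred);
  /* hand the reduced pair to SLEPc: EPSSetOperators(eps, Ared, Bred); */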

On Wed, Oct 7, 2015 at 2:17 PM, Matthew Knepley  wrote:

> On Wed, Oct 7, 2015 at 7:08 AM, Giuntoli Guido Alfredo <
> ggiunt...@tecna.com> wrote:
>
>> Hi! I have a problem of the form Ax=kBx were x is a scalar variable that
>> is 0 in the border of a domain (x_D=0).
>>
>> Using FEM the problem that you have to solve is a reduced problem were
>> you don’t have to include the coefficients of the matrix that multiply the
>> x_Ds. So, I have two alternatives here, one is to construct a reduce matrix
>> A and B from the beginning… this could result in a slow code. The other
>> alternative I want to try is to use a sub matrix of A and B and pass them
>> to slepc. How can I do this if I have Mat A and Mat B?
>>
>
> 1) Can't you just use MatZeroRows() or MatZeroRowsColumns() to apply your
> BC?
>
> 2) I am not sure why you think eliminating variables will be slower. My
> experiments and theoretical understanding say it is at least as fast as
> zeroing the rows.
>
> 3) If you want to extract a submatrix, you can use MatGetSubmatrix().
>
>   Thanks,
>
>  Matt
>
>
>>
>>
>> Thank you, Guido!
>>
>> --
>> [image: Argentina Oil & Gas Expo]  *¡Venga a
>> visitarnos!* *en Argentina Oil & Gas Expo*
>> 5 – 8 Octubre, 2015 La Rural Predio Ferial
>> Stand 2F-30 - Hall 2
>>
>> Trabajamos con seguridad y velamos por la protección del Medio Ambiente.
>> Antes de Imprimir este mail confirme que sea necesario. Gracias
>>
>> El presente e-mail y cualquier documento adjunto al mismo, pertenecen al
>> emisor y puede contener información confidencial legalmente protegida. La
>> información contenida en este e-mail es remitida únicamente para uso del
>> destinatario indicado en el mismo. La divulgación, copia, distribución o
>> cualquier otro uso de la información contenida en el presente e-mail por
>> parte de personas distintas al destinatario se encuentra prohibido. Si Ud.
>> ha recibido este e-mail por error, rogamos borrar el mismo e informar dicha
>> circunstancia por mail a la dirección i...@tecna.com Muchas gracias.
>>
>> This electronic mail transmission and any attached documents contain
>> information belonging to the sender which may be confidential and legally
>> privileged. The information is intended for the use of the named
>> recipient(s) only. If you are not the intended recipient(s), any
>> disclosure, copying, distribution or action taken regarding the information
>> on this email is strictly prohibited. If you have received this email in
>> error please notify i...@tecna.com and delete the email from your
>> computer. Thank you.
>> --
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>


Re: [petsc-users] Solving eigenproblems in the form of Stokes equations

2015-07-12 Thread Julian Andrej
On Sun, Jul 12, 2015 at 1:07 PM, Matthew Knepley knep...@gmail.com wrote:
 On Fri, Jul 10, 2015 at 4:28 PM, Julian Andrej j...@tf.uni-kiel.de wrote:

 Hi,

 i'm trying to solve a generalized eigenvalue problem which is the
 stokes equations discretized by finite elements (with fenics) and
 producing a banded matrix (reordered structure) of the well known
 block form

 [N  Q]
 [QT  0]

 The domain is the unit square with dirichlet boundary condition
 evaluating to zero at all boundary nodes.

 A is the block matrix in banded reordered structure and M is the mass
 matrix.


 You have not written an equation here. Are you solving

   Q^T N^{-1} Q x = \lambda M x

I don't have / want access to the underlying blocks of the matrix. So
in fact i give the solver

A = [N  Q]  M = [M 0]
  [QT 0] [0  0]

With a little more reading of the documentation i was able to figure
out that the EPS object also takes regular KSP objects. So i got the
following settings working.

E = SLEPc.EPS().create()
st = E.getST()
st.setType('sinvert')
kspE = st.getKSP()
kspE.setType('gmres')
pc = kspE.getPC()
pc.setType('lu')
pc.setFactorSolverPackage('mumps')
E.setOperators(A, M)

These options give me an eigenvalue bundle at 1.0+j0 (which represent
the DirichletBC) and the correct eigenvalues following.
I'd like to understand the background a little more, how this produces
a solvable eigenproblem.


 If so, the Schur complement can be rank deficient by 1 if you have Neumann
 conditions on the pressure. I don't know how SLEPc handles this.

Matt


 I'm using slepc4py for the eigenvalue calculation.

 E = SLEPc.EPS().create()
 E.setOperators(A, M)
 E.setDimensions(NEV, PETSc.DECIDE)
 E.setFromOptions()

 I always get the error
 [0] Zero pivot row 1 value 0 tolerance 2.22045e-14

 I cannot find a combination which solves the eigenvalues for this problem.

 The system itself solves fine with a KSP solver object from PETSc
 using tfqmr with the icc PC.

 I can assemble the matrix with a penalty term for the pressure and
 calculate the eigenvalues with a direct solver, but i try to avoid
 that.




 --
 What most experimenters take for granted before they begin their experiments
 is infinitely more interesting than any results to which their experiments
 lead.
 -- Norbert Wiener


[petsc-users] Solving eigenproblems in the form of Stokes equations

2015-07-10 Thread Julian Andrej
Hi,

i'm trying to solve a generalized eigenvalue problem which is the
stokes equations discretized by finite elements (with fenics) and
producing a banded matrix (reordered structure) of the well known
block form

[N  Q]
[QT  0]

The domain is the unit square with dirichlet boundary condition
evaluating to zero at all boundary nodes.

A is the block matrix in banded reordered structure and M is the mass matrix.

I'm using slepc4py for the eigenvalue calculation.

E = SLEPc.EPS().create()
E.setOperators(A, M)
E.setDimensions(NEV, PETSc.DECIDE)
E.setFromOptions()

I always get the error
[0] Zero pivot row 1 value 0 tolerance 2.22045e-14

I cannot find a combination which solves the eigenvalues for this problem.

The system itself solves fine with a KSP solver object from PETSc
using tfqmr with the icc PC.

I can assemble the matrix with a penalty term for the pressure and
calculate the eigenvalues with a direct solver, but i try to avoid
that.


[petsc-users] SLEPc iterative solver does not converge but ARPACK does

2015-07-06 Thread Julian Andrej
Hello,

i'm using PETSc and SLEPc in Python.

I'm trying to calculate eigenvalues for the stokes equations with the
following code

SHIFT = SLEPc.ST().create()
SHIFT.setType(SHIFT.Type.SINVERT)
E = SLEPc.EPS().create()
E.setST(SHIFT)
E.setOperators(K, M)
E.setDimensions(25, PETSc.DECIDE)
E.setWhichEigenpairs(E.Which.SMALLEST_MAGNITUDE)
E.setFromOptions()
E.solve()

This gives me correct eigenvalues and right eigenvectors.

If I want to compute the LEFT eigenvectors i setup a new solver
context with the corresponding transposed matrices (these are
precalculated)

SHIFT = SLEPc.ST().create()
SHIFT.setType(SHIFT.Type.SINVERT)
E = SLEPc.EPS().create()
E.setST(SHIFT)
E.setOperators(KT, MT)
E.setDimensions(25, PETSc.DECIDE)
E.setWhichEigenpairs(E.Which.SMALLEST_MAGNITUDE)
E.setFromOptions()
E.solve()

I don't get any converged eigenvalues.

Interfacing ARPACK through scipy

v, V = sp.linalg.eigs(A=K, k=64, M=M, sigma=1.0)
w, W = sp.linalg.eigs(A=K.T, k=64, M=M.T, sigma=1.0)

I get correct left and right eigenvectors (passing (K*V - M*V*diag(v)) < 1e-10)

I tried a lot of configurations to get it working, so I don't have to
convert matrices in my calculations.
Is it possible to get the expected results with the iterative solver?


Re: [petsc-users] SLEPc iterative solver does not converge but ARPACK does

2015-07-06 Thread Julian Andrej
Thanks Jose,

replacing

E.setWhichEigenpairs(E.Which.SMALLEST_MAGNITUDE)

with

E.setTarget(1.0)

did the trick.

$ python ev_test.py
Solution type krylovschur
Number of converged eigenpairs 25

k  ||Ax-kx||/||kx||
- -
13.0877981.80257e-13

23.0387567.41391e-14
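
For reference, the same shift-and-invert recipe in the SLEPc C API might look like
this (a sketch; the thread itself uses slepc4py):

  EPS eps;
  ST  st;

  EPSCreate(PETSC_COMM_WORLD, &eps);
  EPSSetOperators(eps, K, M);
  EPSSetProblemType(eps, EPS_GNHEP);            /* generalized, non-Hermitian */
  EPSGetST(eps, &st);
  STSetType(st, STSINVERT);
  EPSSetTarget(eps, 1.0);
  EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);
  EPSSetDimensions(eps, 25, PETSC_DEFAULT, PETSC_DEFAULT);
  EPSSetFromOptions(eps);
  EPSSolve(eps);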

On Mon, Jul 6, 2015 at 1:31 PM, Jose E. Roman jro...@dsic.upv.es wrote:
 You should not use SMALLEST_MAGNITUDE with SINVERT.
 If using SINVERT you should set the target value E.setTarget(sigma) and leave 
 the default Which (or set it explicitly to Which.TARGET_MAGNITUDE).

 Jose



 El 6/7/2015, a las 13:26, Julian Andrej j...@tf.uni-kiel.de escribió:

 Hello,

 i'm using PETSc and SLEPc in Python.

 I'm trying to calculate eigenvalues for the stokes equations with the
 following code

 SHIFT = SLEPc.ST().create()
 SHIFT.setType(SHIFT.Type.SINVERT)
 E = SLEPc.EPS().create()
 E.setST(SHIFT)
 E.setOperators(K, M)
 E.setDimensions(25, PETSc.DECIDE)
 E.setWhichEigenpairs(E.Which.SMALLEST_MAGNITUDE)
 E.setFromOptions()
 E.solve()

 This gives me correct eigenvalues and right eigenvectors.

 If I want to compute the LEFT eigenvectors i setup a new solver
 context with the corresponding transposed matrices (these are
 precalculated)

 SHIFT = SLEPc.ST().create()
 SHIFT.setType(SHIFT.Type.SINVERT)
 E = SLEPc.EPS().create()
 E.setST(SHIFT)
 E.setOperators(KT, MT)
 E.setDimensions(25, PETSc.DECIDE)
 E.setWhichEigenpairs(E.Which.SMALLEST_MAGNITUDE)
 E.setFromOptions()
 E.solve()

 I don't get any converged eigenvalues.

 Interfacing ARPACK through scipy

 v, V = sp.linalg.eigs(A=K, k=64, M=M, sigma=1.0)
 w, W = sp.linalg.eigs(A=K.T, k=64, M=M.T, sigma=1.0)

 I get correct left and right eigenvectors (passing (K*V - M*V*diag(v)) < 1e-10)

 I tried a lot of configurations to get it working, so I don't have to
 convert matrices in my calculations.
 Is it possible to get the expected results with the iterative solver?