[petsc-users] question about small matrices

2019-09-24 Thread Povolotskyi, Mykhailo via petsc-users
Dear Petsc developers,

in my application I have to solve millions of linear and non-linear 
systems with small matrices (2x2, 3x3,..., 10x10).

I treat them as dense, and use SNES with the KSP type PREONLY and an LU 
preconditioner.

I found that when KSPSolve is called, only 25% of the time is spent in 
LAPACK; the rest is PETSc overhead.

I know how to call LAPACK directly to solve a linear system.

Question: is it possible to call LAPACK directly in the SNES solver to 
avoid the KSPSolve overhead?
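
For reference, this is the kind of direct LAPACK call I mean for one small
system (a sketch only, assuming real double-precision scalars and 32-bit
LAPACK integers):

/* Solve one small dense system A x = b with LAPACK dgesv.
   LAPACK expects column-major storage; this example matrix is
   symmetric, so the layout does not matter here. */
extern void dgesv_(const int *n, const int *nrhs, double *a, const int *lda,
                   int *ipiv, double *b, const int *ldb, int *info);

int main(void)
{
  int    n = 3, nrhs = 1, ipiv[3], info;
  double A[9] = {4, 1, 0,  1, 4, 1,  0, 1, 4}; /* column-major 3x3 */
  double b[3] = {1, 2, 3};                     /* overwritten with the solution */

  dgesv_(&n, &nrhs, A, &n, ipiv, b, &n, &info);
  return info; /* info == 0 means success */
}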

Thank you,

Michael.



Re: [petsc-users] TS scheme with different DAs

2019-09-24 Thread Manuel Valera via petsc-users
Hello all,

I finally implemented the TS routine operating on several DAs at the same
time, hacking it as you suggested. I still have a problem with my algorithm,
though; it is not DMDA related.

My algorithm needs to update u,v,w with information from the updated
T,S,rho. My problem, or what I don't understand yet, is how to operate on
the intermediate Runge-Kutta time-integration states inside the
RHSFunction.

To be more clear, I would need the intermediate T,S states to obtain an
updated rho (density) and, in turn, the correct intermediate velocities,
keeping the loop going. As I understand it right now, the RHS vector is
different from this intermediate state; it is only the RHS input to the
loop, so operating on it would be incorrect.

As of now, my algorithm still produces artifacts because of this lack of
information needed to update all of the variables accurately at the same
time. The problem happens in serial as well.
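
To make it concrete, this is roughly what I am trying to do inside the
RHSFunction (a sketch only, assuming a single 3-D DMDA with 5 dof per node
ordered u,v,w,T,S; EquationOfState stands in for my density routine). My
understanding is that the Vec U handed to the RHSFunction already holds the
intermediate Runge-Kutta stage state, so T and S could be read from it there:

PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
{
  DM             da;
  Vec            Uloc;
  PetscScalar ****u, ****f;
  PetscInt       i, j, k, xs, ys, zs, xm, ym, zm;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = TSGetDM(ts, &da);CHKERRQ(ierr);
  ierr = DMGetLocalVector(da, &Uloc);CHKERRQ(ierr);
  ierr = DMGlobalToLocalBegin(da, U, INSERT_VALUES, Uloc);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(da, U, INSERT_VALUES, Uloc);CHKERRQ(ierr);
  ierr = DMDAVecGetArrayDOFRead(da, Uloc, &u);CHKERRQ(ierr);
  ierr = DMDAVecGetArrayDOF(da, F, &f);CHKERRQ(ierr);
  ierr = DMDAGetCorners(da, &xs, &ys, &zs, &xm, &ym, &zm);CHKERRQ(ierr);
  for (k = zs; k < zs+zm; k++) {
    for (j = ys; j < ys+ym; j++) {
      for (i = xs; i < xs+xm; i++) {
        PetscScalar T   = u[k][j][i][3], S = u[k][j][i][4];
        PetscScalar rho = EquationOfState(T, S); /* placeholder for my density routine */
        /* f[k][j][i][0..2]: momentum RHS built with the stage rho */
        /* f[k][j][i][3..4]: T and S RHS                           */
      }
    }
  }
  ierr = DMDAVecRestoreArrayDOFRead(da, Uloc, &u);CHKERRQ(ierr);
  ierr = DMDAVecRestoreArrayDOF(da, F, &f);CHKERRQ(ierr);
  ierr = DMRestoreLocalVector(da, &Uloc);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}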

Thanks for your help,

On Wed, Sep 18, 2019 at 4:36 AM Matthew Knepley  wrote:

> On Tue, Sep 17, 2019 at 8:27 PM Smith, Barry F. 
> wrote:
>
>>
>>   Don't be too quick to dismiss switching to DMStag; you may find that
>> it actually takes little time to convert, and then you have a much less
>> cumbersome process for managing the staggered grid. Take a look at
>> src/dm/impls/stag/examples/tutorials/ex2.c, where
>>
>> const PetscInt dof0 = 0, dof1 = 1, dof2 = 1; /* 1 dof on each edge and
>> element center */
>> const PetscInt stencilWidth = 1;
>> ierr = DMStagCreate2d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
>>          7,9,PETSC_DECIDE,PETSC_DECIDE,dof0,dof1,dof2,
>>          DMSTAG_STENCIL_BOX,stencilWidth,NULL,NULL,&dm);CHKERRQ(ierr);
>>
>> BOOM, it has set up a staggered grid with 1 cell-centered variable and 1
>> on each edge. Adding more to the cell centers, vertices, or edges is trivial.
>>
>>   If you want to stick with DMDA, you "cheat". Depending on exactly what
>> staggering you have, you make the DMDA for the "smaller problem" as large
>> as the other ones and just track zeros in those locations. For example, if
>> velocities are on "edges" and T, S are on cells, make your "cells" DMDA one
>> extra grid width wide in all three dimensions. You may need to be careful
>> on the boundaries depending on the types of boundary conditions.
>>
>
> Yes, SNES ex30 does exactly this. However, I still recommend looking at
> DMStag. Patrick created it because managing the DMDA
> became such a headache.
>
>   Thanks,
>
> Matt
>
>
>> > On Sep 17, 2019, at 7:04 PM, Manuel Valera via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>> >
>> > Thanks Matthew, but my code is too complicated to be redone on DMStag
>> > now, after spending a long time using DMDAs.
>> >
>> > Is there a way to ensure PETSc distributes several DAs in the same way,
>> > besides manually distributing the points?
>> >
>> > Thanks,
>> >
>> > On Tue, Sep 17, 2019 at 3:28 PM Matthew Knepley 
>> wrote:
>> > On Tue, Sep 17, 2019 at 6:15 PM Manuel Valera via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>> > Hello, petsc users,
>> >
>> > I have integrated the TS routines in my code, but I just noticed I
>> > didn't do it optimally. I was using 3 different TS objects to integrate
>> > velocities, temperature and salinity, and it works, but only for small DTs.
>> > I suspect the intermediate Runge-Kutta states are out of phase with each
>> > other and this creates the discrepancy for larger time steps, so I need to
>> > integrate the 3 quantities in the same routine.
>> >
>> > I tried to do this by using a 5 DOF distributed array for the RHS,
>> where I store the velocities in the first 3 and then Temperature and
>> Salinity in the rest. The problem is that I use a staggered grid and T,S
>> are located in a different DA layout than the velocities. This is creating
>> problems for me since I can't find a way to communicate the information
>> from the result of the TS integration back to the respective DAs of each
>> variable.
>> >
>> > Is there a way to communicate across DAs, or can you suggest an
>> alternative solution to this problem?
>> >
>> > If you have a staggered discretization on a structured grid, I would
>> recommend checking out DMStag.
>> >
>> >   Thanks,
>> >
>> >  Matt
>> >
>> > Thanks,
>> >
>> >
>> >
>> > --
>> > What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> > -- Norbert Wiener
>> >
>> > https://www.cse.buffalo.edu/~knepley/
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


[petsc-users] Problem about Scaling

2019-09-24 Thread Yingjie Wu via petsc-users
Dear PETSc developers,
I am currently using SNES to solve some non-linear PDEs. The model is a
two-dimensional X-Y geometry. Because the magnitudes of the different
physical variables differ so much, it is difficult to find a good direction
in the Krylov subspace, and the residual decreases very slowly or does not
converge at all. I think my PDEs need scaling. I need some help with the
following questions.

1. I use -snes_mf_operator, so instead of providing the Jacobian matrix, I
only set up an approximate Jacobian matrix for preconditioning. For my model,
do I just need to scale the residuals to the same level (see the sketch
below)? Is there any need to modify the preconditioning matrix?
2. I have seen some articles referring to non-dimensionalization. I don't
know how to implement this method in my program, or how difficult it would
be to implement.
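
To make question 1 concrete, this is the kind of residual scaling I have in
mind (a sketch only, assuming a 2-dof DMDA; AppCtx, Tref and Pref are
hypothetical user-chosen reference magnitudes, not from my actual code):

typedef struct { PetscReal Tref, Pref; } AppCtx; /* hypothetical reference scales */

PetscErrorCode FormFunction(SNES snes, Vec X, Vec F, void *ctx)
{
  AppCtx        *user = (AppCtx*)ctx;
  DM             da;
  PetscInt       i, j, xs, ys, xm, ym;
  PetscScalar ***f;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = SNESGetDM(snes, &da);CHKERRQ(ierr);
  /* ... assemble the raw residual into F as usual ... */
  ierr = DMDAVecGetArrayDOF(da, F, &f);CHKERRQ(ierr);
  ierr = DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL);CHKERRQ(ierr);
  for (j = ys; j < ys+ym; j++) {
    for (i = xs; i < xs+xm; i++) {
      f[j][i][0] /= user->Tref; /* scale the first equation by a reference temperature */
      f[j][i][1] /= user->Pref; /* scale the second equation by a reference pressure    */
    }
  }
  ierr = DMDAVecRestoreArrayDOF(da, F, &f);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

My understanding is that the matrix-free operator from -snes_mf_operator
would then see the scaled function automatically, while the assembled
preconditioning matrix would need the same row scaling to stay consistent;
please correct me if that is wrong.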

Thanks,
Yingjie


[petsc-users] Re: Multiple linear solver defined at command line

2019-09-24 Thread Marco Cisternino via petsc-users
Thank you, Lawrence.
Cool! That's perfect!


Bests,
Marco Cisternino



From: Lawrence Mitchell 
Sent: Tuesday, 24 September 2019 13:59
To: Marco Cisternino
Cc: petsc-users
Subject: Re: [petsc-users] Multiple linear solver defined at command line

Dear Marco,

> On 24 Sep 2019, at 12:06, Marco Cisternino via petsc-users 
>  wrote:
>
> Good morning,
> in my code I need to solve 2 linear systems. I would like to use different 
> solvers for the 2 systems and, most of all, I would like to choose each 
> solver by flags from the command line; is it possible?
> I can call PetscInitialize/PetscFinalize multiple times, passing 
> PetscInitialize different argc and argv. What happens if I call the second 
> PetscInitialize before the first PetscFinalize with different argc and argv?

The way you should do this is by giving your two different solvers two 
different options prefixes:

Assuming they are KSP objects, call:

KSPSetOptionsPrefix(ksp1, "solver1_");
KSPSetOptionsPrefix(ksp2, "solver2_");

Now you can configure ksp1 with:

-solver1_ksp_type ... -solver1_pc_type ...

And ksp2 with:

-solver2_ksp_type ... -solver2_pc_type ...

In general, all PETSc objects can be given such an options prefix so that they 
may be controlled separately.
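
For example, a minimal sketch (ksp1, ksp2 and the operators and vectors A1,
A2, b1, x1, b2, x2 are placeholders); it is KSPSetFromOptions that picks up
the prefixed options:

KSP ksp1, ksp2;

ierr = KSPCreate(PETSC_COMM_WORLD, &ksp1);CHKERRQ(ierr);
ierr = KSPSetOptionsPrefix(ksp1, "solver1_");CHKERRQ(ierr);
ierr = KSPSetOperators(ksp1, A1, A1);CHKERRQ(ierr);
ierr = KSPSetFromOptions(ksp1);CHKERRQ(ierr); /* reads -solver1_ksp_type, -solver1_pc_type, ... */
ierr = KSPSolve(ksp1, b1, x1);CHKERRQ(ierr);

ierr = KSPCreate(PETSC_COMM_WORLD, &ksp2);CHKERRQ(ierr);
ierr = KSPSetOptionsPrefix(ksp2, "solver2_");CHKERRQ(ierr);
ierr = KSPSetOperators(ksp2, A2, A2);CHKERRQ(ierr);
ierr = KSPSetFromOptions(ksp2);CHKERRQ(ierr); /* reads -solver2_ksp_type, -solver2_pc_type, ... */
ierr = KSPSolve(ksp2, b2, x2);CHKERRQ(ierr);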

Thanks,

Lawrence



[petsc-users] Multiple linear solver defined at command line

2019-09-24 Thread Marco Cisternino via petsc-users
Good morning,
in my code I need to solve 2 linear systems. I would like to use different 
solvers for the 2 systems and, most of all, I would like to choose each 
solver by flags from the command line; is it possible?
I can call PetscInitialize/PetscFinalize multiple times, passing PetscInitialize 
different argc and argv. What happens if I call the second PetscInitialize 
before the first PetscFinalize with different argc and argv?
Thanks.

Bests,
Marco Cisternino