Re: [petsc-users] PETSc tutorial example generating unexpected result

2019-01-14 Thread Mark Adams via petsc-users
This code puts the error in x: call VecAXPY(x,neg_one,u,ierr). I suspect you printed these numbers after this statement. On Mon, Jan 14, 2019 at 8:10 PM Maahi Talukder via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hello all, > > I compiled and ran the example *ex2f.F90* located in >
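The thread's example is Fortran (ex2f.F90), but the same check written in C makes the ordering issue explicit: VecAXPY overwrites x with the error, so the solution must be printed before that call. A minimal sketch; the function name and the assumption that x and u are already assembled Vecs are mine, not from the thread.

```c
#include <petscvec.h>

/* Sketch: after KSPSolve, x holds the computed solution and u the exact
   one (as in ex2f.F90).  VecAXPY overwrites x with the error x - u, so
   print or save the solution *before* this call. */
PetscErrorCode CheckError(Vec x, Vec u)
{
  PetscReal norm;

  PetscFunctionBeginUser;
  PetscCall(VecAXPY(x, -1.0, u));       /* x <- x - u: x now holds the error */
  PetscCall(VecNorm(x, NORM_2, &norm));
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Error norm %g\n", (double)norm));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```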

Re: [petsc-users] MPI Iterative solver crash on HPC

2019-01-14 Thread Mark Adams via petsc-users
The memory requested is an insane number. You may need to use 64 bit integers. On Mon, Jan 14, 2019 at 8:06 AM Sal Am via petsc-users < petsc-users@mcs.anl.gov> wrote: > I ran it by: mpiexec -n 8 valgrind --tool=memcheck -q --num-callers=20 > --log-file=valgrind.log-osa.%p ./solveCSys -malloc
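The fix suggested here is rebuilding PETSc with 64-bit indices (the configure option --with-64-bit-indices). A minimal sketch of how to confirm which index width a given build actually uses; the program itself is illustrative, not from the thread.

```c
#include <petscsys.h>

/* Sketch: report whether this PETSc build uses 32- or 64-bit PetscInt. */
int main(int argc, char **argv)
{
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
#if defined(PETSC_USE_64BIT_INDICES)
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "PetscInt is 64-bit (%d bytes)\n", (int)sizeof(PetscInt)));
#else
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "PetscInt is 32-bit (%d bytes)\n", (int)sizeof(PetscInt)));
#endif
  PetscCall(PetscFinalize());
  return 0;
}
```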

Re: [petsc-users] Example for adaptive high-order FEM in PETSc

2019-01-13 Thread Mark Adams via petsc-users
> > > > > The Riemann solver computes the flux and the Jacobian is assembled in > PETSc. > > If you run this example with an implicit integrator, it uses FD coloring > to compute the Jacobian. Manav wants an analytic Jacobian. I was > hoping Matt could answer this; I haven't used Plex in that

Re: [petsc-users] Example for adaptive high-order FEM in PETSc

2019-01-13 Thread Mark Adams via petsc-users
On Sat, Jan 12, 2019 at 6:59 PM Manav Bhatia via petsc-users < petsc-users@mcs.anl.gov> wrote: > I have been studying the source code and the manual entries in the > documentation and have a better understanding of setting up a high-order DG > analysis. > > I have been able to identify the

Re: [petsc-users] GAMG scaling

2018-12-24 Thread Mark Adams via petsc-users
On Tue, Dec 25, 2018 at 12:10 AM Jed Brown wrote: > Mark Adams writes: > > > On Mon, Dec 24, 2018 at 4:56 PM Jed Brown wrote: > > > >> Mark Adams via petsc-users writes: > >> > >> > Anyway, my data for this is in my SC 2004 paper (MakeNextMa

Re: [petsc-users] GAMG scaling

2018-12-24 Thread Mark Adams via petsc-users
On Mon, Dec 24, 2018 at 4:56 PM Jed Brown wrote: > Mark Adams via petsc-users writes: > > > Anyway, my data for this is in my SC 2004 paper (MakeNextMat_private in > > attached, NB, this is code that I wrote in grad school). It is memory > > efficient and simple, just f

Re: [petsc-users] GAMG scaling

2018-12-22 Thread Mark Adams via petsc-users
Wow, this is an old thread. Sorry if I sound like an old fart talking about the good old days, but I originally did RAP in Prometheus, in a non-work-optimal way that might be of interest. It is not hard to implement. I bring this up because we continue to struggle with this damn thing. I think this

Re: [petsc-users] GAMG scaling

2018-12-22 Thread Mark Adams via petsc-users
OK, so this thread has drifted, see title :) On Fri, Dec 21, 2018 at 10:01 PM Fande Kong wrote: > Sorry, hit the wrong button. > > > > On Fri, Dec 21, 2018 at 7:56 PM Fande Kong wrote: > >> >> >> On Fri, Dec 21, 2018 at 9:44 AM Mark Adams wrote: >> >>> Also, you mentioned that you are using

Re: [petsc-users] GAMG scaling

2018-12-21 Thread Mark Adams via petsc-users
Also, you mentioned that you are using 10 levels. This is very strange with GAMG. You can run with -info and grep on GAMG to see the sizes and the number of non-zeros per level. You should coarsen at a rate of about 2^D to 3^D with GAMG (with 10 levels this would imply a very large fine grid

Re: [petsc-users] taking over blind postdoc call for Andy Nonaka

2018-12-17 Thread Mark Adams via petsc-users
Whoops, petsc-users mistake, sorry about that. On Mon, Dec 17, 2018 at 11:29 AM Mark Adams wrote: > Dan and Sherry, > > I am taking over the blind postdoc call for Andy Nonaka. Do you have any > particular priorities that you would like me to refer good candidates for? > > Mark >

[petsc-users] taking over blind postdoc call for Andy Nonaka

2018-12-17 Thread Mark Adams via petsc-users
Dan and Sherry, I am taking over the blind postdoc call for Andy Nonaka. Do you have any particular priorities that you would like me to refer good candidates for? Mark

Re: [petsc-users] PETSc binary write format

2018-12-05 Thread Mark Adams via petsc-users
On Wed, Dec 5, 2018 at 1:27 PM Sajid Ali via petsc-users < petsc-users@mcs.anl.gov> wrote: > I have created a file as per the specification as shown below > > [sajid@xrm temp]$ cat vector.dat > 00010010001101001110 > 00010100 >
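Rather than hand-assembling the bit patterns shown in the quoted session, the safer route is to let PETSc emit the binary file itself so the header and byte order match what VecLoad expects. A minimal sketch; only the filename vector.dat is taken from the quoted session, the rest is assumed.

```c
#include <petscvec.h>

/* Sketch: write a Vec in PETSc's binary format so the header and byte
   order are exactly what VecLoad expects.  Assumes x is an assembled Vec. */
PetscErrorCode WriteVectorBinary(Vec x)
{
  PetscViewer viewer;

  PetscFunctionBeginUser;
  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "vector.dat", FILE_MODE_WRITE, &viewer));
  PetscCall(VecView(x, viewer));
  PetscCall(PetscViewerDestroy(&viewer));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```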

Re: [petsc-users] Implementing of a variable block size BILU preconditioner

2018-12-01 Thread Mark Adams via petsc-users
There is a reason PETSc does not have variable block matrices. It is messy and rarely a big win. It is doable, ML/Aztec does it, but it is messy. On Sat, Dec 1, 2018 at 6:20 PM Smith, Barry F. via petsc-users < petsc-users@mcs.anl.gov> wrote: > >Well, you need to start somewhere. It is going

Re: [petsc-users] Solving complex linear sparse matrix in parallel + external library

2018-11-28 Thread Mark Adams via petsc-users
On Wed, Nov 28, 2018 at 10:27 AM Sal Am via petsc-users < petsc-users@mcs.anl.gov> wrote: > Thank you indeed --download-mpich and using PETSC_ARCH/bin/mpiexec seems > to work. > > Now I am wondering about the other problem namely getting the residual, is > the residual only computed when using

Re: [petsc-users] error messages while using snes

2018-11-22 Thread Mark Adams via petsc-users
Note, your FormInitialGuess does not initialize bb[0] and bb[1]. On Thu, Nov 22, 2018 at 11:23 AM barry via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hi, > > I want to solve the nonlinear equation tan(x) = x, and the code I wrote > produces an error. > > The error says that I need to
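A minimal sketch of an initial-guess routine that sets every entry of the solution vector before SNESSolve, which is the point of the reply. The routine name matches the thread, but the placeholder value 1.0 and the array handling are assumptions, not the user's code.

```c
#include <petscsnes.h>

/* Sketch: fill the whole initial-guess vector so no entry is left
   uninitialized before SNESSolve(snes, NULL, x). */
static PetscErrorCode FormInitialGuess(Vec x)
{
  PetscScalar *bb;
  PetscInt     i, n;

  PetscFunctionBeginUser;
  PetscCall(VecGetLocalSize(x, &n));
  PetscCall(VecGetArray(x, &bb));
  for (i = 0; i < n; i++) bb[i] = 1.0;  /* placeholder guess; choose something near the root you want */
  PetscCall(VecRestoreArray(x, &bb));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```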

Re: [petsc-users] Nullspace

2018-11-20 Thread Mark Adams via petsc-users
Yes, that's right. Also, it is best to send this to the list, like petsc-users, for a few reasons. It is useful for the core developers, as well as any lurkers really, to casually see what people are doing. And the threads are archived and are a good source of deep documentation. eg, one could

Re: [petsc-users] Question about DMDAGetElements

2018-11-19 Thread Mark Adams via petsc-users
The local indices of the local mesh and local vectors, which includes ghost vertices. There are global-to-local methods to fill in ghost values and local-to-global methods to create global vectors that you can use for computation. On Mon, Nov 19, 2018 at 5:16 PM Sajid Ali wrote: > Bingo! > >
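A sketch of the pattern described here: DMDAGetElements returns local (ghosted) indices, so element loops should index a local vector whose ghost values were filled by a global-to-local scatter. The names are generic; none of them come from the thread.

```c
#include <petscdmda.h>

/* Sketch: fill ghost values of a local vector from a global vector,
   then loop over local elements using the local indices that
   DMDAGetElements returns. */
PetscErrorCode LoopOverElements(DM da, Vec xglobal)
{
  Vec                xlocal;
  const PetscScalar *xl;
  const PetscInt    *e;
  PetscInt           ne, nc, i;

  PetscFunctionBeginUser;
  PetscCall(DMGetLocalVector(da, &xlocal));
  PetscCall(DMGlobalToLocalBegin(da, xglobal, INSERT_VALUES, xlocal));
  PetscCall(DMGlobalToLocalEnd(da, xglobal, INSERT_VALUES, xlocal));

  PetscCall(VecGetArrayRead(xlocal, &xl));
  PetscCall(DMDAGetElements(da, &ne, &nc, &e)); /* ne elements, nc vertices each, local indices */
  for (i = 0; i < ne * nc; i++) {
    (void)xl[e[i]];  /* e[] indexes the *local* (ghosted) array */
  }
  PetscCall(DMDARestoreElements(da, &ne, &nc, &e));
  PetscCall(VecRestoreArrayRead(xlocal, &xl));
  PetscCall(DMRestoreLocalVector(da, &xlocal));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```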

Re: [petsc-users] Question about DMDAGetElements

2018-11-19 Thread Mark Adams via petsc-users
You seem to be confusing the degree of the mesh and the "degree" of the matrix and vector. A matrix is always N x M (2D if you like), a vector is always N (or 1 x N, or 1D if you like). The mesh in a DM or DA can be 1, 2 or 3D. On Mon, Nov 19, 2018 at 4:44 PM Sajid Ali via petsc-users <

Re: [petsc-users] PETSc (3.9.0) GAMG weak scaling test issue

2018-11-19 Thread Mark Adams via petsc-users
> > > > Mark would have better comments on the scalability of the setup stage. > > The first thing to verify is that the algorithm is scaling. If you coarsen too slowly then the coarse grids get large, with many non-zeros per row, and the cost of the matrix triple product can explode. You can

Re: [petsc-users] Solving problem using multigrid

2018-11-18 Thread Mark Adams via petsc-users
The manual is pretty clear on this, but your code is purely algebraic, that is you do not use a DM, and so you will need to use algebraic multigrid (AMG). So you want to look at AMG preconditioners (-pc_type gamg [or hypre if you configured with --download-hypre]). Mark On Sun, Nov 18, 2018 at
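A minimal sketch of selecting algebraic multigrid for a purely algebraic system, equivalent to passing -pc_type gamg on the command line (or -pc_type hypre with a --download-hypre build). The function name and the assumption that A, b, and x are already assembled are mine.

```c
#include <petscksp.h>

/* Sketch: AMG preconditioner for a purely algebraic system (no DM attached). */
PetscErrorCode SolveWithAMG(Mat A, Vec b, Vec x)
{
  KSP ksp;
  PC  pc;

  PetscFunctionBeginUser;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCGAMG));
  PetscCall(KSPSetFromOptions(ksp));  /* still lets -pc_type hypre etc. override at run time */
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```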

Re: [petsc-users] GAMG Parallel Performance

2018-11-15 Thread Mark Adams via petsc-users
There is a lot of load imbalance in VecMAXPY also. The partitioning could be bad, and if not, it's the machine. On Thu, Nov 15, 2018 at 1:56 PM Smith, Barry F. via petsc-users < petsc-users@mcs.anl.gov> wrote: > > Something is odd about your configuration. Just consider the time for > VecMAXPY

Re: [petsc-users] [petsc-maint] Correct use of PCFactorSetMatOrderingType

2018-11-08 Thread Mark Adams via petsc-users
I am not that familiar with hypre's options. AMG is complicated and I barely keep my options straight. OpenFOAM seems to have highly specialized solvers, so being within 50% of them is decent. On Thu, Nov 8, 2018 at 12:03 PM Edoardo alinovi wrote: > Yes, it is like you are saying. This is mostly

Re: [petsc-users] [petsc-maint] Correct use of PCFactorSetMatOrderingType

2018-11-08 Thread Mark Adams via petsc-users
To repeat: You seem to be saying that OpenFOAM solves the problem in 10 seconds and PETSc solves it in 14 seconds. Is that correct? On Thu, Nov 8, 2018 at 3:42 AM Edoardo alinovi via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hello Mark, > > Yes, there are 5 KSP calls within a time-step

Re: [petsc-users] PETSc (3.9.0) GAMG weak scaling test issue

2018-11-08 Thread Mark Adams via petsc-users
> > > I did not configure PETSc with ParMetis support. Should I? > > I figured it out when I tried to use "-pc_gamg_repartition". PETSc > complained that it was not compiled with ParMetis support. > You need ParMetis, or some parallel mesh partitioner, configured to use repartitioning. I would
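For reference, repartitioning can also be switched on from code once PETSc is configured with a parallel partitioner (e.g. --download-parmetis --download-metis). This sketch is equivalent to the -pc_gamg_repartition flag quoted in the thread; the function name is my own.

```c
#include <petscksp.h>

/* Sketch: turn on repartitioning of GAMG's coarse grids.  Requires a
   PETSc build configured with a parallel partitioner such as ParMetis. */
PetscErrorCode EnableGAMGRepartitioning(KSP ksp)
{
  PC pc;

  PetscFunctionBeginUser;
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCGAMG));
  PetscCall(PCGAMGSetRepartition(pc, PETSC_TRUE));  /* same effect as -pc_gamg_repartition */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```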

Re: [petsc-users] PETSc (3.9.0) GAMG weak scaling test issue

2018-11-07 Thread Mark Adams via petsc-users
First I would add -gamg_est_ksp_type cg. You seem to be converging well, so I assume you are setting the null space for GAMG. Note, you should test hypre also. You probably want a bigger "-pc_gamg_process_eq_limit 50", 200 at least, but test your machine with a range on the largest problem.
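On the null-space point: GAMG uses a near-null space attached to the matrix. A sketch under the assumption of an elasticity-like problem with nodal coordinates available; for a scalar problem GAMG's default (constants) is usually sufficient. None of these names come from the thread.

```c
#include <petscksp.h>

/* Sketch: attach rigid-body modes as the near-null space for GAMG,
   assuming an elasticity-like problem and a Vec of nodal coordinates. */
PetscErrorCode SetElasticityNearNullSpace(Mat A, Vec coords)
{
  MatNullSpace nns;

  PetscFunctionBeginUser;
  PetscCall(MatNullSpaceCreateRigidBody(coords, &nns));
  PetscCall(MatSetNearNullSpace(A, nns));
  PetscCall(MatNullSpaceDestroy(&nns));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```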

Re: [petsc-users] [petsc-maint] Correct use of PCFactorSetMatOrderingType

2018-11-07 Thread Mark Adams via petsc-users
please respond to petsc-users. You are doing 5 solves here in 14 seconds. You seem to be saying that the two pressure solves are taking all of this time. I don't know why the two solves are different. You seem to be saying that OpenFOAM solves the problem in 10 seconds and PETSc solves it in 14

Re: [petsc-users] Problems about PCtype bjacobi

2018-11-07 Thread Mark Adams via petsc-users
On Wed, Nov 7, 2018 at 10:16 AM Yingjie Wu via petsc-users < petsc-users@mcs.anl.gov> wrote: > Dear Petsc developer: > Hi, > Recently, I'm solving the problems of nonlinear systems of PDEs, I > encountered some problems about precondition and wanted to seek help. > > 1.I set the precondition

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-11-05 Thread Mark Adams via petsc-users
On Mon, Nov 5, 2018 at 4:11 PM Appel, Thibaut wrote: > "Local" as in serial? > Block Jacobi with ILU as the solver on each block. Each block corresponds to an MPI process by default. So it is completely parallel; it is just not a true ILU. In the limit of one equation per processor it is just
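A sketch of the arrangement described here, block Jacobi with one block per MPI rank and ILU on each block, written out explicitly (it is also PETSc's default parallel setup for many solvers). It assumes KSPSetOperators has already been called; the function name is mine.

```c
#include <petscksp.h>

/* Sketch: block Jacobi with one block per process and ILU on each block.
   Assumes KSPSetOperators(ksp, ...) was already called. */
PetscErrorCode SetBlockJacobiILU(KSP ksp)
{
  PC        pc, subpc;
  KSP      *subksp;
  PetscInt  nlocal, first, i;

  PetscFunctionBeginUser;
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCBJACOBI));
  PetscCall(KSPSetUp(ksp));                             /* sub-KSPs exist only after setup */
  PetscCall(PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp));
  for (i = 0; i < nlocal; i++) {
    PetscCall(KSPGetPC(subksp[i], &subpc));
    PetscCall(PCSetType(subpc, PCILU));                 /* ILU acts within each block only */
  }
  PetscFunctionReturn(PETSC_SUCCESS);
}
```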

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-11-05 Thread Mark Adams via petsc-users
On Mon, Nov 5, 2018 at 12:50 PM Thibaut Appel wrote: > Hi Mark, > > Yes it doesn't seem to be usable. Unfortunately we're aiming to do 3D so > direct solvers are not a viable solution and PETSc' ILU is not parallel and > we can't use HYPRE (complex arithmetic) > I think SuperLU has a parallel

Re: [petsc-users] Problems about Assemble DMComposite Precondition Matrix

2018-11-05 Thread Mark Adams via petsc-users
On Mon, Nov 5, 2018 at 10:37 AM Yingjie Wu wrote: > Thank you very much for your reply. > My equation is a neutron diffusion equation with eigenvalues, which is why > I use DMConposite because there is a single non-physical field variable, > eigenvalue. > OK, DMComposite might be your best

Re: [petsc-users] Problems about Assemble DMComposite Precondition Matrix

2018-11-05 Thread Mark Adams via petsc-users
DMComposite is not very mature, the last time I checked, and I don't know of anyone having worked on it recently; it is probably not what you want anyway. FieldSplit is most likely what you want. What are your equations and discretization? eg, Stokes with cell centered pressure? There are probably
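A minimal sketch of the suggested FieldSplit route, separating the physical unknowns from the extra eigenvalue unknown via index sets. The split names and the assumption that the caller builds the index sets are mine, not from the thread.

```c
#include <petscksp.h>

/* Sketch: split a monolithic system into two fields (the physical field
   and the extra eigenvalue unknown) with PCFIELDSPLIT.  The index sets
   isPhys and isEig are assumed to be built by the caller. */
PetscErrorCode SetFieldSplit(KSP ksp, IS isPhys, IS isEig)
{
  PC pc;

  PetscFunctionBeginUser;
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCFIELDSPLIT));
  PetscCall(PCFieldSplitSetIS(pc, "phys", isPhys));
  PetscCall(PCFieldSplitSetIS(pc, "eig",  isEig));
  PetscCall(KSPSetFromOptions(ksp));  /* split type and inner solvers via -fieldsplit_* options */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```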

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-11-01 Thread Mark Adams via petsc-users
On Wed, Oct 31, 2018 at 8:11 PM Smith, Barry F. wrote: > > > > On Oct 31, 2018, at 5:39 PM, Appel, Thibaut via petsc-users < > petsc-users@mcs.anl.gov> wrote: > > > > Well yes naturally for the residual but adding -ksp_true_residual just > gives > > > > 0 KSP unpreconditioned resid norm

Re: [petsc-users] Convergence of AMG

2018-10-31 Thread Mark Adams via petsc-users
On Wed, Oct 31, 2018 at 3:43 PM Manav Bhatia wrote: > Here are the updates. I did not find the options to make much difference > in the results. > > I noticed this message in the GAMG output for cases 2, 3: HARD stop of > coarsening on level 3. Grid too small: 1 block nodes > Yea, this is

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Mark Adams via petsc-users
These are indefinite (bad) Helmholtz problems. Right? On Wed, Oct 31, 2018 at 2:38 PM Matthew Knepley wrote: > On Wed, Oct 31, 2018 at 2:13 PM Thibaut Appel > wrote: > >> Hi Mark, Matthew, >> >> Thanks for taking the time. >> >> 1) You're not suggesting having -fieldsplit_X_ksp_type *f*gmres

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Mark Adams via petsc-users
Again, you probably want to avoid Chebyshev; use ‘-mg_levels_ksp_type richardson -mg_levels_pc_type sor’ with the proper prefix. I'm not sure about "-fieldsplit_pc_type gamg"; GAMG should work on one block, and hence be a sub-PC, but I'm not up on fieldsplit syntax. On Wed, Oct 31, 2018 at 9:22 AM
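A sketch of inserting the suggested smoother options programmatically before KSPSetFromOptions is called; the exact fieldsplit prefix to prepend depends on the user's setup, so that part is left as an assumption.

```c
#include <petscsys.h>

/* Sketch: select Richardson/SOR smoothers on the multigrid levels by
   inserting the options into the database before KSPSetFromOptions.
   Prepend the appropriate prefix (e.g. "-fieldsplit_0_") if GAMG lives
   inside a fieldsplit. */
static PetscErrorCode SetSmootherOptions(void)
{
  PetscFunctionBeginUser;
  PetscCall(PetscOptionsSetValue(NULL, "-mg_levels_ksp_type", "richardson"));
  PetscCall(PetscOptionsSetValue(NULL, "-mg_levels_pc_type", "sor"));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```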

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-10-31 Thread Mark Adams via petsc-users
On Tue, Oct 30, 2018 at 5:23 PM Appel, Thibaut via petsc-users < petsc-users@mcs.anl.gov> wrote: > Dear users, > > Following a suggestion from Matthew Knepley I’ve been trying to apply > fieldsplit/gamg for my set of PDEs but I’m still encountering issues > despite various tests. pc_gamg simply

Re: [petsc-users] Convergence of AMG

2018-10-30 Thread Mark Adams via petsc-users
t;>>> rotational DOF using some plate and shell formulations. >>>> This can explain poor convergence of a multilevel approach, which needs >>>> to restrict and extrapolate the unknowns. In order to check this >>>> hypothesis, you can try a test ca

Re: [petsc-users] Convergence of AMG

2018-10-30 Thread Mark Adams via petsc-users
for the displacement DOF but is not for the >>> rotational DOF using some plate and shell formulations. >>> This can explain poor convergence of a multilevel approach, which needs >>> to restrict and extrapolate the unknowns. In order to check this >>> hypothesis,

Re: [petsc-users] Convergence of AMG

2018-10-30 Thread Mark Adams via petsc-users
shell formulations. > This can explain poor convergence of a multilevel approach, which needs to > restrict and extrapolate the unknowns. In order to check this hypothesis, > you can try a test case with zero rotations. > > Nicolas > > Le lun. 29 oct. 2018 à 22:13, Mark Adams via

Re: [petsc-users] Convergence of AMG

2018-10-29 Thread Mark Adams via petsc-users
* The two-level results tell us that MG is not doing well on the coarse grids. So the coarse grids are the problem. * Do not worry about timing now; get the math correct. The two-level solve is not meant to be a solution, just a diagnostic, so don't try to optimize it by squaring the graph. Use
