Re: [petsc-users] Using petsc with an existing domain decomposition.

2019-10-13 Thread Mark Adams via petsc-users
On Sun, Oct 13, 2019 at 5:25 AM Pierre Gubernatis via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hello all, > > It sounds that the best way to introduce petsc in a code is not to > introduce it, but develop the code over the petsc structure. > All things being equal, yes, but few users start

Re: [petsc-users] DMPlex memory problem in scaling test

2019-10-10 Thread Mark Adams via petsc-users
Now that I think about it, the partitioning and distribution can be done with existing API, I would assume, like is done with matrices. I'm still wondering what the H5 format is. I assume that it is not built for a hardwired number of processes to read in parallel and that the parallel read is

Re: [petsc-users] DMPlex memory problem in scaling test

2019-10-10 Thread Mark Adams via petsc-users
A related question, what is the state of having something like a distributed DMPlexCreateFromCellList method, but maybe your H5 efforts would work. My bone modeling code is old and a pain, but the app's specialized serial mesh generator could write an H5 file instead of the current FEAP file. Then

Re: [petsc-users] Block preconditioning for 3d problem

2019-10-10 Thread Mark Adams via petsc-users
I can think of a few sources of coupling in the solver: 1) line search and 2) Krylov, and 3) the residual test (scaling issues). You could turn linesearch off and use Richardson (with a fixed number of iterations) or exact solves as Jed suggested. As far as scaling can you use the same NL problem

Re: [petsc-users] [petsc-maint] petsc ksp solver hangs

2019-09-29 Thread Mark Adams via petsc-users
On Sun, Sep 29, 2019 at 1:30 AM Michael Wick via petsc-maint < petsc-ma...@mcs.anl.gov> wrote: > Thank you all for the reply. > > I am trying to get the backtrace. However, the code hangs totally > randomly, and it hangs only when I run large simulations (e.g. 72 CPUs for > this one). I am trying

[petsc-users] Undefined symbols for architecture x86_64: "_dmviewfromoptions_",

2019-09-20 Thread Mark Adams via petsc-users
DMViewFromOptions does not seem to have Fortran bindings and I don't see it on the web page for DM methods. I was able to get it to compile using PetscObjectViewFromOptions. FYI, it seems to be an inlined thing, thus missing the web page and Fortran bindings:

Re: [petsc-users] DMPlex Distribution

2019-09-19 Thread Mark Adams via petsc-users
I think you are fine with DMForest. I just mentioned ForestClaw for background. It has a bunch of hyperbolic stuff in there that is specialized. On Thu, Sep 19, 2019 at 8:16 AM Mohammad Hassan wrote: > In fact, I would create my base mesh in DMPlex and use DMForest to > construct the

Re: [petsc-users] DMPlex Distribution

2019-09-19 Thread Mark Adams via petsc-users
Note, Forest gives you individual elements at the leaves. Donna Calhoun, a former Chombo user, has developed a block structured solver on p4est ( https://math.boisestate.edu/~calhoun/ForestClaw/index.html), but I would imagine that you could just take the Plex that DMForest creates and just call

Re: [petsc-users] DMPlex Distribution

2019-09-18 Thread Mark Adams via petsc-users
I'm puzzled. It sounds like you are doing non-conforming AMR (structured block AMR), but Plex does not support that. On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users < petsc-users@mcs.anl.gov> wrote: > Mark is right. The functionality of AMR does not relate to > parallelization

Re: [petsc-users] Optimized mode

2019-09-17 Thread Mark Adams via petsc-users
I am suspicious that he gets the exact same answer with a debug build. You might try -O2 (and -O1, and -O0, which should be the same as your debug build). On Tue, Sep 17, 2019 at 2:21 PM Emmanuel Ayala via petsc-users < petsc-users@mcs.anl.gov> wrote: > OK, thanks for the clarification! :D > >

Re: [petsc-users] Strange Partition in PETSc 3.11 version on some computers

2019-09-17 Thread Mark Adams via petsc-users
Matt, that sounds like it. Danyang, just in case it's not clear, you need to delete your architecture directory and reconfigure from scratch. You should be able to just delete the arch-dir/externalpackages/git.parmetis[metis] directories but I'd simply delete the whole arch-dir. On Tue, Sep 17,

Re: [petsc-users] Strange Partition in PETSc 3.11 version on some computers

2019-09-17 Thread Mark Adams via petsc-users
On Tue, Sep 17, 2019 at 12:53 PM Danyang Su wrote: > Hi Mark, > > Thanks for your follow-up. > > The unstructured grid code has been verified and there is no problem in > the results. The convergence rate is also good. The 3D mesh is not good, it > is based on the original stratum which I

Re: [petsc-users] DMPlex Distribution

2019-09-17 Thread Mark Adams via petsc-users
On Tue, Sep 17, 2019 at 12:07 PM Mohammad Hassan via petsc-users < petsc-users@mcs.anl.gov> wrote: > Thanks for suggestion. I am going to use a block-based amr. I think I need > to know exactly the mesh distribution of blocks across different processors > for implementation of amr. > > And as a

Re: [petsc-users] Strange Partition in PETSc 3.11 version on some computers

2019-09-17 Thread Mark Adams via petsc-users
Danyang, Excuse me if I missed something in this thread but just a few ideas. First, I trust that you have verified that you are getting a good solution with these bad meshes. Ideally you would check that the solver convergence rates are similar. You might verify that your mesh is inside of

Re: [petsc-users] MemCpy (HtoD and DtoH) in Krylov solver

2019-07-17 Thread Mark Adams via petsc-users
Also, MPI communication is done from the host, so every mat-vec will do a "CopySome" call from the device, do MPI comms, and then the next time you do GPU work it will copy from the host to get the update. On Tue, Jul 16, 2019 at 10:22 PM Matthew Knepley via petsc-users < petsc-users@mcs.anl.gov>

Re: [petsc-users] Questions about AMG and Jacobian Contruction

2019-07-14 Thread Mark Adams via petsc-users
PETSc has three AMG solvers: GAMG (native) and the third-party libraries hypre and ML. This is probably what you want. On Sun, Jul 14, 2019 at 6:25 AM Yingjie Wu via petsc-users < petsc-users@mcs.anl.gov> wrote: > Respected PETSc developers: > Hi, > I have some questions about some functions of AMG and

Re: [petsc-users] Various Questions Regarding PETSC

2019-07-13 Thread Mark Adams via petsc-users
MAT_NO_OFF_PROC_ENTRIES - you know each process will only set values for its own rows, will generate an error if any process sets values for another process. This avoids all reductions in the MatAssembly routines and thus improves performance for very large process counts. OK, so I am seeing the
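The option quoted above can also be set in code. A minimal sketch, assuming A is an already-created MPIAIJ matrix that each rank fills only in its own rows (the helper name is made up for illustration):

```c
#include <petscmat.h>

/* Assemble A after promising that no rank sets entries in rows it does not own.
   The promise must actually hold, otherwise PETSc will generate an error. */
PetscErrorCode assemble_local_only(Mat A)
{
  PetscErrorCode ierr;

  /* Set before the MatSetValues/MatAssembly calls so the global reductions are skipped. */
  ierr = MatSetOption(A, MAT_NO_OFF_PROC_ENTRIES, PETSC_TRUE);CHKERRQ(ierr);
  /* ... MatSetValues(...) for locally owned rows only ... */
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  return 0;
}
```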

Re: [petsc-users] Various Questions Regarding PETSC

2019-07-13 Thread Mark Adams via petsc-users
On Sat, Jul 13, 2019 at 2:39 PM Mohammed Mostafa via petsc-users < petsc-users@mcs.anl.gov> wrote: > I am generating the matrix using the finite volume method > I basically loop over the face list instead of looping over the cells to > avoid double evaluation of the fluxes of cell faces > So I

Re: [petsc-users] Various Questions Regarding PETSC

2019-07-13 Thread Mark Adams via petsc-users
Ok, I only see one call to KSPSolve. On Sat, Jul 13, 2019 at 2:08 PM Mohammed Mostafa wrote: > This log is for 100 time-steps, not a single time step > > > On Sun, Jul 14, 2019 at 3:01 AM Mark Adams wrote: > >> You call the assembly stuff a lot (200). BuildTwoSidedF is a global thing >> and is

Re: [petsc-users] Various Questions Regarding PETSC

2019-07-13 Thread Mark Adams via petsc-users
You call the assembly stuff a lot (200). BuildTwoSidedF is a global thing and is taking a lot of time. You should just call these once per time step (it looks like you are just doing one time step). --- Event Stage 1: Matrix Construction BuildTwoSidedF 400 1.0 6.5222e-01 2.0 0.00e+00 0.0

Re: [petsc-users] PETSC ERROR: Petsc has generated inconsistent data with PCGAMG and KSPGRMES

2019-07-09 Thread Mark Adams via petsc-users
On Tue, Jul 9, 2019 at 12:43 PM 我 via petsc-users wrote: > Hello all, > I set PCGAMG and KSPGRMES to solve the matrix which is formed by a > particle method such as SPH. But when there is always a PETSC ERROR: > [2]PETSC ERROR: - Error Message >

Re: [petsc-users] snes/ex19 issue with nvprof

2019-07-07 Thread Mark Adams via petsc-users
> > > > [0]PETSC ERROR: -dm_vec_type cuda > [0]PETSC ERROR: -ksp_monitor > [0]PETSC ERROR: -mat_type aijcusparse > You might want -dm_mat_type aijcusparse

Re: [petsc-users] snes/ex19 issue with nvprof

2019-07-06 Thread Mark Adams via petsc-users
I'm using CUDA 10.1.105 and this looks like a logic bug and not a CUDA problem. I added two MatSetType calls that I thought were the right thing to do. Your error is a type problem and test with DMPlex so the matrix creation goes through a different code path. You could try removing them and see

Re: [petsc-users] snes/ex19 issue with nvprof

2019-07-06 Thread Mark Adams via petsc-users
I am not able to reproduce this error. This is what I get. Are you pulling from mark/gamg-fix-viennacl-rebased ? 13:38 /gpfs/alpine/scratch/adams/geo127$ jsrun -n 1 -c 4 -a 4 -g 1 ./ex19 -da_refine 5 -snes_view -snes_monitor -ksp_monitor -mat_type aijcusparse -vec_type cuda -log_view lid

Re: [petsc-users] How to create a mapping of global to local indices?

2019-07-03 Thread Mark Adams via petsc-users
PETSc matrices and vectors are created with a local size n or global size N and PETSC_DECIDE instead of n. The global PETSc indices are ordered from 0 to n_0 - 1 where n_0 is the number of equations on process 0. This numbering continues for all processes. You can use: PetscErrorCode
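A minimal sketch of the ownership layout described above (the local size of 10 and the printout are illustrative, and this is not necessarily the routine the truncated reply goes on to name): each rank owns a contiguous block of global indices, so a global index g it owns maps to local index g - rstart.

```c
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x;
  PetscInt       rstart, rend;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  /* local size 10 on every rank; the global size is the sum over ranks */
  ierr = VecCreateMPI(PETSC_COMM_WORLD, 10, PETSC_DECIDE, &x);CHKERRQ(ierr);
  ierr = VecGetOwnershipRange(x, &rstart, &rend);CHKERRQ(ierr);
  /* a global index g with rstart <= g < rend lives at local position g - rstart */
  ierr = PetscPrintf(PETSC_COMM_SELF, "this rank owns global indices [%D,%D)\n", rstart, rend);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```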

Re: [petsc-users] Out of memory - diffusion equation eigenvalues

2019-07-01 Thread Mark Adams via petsc-users
3D uses a lot more memory. This is a crazy number (18446744069467867136). How large is your system? On Mon, Jul 1, 2019 at 7:20 PM Rodrigo Piccinini via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hello, > > I'm trying to solve for the first ten smallest eigenvalues of a diffusion > equation

Re: [petsc-users] GAMG scalability for serendipity 20 nodes hexahedra

2019-06-26 Thread Mark Adams via petsc-users
I get growth with Q2 elements also. I've never seen anyone report scaling of high order elements with generic AMG. First, discretizations are very important for AMG solvers. All optimal solvers really. I've never looked at serendipity elements. It might be a good idea to try Q2 as well. SNES ex56

Re: [petsc-users] Finding off diagonal blocks of matrix

2019-06-17 Thread Mark Adams via petsc-users
On Mon, Jun 17, 2019 at 8:45 AM Eda Oktay via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hello everyone, > > I am trying to find the off diagonal blocks of a matrix. For example, I > partitioned a 10*10 matrix into 4*4 and 6*6 block sub matrices. I need to > find the other blocks in order to

Re: [petsc-users] MatPermute problem because of wrong sized index set

2019-06-17 Thread Mark Adams via petsc-users
You also do not seem to be initializing kk. On Mon, Jun 17, 2019 at 4:00 AM Eda Oktay via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hello everyone again, > > I was making an index mistake in for loop. I corrected it and my problem > solved. > > Thank you, > > Eda > > Eda Oktay , 17 Haz

Re: [petsc-users] With-batch (new) flags

2019-05-21 Thread Mark Adams via petsc-users
K', > '-known-64-bit-blas-indices=', nargs.ArgBool(None, 0, 'Indicate if > using 64 bit integer BLAS')) > > +help.addArgument('BLAS/LAPACK', > '-known-64-bit-blas-indices=', nargs.ArgBool(None, 0, 'Indicate if > using 64 bit integer BLAS')) > > return > > > > def

Re: [petsc-users] With-batch (new) flags

2019-05-20 Thread Mark Adams via petsc-users
That is what Dylan did (in the log that I sent). He is downloading blas and has --known-64-bit-blas-indices=0. Should this be correct? > > Satish > > On Mon, 20 May 2019, Mark Adams via petsc-users wrote: > > > We are getting this failure. This is a bit frustrating in that the first &

Re: [petsc-users] problem with generating simplicies mesh

2019-05-19 Thread Mark Adams via petsc-users
I would guess that you want 2 faces in each direction (the default so use NULL instead of faces). On Sun, May 19, 2019 at 9:23 AM 陳鳴諭 via petsc-users wrote: > I have problem with generating simplicies mesh. > I do as the description in DMPlexCreateBoxmesh says, but still meet error. > > The

Re: [petsc-users] DMPlex assembly global stiffness matrix

2019-05-18 Thread Mark Adams via petsc-users
I don't think that will work. Offsets refer to the graph storage. On Fri, May 17, 2019 at 7:59 PM Josh L via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hi, > > I have a DM that has 2 fields , and field #1 has 2 dofs and field #2 has 1 > dof. > I only have dofs on vertex. > > Can I use the

Re: [petsc-users] MatCreateBAIJ, SNES, Preallocation...

2019-05-17 Thread Mark Adams via petsc-users
Oh, this is BAIJ. Sure. On Fri, May 17, 2019 at 11:01 AM Smith, Barry F. wrote: > > BAIJ and SBAIJ it is the number of block rows, so the row sizes divided > by the bs. For AIJ the number of rows. > > > On May 17, 2019, at 9:55 AM, William Coirier < > william.coir...@kratosdefense.com> wrote:
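To illustrate the point quoted above (for BAIJ the preallocation counts are per block row, i.e. the scalar row counts divided by bs), here is a sketch; the block size, local sizes, and nonzero counts are placeholders, not values from this thread.

```c
#include <petscmat.h>

PetscErrorCode create_baij_example(MPI_Comm comm, Mat *A)
{
  PetscErrorCode ierr;
  PetscInt       bs = 3, nblockrows = 100;          /* placeholders: 300 scalar rows per rank */

  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, bs*nblockrows, bs*nblockrows, PETSC_DECIDE, PETSC_DECIDE);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATBAIJ);CHKERRQ(ierr);
  /* For BAIJ, d_nz/o_nz (or d_nnz[]/o_nnz[]) count block columns per block row */
  ierr = MatMPIBAIJSetPreallocation(*A, bs, 7, NULL, 3, NULL);CHKERRQ(ierr);
  ierr = MatSeqBAIJSetPreallocation(*A, bs, 7, NULL);CHKERRQ(ierr);
  return 0;
}
```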

Re: [petsc-users] Suggestions for solver and pc

2019-05-17 Thread Mark Adams via petsc-users
Are you shifting into high frequency? If not you can try tricks in this paper. Using a large coarse grid in AMG that captures the frequency response of interest and using a parallel direct solver is a good option if it is doable (not too deep a shift). @Article{Adams-03a, author = {Adams

Re: [petsc-users] MatCreateBAIJ, SNES, Preallocation...

2019-05-17 Thread Mark Adams via petsc-users
On Thu, May 16, 2019 at 6:28 PM William Coirier via petsc-users < petsc-users@mcs.anl.gov> wrote: > Ok, got it. My misinterpretation was how to fill the d_nnz and o_nnz > arrays. > > Thank you for your help! > > Might I make a suggestion related to the documentation? Perhaps I have not > fully

Re: [petsc-users] Precision of MatView

2019-05-15 Thread Mark Adams via petsc-users
, May 15, 2019 at 10:06 AM Smith, Barry F. wrote: > >The 10-20% in seven digits are presumably printed accurately; it is > presumably simply the case that the rest of the digits would be zero and > hence are not printed. > > > On May 15, 2019, at 8:59 AM, Mark Adams via

Re: [petsc-users] Precision of MatView

2019-05-15 Thread Mark Adams via petsc-users
You are seeing half precision (like 7 digits) in 10-20% of the entries and full in the rest. Someone will probably chime in who knows about this but I can see where a serial matrix is printed in ASCII Matlab in MatView_SeqAIJ_ASCII in src/mat/impls/aij/seq/aij.c. I think this line is operative
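For readers following this thread, here is one way to produce the ASCII MATLAB output being discussed; the file name is a placeholder and this is only a sketch, not the poster's code.

```c
#include <petscmat.h>

PetscErrorCode dump_matrix_matlab(Mat A)
{
  PetscErrorCode ierr;
  PetscViewer    viewer;

  ierr = PetscViewerASCIIOpen(PetscObjectComm((PetscObject)A), "A.m", &viewer);CHKERRQ(ierr);
  ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_MATLAB);CHKERRQ(ierr);
  ierr = MatView(A, viewer);CHKERRQ(ierr);   /* serial AIJ goes through MatView_SeqAIJ_ASCII */
  ierr = PetscViewerPopFormat(viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  return 0;
}
```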

Re: [petsc-users] Precision of MatView

2019-05-14 Thread Mark Adams via petsc-users
I would hope you get full precision. How many digits are you seeing? On Tue, May 14, 2019 at 7:15 PM Sanjay Govindjee via petsc-users < petsc-users@mcs.anl.gov> wrote: > I am using the following bit of code to debug a matrix. What is the > expected precision of the numbers that I will find in

Re: [petsc-users] Question about parallel Vectors and communicators

2019-05-07 Thread Mark Adams via petsc-users
On Tue, May 7, 2019 at 11:38 AM GIRET Jean-Christophe via petsc-users < petsc-users@mcs.anl.gov> wrote: > Dear PETSc users, > > > > I would like to use Petsc4Py for a project extension, which consists > mainly of: > > - Storing data and matrices on several rank/nodes which could > not

Re: [petsc-users] Strong scaling issue cg solver with HYPRE preconditioner (BoomerAMG preconditioning)

2019-05-06 Thread Mark Adams via petsc-users
On Mon, May 6, 2019 at 7:53 PM Raphael Egan via petsc-users < petsc-users@mcs.anl.gov> wrote: > Dear Petsc developer(s), > > I am assessing the strong scalability of our incompressible Navier-Stokes > solver on distributed Octree grids developed at the University of > California, Santa Barbara. >

Re: [petsc-users] [PETSc-Users] The MatSetValues takes too much time

2019-05-02 Thread Mark Adams via petsc-users
You need to set the preallocation for the matrix. https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html On Thu, May 2, 2019 at 7:46 AM Dongyu Liu via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hi, > > I am using PETSc sparse matrix ('aij') for a
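A minimal sketch of the preallocation advice above; the per-row nonzero counts (7 for the diagonal block, 3 for the off-diagonal block) and the helper name are placeholders to be replaced by bounds from the actual stencil or connectivity.

```c
#include <petscmat.h>

PetscErrorCode create_preallocated_aij(MPI_Comm comm, PetscInt nlocal, Mat *A)
{
  PetscErrorCode ierr;

  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, nlocal, nlocal, PETSC_DECIDE, PETSC_DECIDE);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATAIJ);CHKERRQ(ierr);
  /* upper bounds on nonzeros per row; exact d_nnz[]/o_nnz[] arrays are even better */
  ierr = MatMPIAIJSetPreallocation(*A, 7, NULL, 3, NULL);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(*A, 7, NULL);CHKERRQ(ierr);
  return 0;
}
```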

Re: [petsc-users] ASM vs GASM

2019-04-30 Thread Mark Adams via petsc-users
My question about the quality of the solution was to check if the model (eg, mesh) was messed up, not if the algebraic error was acceptable. So does an exact solution look OK. Using LU if you need to. If there is say a singularity it will mess up the solver as well as give you a bad solution. On

Re: [petsc-users] ASM vs GASM

2019-04-30 Thread Mark Adams via petsc-users
When I said it was singular I was looking at "preconditioned residual norm to an rtol of 1e-12. If I look at the true residual norm, however, it stagnates around 1e-4." This is not what I am seeing in this output. It is just a poor PC. The big drop in the residual at the beginning is suspicious.

Re: [petsc-users] ASM vs GASM

2019-04-30 Thread Mark Adams via petsc-users
> > > > > Allowing GASM to construct the "outer" subdomains from the non-overlapping > "inner" subdomains, and using "exact" subdomain solvers (subdomain KSPs are > using FGMRES+ILU with an rtol of 1e-12), I get convergence in ~2 iterations > in the preconditioned residual norm to an rtol of

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-04-30 Thread Mark Adams via petsc-users
On Tue, Mar 5, 2019 at 8:06 AM Matthew Knepley wrote: > On Tue, Mar 5, 2019 at 7:14 AM Myriam Peyrounette < > myriam.peyroune...@idris.fr> wrote: > >> Hi Matt, >> >> I plotted the memory scalings using different threshold values. The two >> scalings are slightly translated (from -22 to -88 mB)

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-04-26 Thread Mark Adams via petsc-users
On Mon, Mar 11, 2019 at 6:32 AM Myriam Peyrounette < myriam.peyroune...@idris.fr> wrote: > Hi, > > good point, I changed the 3.10 version so that it is configured with > --with-debugging=0. You'll find attached the output of the new LogView. The > execution time is reduced (although still not as

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-04-26 Thread Mark Adams via petsc-users
Increasing the threshold should increase the size of the coarse grids, but yours are decreasing. I'm puzzled by that. On Tue, Mar 5, 2019 at 11:53 AM Myriam Peyrounette < myriam.peyroune...@idris.fr> wrote: > I used PCView to display the size of the linear system in each level of > the MG.

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-04-26 Thread Mark Adams via petsc-users
> Mark, what is the option she can give to output all the GAMG data? > > -info and then grep on GAMG. This will print the number of non-zeros per row, which is useful. The memory size of the matrices will also give you data on this. > Also, run using -ksp_view. GAMG will report all the sizes

Re: [petsc-users] Iterative solver behavior with increasing number of mpi

2019-04-17 Thread Mark Adams via petsc-users
GAMG is almost algorithmically invariant, but the graph coarsening is neither invariant nor deterministic. You should not see much difference in iteration count, but a little decay is expected. On Wed, Apr 17, 2019 at 12:36 PM Matthew Knepley via petsc-users < petsc-users@mcs.anl.gov> wrote: > On Wed,

Re: [petsc-users] Using -malloc_dump to examine memory leak

2019-04-16 Thread Mark Adams via petsc-users
Use valgrind with --leak-check=yes This should give a stack trace at the end of the run. On Tue, Apr 16, 2019 at 2:14 AM Yuyun Yang via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hello team, > > > > I’m trying to use the options -malloc_dump and -malloc_debug to examine > memory leaks. The

Re: [petsc-users] How to build FFTW3 interface?

2019-04-11 Thread Mark Adams via petsc-users
On Thu, Apr 11, 2019 at 7:51 PM Sajid Ali via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hi PETSc Developers, > > To run an example that involves the petsc-fftw interface, I loaded both > petsc and fftw modules (linked of course to the same mpi) but the compiler > complains of having no

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-04-10 Thread Mark Adams via petsc-users
This looks like it might be noisy data. I'd make sure you run each size on the same set of nodes and you might run each job twice (A,B,A,B) in a job script. On Wed, Apr 10, 2019 at 8:12 AM Myriam Peyrounette via petsc-users < petsc-users@mcs.anl.gov> wrote: > Here is the time weak scaling from

Re: [petsc-users] Constructing a MATNEST with blocks defined in different procs

2019-04-09 Thread Mark Adams via petsc-users
On Tue, Apr 9, 2019 at 10:36 AM Diogo FERREIRA SABINO < diogo.ferreira-sab...@univ-tlse3.fr> wrote: > Hi Mark, > Thank you for the quick answer. > > So, I defined each block of the Nest matrix as a MATMPIAIJ in the World > communicator and I set the local sizes in such a way that A00 and A11 are

Re: [petsc-users] How to decompose and record a constant A

2019-04-09 Thread Mark Adams via petsc-users
PETSc solvers will cache any setup computations, like a matrix factorization, and will reuse them on subsequent solves. If you call KSPSetOperators, PETSc will assume its cache is invalid and redo any required setup on subsequent solves. Mark On Tue, Apr 9, 2019 at 4:08 AM ztdepyahoo via
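A sketch of the reuse pattern described above (the helper name and loop are illustrative): create the KSP once, set the operators once, and let repeated KSPSolve calls reuse the cached setup as long as the matrix is unchanged.

```c
#include <petscksp.h>

PetscErrorCode solve_many_times(Mat A, Vec b, Vec x, PetscInt nsteps)
{
  PetscErrorCode ierr;
  KSP            ksp;
  PetscInt       step;

  ierr = KSPCreate(PetscObjectComm((PetscObject)A), &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);   /* setup (e.g. a factorization) is cached */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  for (step = 0; step < nsteps; step++) {
    /* ... update the right-hand side b for this step ... */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);        /* no new setup unless KSPSetOperators is called again */
  }
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  return 0;
}
```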

Re: [petsc-users] Error with parallel solve

2019-04-08 Thread Mark Adams via petsc-users
On Mon, Apr 8, 2019 at 2:23 PM Manav Bhatia wrote: > Thanks for identifying this, Mark. > > If I compile the debug version of Petsc, will it also build a debug > version of Mumps? > The debug compiler flags will get passed down to MUMPS if you are downloading MUMPS in PETSc. Otherwise, yes

Re: [petsc-users] Error with parallel solve

2019-04-08 Thread Mark Adams via petsc-users
This looks like an error in MUMPS: IF ( IROW_GRID .NE. root%MYROW .OR. & JCOL_GRID .NE. root%MYCOL ) THEN WRITE(*,*) MYID,':INTERNAL Error: recvd root arrowhead ' On Mon, Apr 8, 2019 at 1:37 PM Smith, Barry F. via petsc-users < petsc-users@mcs.anl.gov> wrote: >

Re: [petsc-users] Argument out of range error in MatPermute

2019-04-08 Thread Mark Adams via petsc-users
Note, it does not look like "is" gets created and idxx gets allocated if mod==0. Are you sure 'idx' is a valid permutation? You might try replacing idx[i] with i for debugging, and test with mod==0 On Mon, Apr 8, 2019 at 4:21 AM Eda Oktay via petsc-users < petsc-users@mcs.anl.gov> wrote: >
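A sketch of the debugging suggestion above (replace idx[i] with i): build an identity index set, mark it as a permutation, and pass it to MatPermute; if this succeeds while the real idx fails, the permutation data is the likely culprit. The helper name is made up.

```c
#include <petscmat.h>

PetscErrorCode permute_with_identity(Mat A, Mat *B)
{
  PetscErrorCode ierr;
  PetscInt       rstart, rend, n, i, *idx;
  IS             is;

  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  n    = rend - rstart;
  ierr = PetscMalloc1(n, &idx);CHKERRQ(ierr);
  for (i = 0; i < n; i++) idx[i] = rstart + i;       /* identity "permutation" for debugging */
  ierr = ISCreateGeneral(PetscObjectComm((PetscObject)A), n, idx, PETSC_OWN_POINTER, &is);CHKERRQ(ierr);
  ierr = ISSetPermutation(is);CHKERRQ(ierr);
  ierr = MatPermute(A, is, is, B);CHKERRQ(ierr);
  ierr = ISDestroy(&is);CHKERRQ(ierr);
  return 0;
}
```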

Re: [petsc-users] error: Petsc has generated inconsistent data, MPI_Allreduce() called in different locations (code lines) on different processors

2019-04-07 Thread Mark Adams via petsc-users
You have MatSetOption inside of a loop for the number of rows. This is apparently a collective operation. Move it to line 156. On Sun, Apr 7, 2019 at 5:44 AM Eda Oktay via petsc-users < petsc-users@mcs.anl.gov> wrote: > Dear Barry, > > Thank you for answering. I am sending my code, makefile and

Re: [petsc-users] All-to-All Personalized Communication on a Mesh

2019-04-06 Thread Mark Adams via petsc-users
On Sat, Apr 6, 2019 at 8:48 PM Zulfi Khan via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hi, > Can some body please tell me what would be the result of > applying All-to-All Personalized Communication on a Mesh shown on slide 54 > on the following link: > >

Re: [petsc-users] Solution Diverging

2019-04-05 Thread Mark Adams via petsc-users
On Fri, Apr 5, 2019 at 5:27 PM Maahi Talukder wrote: > Hi, > Thank you for your reply. > > Well, I have verified the solution on large problems, and I have converged > the solution to machine accuracy using different solvers. So I guess the > setup okay in that respect. > I am suspicious that

Re: [petsc-users] Solution Diverging

2019-04-05 Thread Mark Adams via petsc-users
So you have a 2D Poisson solver and it is working on large problems but the linear solver is diverging on small problems. I would verify that the solution is good on the large problem. You might have a problem with boundary conditions. What BCs do you think you are using? On Fri, Apr 5, 2019 at

Re: [petsc-users] Constructing a MATNEST with blocks defined in different procs

2019-04-05 Thread Mark Adams via petsc-users
On Fri, Apr 5, 2019 at 7:19 AM Diogo FERREIRA SABINO via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hi, > I'm new in petsc and I'm trying to construct a MATNEST in two procs, by > setting each block of the nested matrix with a MATMPIAIJ matrix defined in > each proc. > I'm trying to use

Re: [petsc-users] testing for and removing a null space using JFNK

2019-04-04 Thread Mark Adams via petsc-users
On Thu, Apr 4, 2019 at 7:35 AM Dave Lee wrote: > Thanks Mark, > > I already have the Navier Stokes solver. My issue is wrapping it in a JFNK > solver to find the periodic solutions. I will keep reading up on SVD > approaches, there may be some capability for something like this in SLEPc. > Yes,

Re: [petsc-users] testing for and removing a null space using JFNK

2019-04-04 Thread Mark Adams via petsc-users
[keep on list] On Thu, Apr 4, 2019 at 7:08 AM Dave Lee wrote: > Hi Mark, > > Thanks for responding. My brief scan of the literature suggested that > there are some methods out there to approximate the null space using SVD > methods, but I wasn't sure how mature these methods were, or if PETSc

Re: [petsc-users] testing for and removing a null space using JFNK

2019-04-04 Thread Mark Adams via petsc-users
The Krylov space can not see the null space (by definition) and so getting a useful near null space from it is not likely. Getting a null space is a hard problem and bootstrap AMG methods, for instance, are developed to try to do that. This is an advanced research topic. You really want to know

Re: [petsc-users] Local and global size of IS

2019-04-02 Thread Mark Adams via petsc-users
On Tue, Apr 2, 2019 at 6:55 AM Eda Oktay via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hi Barry, > > I did what you said but now I get the error in ISSetPermutation that index > set is not a permutation. I think it is because I selected indices on > each process as you said. I did the

Re: [petsc-users] Consistent domain decomposition between DMDA and DMPLEX

2019-03-28 Thread Mark Adams via petsc-users
> > > > That seems like a bad tradeoff. You avoid one communication during > injection for at least that much or more during > FE assembly on that cell partition? > > I am just guessing about the purpose as a way of describing what they are asking for. > Matt > > >> >>> Thanks, >>> >>>

Re: [petsc-users] Consistent domain decomposition between DMDA and DMPLEX

2019-03-27 Thread Mark Adams via petsc-users
On Wed, Mar 27, 2019 at 7:27 PM Matthew Knepley wrote: > On Fri, Mar 22, 2019 at 1:41 PM Swarnava Ghosh > wrote: > >> Hi Mark and Matt, >> >> Thank you for your responses. >> "They may have elements on the unstructured mesh that intersect with any >> number of processor domains on the

Re: [petsc-users] [Ext] Re: error: identifier "MatCreateMPIAIJMKL" is undefined in 3.10.4

2019-03-26 Thread Mark Adams via petsc-users
On Tue, Mar 26, 2019 at 3:00 PM Kun Jiao wrote: > Strange things, when I compile my code in the test dir in PETSC, it works. > After I "make install" PETSC, and try to compile my code against the > installed PETSC, it doesn't work any more. > I'm not sure I follow what you are doing exactly but

Re: [petsc-users] [Ext] Re: error: identifier "MatCreateMPIAIJMKL" is undefined in 3.10.4

2019-03-26 Thread Mark Adams via petsc-users
So this works with v3.8? I don't see any differences (I see Satish figured this out and has suggestions). You could also work around it with code like this: ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); ierr = MatSetType(A,MATAIJMKL);CHKERRQ(ierr); ierr =
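The inline workaround above is cut off by the archive preview; a completed sketch of the same idea (the size arguments and setup call are assumptions) is:

```c
#include <petscmat.h>

PetscErrorCode create_aijmkl(MPI_Comm comm, PetscInt mlocal, PetscInt nlocal, Mat *A)
{
  PetscErrorCode ierr;

  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, mlocal, nlocal, PETSC_DECIDE, PETSC_DECIDE);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATAIJMKL);CHKERRQ(ierr);   /* resolves to SeqAIJMKL or MPIAIJMKL from the communicator */
  ierr = MatSetUp(*A);CHKERRQ(ierr);
  return 0;
}
```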

Re: [petsc-users] [Ext] Re: error: identifier "MatCreateMPIAIJMKL" is undefined in 3.10.4

2019-03-26 Thread Mark Adams via petsc-users
I assume the whole error message will have the line of code. Please send the whole error message and line of offending code if not included. On Tue, Mar 26, 2019 at 10:08 AM Kun Jiao wrote: > It is compiling error, error message is: > > > > error: identifier "MatCreateMPIAIJMKL" is undefined. >

Re: [petsc-users] error: identifier "MatCreateMPIAIJMKL" is undefined in 3.10.4

2019-03-26 Thread Mark Adams via petsc-users
Please send the output of the error (runtime, compile time, link time?) On Mon, Mar 25, 2019 at 10:50 PM Kun Jiao via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hi Petsc Experts, > > > > Is MatCreateMPIAIJMKL retired in 3.10.4? > > > > I got this error with my code which works fine in 3.8.3

Re: [petsc-users] [petsc-maint] PETSc bugs

2019-03-24 Thread Mark Adams via petsc-users
On Sat, Mar 23, 2019 at 10:04 PM Dian Han wrote: > Thanks, Matt. Is there any PETSc pre-conditioner able to handle ‘0’ > diagonal case? > (Matt told you to use svd.) Fast solvers exploit structure in your problem. If your problems are not arbitrary but have a source in some application then

Re: [petsc-users] About Configuring PETSc

2019-03-23 Thread Mark Adams via petsc-users
On Sat, Mar 23, 2019 at 2:51 PM Maahi Talukder wrote: > Thank you for your reply. > > In my PETSC directory, there is only one "arch..." directory called > arch-linux2-c-debug. And with that one, I can only run my code in debugging > mode. But I want to run them in non-debugging mode, so I was

Re: [petsc-users] About Configuring PETSc

2019-03-23 Thread Mark Adams via petsc-users
On Sat, Mar 23, 2019 at 2:13 PM Maahi Talukder via petsc-users < petsc-users@mcs.anl.gov> wrote: > I think I didn't build with PETSC-ARCH=arch-opt. > So just to make sure, now I just run the command - ./configure PETSC_ARCH > = arch-opt - and it will create the missing directory and I can switch

Re: [petsc-users] Check if a matrix has been created

2019-03-22 Thread Mark Adams via petsc-users
On Fri, Mar 22, 2019 at 5:17 PM Shashwat Sharma via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hello, > > I'd like to be able to check if a Mat object has been created and > allocated; if it hasn't been created yet, I want to create and allocate it, > otherwise I want to reuse the existing
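One common pattern for the question quoted above, sketched under the assumption that the caller initializes the Mat handle to NULL before first use (this is an illustration, not necessarily what was recommended later in the thread):

```c
#include <petscmat.h>

PetscErrorCode get_or_create_matrix(MPI_Comm comm, PetscInt nlocal, Mat *A)
{
  PetscErrorCode ierr;

  if (!*A) {                                   /* never created: create and set up */
    ierr = MatCreate(comm, A);CHKERRQ(ierr);
    ierr = MatSetSizes(*A, nlocal, nlocal, PETSC_DECIDE, PETSC_DECIDE);CHKERRQ(ierr);
    ierr = MatSetFromOptions(*A);CHKERRQ(ierr);
    ierr = MatSetUp(*A);CHKERRQ(ierr);
  } else {                                     /* reuse: clear values, keep the nonzero pattern */
    ierr = MatZeroEntries(*A);CHKERRQ(ierr);
  }
  return 0;
}
```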

Re: [petsc-users] Consistent domain decomposition between DMDA and DMPLEX

2019-03-22 Thread Mark Adams via petsc-users
Sorry, you don't clone a setType method, you clone a create method such as: src/dm/impls/plex/plexpartition.c:PETSC_EXTERN PetscErrorCode PetscPartitionerCreate_Simple(PetscPartitioner part) code like: PetscPartitionerSetType

Re: [petsc-users] Problems about GMRES restart and Scaling

2019-03-21 Thread Mark Adams via petsc-users
On Wed, Mar 20, 2019 at 1:18 PM Smith, Barry F. via petsc-users < petsc-users@mcs.anl.gov> wrote: > > > > On Mar 20, 2019, at 5:52 AM, Yingjie Wu via petsc-users < > petsc-users@mcs.anl.gov> wrote: > > > > Dear PETSc developers: > > Hi, > > Recently, I used PETSc to solve a non-linear PDEs for

Re: [petsc-users] Problems about GMRES restart and Scaling

2019-03-20 Thread Mark Adams via petsc-users
On Wed, Mar 20, 2019 at 8:30 AM Yingjie Wu via petsc-users < petsc-users@mcs.anl.gov> wrote: > Thank you very much for your reply. > I think my statement may not be very clear. I want to know why the linear > residual increases at gmres restart. > GMRES combines the functions in the Krylov

Re: [petsc-users] PCFieldSplit gives different results for direct and iterative solver

2019-03-19 Thread Mark Adams via petsc-users
> > > > -fieldsplit_velocity_ksp_type preonly -fieldsplit_velocity_pc_type gamg > -fieldsplit_pressure_ksp_type minres -fieldsplit_pressure_pc_type none > You should use cg for the ksp_type with gamg if you are symmetric and gmres if not (you can try cg even if it is mildly asymmetric).

Re: [petsc-users] GAMG parallel convergence sensitivity

2019-03-13 Thread Mark Adams via petsc-users
> > > > Any thoughts here? Is there anything obviously wrong with my setup? > Fast and robust solvers for NS require specialized methods that are not provided in PETSc and the methods tend to require tighter integration with the meshing and discretization than the algebraic interface supports. I

Re: [petsc-users] Preconditioner in multigrid solver

2019-03-11 Thread Mark Adams via petsc-users
You are giving all levels the same matrices (K & M). This code should not work. You are using LU as the smoother. This will solve the problem immediately. If MG is set up correctly then you will just have zero residuals and corrections for the rest of the solve. And you set the relative tolerance

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-11 Thread Mark Adams via petsc-users
Is there a difference in memory usage on your tiny problem? I assume no. I don't see anything that could come from GAMG other than the RAP stuff that you have discussed already. On Mon, Mar 11, 2019 at 9:32 AM Myriam Peyrounette < myriam.peyroune...@idris.fr> wrote: > The code I am using here

Re: [petsc-users] MatCreate performance

2019-03-11 Thread Mark Adams via petsc-users
The PETSc logs print the max time and the ratio max/min. On Mon, Mar 11, 2019 at 8:24 AM Ale Foggia via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hello all, > > Thanks for your answers. > > 1) I'm working with a matrix with a linear size of 2**34, but it's a > sparse matrix, and the number

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-11 Thread Mark Adams via petsc-users
In looking at this larger scale run ... * Your eigen estimates are much lower than your tiny test problem. But this is Stokes apparently and it should not work anyway. Maybe you have a small time step that adds a lot of mass that brings the eigen estimates down. And your min eigenvalue (not

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-11 Thread Mark Adams via petsc-users
GAMG looks fine here but the convergence rate looks terrible, like 4k+ iterations. You have 4 degrees of freedom per vertex. What equations and discretization are you using? Your eigen estimates are a little high, but not crazy. I assume this system is not symmetric. AMG is oriented toward the

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-08 Thread Mark Adams via petsc-users
Just seeing this now. It is hard to imagine how bad GAMG could be on a coarse grid, but you can run with -info and grep on GAMG and send that. You will see a listing of levels, the number of equations, and the number of non-zeros (nnz). You can send that and I can get some sense of whether GAMG is going nuts. Mark

Re: [petsc-users] MatCreate performance

2019-03-08 Thread Mark Adams via petsc-users
MatCreate is collective so you want to check that it is not seeing load imbalance from earlier code. And duplicating communicators can be expensive on some systems. On Fri, Mar 8, 2019 at 10:21 AM Ale Foggia via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hello all, > > I have a problem

Re: [petsc-users] PETSC matrix assembling super slow

2019-02-05 Thread Mark Adams via petsc-users
On Mon, Feb 4, 2019 at 4:17 PM Yaxiong Chen wrote: > Hi Mark, > > Will the parameter MatMPIAIJSetPreallocation in influence the > following part > do i=mystart,nelem,nproc > call ptSystem%getElementalMAT(i, Ae, auxRHSe, idx) > ne=size(idx) >

Re: [petsc-users] Preconditioning systems of equations with complex numbers

2019-02-05 Thread Mark Adams via petsc-users
I would stay away from eigen estimates in the solver (but give us the spectra to look at), so set -pc_gamg_agg_nsmooths 0 and use sor. Applications that have lived on direct solvers can add all sorts of crap like penalty terms. sor seemed to work OK so I'd check the coarse grids in GAMG. Test with

Re: [petsc-users] Queries related to snes options

2019-02-04 Thread Mark Adams via petsc-users
On Mon, Feb 4, 2019 at 6:05 AM Aman Saxena via petsc-users < petsc-users@mcs.anl.gov> wrote: > I am trying to use PETSc's TS to implement implicit time-stepping, for > 1d-Euler Equation discretized using 5th order WENO schemes. I have > following queries related to using some of the snes

Re: [petsc-users] Preconditioning systems of equations with complex numbers

2019-02-02 Thread Mark Adams via petsc-users
overrode the >> coarse grid LU solver that is set by default in GAMG. >> > > Not really - there was an explicit option provided specifying the coarse > pc to use SOR. > > > >> >>> 2) do more Richardson iterations on each level (you have the default of >>> 2

Re: [petsc-users] Preconditioning systems of equations with complex numbers

2019-02-01 Thread Mark Adams via petsc-users
; 2) do more Richardson iterations on each level (you have the default of > 2). Try 4 or 6 > > Thanks, > Dave > > On Fri, 1 Feb 2019 at 22:18, Mark Adams via petsc-users < > petsc-users@mcs.anl.gov> wrote: > >> We do need the equations and discretization, this no

Re: [petsc-users] Preconditioning systems of equations with complex numbers

2019-02-01 Thread Mark Adams via petsc-users
We do need the equations and discretization; this is not a Laplacian or something if it is not working out-of-the-box. For pc_gamg_agg_nsmooths 0 you want -pc_gamg_square_graph 10 The info output shows the size of the systems on each level and the number of non-zeros. With pc_gamg_agg_nsmooths 0 in 2D it

Re: [petsc-users] Preconditioning systems of equations with complex numbers

2019-02-01 Thread Mark Adams via petsc-users
> > Both GAMG and ILU are nice and dandy for this, > I would test Richardson/SOR and Chebyshev/Jacobi on the tiny system and converge it way down, say rtol = 1.e-12. See which one is better in the early iteration and pick it. It would be nice to check that it solves the problem ... The residual

Re: [petsc-users] PETSC matrix assembling super slow

2019-01-29 Thread Mark Adams via petsc-users
Optimized is a configuration flag, not a version. You need to figure out your number of non-zeros per row of your global matrix, or a bound on it, and supply that in MatMPIAIJSetPreallocation. Otherwise it has to allocate and copy memory often. You could increase your f9 on a serial run and see

Re: [petsc-users] PETSc GPU

2019-01-29 Thread Mark Adams via petsc-users
https://www.mcs.anl.gov/petsc/features/gpus.html On Tue, Jan 29, 2019 at 2:38 PM Najeeb Ahmad via petsc-users < petsc-users@mcs.anl.gov> wrote: > Hi all, > > I am interested in knowing the state of the art in PETSc GPU > implementation. In the paper "Preliminary Implementation of PETSc Using >

Re: [petsc-users] PETSC matrix assembling super slow

2019-01-29 Thread Mark Adams via petsc-users
Slow assembly is often from not preallocating correctly. I am guessing that you are using a Q1 element and f9==9, and thus the preallocation should be OK if this is a scalar problem on a regular grid, and f6==6 should be OK for the off processor allocation, if my assumptions are correct. You can

Re: [petsc-users] ASM Interface (Additive Schwarz Method)

2019-01-18 Thread Mark Adams via petsc-users
> > > > This works perfectly as long as I use PC_ASM_BASIC for the PCASMType. If I > switch to PC_ASM_RESTRICT, my GMRES algorithm, does not converge anymore. > Why is this? > PC_ASM_RESTRICT specifies the use of a different algorithm. It is a cheaper algorithm in terms of work (communication)
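For reference, the two variants discussed above can be selected in code as well as with -pc_asm_type; a minimal sketch, with the overlap value as a placeholder:

```c
#include <petscksp.h>

PetscErrorCode use_basic_asm(KSP ksp)
{
  PetscErrorCode ierr;
  PC             pc;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCASM);CHKERRQ(ierr);
  ierr = PCASMSetType(pc, PC_ASM_BASIC);CHKERRQ(ierr);   /* PC_ASM_RESTRICT is the cheaper variant */
  ierr = PCASMSetOverlap(pc, 1);CHKERRQ(ierr);
  return 0;
}
```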
