Re: [petsc-users] Possible to recover ILU(k) from hypre/pilut?

2017-11-15 Thread Smith, Barry F.
> On Nov 15, 2017, at 9:57 PM, Mark Lohry wrote:
> What are the limitations of ILU in parallel you're referring to? Does Schwarz+local ILU typically fare better?

If ILU works scalably in parallel, that is great. Most of the PETSc team has an explicit bias…
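For reference, the Schwarz+local-ILU combination being discussed is a standard PETSc configuration. A minimal sketch, assuming a KSP already attached to an assembled parallel matrix (the overlap and fill levels here are illustrative, not taken from the thread):

    /* Sketch: one-level additive Schwarz with ILU(1) subdomain solves. */
    #include <petscksp.h>

    PetscErrorCode SetupASMILU(KSP ksp)
    {
      PC             pc;
      PetscErrorCode ierr;

      ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
      ierr = PCSetType(pc,PCASM);CHKERRQ(ierr);     /* additive Schwarz */
      ierr = PCASMSetOverlap(pc,1);CHKERRQ(ierr);   /* overlap 1 between subdomains */
      /* subdomain solvers are configured through the options database */
      ierr = PetscOptionsInsertString(NULL,"-sub_pc_type ilu -sub_pc_factor_levels 1");CHKERRQ(ierr);
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
      return 0;
    }

The same setup from the command line alone would be -pc_type asm -pc_asm_overlap 1 -sub_pc_type ilu -sub_pc_factor_levels 1.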

Re: [petsc-users] Possible to recover ILU(k) from hypre/pilut?

2017-11-15 Thread Mark Lohry
What are the limitations of ILU in parallel you're referring to? Does Schwarz+local ILU typically fare better?

On Nov 15, 2017 10:50 PM, "Smith, Barry F." wrote:
> On Nov 15, 2017, at 9:40 PM, Jed Brown wrote:
> "Smith, Barry F."…

Re: [petsc-users] Possible to recover ILU(k) from hypre/pilut?

2017-11-15 Thread Smith, Barry F.
> On Nov 15, 2017, at 9:40 PM, Jed Brown wrote:
> "Smith, Barry F." writes:
>>> On Nov 15, 2017, at 6:38 AM, Mark Lohry wrote:
>>> I've found ILU(0) or (1) to be working well for my problem, but the petsc implementation…

Re: [petsc-users] Possible to recover ILU(k) from hypre/pilut?

2017-11-15 Thread Jed Brown
"Smith, Barry F." writes: >> On Nov 15, 2017, at 6:38 AM, Mark Lohry wrote: >> >> I've found ILU(0) or (1) to be working well for my problem, but the petsc >> implementation is serial only. Running with -pc_type hypre -pc_hypre_type >> pilut with default

Re: [petsc-users] ISGlobalToLocalMappingApplyBlock

2017-11-15 Thread Adrian Croucher
I've debugged into the ISGlobalToLocalMappingApplyBlock() function and it seems to me the bounds checking in there is not correct when the blocksize is > 1. It checks against the same bounds, scaled up by the blocksize, in both the block and non-block versions of the function. I think for the

Re: [petsc-users] superlu_dist produces random results

2017-11-15 Thread Kong, Fande
Thanks, Barry.

On Wed, Nov 15, 2017 at 4:04 PM, Smith, Barry F. wrote:
> Do the ASM runs for thousands of time-steps produce the same final "physical results" as the MUMPS run for thousands of timesteps? While with SuperLU you get a very different "physical…

Re: [petsc-users] superlu_dist produces random results

2017-11-15 Thread Smith, Barry F.
Do the ASM runs for thousands of time-steps produce the same final "physical results" as the MUMPS run for thousands of timesteps? While with SuperLU you get a very different "physical results"?

Barry

> On Nov 15, 2017, at 4:52 PM, Kong, Fande wrote:
> On…

Re: [petsc-users] superlu_dist produces random results

2017-11-15 Thread Kong, Fande
On Wed, Nov 15, 2017 at 3:35 PM, Smith, Barry F. wrote:
> Since the convergence labeled linear does not converge to 14 digits in one iteration I am assuming you are using lagged preconditioning and/or lagged Jacobian?

We are using Jacobian-free Newton. So Jacobian…
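For context, Jacobian-free Newton with lagging is typically driven by runtime options. A minimal sketch, assuming an existing SNES with a residual function and an assembled preconditioning matrix (the lag values are illustrative, not from the thread):

    /* Sketch: matrix-free Jacobian action with a lagged, assembled
       preconditioning matrix. */
    ierr = PetscOptionsInsertString(NULL,
             "-snes_mf_operator "           /* J applied matrix-free, P assembled  */
             "-snes_lag_preconditioner 2 "  /* rebuild the PC every 2 Newton steps */
             "-snes_lag_jacobian 2");CHKERRQ(ierr);
    ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);

Barry's no-lagging experiment would then be -snes_lag_preconditioner 1 -snes_lag_jacobian 1 together with -pc_type lu.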

Re: [petsc-users] ISGlobalToLocalMappingApplyBlock

2017-11-15 Thread Adrian Croucher
I actually attached the wrong test program last time - I've attached the right one here, which is much simpler. It tests global indices 0, 1, ... 9. If I run on 2 processes, the local indices it returns are:

rank 0: 0, 1, 2, 3, 4, 0, 0, 0, -253701943, 0
rank 1: -1, -1, -1, -1, -1, -1, -1, -1,…
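The attached program itself is not preserved in the archive. As a rough stand-in, a test of the same shape might look like the following; the blocksize of 2 and the contiguous block layout are guesses, not taken from the attachment:

    /* Hedged sketch: 2 ranks, 5 contiguous blocks per rank, blocksize 2;
       map global block indices 0..9 back to local block indices. */
    #include <petscis.h>

    int main(int argc,char **argv)
    {
      ISLocalToGlobalMapping map;
      PetscInt               blocks[5],gidx[10],lidx[10],nout,i;
      PetscMPIInt            rank;
      PetscErrorCode         ierr;

      ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
      ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
      for (i = 0; i < 5; i++) blocks[i] = 5*rank + i;   /* owned block indices */
      ierr = ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD,2,5,blocks,PETSC_COPY_VALUES,&map);CHKERRQ(ierr);
      for (i = 0; i < 10; i++) gidx[i] = i;             /* query global blocks 0..9 */
      /* with IS_GTOLM_MASK, off-process blocks should come back as -1 */
      ierr = ISGlobalToLocalMappingApplyBlock(map,IS_GTOLM_MASK,10,gidx,&nout,lidx);CHKERRQ(ierr);
      for (i = 0; i < 10; i++) {
        ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD,"rank %d: global block %D -> local block %D\n",rank,gidx[i],lidx[i]);CHKERRQ(ierr);
      }
      ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD,PETSC_STDOUT);CHKERRQ(ierr);
      ierr = ISLocalToGlobalMappingDestroy(&map);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }

The expected output would be local blocks 0..4 on the owning rank and -1 elsewhere; garbage values like the -253701943 above are what prompted the bounds-checking question.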

Re: [petsc-users] IDR available in PETSC?

2017-11-15 Thread Jed Brown
There isn't an IDR implementation in PETSc, but there is BCGSL, which usually performs similarly. Contributions welcome.

Evan Um writes:
> Dear PETSC users,
> I was wondering if anyone already tried/developed an induced dimension reduction (IDR) solver for PETSC? I think that it is a…
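For reference, switching to BiCGStab(L) is a one-line change. A minimal sketch, assuming an existing KSP (the ell value of 2 is illustrative):

    /* Sketch: BiCGStab(L) as an IDR-like alternative. */
    ierr = KSPSetType(ksp,KSPBCGSL);CHKERRQ(ierr);
    ierr = KSPBCGSLSetEll(ksp,2);CHKERRQ(ierr);  /* or: -ksp_type bcgsl -ksp_bcgsl_ell 2 */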

Re: [petsc-users] superlu_dist produces random results

2017-11-15 Thread Smith, Barry F.
Since the convergence labeled linear does not converge to 14 digits in one iteration I am assuming you are using lagged preconditioning and/or lagged Jacobian? What happens if you do no lagging and solve each linear solve with a new LU factorization?

Barry

> On Nov 15, 2017, at…

Re: [petsc-users] superlu_dist produces random results

2017-11-15 Thread Kong, Fande
On Wed, Nov 15, 2017 at 2:52 PM, Smith, Barry F. wrote:
>> On Nov 15, 2017, at 3:36 PM, Kong, Fande wrote:
>> Hi Barry,
>> Thanks for your reply. I was wondering why this happens only when we use superlu_dist. I am trying to understand…

Re: [petsc-users] superlu_dist produces random results

2017-11-15 Thread Mark Adams
To be clear: these differences completely go away with MUMPS? Can you valgrind this? We have seen some valgrind warnings from MUMPS in BLAS routines. It could be that your BLAS is buggy (and SuperLU uses some BLAS routines that MUMPS does not). I think SuperLU does more/different pivoting than…
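For readers wanting to follow the valgrind suggestion: the usual PETSc recipe for running valgrind under MPI is along the lines of mpiexec -n 2 valgrind --tool=memcheck -q --num-callers=20 --log-file=valgrind.log.%p ./app -malloc off (flags per the PETSc FAQ; adjust for your MPI and application).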

Re: [petsc-users] ISGlobalToLocalMappingApplyBlock

2017-11-15 Thread Adrian Croucher
hi Dave,

On 15/11/17 21:34, Dave May wrote:
>> Or am I wrong to expect this to give the same results regardless of blocksize?
> Yep.

Maybe I am not using this function correctly then. The man page says it "Provides the local block numbering for a list of integers specified with a…

Re: [petsc-users] superlu_dist produces random results

2017-11-15 Thread Matthew Knepley
On Wed, Nov 15, 2017 at 4:36 PM, Kong, Fande wrote:
> Hi Barry,
> Thanks for your reply. I was wondering why this happens only when we use superlu_dist. I am trying to understand the algorithm in superlu_dist. If we use ASM or MUMPS, we do not produce these differences.

Re: [petsc-users] superlu_dist produces random results

2017-11-15 Thread Smith, Barry F.
> On Nov 15, 2017, at 3:36 PM, Kong, Fande wrote:
> Hi Barry,
> Thanks for your reply. I was wondering why this happens only when we use superlu_dist. I am trying to understand the algorithm in superlu_dist. If we use ASM or MUMPS, we do not produce these…

Re: [petsc-users] superlu_dist produces random results

2017-11-15 Thread Kong, Fande
Hi Barry,

Thanks for your reply. I was wondering why this happens only when we use superlu_dist. I am trying to understand the algorithm in superlu_dist. If we use ASM or MUMPS, we do not produce these differences. The differences actually are NOT meaningless. In fact, we have a real transient…

Re: [petsc-users] superlu_dist produces random results

2017-11-15 Thread Smith, Barry F.
Meaningless differences.

> On Nov 15, 2017, at 2:26 PM, Kong, Fande wrote:
> Hi,
> There is a heat conduction problem. When superlu_dist is used as a preconditioner, we have random results from different runs. Is there a random algorithm in superlu_dist? If we…

[petsc-users] superlu_dist produces random results

2017-11-15 Thread Kong, Fande
Hi,

There is a heat conduction problem. When superlu_dist is used as a preconditioner, we get different results from run to run. Is there a random algorithm in superlu_dist? If we use ASM or MUMPS as the preconditioner, we don't have this issue.

run 1: 0 Nonlinear |R| = 9.447423e+03…
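For anyone reproducing the comparison: in the PETSc of this era the package behind a direct-solve preconditioner is selected with -pc_factor_mat_solver_package (renamed to -pc_factor_mat_solver_type in later releases). A minimal sketch, with the choice of package being the only thing varied between runs:

    /* Sketch: LU preconditioner backed by SuperLU_DIST; swap the package
       to MUMPS for the comparison run. Assumes `pc` is the KSP's PC. */
    ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);
    ierr = PCFactorSetMatSolverPackage(pc,MATSOLVERSUPERLU_DIST);CHKERRQ(ierr);
    /* command line: -pc_type lu -pc_factor_mat_solver_package superlu_dist
       vs:           -pc_type lu -pc_factor_mat_solver_package mumps       */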

[petsc-users] IDR available in PETSC?

2017-11-15 Thread Evan Um
Dear PETSC users,

I was wondering if anyone has already tried/developed an induced dimension reduction (IDR) solver for PETSC? I think it would be a useful one, but I couldn't find an example of it with PETSC. If you have any idea about IDR routines for PETSC, please let me know. Thanks!

Best,
Evan

Re: [petsc-users] Possible to recover ILU(k) from hypre/pilut?

2017-11-15 Thread Mark Lohry
>> Partially unrelated: PC block-jacobi fails with "MFFD type not supported", but additive Schwarz with 0 overlap, which I think is identical, works fine. Is this a bug?
>
> Huh, is this related to hypre, or plain PETSc? Please send all information, command line options etc. that…

[petsc-users] Possible to recover ILU(k) from hypre/pilut?

2017-11-15 Thread Mark Lohry
I've found ILU(0) or ILU(1) to work well for my problem, but the PETSc implementation is serial only. Running with -pc_type hypre -pc_hypre_type pilut with default settings gives considerably worse convergence. I've tried using -pc_hypre_pilut_factorrowsize (number of actual elements in row) to…
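Not stated in this message, but relevant to the subject line: PETSc's hypre interface also exposes Euclid, hypre's parallel ILU(k), in builds where it is enabled (support has varied across releases, so treat this as a pointer to check rather than a guarantee):

    /* Sketch: hypre's Euclid as a parallel ILU(k); the level is illustrative. */
    ierr = PCSetType(pc,PCHYPRE);CHKERRQ(ierr);
    ierr = PCHYPRESetType(pc,"euclid");CHKERRQ(ierr);
    /* command line: -pc_type hypre -pc_hypre_type euclid -pc_hypre_euclid_level 1 */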

Re: [petsc-users] indices into Vec/Mat associated to a DMPlex

2017-11-15 Thread Matthew Knepley
On Wed, Nov 15, 2017 at 6:09 AM, Matteo Semplice wrote:
> On 15/11/2017 11:39, Matthew Knepley wrote:
>> On Wed, Nov 15, 2017 at 3:11 AM, Matteo Semplice wrote:
>>> Hi.
>>> I am struggling with indices into matrices associated to a…

Re: [petsc-users] indices into Vec/Mat associated to a DMPlex

2017-11-15 Thread Matteo Semplice
On 15/11/2017 11:39, Matthew Knepley wrote:
> On Wed, Nov 15, 2017 at 3:11 AM, Matteo Semplice wrote:
>> Hi.
>> I am struggling with indices into matrices associated to a DMPlex mesh. I can explain my problem with the following…

Re: [petsc-users] indices into Vec/Mat associated to a DMPlex

2017-11-15 Thread Matthew Knepley
On Wed, Nov 15, 2017 at 3:11 AM, Matteo Semplice wrote:
> Hi.
> I am struggling with indices into matrices associated to a DMPlex mesh. I can explain my problem with the following minimal example.
> Let's say I want to assemble the matrix to solve an equation (say…

Re: [petsc-users] ISGlobalToLocalMappingApplyBlock

2017-11-15 Thread Dave May
On Wed, 15 Nov 2017 at 05:55, Adrian Croucher wrote:
> hi
> I'm trying to use ISGlobalToLocalMappingApplyBlock() and am a bit puzzled about the results it's giving.
> I've attached a small test to illustrate. It just sets up a local-to-global mapping with 10…

[petsc-users] indices into Vec/Mat associated to a DMPlex

2017-11-15 Thread Matteo Semplice
Hi.

I am struggling with indices into matrices associated to a DMPlex mesh. I can explain my problem with the following minimal example. Let's say I want to assemble the matrix to solve an equation (say Laplace) with data attached to cells and the finite volume method. In principle I:

- loop…
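The message is cut off here, but for the setup described (one unknown per cell) the usual DMPlex pattern is to attach a PetscSection and let DMCreateMatrix produce a matrix whose local row/column indices are the section offsets. A sketch under that assumption, using era-appropriate call names (DMSetDefaultSection became DMSetLocalSection in later releases):

    /* Sketch: one dof per cell for a cell-centered FV discretization.
       Assumes `dm` is a distributed DMPlex. */
    PetscSection   sec;
    Mat            A;
    PetscInt       cStart,cEnd,c,row;
    PetscScalar    v = 1.0;   /* placeholder matrix entry */
    PetscErrorCode ierr;

    ierr = DMPlexGetHeightStratum(dm,0,&cStart,&cEnd);CHKERRQ(ierr);  /* cells */
    ierr = PetscSectionCreate(PetscObjectComm((PetscObject)dm),&sec);CHKERRQ(ierr);
    ierr = PetscSectionSetChart(sec,cStart,cEnd);CHKERRQ(ierr);
    for (c = cStart; c < cEnd; c++) {
      ierr = PetscSectionSetDof(sec,c,1);CHKERRQ(ierr);               /* 1 unknown per cell */
    }
    ierr = PetscSectionSetUp(sec);CHKERRQ(ierr);
    ierr = DMSetDefaultSection(dm,sec);CHKERRQ(ierr);
    ierr = DMCreateMatrix(dm,&A);CHKERRQ(ierr);        /* preallocated, carries l2g map */
    for (c = cStart; c < cEnd; c++) {
      ierr = PetscSectionGetOffset(sec,c,&row);CHKERRQ(ierr);
      ierr = MatSetValuesLocal(A,1,&row,1,&row,&v,ADD_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);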