[petsc-users] Bug in pc_right + nonzero initial guess + {tfqmr, tcqmr, cgs}

2012-03-11 Thread Tobin Isaac
Hi, The changes made to bcgs in the commit below should be applied to tfqmr, tcqmr, and cgs as well. Cheers, Toby. [author: Barry Smith, bsmith at mcs.anl.gov; Wed, 19 Jan 2011 20:10:29 -0600; changeset 18159:a526acdfdc95; parent 18153:3c53022524fb; child 18160:4db6569c26af]

[petsc-users] SEGV using asm + icc + empty processors

2012-09-04 Thread Tobin Isaac
I've set up PCMG using PCML with repartitioning, which gives some processors empty partitions on all but the finest level. As smoothers I want to use block incomplete factorizations with one block per processor. My command line looks like this: -info -pc_ml_PrintLevel 10 -pc_ml_maxCoarseSize

[petsc-users] sor vs. asm + sor

2012-09-05 Thread Tobin Isaac
What's the difference between -pc_type sor -pc_sor_local_symmetric and -pc_type asm -sub_pc_type sor -sub_pc_sor_local_symmetric? Specifically, this converges in 30 iterations: ./ex49 -mx 100 -my 100 -elas_ksp_view -elas_ksp_monitor -elas_ksp_type cg -elas_pc_type gamg -elas_pc_gamg_verbose 10

[petsc-users] sor vs. asm + sor

2012-09-05 Thread Tobin Isaac
What's the difference between -pc_type sor -pc_sor_local_symmetric and -pc_type asm -sub_pc_type sor -sub_pc_sor_local_symmetric? The ASM version sticks a Krylov iteration in these by default. Matt In this case the inner ksp is preonly so I think they ought to be the same,
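The question above turns on what one local symmetric SOR application actually does. As an illustrative sketch only (plain numpy, not the PETSc implementation behind -pc_sor_local_symmetric), a symmetric SOR application is a forward relaxation sweep followed by a backward sweep; the function name and signature here are hypothetical:

```python
import numpy as np

def ssor_apply(A, b, omega=1.0, sweeps=1):
    """Approximate the solution of A x = b by symmetric SOR, starting from x = 0.

    One "symmetric" application = forward sweep over rows 0..n-1,
    then backward sweep over rows n-1..0, with relaxation factor omega.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):            # forward sweep
            r = b[i] - A[i] @ x + A[i, i] * x[i]   # b_i - sum_{j != i} A_ij x_j
            x[i] = (1 - omega) * x[i] + omega * r / A[i, i]
        for i in reversed(range(n)):  # backward sweep
            r = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * r / A[i, i]
    return x
```

With enough sweeps on a diagonally dominant matrix this converges to the exact solution; as a preconditioner, only one sweep is applied per Krylov iteration.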

[petsc-users] ML options

2013-01-10 Thread Tobin Isaac
On Mon, Jan 07, 2013 at 09:36:05AM -0600, Jed Brown wrote: On Mon, Jan 7, 2013 at 9:09 AM, Mark F. Adams mark.adams at columbia.eduwrote: ex56 is a simple 3D elasticity problem. There is a runex56 target that uses GAMG and a runex56_ml. These have generic parameters and ML and GAMG

[petsc-users] ML options

2013-01-10 Thread Tobin Isaac
On Tue, Jan 08, 2013 at 08:07:36AM -0600, Jed Brown wrote: On Mon, Jan 7, 2013 at 4:23 PM, Mark F. Adams mark.adams at columbia.eduwrote: '-pc_ml_reuse_interpolation true' does seem to get ML to reuse some mesh setup. The setup time goes from .3 to .1 sec on one of my tests from the

[petsc-users] make dist + hg

2013-01-15 Thread Tobin Isaac
I'm sorry if this is more of a mercurial question, but here goes: I'm keeping track of petsc-dev with mercurial. I'm trying to make a tarball of my source, because I have some local tweaks that I'd like to keep when I build on a new system. Two issues: 1) make dist doesn't seem to run

[petsc-users] [Libmesh-users] Preconditioning of a Stokes problem

2013-02-04 Thread Tobin Isaac
On Mon, Feb 04, 2013 at 12:30:39PM -0600, Dmitry Karpeev wrote: Francesco, Are you using any pressure stabilization scheme? If not, the 11-block in your Jacobian A = [A00 A01; A10 A11] would typically be zero, and preconditioning it with jacobi wouldn't really work. If A11 = 0, you ought
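Dmitry's point above is that with A11 = 0 there is no diagonal for Jacobi to invert, which is why Schur-complement-based preconditioning is the usual approach for unstabilized Stokes. A tiny numpy illustration (not the libMesh/PETSc code from the thread; the matrices are made up for demonstration):

```python
import numpy as np

# Saddle-point Jacobian A = [A00 A01; A10 0]: the (1,1) block is zero,
# so point Jacobi on it has nothing to invert.
A00 = np.array([[4.0, 1.0], [1.0, 3.0]])   # velocity block (SPD here)
A01 = np.array([[1.0], [2.0]])             # discrete gradient
A10 = A01.T                                # discrete divergence

# The Schur complement S = -A10 * A00^{-1} * A01 is what one
# preconditions the pressure block with (or an approximation of it).
S = -A10 @ np.linalg.solve(A00, A01)

# S is nonsingular even though the (1,1) block of A is zero:
assert abs(S[0, 0]) > 0
```

In practice S is never formed exactly; it is approximated, e.g. by a pressure mass matrix for Stokes.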

[petsc-users] snes fieldsplit for kkt

2013-03-24 Thread Tobin Isaac
If I'm solving a PDE-constrained optimization problem, I'd like to be able to specify the equations and then choose at runtime between a full-space approach and a reduced-space approach. Is this possible with petsc? Thanks, Toby

[petsc-users] snes fieldsplit for kkt

2013-03-25 Thread Tobin Isaac
On Sun, Mar 24, 2013 at 11:03:20PM -0500, Jed Brown wrote: Tobin Isaac tisaac at ices.utexas.edu writes: If I'm solving a PDE-constrained optimization problem, I'd like to be able to specify the equations and then choose at runtime between a full-space approach and a reduced-space
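The full-space/reduced-space distinction asked about above can be shown on a toy equality-constrained problem. This is a hand-rolled numpy sketch, not PETSc/TAO code: the full-space approach solves the coupled KKT system for state, design, and multiplier at once, while the reduced-space approach eliminates the state through the constraint and optimizes over the design alone.

```python
import numpy as np

# Toy problem: min 0.5*(x^2 + u^2)  subject to  x - u = 1
# (x = "state", u = "design").

# Full-space: one KKT system in (x, u, lam).
K = np.array([[1.0,  0.0,  1.0],   # dL/dx:  x + lam = 0
              [0.0,  1.0, -1.0],   # dL/du:  u - lam = 0
              [1.0, -1.0,  0.0]])  # constraint: x - u = 1
rhs = np.array([0.0, 0.0, 1.0])
x_full, u_full, lam = np.linalg.solve(K, rhs)

# Reduced-space: eliminate the state, x = 1 + u, and minimize over u:
# d/du [0.5*((1+u)^2 + u^2)] = (1 + u) + u = 0  =>  u = -1/2.
u_red = -0.5
x_red = 1.0 + u_red

# Both approaches reach the same optimum.
assert np.isclose(x_full, x_red) and np.isclose(u_full, u_red)
```

Fieldsplit is a natural fit for the full-space KKT system, since the state, design, and multiplier blocks map onto splits.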

[petsc-users] Jacobian construction, DA vs Plex

2014-07-07 Thread Tobin Isaac
Hi, If I have a pointwise Jacobian function f, I know that I can call DMGetDS() and pass f to PetscDSSetJacobian(), and that f will be used by PetscFEIntegrate() and thus by DMPlexSNESComputeJacobianFEM(). It looks like PetscFEIntegrate() is only used by plex and not da. Is there any way that I

Re: [petsc-users] Jacobian construction, DA vs Plex

2014-07-07 Thread Tobin Isaac
On Mon, Jul 07, 2014 at 05:43:48PM +0200, Matthew Knepley wrote: On Mon, Jul 7, 2014 at 5:36 PM, Tobin Isaac tis...@ices.utexas.edu wrote: Hi, If I have a pointwise Jacobian function f, I know that I can call DMGetDS() and pass f to PetscDSSetJacobian(), and that f will be used

Re: [petsc-users] Understanding matmult memory performance

2017-09-29 Thread Tobin Isaac
On Fri, Sep 29, 2017 at 12:19:54PM +0100, Lawrence Mitchell wrote: > Dear all, > > I'm attempting to understand some results I'm getting for matmult > performance. In particular, it looks like I'm obtaining timings that suggest > that I'm getting more main memory bandwidth than I think is

Re: [petsc-users] Understanding matmult memory performance

2017-09-29 Thread Tobin Isaac
On Fri, Sep 29, 2017 at 09:04:47AM -0400, Tobin Isaac wrote: > On Fri, Sep 29, 2017 at 12:19:54PM +0100, Lawrence Mitchell wrote: > > Dear all, > > > > I'm attempting to understand some results I'm getting for matmult > > performance. In particular, it looks
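Claims like "more bandwidth than the machine has" are usually checked against a streaming traffic model for the sparse matvec. As a rough sketch (my own back-of-envelope model, not the one Lawrence or PETSc's logging uses), one CSR matvec y = A*x must at minimum stream the values, column indices, and row pointers once, plus read x and write y:

```python
def csr_matmult_bytes(nrows, nnz, val_bytes=8, idx_bytes=4):
    """Rough lower bound on memory traffic (bytes) for one CSR y = A*x.

    Assumes double-precision values, 32-bit indices, each matrix entry
    streamed exactly once, perfect cache reuse of x, and y written once.
    """
    matrix = nnz * (val_bytes + idx_bytes) + (nrows + 1) * idx_bytes
    vectors = 2 * nrows * val_bytes   # read x once, write y once (ideal)
    return matrix + vectors
```

Dividing measured matvec time into this byte count gives an achieved-bandwidth estimate to compare against STREAM; exceeding STREAM suggests the model (e.g. the perfect-reuse assumption) or the timing is off.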

Re: [petsc-users] High-dimensional DMDA

2017-10-17 Thread Tobin Isaac
On Tue, Oct 17, 2017 at 03:40:10PM -0400, Mark Adams wrote: > Let me just add that we (me and Toby (p4est)) think of tensor grids for > kinetic problems. A (phase space) grid at every spatial grid point. THis > allows us to compose our existing 3D grids to get 6D, for instance. This > work
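The tensor composition Mark describes, a velocity grid attached to every spatial grid point, can be sketched in a few lines of numpy (an illustrative toy with made-up sizes, not the p4est machinery):

```python
import numpy as np

# Two small "3D" grids: nx points per spatial direction, nv per velocity.
nx, nv = 4, 3
space = np.arange(nx**3)      # flattened 3D spatial grid (64 points)
velocity = np.arange(nv**3)   # flattened 3D velocity grid (27 points)

# The 6D phase-space grid is their tensor product: every spatial point
# carries a full copy of the velocity grid.
phase = np.array([(s, v) for s in space for v in velocity])
assert phase.shape == ((nx**3) * (nv**3), 2)
```

The payoff is that existing 3D grid infrastructure (refinement, partitioning) is reused on each factor instead of building native 6D machinery.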

Re: [petsc-users] DMForestTransferVec with -petscspace_order 0

2018-04-04 Thread Tobin Isaac
nch and to-be-released maintained version v3.8.9. Cheers, Toby > > Regards, > > Yann > > > Le 03/04/2018 à 03:33, Tobin Isaac a écrit : > > Hi Yann, > > > > Thanks for pointing this out to us. Matt and I are the two most > > actively developing

Re: [petsc-users] DMForestTransferVec with -petscspace_order 0

2018-04-02 Thread Tobin Isaac
Hi Yann, Thanks for pointing this out to us. Matt and I are the two most actively developing in this area. We have been working on separate threads and this looks like an issue where we need to sync up. I think there is a simple fix, but it would be helpful to know which version of petsc you're