Re: [petsc-dev] sor smoothers

2013-09-19 Thread Jed Brown
Mark F. Adams mfad...@lbl.gov writes:

 Jed, were you going to give me a PetscObjectGetId?

It is in 'jed/object-id', which is now in 'next'.




Re: [petsc-dev] coarse grid in gamg (for the OPTE paper)

2013-09-19 Thread Jungho Lee
To fill others in on the situation here: this was regarding a variational
inequality example that uses the reduced space method, for which routines are
defined in src/snes/impls/vi/rs/virs.c.  When MatGetSubMatrix was called to
extract the rows and columns of the original matrix corresponding to the
inactive indices, the block size information of the original matrix wasn't copied.
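
A sketch of the workaround this suggests -- assuming a block size of 3, that `A` is the original matrix and `is_inactive` the inactive-index IS (the names are illustrative), and that whole 3-dof blocks enter or leave the active set together; whether MatSetBlockSize() is permitted on an already-assembled submatrix may depend on the PETSc version:

```c
/* Hypothetical workaround sketch: MatGetSubMatrix() does not copy the
 * block size from the original matrix, so set it explicitly on the
 * submatrix.  Only valid if the inactive set respects the block
 * structure (all 3 dofs of a node are active or inactive together). */
Mat            A_inact;
PetscErrorCode ierr;

ierr = MatGetSubMatrix(A, is_inactive, is_inactive,
                       MAT_INITIAL_MATRIX, &A_inact);CHKERRQ(ierr);
ierr = MatSetBlockSize(A_inact, 3);CHKERRQ(ierr);
```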

KSP performance (with fgmres and gmres) deteriorates dramatically when I
switch from mg to gamg (with agg), which I think is due to the lack of a
mechanism in gamg to take the inactive/active indices into consideration for
examples like this.

There is DMSetVI in virs.c, which marks snes->dm as being associated with a
VI and makes it call appropriate versions of coarsen (and
createinterpolation, etc.) that take the inactive IS into account, which in
turn get called in PCSetUp_MG. This part of PCSetUp_MG is skipped when
gamg is used, since it employs its own algorithm to figure out the coarse
grid. So I think a possible solution is to provide a similar mechanism for
gamg as well - but how?

Ideas?

On Wed, Sep 18, 2013 at 9:11 PM, Mark F. Adams mfad...@lbl.gov wrote:

  results in two levels with the coarse problem being 13*13. Wouldn't
  it be natural for the coarse problem to be of size that's an integer
  multiple of 3, though?

 Yes, something is wrong here.  And as Jed said, if you have different Mats
 for the operator and preconditioner then you have to set the block size on
 the preconditioner Mat.  If you are already doing that, then we have a problem.


 
  2) For the specific set of parameters I'm using for testing purposes,
  the smallest nonzero entry of the finest level matrix is of order
  e-7.

 BTW, the scaling of the matrix should not matter.  If you see a difference
 then we would want to look at it.

  For the coarse level matrix (size 13*13), whose entries are
  determined by MatPtAP called in createlevel (in gamg.c), the smallest
  nonzero entry is of order e-24 - this jumped out at me as a potential
  sign of something wrong.

 Oh, you are talking about the smallest.  That does not matter.  (It's a
 sparse matrix, so technically the smallest entry is zero.)



Re: [petsc-dev] coarse grid in gamg (for the OPTE paper)

2013-09-19 Thread Dmitry Karpeyev
On Thu, Sep 19, 2013 at 1:28 PM, Jungho Lee ju...@mcs.anl.gov wrote:

 To fill others in on the situation here: this was regarding a variational
 inequality example that uses the reduced space method, for which routines are
 defined in src/snes/impls/vi/rs/virs.c.  When MatGetSubMatrix was called to
 extract the rows and columns of the original matrix corresponding to the
 inactive indices, the block size information of the original matrix wasn't copied.

 KSP performance (with fgmres and gmres) deteriorates dramatically when I
 switch from mg to gamg (with agg), which I think is due to the lack of a
 mechanism in gamg to take the inactive/active indices into consideration for
 examples like this.

Wouldn't gamg work with the submatrix of only inactive degrees of freedom
-- the jac_inact_inact in
KSPSetOperators(snes->ksp,jac_inact_inact,prejac_inact_inact,flg) of
SNESSolve_VINEWTONRSLS()?



 There is DMSetVI in virs.c, which marks snes->dm as being associated with a
 VI and makes it call appropriate versions of coarsen (and
 createinterpolation, etc.) that take the inactive IS into account, which in
 turn get called in PCSetUp_MG. This part of PCSetUp_MG is skipped when
 gamg is used, since it employs its own algorithm to figure out the coarse
 grid. So I think a possible solution is to provide a similar mechanism for
 gamg as well - but how?

 Ideas?

 On Wed, Sep 18, 2013 at 9:11 PM, Mark F. Adams mfad...@lbl.gov wrote:

  results in two levels with the coarse problem being 13*13. Wouldn't
  it be natural for the coarse problem to be of size that's an integer
  multiple of 3, though?

 Yes, something is wrong here.  And as Jed said, if you have different Mats
 for the operator and preconditioner then you have to set the block size on
 the preconditioner Mat.  If you are already doing that, then we have a problem.


 
  2) For the specific set of parameters I'm using for testing purposes,
  the smallest nonzero entry of the finest level matrix is of order
  e-7.

 BTW, the scaling of the matrix should not matter.  If you see a
 difference then we would want to look at it.

  For the coarse level matrix (size 13*13), whose entries are
  determined by MatPtAP called in createlevel (in gamg.c), the smallest
  nonzero entry is of order e-24 - this jumped out at me as a potential
  sign of something wrong.

 Oh, you are talking about the smallest.  That does not matter.  (It's a
 sparse matrix, so technically the smallest entry is zero.)






Re: [petsc-dev] coarse grid in gamg (for the OPTE paper)

2013-09-19 Thread Barry Smith

On Sep 19, 2013, at 3:17 PM, Dmitry Karpeyev dkarp...@gmail.com wrote:

 
 
 
 On Thu, Sep 19, 2013 at 1:28 PM, Jungho Lee ju...@mcs.anl.gov wrote:
 To fill others in on the situation here: this was regarding a variational
 inequality example that uses the reduced space method, for which routines are
 defined in src/snes/impls/vi/rs/virs.c.  When MatGetSubMatrix was called to
 extract the rows and columns of the original matrix corresponding to the
 inactive indices, the block size information of the original matrix wasn't copied.
 
 KSP performance (with fgmres and gmres) deteriorates dramatically when I
 switch from mg to gamg (with agg), which I think is due to the lack of a
 mechanism in gamg to take the inactive/active indices into consideration for
 examples like this.
 Wouldn't gamg work with the submatrix of only inactive degrees of freedom -- 
 the jac_inact_inact in 
 KSPSetOperators(snes->ksp,jac_inact_inact,prejac_inact_inact,flg) of 
 SNESSolve_VINEWTONRSLS()?

   That is what it is doing and apparently it doesn't result in a good 
preconditioner; I don't know why off hand. One thing is it no longer knows 
about the block structure.

   Barry

  
 
 There is DMSetVI in virs.c, which marks snes->dm as being associated with a VI
 and makes it call appropriate versions of coarsen (and
 createinterpolation, etc.) that take the inactive IS into account, which in
 turn get called in PCSetUp_MG. This part of PCSetUp_MG is skipped when gamg
 is used, since it employs its own algorithm to figure out the coarse grid. So
 I think a possible solution is to provide a similar mechanism for gamg as
 well - but how?
 
 Ideas?
 
 On Wed, Sep 18, 2013 at 9:11 PM, Mark F. Adams mfad...@lbl.gov wrote:
  results in two levels with the coarse problem being 13*13. Wouldn't
  it be natural for the coarse problem to be of size that's an integer
  multiple of 3, though?
 
 Yes, something is wrong here.  And as Jed said, if you have different Mats for
 the operator and preconditioner then you have to set the block size on the
 preconditioner Mat.  If you are already doing that, then we have a problem.
 
 
  2) For the specific set of parameters I'm using for testing purposes,
  the smallest nonzero entry of the finest level matrix is of order
  e-7.
 
 BTW, the scaling of the matrix should not matter.  If you see a difference 
 then we would want to look at it.
 
  For the coarse level matrix (size 13*13), whose entries are
  determined by MatPtAP called in createlevel (in gamg.c), the smallest
  nonzero entry is of order e-24 - this jumped out at me as a potential
  sign of something wrong.
 
 Oh, you are talking about the smallest.  That does not matter.  (It's a sparse
 matrix, so technically the smallest entry is zero.)
 
 
 



Re: [petsc-dev] coarse grid in gamg (for the OPTE paper)

2013-09-19 Thread Jed Brown
Barry Smith bsm...@mcs.anl.gov writes:
That is what it is doing and apparently it doesn't result in a good
preconditioner; I don't know why off hand. One thing is it no
longer knows about the block structure.

How is the near null space being specified?




Re: [petsc-dev] coarse grid in gamg (for the OPTE paper)

2013-09-19 Thread Mark F. Adams

On Sep 19, 2013, at 3:53 PM, Barry Smith bsm...@mcs.anl.gov wrote:

 
   It is essentially 3 Laplacians

If you mean literally 3 scalar Laplacians packed into one matrix for some reason,
then the one constant vector is fine.  Is this a block diagonal matrix with 3
big blocks (if you order it correctly, of course)?

 so I think the default null space of 3 constant vectors is fine.

You get this if you set the block size to 3.  Constant vectors in each of the 
three components.

 The problem is that without the block information, presumably GAMG is using only a
 single constant vector over all variables?  So maybe we need to construct a 3
 vector null space which just marks, in the reduced vector, which of the 3
 components each entry came from.
 
   Barry
 
 On Sep 19, 2013, at 3:48 PM, Jed Brown jedbr...@mcs.anl.gov wrote:
 
 Barry Smith bsm...@mcs.anl.gov writes:
  That is what it is doing and apparently it doesn't result in a good
  preconditioner; I don't know why off hand. One thing is it no
  longer knows about the block structure.
 
 How is the near null space being specified?
 



Re: [petsc-dev] coarse grid in gamg (for the OPTE paper)

2013-09-19 Thread Barry Smith

   It is essentially 3 Laplacians, so I think the default null space of 3
constant vectors is fine. The problem is that without the block information,
presumably GAMG is using only a single constant vector over all variables?  So
maybe we need to construct a 3 vector null space which just marks, in the
reduced vector, which of the 3 components each entry came from.

   Barry

On Sep 19, 2013, at 3:48 PM, Jed Brown jedbr...@mcs.anl.gov wrote:

 Barry Smith bsm...@mcs.anl.gov writes:
   That is what it is doing and apparently it doesn't result in a good
   preconditioner; I don't know why off hand. One thing is it no
   longer knows about the block structure.
 
 How is the near null space being specified?



Re: [petsc-dev] coarse grid in gamg (for the OPTE paper)

2013-09-19 Thread Jed Brown
Barry Smith bsm...@mcs.anl.gov writes:

It is essentially 3 Laplacians so I think the default null space of
3 constant vectors is fine. The problem is without the block
information presumably GAMG is using only a single constant vector
over all variables?  So maybe we need to construct a 3 vector null
space which just marks in the reduced vector from which of the 3
components each entry came from.

Yup, that's what I would try.
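
A sketch of what that could look like -- assuming an interlaced 3-component ordering (so global index i belongs to component i mod 3), with `A_inact` the reduced matrix and `is_inactive` the IS of retained global indices; the names are illustrative and the API spelling follows current PETSc:

```c
/* Hypothetical sketch: build a near null space of 3 indicator vectors for
 * the reduced system.  Vector c is 1 on reduced entries that came from
 * component c of the original problem, then normalized, since
 * MatNullSpaceCreate() expects orthonormal vectors. */
Vec             vecs[3];
MatNullSpace    nsp;
const PetscInt *idx;
PetscInt        n, rstart, i, c;

MatGetOwnershipRange(A_inact, &rstart, NULL);
ISGetLocalSize(is_inactive, &n);
ISGetIndices(is_inactive, &idx);
for (c = 0; c < 3; c++) {
  MatCreateVecs(A_inact, &vecs[c], NULL);
  VecSet(vecs[c], 0.0);
  for (i = 0; i < n; i++) {
    /* idx[i] is the original global index; its component is idx[i] % 3 */
    if (idx[i] % 3 == c) VecSetValue(vecs[c], rstart + i, 1.0, INSERT_VALUES);
  }
  VecAssemblyBegin(vecs[c]);
  VecAssemblyEnd(vecs[c]);
  VecNormalize(vecs[c], NULL);
}
ISRestoreIndices(is_inactive, &idx);
MatNullSpaceCreate(PetscObjectComm((PetscObject)A_inact), PETSC_FALSE, 3, vecs, &nsp);
MatSetNearNullSpace(A_inact, nsp);
MatNullSpaceDestroy(&nsp);
for (c = 0; c < 3; c++) VecDestroy(&vecs[c]);
```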




Re: [petsc-dev] coarse grid in gamg (for the OPTE paper)

2013-09-19 Thread Dmitry Karpeyev
On Thu, Sep 19, 2013 at 2:53 PM, Barry Smith bsm...@mcs.anl.gov wrote:


It is essentially 3 Laplacians so I think the default null space of 3
 constant vectors is fine. The problem is without the block information
 presumably GAMG is using only a single constant vector over all variables?
  So maybe we need to construct a 3 vector null space which just marks in
 the reduced vector from which of the 3 components each entry came from.


 SNESVIGetActiveSetIS() currently doesn't try to detect or set the block
structure of the inactive-set IS, so MatGetSubMatrix() can't take advantage of
it.  This (blocking) information is geometric in nature -- since it is mesh
nodes that really come into contact -- so unless we involve a DM somehow,
I don't know if it is possible to retain the right blocking structure in
general.  Or should we add a user callback hook to compute the inactive set?
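
If the active-set test could be made blockwise (a node is inactive only if all of its dofs are), one way to retain the structure might be a blocked IS, which carries its block size with it; a small illustrative sketch with made-up index values:

```c
/* Hypothetical sketch: create the inactive IS as a blocked IS so the
 * block size survives into MatGetSubMatrix() and on to GAMG.  Here
 * bs = 3 (dofs per node) and the inactive nodes are 0, 2, and 5. */
PetscInt bs = 3;
PetscInt inactive_nodes[] = {0, 2, 5};   /* made-up node indices */
IS       is_inactive;

ISCreateBlock(PETSC_COMM_SELF, bs, 3, inactive_nodes,
              PETSC_COPY_VALUES, &is_inactive);
/* is_inactive now contains dofs 0-2, 6-8, and 15-17, and reports
 * block size 3 via ISGetBlockSize(). */
```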


Barry

 On Sep 19, 2013, at 3:48 PM, Jed Brown jedbr...@mcs.anl.gov wrote:

  Barry Smith bsm...@mcs.anl.gov writes:
That is what it is doing and apparently it doesn't result in a good
preconditioner; I don't know why off hand. One thing is it no
longer knows about the block structure.
 
  How is the near null space being specified?




[petsc-dev] Push restrictions on branches

2013-09-19 Thread Jed Brown
Bitbucket added support to restrict push access based on branch names
(glob matching).  For example, that would allow us to have a smaller
group of people with access to merge to 'maint' or 'master'.

Is this a feature we should start using in petsc.git?

One tangible difference from the current model is that it would let us give
more people push access to named branches which then allows an
integrator to patch up a branch for an open pull request.  (When a PR
comes from a fork instead of an in-repo branch, we can't push to their
repository so we can't update the PR.  This sometimes leads to tedious
fine-tuning of trivial details in the PR comments.)


Admins can see the branch list here:

https://bitbucket.org/petsc/petsc/admin/branches




Re: [petsc-dev] coarse grid in gamg (for the OPTE paper)

2013-09-19 Thread Barry Smith

On Sep 19, 2013, at 4:28 PM, Mark F. Adams mfad...@lbl.gov wrote:

 
 On Sep 19, 2013, at 3:53 PM, Barry Smith bsm...@mcs.anl.gov wrote:
 
 
  It is essentially 3 Laplacians
 
 If you mean literally 3 scalar laplacians packed in one matrix for some 
 reason then the one constant vector is fine.  

   Really? But (for periodic or Neumann BCs) the null space has three vectors;
don't you need all of them in your near null space?


 Is this a block diagonal matrix with 3 big blocks, if you order it correctly 
 of course?
 
 so I think the default null space of 3 constant vectors is fine.
 
 You get this if you set the block size to 3.  Constant vectors in each of the 
 three components.
 
 The problem is without the block information presumably GAMG is using only a 
 single constant vector over all variables?  So maybe we need to construct a 
 3 vector null space which just marks in the reduced vector from which of the 
 3 components each entry came from.
 
  Barry
 
 On Sep 19, 2013, at 3:48 PM, Jed Brown jedbr...@mcs.anl.gov wrote:
 
 Barry Smith bsm...@mcs.anl.gov writes:
 That is what it is doing and apparently it doesn't result in a good
 preconditioner; I don't know why off hand. One thing is it no
 longer knows about the block structure.
 
 How is the near null space being specified?
 
 



Re: [petsc-dev] coarse grid in gamg (for the OPTE paper)

2013-09-19 Thread Jed Brown
Barry Smith bsm...@mcs.anl.gov writes:
 If you mean literally 3 scalar laplacians packed in one matrix for
 some reason then the one constant vector is fine.

   Really? But (for periodic or Neumann BCs) the null space has three
   vectors; don't you need all of them in your near null space?

If the three scalar Laplacians are truly decoupled, then they will be
disjoint on coarse levels of GAMG.  (GAMG won't know that they happen to
share a mesh.)




Re: [petsc-dev] Push restrictions on branches

2013-09-19 Thread Patrick Sanan
This sounds like a great idea from the perspective of someone like me, who 
would only be pushing to maint/master/next in error at this point.  It would 
also allow slightly easier access to features which are new/experimental enough 
to not be in next yet. 
On Sep 19, 2013, at 4:35 PM, Jed Brown wrote:

 Bitbucket added support to restrict push access based on branch names
 (glob matching).  For example, that would allow us to have a smaller
 group of people with access to merge to 'maint' or 'master'.
 
 Is this a feature we should start using in petsc.git?
 
 One tangible difference from the current model is that it would let us give
 more people push access to named branches which then allows an
 integrator to patch up a branch for an open pull request.  (When a PR
 comes from a fork instead of an in-repo branch, we can't push to their
 repository so we can't update the PR.  This sometimes leads to tedious
 fine-tuning of trivial details in the PR comments.)
 
 
 Admins can see the branch list here:
 
 https://bitbucket.org/petsc/petsc/admin/branches



Re: [petsc-dev] coarse grid in gamg (for the OPTE paper)

2013-09-19 Thread Mark F. Adams

On Sep 19, 2013, at 7:05 PM, Barry Smith bsm...@mcs.anl.gov wrote:

 
 On Sep 19, 2013, at 4:28 PM, Mark F. Adams mfad...@lbl.gov wrote:
 
 
 On Sep 19, 2013, at 3:53 PM, Barry Smith bsm...@mcs.anl.gov wrote:
 
 
 It is essentially 3 Laplacians
 
 If you mean literally 3 scalar laplacians packed in one matrix for some 
 reason then the one constant vector is fine.  
 
  Really, but (for periodic or N. bc) the null space has three vectors, you 
 don't need all of them in your near null space?
 

We are probably not understanding each other.  If you are just stacking 3
Laplacians in a matrix, uncoupled, then you could do three independent solves
with one (near) null space vector for each solve: the constant function.  When
stacked, you are just doing all three solves simultaneously.  Norms might be a
little different, and the three matrices might have different largest
eigenvalues, so that will make the SA solver a little different.

 
 Is this a block diagonal matrix with 3 big blocks, if you order it correctly 
 of course?
 
 so I think the default null space of 3 constant vectors is fine.
 
 You get this if you set the block size to 3.  Constant vectors in each of 
 the three components.
 
 The problem is without the block information presumably GAMG is using only 
 a single constant vector over all variables?  So maybe we need to construct 
 a 3 vector null space which just marks in the reduced vector from which of 
 the 3 components each entry came from.
 
 Barry
 
 On Sep 19, 2013, at 3:48 PM, Jed Brown jedbr...@mcs.anl.gov wrote:
 
 Barry Smith bsm...@mcs.anl.gov writes:
 That is what it is doing and apparently it doesn't result in a good
 preconditioner; I don't know why off hand. One thing is it no
 longer knows about the block structure.
 
 How is the near null space being specified?