On 09/11/2014 12:52 PM, Matthew Knepley wrote:
> On Thu, Sep 11, 2014 at 11:47 AM, Eric Chamberland wrote:

> MatNest is absolutely the worst thing in the PETSc interface. You should
> never ever ever ever be calling MatNest directly. You should be assembling
> into one matrix from views. Then MatNest can be used for optimization in
> the background.

Sounds so strange to me... :-)

... because we decided to use MatNest to create sub-matrices for the linear vs. quadratic parts of a velocity field (http://onlinelibrary.wiley.com/doi/10.1002/nla.757/abstract). We chose MatNest to minimize the memory used, and "overloaded" our MatSetValues calls to be able to assemble the u-u block into all 4 sub-matrices (doing the global-to-local conversions with strides only). This saves us from doing 4 different loops over all the elements; in other words, it lets the assembly part of our code be unaware that it is assembling into a MatNest. We just have to number the dofs correctly beforehand.
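
For reference, the setup is roughly the following (a sketch only, with made-up names: A_ll, A_lq, A_ql, A_qq are the preallocated linear/quadratic sub-matrices and is_lin/is_quad the matching index sets):

/* Sketch only (illustrative names): the 2x2 nest of linear/quadratic
 * velocity blocks we build today. */
Mat            subs[4]   = {A_ll, A_lq, A_ql, A_qq};  /* row-major blocks */
IS             fields[2] = {is_lin, is_quad};         /* global dof index sets */
Mat            A;
PetscErrorCode ierr;

ierr = MatCreateNest(PETSC_COMM_WORLD, 2, fields, 2, fields, subs, &A);CHKERRQ(ierr);
/* Our MatSetValues wrapper then routes each entry to the right sub-matrix
 * (strided global-to-local maps), so the element loop never sees the nest. */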

By "views" you mean doing MatGetLocalSubMatrix and MatRestoreLocalSubMatrix like in http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex28.c.html ?

But... if I solve a velocity(u)-pressure(p) field problem, how do I set a block size of 3 for the u-u part and 1 for the other parts without using a MatNest (whose sub-matrices carry the right options)?

(this would allow me to use "fieldsplit_0_pc_type ml" with block size 3 on the u-u part, for example)
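
What I had in mind is roughly this (a sketch with assumed index sets is_u and is_p; if I understand correctly, the extracted u-u submatrix picks up the block size set on the IS):

PC             pc;
PetscErrorCode ierr;

ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
ierr = ISSetBlockSize(is_u, 3);CHKERRQ(ierr);  /* 3 velocity dofs per node */
ierr = ISSetBlockSize(is_p, 1);CHKERRQ(ierr);  /* pressure */
ierr = PCFieldSplitSetIS(pc, "0", is_u);CHKERRQ(ierr);
ierr = PCFieldSplitSetIS(pc, "1", is_p);CHKERRQ(ierr);
/* then -fieldsplit_0_pc_type ml should see block size 3 on the u-u block */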

> These are reasonable, but they really apply at creation time (since you
> would not want to convert after values have been set), and it sounds like
> that is what you are doing.

yep.

> Okay, so the name change is strange, and happens because MatNest returns
> a reference to the inner matrix rather than some view which gets created
> and destroyed. Let me talk to Jed.

Ok, thanks for your help!

Eric
