Re: [petsc-users] Near null space for a fieldsplit in petsc4py

2023-07-13 Thread Karin
Thank you very much for the information Matt.
Unfortunately, I do not use DM  :-(

On Thu, Jul 13, 2023 at 1:44 PM Matthew Knepley  wrote:

> On Thu, Jul 13, 2023 at 5:33 AM Pierre Jolivet 
> wrote:
>
>> Dear Nicolas,
>>
>> On 13 Jul 2023, at 10:17 AM, TARDIEU Nicolas 
>> wrote:
>>
>> Dear Pierre,
>>
>> You are absolutely right. I was using a --with-debugging=0 (aka release)
>> install and this is definitely an error.
>> Once I used my debug install, I found the way to fix my problem. The
>> solution is in the attached script: I first need to extract the correct
>> block from the PC operator's MatNest and then append the null space to it.
>> Anyway this is a bit tricky...
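[A minimal, self-contained petsc4py sketch of this workaround, with a toy 2x2 MatNest standing in for the real saddle-point operator; the sizes, block layout, and the single constant near null-space vector are illustrative assumptions, not the contents of the attached script.]

    from petsc4py import PETSc

    # toy 2x2 nest: a 4x4 "phys" block and a 2x2 "lags" block
    A00 = PETSc.Mat().createAIJ([4, 4]); A00.setUp()
    A11 = PETSc.Mat().createAIJ([2, 2]); A11.setUp()
    for i in range(4):
        A00.setValue(i, i, 2.0)
    for i in range(2):
        A11.setValue(i, i, 1.0)
    A00.assemble(); A11.assemble()
    A = PETSc.Mat().createNest([[A00, None], [None, A11]])

    ksp = PETSc.KSP().create()
    ksp.setOperators(A)
    pc = ksp.getPC()
    pc.setType(PETSc.PC.Type.FIELDSPLIT)
    isphys, islags = A.getNestISs()[0]
    pc.setFieldSplitIS(("phys", isphys), ("lags", islags))
    pc.setUp()                                # the initial PCSetUp Pierre mentions

    # fetch the (0,0) block of the PC operator and append the near null space
    _, P = pc.getOperators()
    B00 = P.getNestSubMatrix(0, 0)
    v = B00.createVecs('right'); v.set(1.0); v.normalize()
    nsp = PETSc.NullSpace().create(constant=False, vectors=[v], comm=B00.getComm())
    B00.setNearNullSpace(nsp)                 # this is what an AMG PC on that block sees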
>>
>>
>> Yep, it’s the same with all “nested” solvers, fieldsplit, ASM, MG, you
>> name it.
>> You first need the initial PCSetUp() so that the bare minimum is put in
>> place, then you have to fetch things yourself and adapt it to your needs.
>> We had a similar discussion with the MEF++ people last week, there is
>> currently no way around this, AFAIK.
>>
>
> Actually, I hated this as well, so I built a way around it _if_ you are
> using a DM to define the problem. Then
> you can set a "nullspace constructor" to make it if the field you are
> talking about is ever extracted. You use DMSetNullSpaceConstructor(). I do
> this in SNES ex62 and ex69, and other examples.
>
>   Thanks,
>
>  Matt
>
>
>> Thanks,
>> Pierre
>>
>> Regards,
>> Nicolas
>>
>> --
>> *From:* pierre.joli...@lip6.fr 
>> *Sent:* Wednesday, July 12, 2023 7:52 PM
>> *To:* TARDIEU Nicolas 
>> *Cc:* petsc-users@mcs.anl.gov 
>> *Subject:* Re: [petsc-users] Near null space for a fieldsplit in petsc4py
>>
>>
>> > On 12 Jul 2023, at 6:04 PM, TARDIEU Nicolas via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>> >
>> > Dear PETSc team,
>> >
>> > In the attached example, I set up a block pc for a saddle-point problem
>> in petsc4py. The IS define the unknowns, namely some physical quantity
>> (phys) and a Lagrange multiplier (lags).
>> > I would like to attach a near null space to the physical block, in
>> order to get the best performance from an AMG pc.
>> > I have been trying hard, attaching it to the initial block, to the IS
>> but no matter what I am doing, when it comes to "ksp_view", no near null
>> space is attached to the matrix.
>> >
>> > Could you please help me figure out what I am doing wrong ?
>>
>> Are you using a double-precision, 32-bit-integer, real build of PETSc?
>> Is it --with-debugging=0?
>> Because with my debug build, I get the following error (thus explaining
>> why it’s not attached to the KSP).
>> Traceback (most recent call last):
>>   File "/Volumes/Data/Downloads/test/test_NullSpace.py", line 35, in <module>
>> ns = NullSpace().create(True, [v], comm=comm)
>>   File "petsc4py/PETSc/Mat.pyx", line 5611, in
>> petsc4py.PETSc.NullSpace.create
>> petsc4py.PETSc.Error: error code 62
>> [0] MatNullSpaceCreate() at
>> /Volumes/Data/repositories/petsc/src/mat/interface/matnull.c:249
>> [0] Invalid argument
>> [0] Vector 0 must have 2-norm of 1.0, it is 22.3159
>>
>> Furthermore, if you set yourself the constant vector in the near
>> null-space, then the first argument of create() must be False, otherwise,
>> you’ll have twice the same vector, and you’ll end up with another error
>> (the vectors in the near null-space must be orthonormal).
>> If things still don’t work after those couple of fixes, please feel free
>> to send an up-to-date reproducer.
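[A short petsc4py sketch of the two fixes Pierre lists; the vector size and the constant test vector are illustrative.]

    from petsc4py import PETSc

    comm = PETSc.COMM_WORLD
    v = PETSc.Vec().createMPI(10, comm=comm)
    v.set(1.0)
    v.normalize()   # fix 1: near null-space vectors must have unit 2-norm

    # fix 2: pass constant=False because the constant vector is already supplied
    # in the list; passing True as well would duplicate it and break orthonormality
    ns = PETSc.NullSpace().create(constant=False, vectors=[v], comm=comm)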
>>
>> Thanks,
>> Pierre
>>
>> > Thanks,
>> > Nicolas
>> >
>> >
>> >
>> >

Re: [petsc-users] snes_type aspin without DA example

2023-06-23 Thread Karin
Thanks Barry, I'll check it out.

On Fri, Jun 23, 2023 at 6:57 PM Barry Smith  wrote:

>
>   Take a look at setNestSubVecs()
> in /src/binding/petsc4py/src/petsc4py/PETSc/Vec.pyx; it seems to deal with
> passing an array of PETSc objects. Likely there are additional instances
> where arrays of objects are passed between Python and C.
>
>   I do not understand this code, but the Python experts will. Perhaps this
> type of construct can be hoisted up into a petsc4py implementation utility
> useful for any place in PETSc where arrays of objects must be passed back
> and forth.
>
>   Barry
>
>
> On Jun 23, 2023, at 6:12 AM, Karin  wrote:
>
> In order to transfer from Python to C a list of int, real or bool as an
> input, there are the functions iarray_i, iarray_r and iarray_b.
> In order to transfer from C to Python a list of int, real, bool or
> pointers as an output, there are the functions oarray_i, oarray_r, oarray_b
> and oarray_p.
> Nevertheless I do not find the function iarray_p which (I think) is
> required to transfer a list of Scatter when calling SNESNASMSetSubdomains.
> Am I right ?
>
On Fri, Jun 23, 2023 at 9:40 AM Karin  wrote:
>
>> Dear Barry,
>> I have started looking at the code but I miss an example using
>> SNESNASMSetSubdomains. In fact I do not even find a single use of the
>> function in PETSc.
>> Could someone provide me with an example ?
>> Thanks,
>> Nicolas
>>
>> On Fri, Jun 23, 2023 at 2:17 AM Barry Smith  wrote:
>>
>>>
>>>   A test would be great.
>>>
>>> On Jun 22, 2023, at 3:20 PM, Karin  wrote:
>>>
>>> Thank you Barry. I will try this.
>>> Should I provide a test in src/binding/petsc4py/test/test_snes.py ?
>>>
>>> On Thu, Jun 22, 2023 at 8:41 PM Barry Smith  wrote:
>>>
>>>>
>>>>   You are not missing anything. The petsc4py stub for
>>>> SNESNASMSetSubdomains() has not been written. You could add it by adding to
>>>> src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and
>>>> then make a merge request
>>>> https://petsc.org/release/developers/contributing/ to get it into
>>>> PETSc.
>>>>
>>>> On Jun 22, 2023, at 1:54 PM, Karin  wrote:
>>>>
>>>> Dear PETSc team,
>>>>
>>>> I would like to play with aspin-type nonlinear solvers. I have found
>>>> several tests like snes/tutorials/ex19.c but they all use DA, which I don't
>>>> want to use since I need to stick at the algebraic level.
>>>> Then, I started looking at petsc4py/demo/ode/heat.py and tried to set
>>>> up things.
>>>> Unfortunately, I get the error "DM has no default decomposition
>>>> defined.  Set subsolves manually with SNESNASMSetSubdomains()" which, I
>>>> think, I do understand.
>>>> But I do not find any implementation of the SNESNASMSetSubdomains in
>>>> petsc4py.
>>>> Am I missing something ?
>>>> Thanks,
>>>> Nicolas
>>>>
>>>>
>>>>
>>>
>


Re: [petsc-users] snes_type aspin without DA example

2023-06-23 Thread Karin
In order to transfer from Python to C a list of int, real or bool as an
input, there are the functions iarray_i, iarray_r and iarray_b.
In order to transfer from C to Python a list of int, real, bool or pointers
as an output, there are the functions oarray_i, oarray_r, oarray_b and
oarray_p.
Nevertheless I do not find the function iarray_p which (I think) is
required to transfer a list of Scatter when calling SNESNASMSetSubdomains.
Am I right ?

On Fri, Jun 23, 2023 at 9:40 AM Karin  wrote:

> Dear Barry,
> I have started looking at the code but I miss an example using
> SNESNASMSetSubdomains. In fact I do not even find a single use of the
> function in PETSc.
> Could someone provide me with an example ?
> Thanks,
> Nicolas
>
> On Fri, Jun 23, 2023 at 2:17 AM Barry Smith  wrote:
>
>>
>>   A test would be great.
>>
>> On Jun 22, 2023, at 3:20 PM, Karin  wrote:
>>
>> Thank you Barry. I will try this.
>> Should I provide a test in src/binding/petsc4py/test/test_snes.py ?
>>
>> On Thu, Jun 22, 2023 at 8:41 PM Barry Smith  wrote:
>>
>>>
>>>   You are not missing anything. The petsc4py stub for
>>> SNESNASMSetSubdomains() has not been written. You could add it by adding to
>>> src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and
>>> then make a merge request
>>> https://petsc.org/release/developers/contributing/ to get it into PETSc.
>>>
>>> On Jun 22, 2023, at 1:54 PM, Karin  wrote:
>>>
>>> Dear PETSc team,
>>>
>>> I would like to play with aspin-type nonlinear solvers. I have found
>>> several tests like snes/tutorials/ex19.c but they all use DA, which I don't
>>> want to use since I need to stick at the algebraic level.
>>> Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up
>>> things.
>>> Unfortunately, I get the error "DM has no default decomposition
>>> defined.  Set subsolves manually with SNESNASMSetSubdomains()" which, I
>>> think, I do understand.
>>> But I do not find any implementation of the SNESNASMSetSubdomains in
>>> petsc4py.
>>> Am I missing something ?
>>> Thanks,
>>> Nicolas
>>>
>>>
>>>
>>


Re: [petsc-users] snes_type aspin without DA example

2023-06-23 Thread Karin
Dear Barry,
I have started looking at the code but I miss an example using
SNESNASMSetSubdomains. In fact I do not even find a single use of the
function in PETSc.
Could someone provide me with an example ?
Thanks,
Nicolas

On Fri, Jun 23, 2023 at 2:17 AM Barry Smith  wrote:

>
>   A test would be great.
>
> On Jun 22, 2023, at 3:20 PM, Karin  wrote:
>
> Thank you Barry. I will try this.
> Should I provide a test in src/binding/petsc4py/test/test_snes.py ?
>
> On Thu, Jun 22, 2023 at 8:41 PM Barry Smith  wrote:
>
>>
>>   You are not missing anything. The petsc4py stub for
>> SNESNASMSetSubdomains() has not been written. You could add it by adding to
>> src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and
>> then make a merge request
>> https://petsc.org/release/developers/contributing/ to get it into PETSc.
>>
>> On Jun 22, 2023, at 1:54 PM, Karin  wrote:
>>
>> Dear PETSc team,
>>
>> I would like to play with aspin-type nonlinear solvers. I have found
>> several tests like snes/tutorials/ex19.c but they all use DA, which I don't
>> want to use since I need to stick at the algebraic level.
>> Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up
>> things.
>> Unfortunately, I get the error "DM has no default decomposition defined.
>> Set subsolves manually with SNESNASMSetSubdomains()" which, I think, I do
>> understand.
>> But I do not find any implementation of the SNESNASMSetSubdomains in
>> petsc4py.
>> Am I missing something ?
>> Thanks,
>> Nicolas
>>
>>
>>
>


Re: [petsc-users] snes_type aspin without DA example

2023-06-22 Thread Karin
Thank you Barry. I will try this.
Should I provide a test in src/binding/petsc4py/test/test_snes.py ?

On Thu, Jun 22, 2023 at 8:41 PM Barry Smith  wrote:

>
>   You are not missing anything. The petsc4py stub for
> SNESNASMSetSubdomains() has not been written. You could add it by adding to
>
> src/petsc4py/PETSc/SNES.pyx and src/petsc4py/PETSc/petscsnes.pxi and then
> make a merge request https://petsc.org/release/developers/contributing/
> to get it into PETSc.
>
> On Jun 22, 2023, at 1:54 PM, Karin  wrote:
>
> Dear PETSc team,
>
> I would like to play with aspin-type nonlinear solvers. I have found
> several tests like snes/tutorials/ex19.c but they all use DA, which I don't
> want to use since I need to stick at the algebraic level.
> Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up
> things.
> Unfortunately, I get the error "DM has no default decomposition defined.
> Set subsolves manually with SNESNASMSetSubdomains()" which, I think, I do
> understand.
> But I do not find any implementation of the SNESNASMSetSubdomains in
> petsc4py.
> Am I missing something ?
> Thanks,
> Nicolas
>
>
>


[petsc-users] snes_type aspin without DA example

2023-06-22 Thread Karin
Dear PETSc team,

I would like to play with aspin-type nonlinear solvers. I have found
several tests like snes/tutorials/ex19.c but they all use DA, which I don't
want to use since I need to stick at the algebraic level.
Then, I started looking at petsc4py/demo/ode/heat.py and tried to set up
things.
Unfortunately, I get the error "DM has no default decomposition defined.
Set subsolves manually with SNESNASMSetSubdomains()" which, I think, I do
understand.
But I do not find any implementation of the SNESNASMSetSubdomains in
petsc4py.
Am I missing something ?
Thanks,
Nicolas
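[For reference, a minimal petsc4py sketch of the purely algebraic setup described above; the residual is a toy componentwise function, not the application's. Without a DM carrying a domain decomposition, the solve is expected to stop with the quoted message, since SNESNASMSetSubdomains is not reachable from petsc4py at this point.]

    from petsc4py import PETSc

    def form_function(snes, x, f):
        # illustrative componentwise residual f_i = x_i**2 - 2
        f.setArray(x.getArray()**2 - 2.0)

    n = 8
    x = PETSc.Vec().createSeq(n)
    x.set(1.0)
    r = x.duplicate()

    snes = PETSc.SNES().create(comm=PETSc.COMM_SELF)
    snes.setFunction(form_function, r)
    snes.setType("aspin")      # same as -snes_type aspin
    snes.setFromOptions()
    snes.solve(None, x)        # expected to fail: no default DM decomposition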


Re: [petsc-users] How to combine different element types into a single DMPlex?

2021-09-22 Thread Karin
Dear Matthew,

This is great news!
For my part, I would be mostly interested in the parallel input interface.
Sorry for that...
Indeed, in our application,  we already have a parallel mesh data structure
that supports hybrid meshes with parallel I/O and distribution (based on
the MED format). We would like to use a DMPlex to make parallel mesh
adaptation.
As a matter of fact, all our meshes are in the MED format. We could
also contribute to extending the interface of DMPlex with MED (if you consider
it could be useful).

Best regards,
Nicolas


On Tue, Sep 21, 2021 at 9:56 PM Matthew Knepley  wrote:

> On Tue, Sep 21, 2021 at 10:31 AM Karin  wrote:
>
>> Dear Eric, dear Matthew,
>>
>> I share Eric's desire to be able to manipulate meshes composed of
>> different types of elements in a PETSc's DMPlex.
>> Since this discussion, is there anything new on this feature for the
>> DMPlex object or am I missing something?
>>
>
> Thanks for finding this!
>
> Okay, I did a rewrite of the Plex internals this summer. It should now be
> possible to interpolate a mesh with any
> number of cell types, partition it, redistribute it, and many other
> manipulations.
>
> You can read in some formats that support hybrid meshes. If you let me
> know how you plan to read it in, we can make it work.
> Right now, I don't want to make input interfaces that no one will ever
> use. We have a project, joint with Firedrake, to finalize
> parallel I/O. This will make parallel reading and writing for
> checkpointing possible, supporting topology, geometry, fields and
> layouts, for many meshes in one HDF5 file. I think we will finish in
> November.
>
>   Thanks,
>
>  Matt
>
>
>> Thanks,
>> Nicolas
>>
>> On Wed, Jul 21, 2021 at 4:25 AM Eric Chamberland <
>> eric.chamberl...@giref.ulaval.ca> wrote:
>>
>>> Hi,
>>> On 2021-07-14 3:14 p.m., Matthew Knepley wrote:
>>>
>>> On Wed, Jul 14, 2021 at 1:25 PM Eric Chamberland <
>>> eric.chamberl...@giref.ulaval.ca> wrote:
>>>
>>>> Hi,
>>>>
>>>> while playing with DMPlexBuildFromCellListParallel, I noticed we have
>>>> to
>>>> specify "numCorners", which is a fixed value and thus imposes a fixed number
>>>> of nodes for every element.
>>>>
>>>> How can I then add, for example, triangles and quadrangles into a
>>>> DMPlex?
>>>>
>>>
>>> You can't with that function. It would be much, much more complicated if
>>> you could, and I am not sure
>>> it is worth it for that function. The reason is that you would need
>>> index information to offset into the
>>> connectivity list, and that would need to be replicated to some extent
>>> so that all processes know what
>>> the others are doing. Possible, but complicated.
>>>
>>> Maybe I can help suggest something for what you are trying to do?
>>>
>>> Yes: we are trying to partition our parallel mesh with PETSc functions.
>>> The mesh has been read in parallel so each process owns a part of it, but
>>> we have to manage mixed element types.
>>>
>>> When we directly use ParMETIS_V3_PartMeshKway, we give two arrays to
>>> describe the elements which allows mixed elements.
>>>
>>> So, how would I read my mixed mesh in parallel and give it to PETSc
>>> DMPlex so I can use a PetscPartitioner with DMPlexDistribute ?
>>>
>>> A second goal we have is to use PETSc to compute the overlap, which is
>>> something I can't find in ParMETIS (and any other partitioning library?)
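[For comparison, a toy petsc4py sketch of the single-element-type path that does exist today, with one illustrative tetrahedron: createFromCellList wraps the fixed-numCorners interface discussed above, and distribute() both partitions (via a PetscPartitioner) and adds the overlap. The mixed-element reading Eric asks about is exactly what this path cannot do.]

    from petsc4py import PETSc

    # a single tetrahedron; one cell type only, i.e. a fixed numCorners
    cells  = [[0, 1, 2, 3]]
    coords = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    dm = PETSc.DMPlex().createFromCellList(3, cells, coords, interpolate=True)

    sf = dm.distribute(overlap=1)   # partition and add one layer of overlap
    dm.view()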
>>>
>>> Thanks,
>>>
>>> Eric
>>>
>>>
>>>
>>>   Thanks,
>>>
>>>   Matt
>>>
>>>
>>>
>>>> Thanks,
>>>>
>>>> Eric
>>>>
>>>> --
>>>> Eric Chamberland, ing., M. Ing
>>>> Professionnel de recherche
>>>> GIREF/Université Laval
>>>> (418) 656-2131 poste 41 22 42
>>>>
>>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>> <http://www.cse.buffalo.edu/~knepley/>
>>>
>>> --
>>> Eric Chamberland, ing., M. Ing
>>> Professionnel de recherche
>>> GIREF/Université Laval
>>> (418) 656-2131 poste 41 22 42
>>>
>>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


Re: [petsc-users] How to combine different element types into a single DMPlex?

2021-09-21 Thread Karin
Dear Eric, dear Matthew,

I share Eric's desire to be able to manipulate meshes composed of different
types of elements in a PETSc's DMPlex.
Since this discussion, is there anything new on this feature for the
DMPlex object or am I missing something?

Thanks,
Nicolas

On Wed, Jul 21, 2021 at 4:25 AM Eric Chamberland <
eric.chamberl...@giref.ulaval.ca> wrote:

> Hi,
> On 2021-07-14 3:14 p.m., Matthew Knepley wrote:
>
> On Wed, Jul 14, 2021 at 1:25 PM Eric Chamberland <
> eric.chamberl...@giref.ulaval.ca> wrote:
>
>> Hi,
>>
>> while playing with DMPlexBuildFromCellListParallel, I noticed we have to
>> specify "numCorners", which is a fixed value and thus imposes a fixed number
>> of nodes for every element.
>>
>> How can I then add, for example, triangles and quadrangles into a DMPlex?
>>
>
> You can't with that function. It would be much, much more complicated if
> you could, and I am not sure
> it is worth it for that function. The reason is that you would need index
> information to offset into the
> connectivity list, and that would need to be replicated to some extent so
> that all processes know what
> the others are doing. Possible, but complicated.
>
> Maybe I can help suggest something for what you are trying to do?
>
> Yes: we are trying to partition our parallel mesh with PETSc functions.
> The mesh has been read in parallel so each process owns a part of it, but
> we have to manage mixed element types.
>
> When we directly use ParMETIS_V3_PartMeshKway, we give two arrays to
> describe the elements which allows mixed elements.
>
> So, how would I read my mixed mesh in parallel and give it to PETSc DMPlex
> so I can use a PetscPartitioner with DMPlexDistribute ?
>
> A second goal we have is to use PETSc to compute the overlap, which is
> something I can't find in ParMETIS (and any other partitioning library?)
>
> Thanks,
>
> Eric
>
>
>
>   Thanks,
>
>   Matt
>
>
>
>> Thanks,
>>
>> Eric
>>
>> --
>> Eric Chamberland, ing., M. Ing
>> Professionnel de recherche
>> GIREF/Université Laval
>> (418) 656-2131 poste 41 22 42
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>
> --
> Eric Chamberland, ing., M. Ing
> Professionnel de recherche
> GIREF/Université Laval
> (418) 656-2131 poste 41 22 42
>
>


Re: [petsc-users] DMPlex and Boundary facets

2021-05-19 Thread Karin
Sure, no problem.
Thanks,
Nicolas

On Wed, May 19, 2021 at 7:47 PM Matthew Knepley  wrote:

> On Wed, May 19, 2021 at 12:46 PM Karin  wrote:
>
>> Thanks, I think I understand.
>> So at the moment, the user has to manage the link between the labels of
>> the different strata and the names.
>>
>
> Yes.
>
>
>> I am mainly willing to use DMPlex for AMR, and names of groups are
>> essential in our code - reading what Matt says
>> gives me the impression that this is doable.
>>
>
> I think it is. I will be able to do it in the middle of June if you do not
> have something by then. Can you make an issue?
>
>   Thanks,
>
>   Matt
>
>
>> Thanks again,
>> Nicolas
>>
>> On Wed, May 19, 2021 at 4:02 PM Matthew Knepley  wrote:
>>
>>> On Wed, May 19, 2021 at 9:57 AM Karin  wrote:
>>>
>>>> Thank you very much Lawrence.
>>>> Are the names lost or are they saved somewhere ?
>>>> If I do : "./ex2 -filename /tmp/test/Cube_with_facets.msh4 -dm_view
>>>> vtk:/tmp/foo.vtk" , I only get the tets of the initial mesh.
>>>>
>>>
>>> Yes, the names are lost. Jed, Lisandro and Stefano have also asked me
>>> about this. I have not done since everything is currently done by
>>> number in Plex. It would involve either
>>>
>>> a) making a separate table in the DM which translates names to numbers
>>>
>>> or
>>>
>>> b) putting such a translation table in the DMLabel itself
>>>
>>> I am putting it off because doing something short is not hard, but getting it
>>> to work with everything else would be somewhat of a pain.
>>> It would need to distribute properly, propagate to
>>> refined/coarsened/subset meshes, etc. which labels currently do. It would
>>> not
>>> be properly passed on when you just set an int on a point. I think it
>>> will happen eventually, but I don't think I have time in the next
>>> few weeks.
>>>
>>>   Thanks,
>>>
>>>  Matt
>>>
>>>
>>>> Thanks,
>>>> Nicolas
>>>>
>>>> On Wed, May 19, 2021 at 3:49 PM Lawrence Mitchell  wrote:
>>>>
>>>>>
>>>>>
>>>>> > On 19 May 2021, at 14:46, Karin  wrote:
>>>>> >
>>>>> > Dear Matthew,
>>>>> >
>>>>> > You are (again) right. This Gmsh test file is "dirty" : some
>>>>> triangles do not belong to tets. Sorry for that.
>>>>> > I have tried with another geo file (which is clean in that sense)
>>>>> and PETSc reads with no error.
>>>>> >
>>>>> > I take this opportunity to ask my initial question : are the labels
>>>>> of the Physical Entities saved somewhere ?
>>>>> > A gmsh test file with several Physical entities is attached to this
>>>>> email.
>>>>>
>>>>> Yes, plex represents Physical entities as "labels". In particular, for
>>>>> facets, they are loaded as the "Face Sets" label, plex also loads markers
>>>>> on cells in the "Cell Sets" label.
>>>>>
>>>>> e.g. here's the DMView output of your mesh.
>>>>>
>>>>> DM Object: DM_0x8400_0 1 MPI processes
>>>>>   type: plex
>>>>> DM_0x8400_0 in 3 dimensions:
>>>>>   0-cells: 64
>>>>>   1-cells: 279
>>>>>   2-cells: 378
>>>>>   3-cells: 162
>>>>> Labels:
>>>>>   celltype: 4 strata with value/size (0 (64), 6 (162), 3 (378), 1
>>>>> (279))
>>>>>   depth: 4 strata with value/size (0 (64), 1 (279), 2 (378), 3 (162))
>>>>>   Cell Sets: 1 strata with value/size (7 (162))
>>>>>   Face Sets: 6 strata with value/size (3 (18), 6 (18), 2 (18), 5 (18),
>>>>> 1 (18), 4 (18))
>>>>>
>>>>>
>>>>> Lawrence
>>>>
>>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>> <http://www.cse.buffalo.edu/~knepley/>
>>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


Re: [petsc-users] DMPlex and Boundary facets

2021-05-19 Thread Karin
Thanks, I think I understand.
So at the moment, the user has to manage the link between the labels of the
different strata and the names.
I am mainly willing to use DMPlex for AMR, and names of groups are
essential in our code - reading what Matt says
gives me the impression that this is doable.

Thanks again,
Nicolas

On Wed, May 19, 2021 at 4:02 PM Matthew Knepley  wrote:

> On Wed, May 19, 2021 at 9:57 AM Karin  wrote:
>
>> Thank you very much Lawrence.
>> Are the names lost or are they saved somewhere ?
>> If I do : "./ex2 -filename /tmp/test/Cube_with_facets.msh4 -dm_view
>> vtk:/tmp/foo.vtk" , I only get the tets of the initial mesh.
>>
>
> Yes, the names are lost. Jed, Lisandro and Stefano have also asked me
> about this. I have not done since everything is currently done by
> number in Plex. It would involve either
>
> a) making a separate table in the DM which translates names to numbers
>
> or
>
> b) putting such a translation table in the DMLabel itself
>
> I am putting it off because doing something short is not hard, but getting it
> to work with everything else would be somewhat of a pain.
> It would need to distribute properly, propagate to
> refined/coarsened/subset meshes, etc. which labels currently do. It would
> not
> be properly passed on when you just set an int on a point. I think it will
> happen eventually, but I don't think I have time in the next
> few weeks.
>
>   Thanks,
>
>  Matt
>
>
>> Thanks,
>> Nicolas
>>
>> On Wed, May 19, 2021 at 3:49 PM Lawrence Mitchell  wrote:
>>
>>>
>>>
>>> > On 19 May 2021, at 14:46, Karin  wrote:
>>> >
>>> > Dear Matthew,
>>> >
>>> > You are (again) right. This Gmsh test file is "dirty" : some triangles
>>> do not belong to tets. Sorry for that.
>>> > I have tried with another geo file (which is clean in that sense) and
>>> PETSc reads with no error.
>>> >
>>> > I take this opportunity to ask my initial question : are the labels of
>>> the Physical Entities saved somewhere ?
>>> > A gmsh test file with several Physical entities is attached to this
>>> email.
>>>
>>> Yes, plex represents Physical entities as "labels". In particular, for
>>> facets, they are loaded as the "Face Sets" label, plex also loads markers
>>> on cells in the "Cell Sets" label.
>>>
>>> e.g. here's the DMView output of your mesh.
>>>
>>> DM Object: DM_0x8400_0 1 MPI processes
>>>   type: plex
>>> DM_0x8400_0 in 3 dimensions:
>>>   0-cells: 64
>>>   1-cells: 279
>>>   2-cells: 378
>>>   3-cells: 162
>>> Labels:
>>>   celltype: 4 strata with value/size (0 (64), 6 (162), 3 (378), 1 (279))
>>>   depth: 4 strata with value/size (0 (64), 1 (279), 2 (378), 3 (162))
>>>   Cell Sets: 1 strata with value/size (7 (162))
>>>   Face Sets: 6 strata with value/size (3 (18), 6 (18), 2 (18), 5 (18), 1
>>> (18), 4 (18))
>>>
>>>
>>> Lawrence
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


Re: [petsc-users] DMPlex and Boundary facets

2021-05-19 Thread Karin
Thank you very much Lawrence.
Are the names lost or are they saved somewhere ?
If I do : "./ex2 -filename /tmp/test/Cube_with_facets.msh4 -dm_view
vtk:/tmp/foo.vtk" , I only get the tets of the initial mesh.

Thanks,
Nicolas



On Wed, May 19, 2021 at 3:49 PM Lawrence Mitchell  wrote:

>
>
> > On 19 May 2021, at 14:46, Karin  wrote:
> >
> > Dear Matthew,
> >
> > You are (again) right. This Gmsh test file is "dirty" : some triangles
> do not belong to tets. Sorry for that.
> > I have tried with another geo file (which is clean in that sense) and
> PETSc reads with no error.
> >
> > I take this opportunity to ask my initial question : are the labels of
> the Physical Entities saved somewhere ?
> > A gmsh test file with several Physical entities is attached to this
> email.
>
> Yes, plex represents Physical entities as "labels". In particular, for
> facets, they are loaded as the "Face Sets" label, plex also loads markers
> on cells in the "Cell Sets" label.
>
> e.g. here's the DMView output of your mesh.
>
> DM Object: DM_0x8400_0 1 MPI processes
>   type: plex
> DM_0x8400_0 in 3 dimensions:
>   0-cells: 64
>   1-cells: 279
>   2-cells: 378
>   3-cells: 162
> Labels:
>   celltype: 4 strata with value/size (0 (64), 6 (162), 3 (378), 1 (279))
>   depth: 4 strata with value/size (0 (64), 1 (279), 2 (378), 3 (162))
>   Cell Sets: 1 strata with value/size (7 (162))
>   Face Sets: 6 strata with value/size (3 (18), 6 (18), 2 (18), 5 (18), 1
> (18), 4 (18))
>
>
> Lawrence
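[A short petsc4py sketch of how those labels can be queried after loading the mesh; the file name is illustrative. As Lawrence notes, only the integer tags of the Physical groups survive, not their names.]

    from petsc4py import PETSc

    dm = PETSc.DMPlex().createFromFile("Cube_with_facets.msh4")

    # Physical Surface tags land in "Face Sets", Physical Volume tags in "Cell Sets"
    face_tags = dm.getLabelIdIS("Face Sets").getIndices()
    for tag in face_tags:
        nfaces = dm.getStratumSize("Face Sets", tag)
        PETSc.Sys.Print(f"Physical surface {tag}: {nfaces} facets")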


Re: [petsc-users] DMPlex and Boundary facets

2021-05-19 Thread Karin
Dear Matthew,

You are (again) right. This Gmsh test file is "dirty" : some triangles do
not belong to tets. Sorry for that.
I have tried with another geo file (which is clean in that sense) and PETSc
reads with no error.

I take this opportunity to ask my initial question : are the labels of the
Physical Entities saved somewhere ?
A gmsh test file with several Physical entities is attached to this email.

Thank you again,
Nicolas

On Wed, May 19, 2021 at 3:12 PM Matthew Knepley  wrote:

> Okay, what Plex is saying is that Gmsh gave it a facet (a set of
> vertices), but those are not the vertices
> of any tetrahedral face in the mesh. It is hard to verify such a thing
> with 12K tets. I have definitely read in
> GMsh files before with facets, so I don't think the code is completely
> wrong.
>
> Would it be possible to make a smaller mesh (10-20 cells) that shows the
> same bug?
>
>   Thanks,
>
>  Matt
>
> On Tue, May 18, 2021 at 12:25 PM Karin  wrote:
>
>> Sure! I send you both, one with the facets and one without.
>>
>> Thanks,
>> Nicolas
>>
>> On Tue, May 18, 2021 at 5:49 PM Matthew Knepley  wrote:
>>
>>> On Tue, May 18, 2021 at 8:18 AM Karin  wrote:
>>>
>>>> Dear PETSc team,
>>>>
>>>> I have tried to load a test mesh available in Gmsh' s demos directory
>>>>  (share/doc/gmsh/demos/simple_geo/filter.geo, attached to this email) as a
>>>> DMPlex.
>>>> So I produced a msh4 file by doing :
>>>> gmsh -3 filter.geo -o /tmp/test.msh4
>>>> Then I used src/dm/impls/plex/tutorials/ex2.c to load the mesh by doing
>>>> :
>>>> ./ex2 -filename /tmp/test.msh4
>>>>
>>>> Unfortunately I get the error :
>>>>
>>>> [0]PETSC ERROR: - Error Message
>>>> --
>>>> [0]PETSC ERROR: No support for this operation for this object type
>>>> [0]PETSC ERROR: Could not determine Plex facet for Gmsh element 1268
>>>> (Plex cell 12681)
>>>>
>>>> The error seems to come from the fact that the msh file contains
>>>> tets *and* facets *only on the Physical entities* (aka parts of the mesh
>>>> boundary where
>>>> the user will assign Dirichlet or Neuman conditions).
>>>> If I suppress these facets by commenting  the "Physical Surface" lines
>>>> in the geo file and regenerating the mesh, everything is fine.
>>>>
>>>> But the use of these "Physical" stuff is very common in lots of finite
>>>> element codes in order to assign boundary conditions.
>>>> How should I do to keep  these boundary groups of 2D elements (with
>>>> corresponding names) ?
>>>>
>>>
>>> Can you also send the *.msh file? I do not have Gmsh on this machine.
>>>
>>>   Thanks,
>>>
>>>  Matt
>>>
>>>
>>>> Thanks for your help,
>>>> Nicolas
>>>>
>>>>
>>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>> <http://www.cse.buffalo.edu/~knepley/>
>>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


Cube_with_facets.msh4
Description: Binary data


[petsc-users] DMPlex and Boundary facets

2021-05-18 Thread Karin
Dear PETSc team,

I have tried to load a test mesh available in Gmsh' s demos directory
 (share/doc/gmsh/demos/simple_geo/filter.geo, attached to this email) as a
DMPlex.
So I produced a msh4 file by doing :
gmsh -3 filter.geo -o /tmp/test.msh4
Then I used src/dm/impls/plex/tutorials/ex2.c to load the mesh by doing :
./ex2 -filename /tmp/test.msh4

Unfortunately I get the error :

[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: Could not determine Plex facet for Gmsh element 1268 (Plex
cell 12681)

The error seems to come from the fact that the msh file contains tets *and*
facets *only on the Physical entities* (aka parts of the mesh boundary
where
the user will assign Dirichlet or Neuman conditions).
If I suppress these facets by commenting  the "Physical Surface" lines in
the geo file and regenerating the mesh, everything is fine.

But the use of these "Physical" stuff is very common in lots of finite
element codes in order to assign boundary conditions.
How should I do to keep  these boundary groups of 2D elements (with
corresponding names) ?

Thanks for your help,
Nicolas


filter.geo
Description: Binary data


Re: [petsc-users] Nesting splits based on IS

2021-04-27 Thread Karin
It is crystal clear when I am reading you. I will try it on my small test
and I will let you know how it goes.

Thanks for your help,
Nicolas

On Tue, Apr 27, 2021 at 4:55 PM Matthew Knepley  wrote:

> On Tue, Apr 27, 2021 at 9:58 AM Karin  wrote:
>
>> I will have a look at PetscSection (that I have never used before).
>> The FE I use are P2-P1-P1-P1 Lagrange elements. The point is that the
>> numbering of the unknowns is done by the application (I do not have the
>> hand on it). At the moment, I build IS in order to define the different
>> fields and I nest them programmatically in order to set some recursive
>> preconditioner.
>> To enable the use of the command line options in order to nest the splits
>> is a matter of elegance and smoothness since the nesting of splits I have
>> programmed are fully fixed.
>>
>
> Okay, PetscSection should be completely adequate for that. You just
>
> 1) Create a Section with 4 fields and give 1 dof/vertex for each field,
> and 1dof/edge for the first field (have to also set the cumulative dofs per
> point)
>
> 2) SectionSetUp()
>
> 3) Add a permutation if necessary to the points to get the dof ordering
> you want
>
> Does this make sense?
>
>   Thanks,
>
>  Matt
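[A rough petsc4py transcription of this recipe, with a toy box mesh standing in for the application's own point numbering; the mesh size, the P2-P1-P1-P1-like dof counts, and the field ordering are illustrative assumptions.]

    from petsc4py import PETSc

    dm = PETSc.DMPlex().createBoxMesh([2, 2, 2], simplex=True)
    pStart, pEnd = dm.getChart()
    vStart, vEnd = dm.getDepthStratum(0)     # vertices
    eStart, eEnd = dm.getDepthStratum(1)     # edges

    sec = PETSc.Section().create(comm=dm.getComm())
    sec.setNumFields(4)
    for f in range(4):
        sec.setFieldComponents(f, 1)
    sec.setChart(pStart, pEnd)
    for v in range(vStart, vEnd):            # 1) one dof per field on each vertex
        for f in range(4):
            sec.setFieldDof(v, f, 1)
        sec.setDof(v, 4)                     #    cumulative dofs on the point
    for e in range(eStart, eEnd):            #    plus one edge dof for field 0
        sec.setFieldDof(e, 0, 1)
        sec.setDof(e, 1)
    sec.setUp()                              # 2) SectionSetUp()
    # 3) sec.setPermutation(perm) with an IS, if a particular ordering is needed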
>
>
>> Nicolas
>>
>> On Tue, Apr 27, 2021 at 2:15 PM Matthew Knepley  wrote:
>>
>>> On Tue, Apr 27, 2021 at 2:51 AM Karin  wrote:
>>>
>>>> Dear PETSc Team,
>>>>
>>>> I am coming back to you to get some clue on the (missing?) Fortran
>>>> interface to the DMShellSetCreateSubDM routine.
>>>>
>>>
>>> Yes, we will have to put that in. Fortran bindings for callbacks are
>>> somewhat involved.
>>>
> However, it is easier if your layout can be described by a
>>> PetscSection (which has all the Fortran bindings). Then it will
>>> work automatically from the DMShell (I think). What kind of layout were
>>> you looking for?
>>>
>>>   Thanks,
>>>
>>>  Matt
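[Continuing the Section sketch above, the wiring Matt tentatively suggests would look roughly like this in petsc4py; the placeholder identity matrix stands in for the application-assembled operator, the sketch assumes a single MPI process, and whether PCFIELDSPLIT then finds the fields fully automatically is exactly the part he hedges on.]

    ndof = sec.getStorageSize()
    A = PETSc.Mat().createAIJ([ndof, ndof])
    A.setUp()
    for i in range(ndof):
        A.setValue(i, i, 1.0)
    A.assemble()

    shell = PETSc.DMShell().create(comm=PETSc.COMM_WORLD)
    shell.setLocalSection(sec)

    ksp = PETSc.KSP().create()
    ksp.setDM(shell)
    ksp.setDMActive(False)    # the application still supplies the operators
    ksp.setOperators(A)
    ksp.setFromOptions()      # e.g. -pc_type fieldsplit and per-field prefixes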
>>>
>>>
>>>> Thanks,
>>>> Nicolas
>>>>
>>>> On Sun, Apr 25, 2021 at 6:20 PM Karin  wrote:
>>>>
>>>>> Dear Petsc Team,
>>>>>
>>>>> I have begun to write a small test to illustrate the use of a DMShell
>>>>> in order to nest splits from the command line.
>>>>> I am writing it in Fortran because the targeted application is also in
>>>>> Fortran.
>>>>> => I am afraid that DMShellSetCreateSubDM lacks a Fortran interface.
>>>>> Am I wrong ? Can't the interface be generated by bfort ?
>>>>>
>>>>> Thanks,
>>>>> Nicolas
>>>>>
>>>>> On Mon, Apr 19, 2021 at 12:28 PM Matthew Knepley  wrote:
>>>>>
>>>>>> On Sun, Apr 18, 2021 at 11:54 PM Barry Smith 
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>   Matt,
>>>>>>>
>>>>>>>I agree it can be done with DMs, but I am not convinced that is
>>>>>>> necessarily a superior way when one has to make DMSHELLS and mess around
>>>>>>> with their behavior.
>>>>>>>
>>>>>>>One way to look at PCFIELDSPLIT is that its basic splitting tool
>>>>>>> is IS and DM is a way of generating IS that can be used for 
>>>>>>> PCFIELDSPLIT.
>>>>>>> In this approach PCFIELDSPLIT needs a bit more support for IS and 
>>>>>>> nesting.
>>>>>>> To also help using multiple levels of DM.
>>>>>>>
>>>>>>>The other way is to view DM and subDM as the basic splitting tool
>>>>>>> for PCFIELDSPLIT but then one has to ask the question how does DM
>>>>>>> communicate its splits to PCFIELDSPLIT?   I am too lazy to look at the 
>>>>>>> code
>>>>>>> but presumably with IS, hence the approach above.
>>>>>>>
>>>>>>>In the traditional Unix software stack approach to code, each
>>>>>>> layer uses the simpler layer below to communicate; so DM uses IS to
>>>>>>> communicate to the layer below (the linear algebra and preconditioners).
>>>>>>> DM is a hugely complicated layer and making it communicate directly with
>>>>>>> the vector level is complex; I like having the IS level in between to
>>>>>>> simplify the software stack and programming model.

Re: [petsc-users] Nesting splits based on IS

2021-04-27 Thread Karin
I will have a look at PetscSection (that I have never used before).
The FE I use are P2-P1-P1-P1 Lagrange elements. The point is that the
numbering of the unknowns is done by the application (I do not have the
hand on it). At the moment, I build IS in order to define the different
fields and I nest them programmatically in order to set some recursive
preconditioner.
To enable the use of the command line options in order to nest the splits
is a matter of elegance and smoothness since the nesting of splits I have
programmed are fully fixed.

Nicolas

On Tue, Apr 27, 2021 at 2:15 PM Matthew Knepley  wrote:

> On Tue, Apr 27, 2021 at 2:51 AM Karin  wrote:
>
>> Dear PETSc Team,
>>
>> I am coming back to you to get some clue on the (missing?) Fortran
>> interface to the DMShellSetCreateSubDM routine.
>>
>
> Yes, we will have to put that in. Fortran bindings for callbacks are
> somewhat involved.
>
> However, it is easier if your layout can be described by a
> PetscSection (which has all the Fortran bindings). Then it will
> work automatically from the DMShell (I think). What kind of layout were
> you looking for?
>
>   Thanks,
>
>  Matt
>
>
>> Thanks,
>> Nicolas
>>
>> On Sun, Apr 25, 2021 at 6:20 PM Karin  wrote:
>>
>>> Dear Petsc Team,
>>>
>>> I have begun to write a small test to illustrate the use of a DMShell in
>>> order to nest splits from the command line.
>>> I am writing it in Fortran because the targeted application is also in
>>> Fortran.
>>> => I am afraid that DMShellSetCreateSubDM lacks a Fortran interface.
>>> Am I wrong ? Can't the interface be generated by bfort ?
>>>
>>> Thanks,
>>> Nicolas
>>>
>>> On Mon, Apr 19, 2021 at 12:28 PM Matthew Knepley  wrote:
>>>
>>>> On Sun, Apr 18, 2021 at 11:54 PM Barry Smith  wrote:
>>>>
>>>>>
>>>>>   Matt,
>>>>>
>>>>>I agree it can be done with DMs, but I am not convinced that is
>>>>> necessarily a superior way when one has to make DMSHELLS and mess around
>>>>> with their behavior.
>>>>>
>>>>>One way to look at PCFIELDSPLIT is that its basic splitting tool is
>>>>> IS and DM is a way of generating IS that can be used for PCFIELDSPLIT. In
>>>>> this approach PCFIELDSPLIT needs a bit more support for IS and nesting. To
>>>>> also help using multiple levels of DM.
>>>>>
>>>>>The other way is to view DM and subDM as the basic splitting tool
>>>>> for PCFIELDSPLIT but then one has to ask the question how does DM
>>>>> communicate its splits to PCFIELDSPLIT?   I am too lazy to look at the 
>>>>> code
>>>>> but presumably with IS, hence the approach above.
>>>>>
>>>>>In the traditional Unix software stack approach to code, each layer
>>>>> uses the simpler layer below to communicate; so DM uses IS to communicate
>>>>> to the layer below (the linear algebra and preconditioners).  DM is a
>>>>> hugely complicated layer and making it communicate directly with the 
>>>>> vector
>>>>> level is complex; I like having the IS level in between to simplify the
>>>>> software stack and programming model.
>>>>>
>>>>>PetscSection makes life more complicated since it is a bit disjoint
>>>>> from IS. I've never resolved in my mind what role PetscSection plays, is 
>>>>> it
>>>>> a level above IS in the software stack that generates IS, does it 
>>>>> sometimes
>>>>> "skip over" IS to directly talk to linear algebra?
>>>>>
>>>>>If you cannot make a cartoon picture of the software stack, with
>>>>> all the objects, then I think the software stack is not well designed or
>>>>> defined. I fear we cannot make such a cartoon currently.
>>>>>
>>>>
>>>> DM _does_ communicate with PCFIELDSPLIT using IS. I agree with you that
>>>> IS is a good object for communication. In PETSc, IS is just a nice way to
>>>> pass a list of integers.
>>>>
>>>> I don't think DM is hugely complicated. It does a few simple jobs. Here
>>>> its job is to remember which field each dof belongs to. That is all we
>>>> have to worry about.
>>>>
>>>> PetscSection semantically is a linear space with some structure. We
>>>> already know we want some structure like this, since we break all linear
>>>> spaces in PETSc into processes.

Re: [petsc-users] Nesting splits based on IS

2021-04-27 Thread Karin
Dear PETSc Team,

I am coming back to you to get some clue on the (missing?) Fortran
interface to the DMShellSetCreateSubDM routine.

Thanks,
Nicolas

On Sun, Apr 25, 2021 at 6:20 PM Karin  wrote:

> Dear Petsc Team,
>
> I have begun to write a small test to illustrate the use of a DMShell in
> order to nest splits from the command line.
> I am writing it in Fortran because the targeted application is also in
> Fortran.
> => I am afraid that DMShellSetCreateSubDM lacks a Fortran interface.
> Am I wrong ? Can't the interface be generated by bfort ?
>
> Thanks,
> Nicolas
>
> On Mon, Apr 19, 2021 at 12:28 PM Matthew Knepley  wrote:
>
>> On Sun, Apr 18, 2021 at 11:54 PM Barry Smith  wrote:
>>
>>>
>>>   Matt,
>>>
>>>I agree it can be done with DMs, but I am not convinced that is
>>> necessarily a superior way when one has to make DMSHELLS and mess around
>>> with their behavior.
>>>
>>>One way to look at PCFIELDSPLIT is that its basic splitting tool is
>>> IS and DM is a way of generating IS that can be used for PCFIELDSPLIT. In
>>> this approach PCFIELDSPLIT needs a bit more support for IS and nesting. To
>>> also help using multiple levels of DM.
>>>
>>>The other way is to view DM and subDM as the basic splitting tool for
>>> PCFIELDSPLIT but then one has to ask the question how does DM communicate
>>> its splits to PCFIELDSPLIT?   I am too lazy to look at the code but
>>> presumably with IS, hence the approach above.
>>>
>>>In the traditional Unix software stack approach to code, each layer
>>> uses the simpler layer below to communicate; so DM uses IS to communicate
>>> to the layer below (the linear algebra and preconditioners).  DM is a
>>> hugely complicated layer and making it communicate directly with the vector
>>> level is complex; I like having the IS level in between to simplify the
>>> software stack and programming model.
>>>
>>>PetscSection makes life more complicated since it is a bit disjoint
>>> from IS. I've never resolved in my mind what role PetscSection plays, is it
>>> a level above IS in the software stack that generates IS, does it sometimes
>>> "skip over" IS to directly talk to linear algebra?
>>>
>>>If you cannot make a cartoon picture of the software stack, with all
>>> the objects, then I think the software stack is not well designed or
>>> defined. I fear we cannot make such a cartoon currently.
>>>
>>
>> DM _does_ communicate with PCFIELDSPLIT using IS. I agree with you that
>> IS is a good object for communication. In PETSc, IS is just a nice way to
>> pass a list of integers.
>>
>> I don't think DM is hugely complicated. It does a few simple jobs. Here
>> its job is to remember which field each dof belongs to. That is all we
>> have to worry about.
>>
>> PetscSection semantically is a linear space with some structure. We
>> already know we want some structure like this, since we break all linear
>> spaces in PETSc into processes. Section allows you to break it down a
>> little finer into the pieces of the space for each "point", where you can
>> use a point to mark anything you want, like a process, cell, edge, another
>> dof, etc. Sections can make an IS when asked a question, such as "which
>> dofs lie in the closure of this cell", or "which dofs are in this field",
>> or "which dofs are owned by this process". I have written this in the
>> manual years ago.
>>
>> Matt
>>
>>
>>>  Barry
>>>
>>>
>>>
>>>
>>> On Apr 18, 2021, at 8:54 AM, Matthew Knepley  wrote:
>>>
>>> On Sat, Apr 17, 2021 at 6:13 PM Barry Smith  wrote:
>>>
>>>>
>>>>   So you would like to be able to create three IS in your code and
>>>> attach them with names to the PC.  Then have -pc_fieldsplit_XXX_fields be
>>>> able to utilize the attached IS by name and use them to define the blocks.
>>>>
>>>>
>>>>   This is all doable and could be added to PCFIELDSPLIT without too
>>>> much code, new code. The code would be largely like
>>>> PCFieldSplitSetRuntimeSplits_Private.
>>>>
>>>>The recursive part may also be doable but I think your syntax below
>>>> is not exactly right. You would need something like
>>>>
>>>> -fieldsplit_0_pc_type fieldsplit   // split the first PC into a
>>>> fieldsplit

Re: [petsc-users] Nesting splits based on IS

2021-04-25 Thread Karin
>>> There are two cases,
>>>
>>>   when building the split uses first all the entries from fieldsplit_v_,
>>> then from fieldsplit_p_ then the new ISs it needs to attach to the first
>>> split PC are two sets of integers the first from 0 to the len(v)-1 and the
>>> second from len(v) to len(v)+len(p)-1.
>>>
>>>   when building the split it interlaces the indices from v and p
>>> (interlacing only makes sense if the size of v and p is the same). Then the
>>> new v would be {0,2,4,...} and p would be {1,3,...}.
>>>
>>>   If you are ambitious and would like to add this to fieldsplit.c we'd
>>> be very happy to receive an MR. It might even lead to allowing us to simplify
>>> how PCFIELDSPLIT interacts with DMs, if all the split types (stride,
>>> named, etc.) are handled in a single consistent manner.
>>>
>>
>> Barry, this is already working with DMs, which I think is the right way
>> to do this.
>>
>> Here is the code:
>>
>>
>> https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/fieldsplit/fieldsplit.c#L420
>>
>> The DM must respond to DMCreateSubDM(). The interface is that the call
>> provides a list of fields [f_0, f_1, ...]
>> and the DM returns an IS for that combination and a subdm for it. The
>> subdm part is what allows you to do this
>> recursively, since you make the same call on the subdm.
>>
>> If you use a PetscSection to keep track of the data layout, the code to
>> do field selection is already done:
>>
>>   https://gitlab.com/petsc/petsc/-/blob/main/src/dm/interface/dmi.c#L105
>>
>> which can just be called from the DMCreateSubDM() implementation.
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>>   Barry
>>>
>>>
>>>
>>> > On Apr 17, 2021, at 11:53 AM, Karin  wrote:
>>> >
>>> > Dear PETSc users,
>>> >
>>> > I use the fieldsplit PC in an application where the splits are
>>> programmatically defined by IS using PCFieldSplitSetIS. Then the user can
>>> specify its own PC at runtime using PETSc options.
>>> > My question : is it possible to define nested splits in this case as
>>> it can be done with strided splits (see snes/examples/tutorials/ex19.c with
>>> test suffix fieldsplit_4).
>>> >
>>> > In order to be perfectly clear : let's say I have a 3 fields problem :
>>> velocity (v split), pressure (p split) and temperature (t split).
>>> > What I would like to do is something like the following but it fails :
>>> >
>>> > -ksp_type fgmres
>>> > -pc_fieldsplit_type multiplicative
>>> > -pc_type fieldsplit
>>> > -pc_fieldsplit_0_fields fieldsplit_v_, fieldsplit_p_
>>> > -pc_fieldsplit_1_fields fieldsplit_t_
>>> >
>>> > -prefix_push  fieldsplit_0_
>>> > -ksp_type fgmres
>>> > -pc_fieldsplit_schur_factorization_type upper
>>> > -pc_type fieldsplit
>>> > -pc_fieldsplit_type schur
>>> > -pc_fieldsplit_schur_precondition a11
>>> > -prefix_pop
>>> >
>>> > -prefix_push  fieldsplit_1_
>>> > -ksp_type fgmres
>>> > -pc_type jacobi
>>> > -prefix_pop
>>> >
>>> > -prefix_push  fieldsplit_v_
>>> > -ksp_type fgmres
>>> > -pc_type gamg
>>> > -prefix_pop
>>> >
>>> > -prefix_push  fieldsplit_p_
>>> > -ksp_type fgmres
>>> > -pc_type jacobi
>>> > -prefix_pop
>>> >
>>> > I thank you for your help,
>>> > Nicolas
>>> >
>>> >
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> <http://www.cse.buffalo.edu/~knepley/>
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


Re: [petsc-users] Nesting splits based on IS

2021-04-18 Thread Karin
Thanks for the advice.
But I do not find tests using DMShell and DMCreateSubDM.
So does the idea of writing a new version
of src/ksp/ksp/examples/tutorials/ex43.c make sense?

Thanks,
Nicolas

On Sun, Apr 18, 2021 at 8:38 PM Matthew Knepley  wrote:

> On Sun, Apr 18, 2021 at 2:20 PM Karin  wrote:
>
>> Dear Matt and Barry,
>>
>> Unfortunately DM is not used in our PETSc interface, which is essentially
>> based on the assembled matrix.
>> As Matt mentioned, the trick could be to define a DMShell in order to
>> expose the splits we internally build based on IS.
>> If I understand well, I have to play around with
>> DMShellSetCreateFieldDecomposition. Unfortunately, I do not find tests
>> using this method. Am I missing something ?
>>
>
> I would not use that function. I would use DMCreateSubDM().
>
>   Thanks,
>
> Matt
>
>
>> Perhaps a good starting point would be to write a test
>> like src/ksp/ksp/examples/tutorials/ex43.c in which one would consider the
>> algebraic data to be known (the matrix and the IS) and would set up a
>> DMShell  (with an appropriate call to DMShellSetCreateFieldDecomposition).
>> Does that make sense?
>>
>> Thanks,
>> Nicolas
>>
>> On Sun, Apr 18, 2021 at 3:54 PM Matthew Knepley  wrote:
>>
>>> On Sat, Apr 17, 2021 at 6:13 PM Barry Smith  wrote:
>>>
>>>>
>>>>   So you would like to be able to create three IS in your code and
>>>> attach them with names to the PC.  Then have -pc_fieldsplit_XXX_fields be
>>>> able to utilize the attached IS by name and use them to define the blocks.
>>>>
>>>>
>>>>   This is all doable and could be added to PCFIELDSPLIT without too
>>>> much code, new code. The code would be largely like
>>>> PCFieldSplitSetRuntimeSplits_Private.
>>>>
>>>>The recursive part may also be doable but I think your syntax below
>>>> is not exactly right. You would need something like
>>>>
>>>> -fieldsplit_0_pc_type fieldsplit   // split the first PC into a
>>>> fieldsplit
>>>>  -fieldsplit_0_pc_fieldsplit_0_fields xxx
>>>> -fieldsplit_0_fieldsplit_0_pc_type jacobi
>>>>  -fieldsplit_0_pc_fieldsplit_1_fields yyy
>>>>  etc
>>>>
>>>> this would split the first field into two fields and use jacobi on the
>>>> first field.
>>>>
>>>> The problem is what to use for labelling the xxx and the yyy?
>>>>
>>>> I think one could achieve this by having the PCFIELDSPLIT attach to each
>>>> of its split PCs the appropriate modified IS with names attached to them.
>>>> There are two cases,
>>>>
>>>>   when building the split it first uses all the entries from
>>>> fieldsplit_v_, then those from fieldsplit_p_; the new ISs it needs to attach
>>>> to the first split PC are then two sets of integers, the first from 0 to
>>>> len(v)-1 and the second from len(v) to len(v)+len(p)-1.
>>>>
>>>>   when building the split it interlaces the indices from v and p
>>>> (interlacing only makes sense if the size of v and p is the same). Then the
>>>> new v would be {0,2,4,...} and p would be {1,3,...}.
>>>>
>>>>   If you are ambitious and would like to add this to fieldsplit.c we'd
>>>> be very happy to receive an MR. It might even lead to allowing us to simplify
>>>> how PCFIELDSPLIT interacts with DMs, if all the split types (stride,
>>>> named, etc.) are handled in a single consistent manner.
>>>>
>>>
>>> Barry, this is already working with DMs, which I think is the right way
>>> to do this.
>>>
>>> Here is the code:
>>>
>>>
>>> https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/fieldsplit/fieldsplit.c#L420
>>>
>>> The DM must respond to DMCreateSubDM(). The interface is that the call
>>> provides a list of fields [f_0, f_1, ...]
>>> and the DM returns an IS for that combination and a subdm for it. The
>>> subdm part is what allows you to do this
>>> recursively, since you make the same call on the subdm.
>>>
>>> If you use a PetscSection to keep track of the data layout, the code to
>>> do field selection is already done:
>>>
>>>   https://gitlab.com/petsc/petsc/-/blob/main/src/dm/interface/dmi.c#L105
>>>
>>> which can just be called from the DMCreateSubDM() implementation.
>>>
>>>   Thanks,
>>>  Matt

Re: [petsc-users] Nesting splits based on IS

2021-04-18 Thread Karin
Dear Matt and Barry,

Unfortunately DM is not used in our PETSc interface, which is essentially
based on the assembled matrix.
As Matt mentioned, the trick could be to define a DMShell in order to
expose the splits we internally build based on IS.
If I understand well, I have to play around with
DMShellSetCreateFieldDecomposition. Unfortunately, I do not find tests
using this method. Am I missing something ?
Perhaps a good starting point would be to write a test
like src/ksp/ksp/examples/tutorials/ex43.c in which one would consider the
algebraic data to be known (the matrix and the IS) and would set up a
DMShell  (with an appropriate call to DMShellSetCreateFieldDecomposition).
Does that make sense?

Thanks,
Nicolas

On Sun, Apr 18, 2021 at 3:54 PM Matthew Knepley  wrote:

> On Sat, Apr 17, 2021 at 6:13 PM Barry Smith  wrote:
>
>>
>>   So you would like to be able to create three IS in your code and attach
>> them with names to the PC.  Then have -pc_fieldsplit_XXX_fields be able to
>> utilize the attached IS by name and use them to define the blocks.
>>
>>   This is all doable and could be added to PCFIELDSPLIT without too much
>> code, new code. The code would be largely like
>> PCFieldSplitSetRuntimeSplits_Private.
>>
>>The recursive part may also be doable but I think your syntax below is
>> not exactly right. You would need something like
>>
>> -fieldsplit_0_pc_type fieldsplit   // split the first PC into a fieldsplit
>>  -fieldsplit_0_pc_fieldsplit_0_fields xxx
>> -fieldsplit_0_fieldsplit_0_pc_type jacobi
>>  -fieldsplit_0_pc_fieldsplit_1_fields yyy
>>  etc
>>
>> this would split the first field into two fields and use jacobi on the
>> first field.
>>
>> The problem is what to use for labelling the xxx and the yyy?
>>
>> I think one could achieve this by having the PCFIELDSPLIT attach to each
>> of its split PCs the appropriate modified IS with names attached to them.
>> There are two cases,
>>
>>   when building the split it first uses all the entries from fieldsplit_v_,
>> then those from fieldsplit_p_; the new ISs it needs to attach to the first
>> split PC are then two sets of integers, the first from 0 to len(v)-1 and the
>> second from len(v) to len(v)+len(p)-1.
>>
>>   when building the split it interlaces the indices from v and p
>> (interlacing only makes sense if the size of v and p is the same). Then the
>> new v would be {0,2,4,...} and p would be {1,3,...}.
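[A small petsc4py illustration of the two relabelings Barry describes, for len(v) == len(p) == 4; the sizes are purely illustrative.]

    from petsc4py import PETSc

    nv = npr = 4   # len(v) == len(p)

    # case 1: all v entries first, then all p entries, inside the combined block
    is_v_first = PETSc.IS().createStride(nv,  first=0,  step=1)   # 0..3
    is_p_first = PETSc.IS().createStride(npr, first=nv, step=1)   # 4..7

    # case 2: interlaced v and p
    is_v_inter = PETSc.IS().createStride(nv,  first=0, step=2)    # 0, 2, 4, 6
    is_p_inter = PETSc.IS().createStride(npr, first=1, step=2)    # 1, 3, 5, 7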
>>
>>   If you are ambitious and would like to add this to fieldsplit.c we'd be
>> very happy to receive an MR. It might even lead to allowing us to simplify
>> how PCFIELDSPLIT interacts with DMs, if all the split types (stride,
>> named, etc.) are handled in a single consistent manner.
>>
>
> Barry, this is already working with DMs, which I think is the right way to
> do this.
>
> Here is the code:
>
>
> https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/fieldsplit/fieldsplit.c#L420
>
> The DM must respond to DMCreateSubDM(). The interface is that the call
> provides a list of fields [f_0, f_1, ...]
> and the DM returns an IS for that combination and a subdm for it. The
> subdm part is what allows you to do this
> recursively, since you make the same call on the subdm.
>
> If you use a PetscSection to keep track of the data layout, the code to do
> field selection is already done:
>
>   https://gitlab.com/petsc/petsc/-/blob/main/src/dm/interface/dmi.c#L105
>
> which can just be called from the DMCreateSubDM() implementation.
>
>   Thanks,
>
>  Matt
>
>
>>   Barry
>>
>>
>>
>> > On Apr 17, 2021, at 11:53 AM, Karin  wrote:
>> >
>> > Dear PETSc users,
>> >
>> > I use the fieldsplit PC in an application where the splits are
>> programmatically defined by IS using PCFieldSplitSetIS. Then the user can
>> specify its own PC at runtime using PETSc options.
>> > My question : is it possible to define nested splits in this case as it
>> can be done with strided splits (see snes/examples/tutorials/ex19.c with
>> test suffix fieldsplit_4).
>> >
>> > In order to be perfectly clear : let's say I have a 3 fields problem :
>> velocity (v split), pressure (p split) and temperature (t split).
>> > What I would like to do is something like the following but it fails :
>> >
>> > -ksp_type fgmres
>> > -pc_fieldsplit_type multiplicative
>> > -pc_type fieldsplit
>> > -pc_fieldsplit_0_fields fieldsplit_v_,fieldsplit_p_
>> > -pc_fieldsplit_1_fields fieldsplit_t_
>> >
>> > -prefix_push  fieldsplit_0_

Re: [petsc-users] Nesting splits based on IS

2021-04-17 Thread Karin
Thank you Pierre and Matt. In fact, I see how to do it programmatically, but
I would like to give the user the capability to do it from the command-line
options, just as it can be done with strided splits.
@Matt: could you please elaborate a little bit on the solution you proposed?

Thanks for your precious help,
Nicolas

On Sat, Apr 17, 2021 at 20:07, Matthew Knepley  wrote:

> On Sat, Apr 17, 2021 at 1:14 PM Pierre Jolivet  wrote:
>
>> Dear Nicolas,
>> Here is how I would do it:
>> - if your Pmat type is MATNEST, that’s quite easy, see
>> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/tutorials/ex81.c.html
>> - otherwise, you’ll need to first define a 2-way field splitting (with
>> v+p and t), then fetch the appropriate (0,0) block with
>> PCFieldSplitGetSubKSP(), and define an inner 2-way field splitting (with v
>> and p).
>>
>
> Or you can define a DMShell that gives back these splits for the fields.
> That is how Firedrake does it.
>
>   Thanks,
>
>  Matt
>
>
>> Thanks,
>> Pierre
>>
>> On 17 Apr 2021, at 6:53 PM, Karin  wrote:
>>
>> Dear PETSc users,
>>
>> I use the fieldsplit PC in an application where the splits are
>> programmatically defined by IS using PCFieldSplitSetIS. Then the user can
>> specify its own PC at runtime using PETSc options.
>> My question : is it possible to define nested splits in this case as it
>> can be done with strided splits (see snes/examples/tutorials/ex19.c with
>> test suffix fieldsplit_4).
>>
>> In order to be perfectly clear : let's say I have a 3 fields problem :
>> velocity (v split), pressure (p split) and temperature (t split).
>> What I would like to do is something like the following but it fails :
>>
>> -ksp_type fgmres
>> -pc_fieldsplit_type multiplicative
>> -pc_type fieldsplit
>> -pc_fieldsplit_0_fields fieldsplit_v_,fieldsplit_p_
>> -pc_fieldsplit_1_fields fieldsplit_t_
>>
>> -prefix_push  fieldsplit_0_
>> -ksp_type fgmres
>> -pc_fieldsplit_schur_factorization_type upper
>> -pc_type fieldsplit
>> -pc_fieldsplit_type schur
>> -pc_fieldsplit_schur_precondition a11
>> -prefix_pop
>>
>> -prefix_push  fieldsplit_1_
>> -ksp_type fgmres
>> -pc_type jacobi
>> -prefix_pop
>>
>> -prefix_push  fieldsplit_v_
>> -ksp_type fgmres
>> -pc_type gamg
>> -prefix_pop
>>
>> -prefix_push  fieldsplit_p_
>> -ksp_type fgmres
>> -pc_type jacobi
>> -prefix_pop
>>
>> I thank you for your help,
>> Nicolas
>>
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


[petsc-users] Nesting splits based on IS

2021-04-17 Thread Karin
Dear PETSc users,

I use the fieldsplit PC in an application where the splits are
programmatically defined by IS using PCFieldSplitSetIS. Then the user can
specify their own PC at runtime using PETSc options.
My question: is it possible to define nested splits in this case, as can
be done with strided splits (see snes/examples/tutorials/ex19.c with test
suffix fieldsplit_4)?

To be perfectly clear: let's say I have a 3-field problem:
velocity (v split), pressure (p split) and temperature (t split).
What I would like to do is something like the following, but it fails:

-ksp_type fgmres
-pc_fieldsplit_type multiplicative
-pc_type fieldsplit
-pc_fieldsplit_0_fields fieldsplit_v_,fieldsplit_p_
-pc_fieldsplit_1_fields fieldsplit_t_

-prefix_push  fieldsplit_0_
-ksp_type fgmres
-pc_fieldsplit_schur_factorization_type upper
-pc_type fieldsplit
-pc_fieldsplit_type schur
-pc_fieldsplit_schur_precondition a11
-prefix_pop

-prefix_push  fieldsplit_1_
-ksp_type fgmres
-pc_type jacobi
-prefix_pop

-prefix_push  fieldsplit_v_
-ksp_type fgmres
-pc_type gamg
-prefix_pop

-prefix_push  fieldsplit_p_
-ksp_type fgmres
-pc_type jacobi
-prefix_pop

I thank you for your help,
Nicolas


[petsc-users] GAMG Parallel Performance

2018-11-15 Thread Karin via petsc-users
Dear PETSc team,

I am solving a linear transient dynamic problem, based on a finite element
discretization. To do that, I am using FGMRES with GAMG as a
preconditioner. I consider here 10 time steps.
The problem has around 118e6 dof and I am running on 1000, 1500 and 2000
procs, so I have roughly 118e3, 79e3 and 59e3 dof/proc.
I notice that the performance deteriorates when I increase the number of
processes.
You can find attached the log_view of the execution and the
detailed definition of the KSP.

Is the problem too small to run on that number of processes or is there
something wrong with my use of GAMG?

I thank you in advance for your help,
Nicolas
 -- PETSc Performance Summary: 
--
 
 Unknown Name on a arch-linux2-c-opt-mpi-ml-hypre named eocn0117 with 1000 
processors, by B07947 Thu Nov 15 16:14:46 2018
 Using Petsc Release Version 3.8.2, Nov, 09, 2017 
 
  Max   Max/Min   Avg  Total 
 Time (sec):   1.661e+02  1.00034   1.661e+02
 Objects:  1.401e+03  1.00143   1.399e+03
 Flop: 7.695e+10  1.13672   7.354e+10  7.354e+13
 Flop/sec:4.633e+08  1.13672   4.428e+08  4.428e+11
 MPI Messages: 3.697e+05 12.46258   1.179e+05  1.179e+08
 MPI Message Lengths:  8.786e+08  3.98485   4.086e+03  4.817e+11
 MPI Reductions:   2.635e+03  1.0
 
 Flop counting convention: 1 flop = 1 real number operation of type 
(multiply/divide/add/subtract)
 e.g., VecAXPY() for real vectors of length N --> 
2N flop
 and VecAXPY() for complex vectors of length N --> 
8N flop
 
 Summary of Stages:   - Time --  - Flop -  --- Messages ---  -- 
Message Lengths --  -- Reductions --
 Avg %Total Avg %Total   counts   %Total
 Avg %Total   counts   %Total 
  0:  Main Stage: 1.6608e+02 100.0%  7.3541e+13 100.0%  1.178e+08  99.9%  
4.081e+03   99.9%  2.603e+03  98.8% 
 
 

 See the 'Profiling' chapter of the users' manual for details on interpreting 
output.
 Phase summary info:
Count: number of times phase was executed
Time and Flop: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and 
PetscLogStagePop().
   %T - percent time in this phase %F - percent flop in this phase
   %M - percent messages in this phase %L - percent message lengths in 
this phase
   %R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all 
processors)
 

 EventCount  Time (sec) Flop
 --- Global ---  --- Stage ---   Total
Max Ratio  Max Ratio   Max  Ratio  Mess   Avg len 
Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
 

 
 --- Event Stage 0: Main Stage
 
 MatMult 7342 1.0 4.4956e+01 1.4 4.09e+10 1.2 9.6e+07 4.3e+03 
0.0e+00 23 53 81 86  0  23 53 81 86  0 859939
 MatMultAdd  1130 1.0 3.4048e+00 2.3 1.55e+09 1.1 8.4e+06 8.2e+02 
0.0e+00  2  2  7  1  0   2  2  7  1  0 434274
 MatMultTranspose1130 1.0 4.7555e+00 3.8 1.55e+09 1.1 8.4e+06 8.2e+02 
0.0e+00  1  2  7  1  0   1  2  7  1  0 310924
 MatSolve 226 0.0 6.8927e-04 0.0 6.24e+04 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  090
 MatSOR  6835 1.0 3.6061e+01 1.4 2.85e+10 1.1 0.0e+00 0.0e+00 
0.0e+00 20 37  0  0  0  20 37  0  0  0 760198
 MatLUFactorSym 1 1.0 1.0800e-03 90.6 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
 MatLUFactorNum 1 1.0 8.0395e-04 421.5 1.09e+03 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 1
 MatScale  15 1.0 1.7925e-02 1.8 9.12e+06 1.1 6.6e+04 1.1e+03 
0.0e+00  0  0  0  0  0   0  0  0  0  0 485856
 MatResidual 1130 1.0 6.3576e+00 1.5 5.31e+09 1.2 1.5e+07 3.7e+03 
0.0e+00  3  7 13 11  0   3  7 13 11  0 781728
 MatAssemblyBegin 112 1.0 9.9765e-01 3.0 0.00e+00 0.0 2.1e+05 7.8e+04 
7.4e+01  0  0  0  3  3   0  0  0  3  3 0
 MatAssemblyEnd   112 1.0 6.8845e-01 1.1 0.00e+00 0.0 8.3e+05 3.4e+02 
2.6e+02  0  0  1  0 10   0  0  1  0 10 0
 MatGetRow 582170 1.0 8.5022e-02 1.3 0.00e+00 

[petsc-users] Multi-preconditioned Krylov

2018-02-28 Thread Karin
Dear PETSc team,

I would like to experiment with multi-preconditioned Krylov methods, as
presented in the paper of Bridson and Greif (
https://www.cs.ubc.ca/~rbridson/mpcg/) and, more specifically, in a
domain-decomposition context as in the papers of Spillane (
https://hal.archives-ouvertes.fr/hal-01170059/document) and Gosselet (
https://hal.archives-ouvertes.fr/hal-01056928/document).

It seems to me that this is not a feature PETSc supports out of the box.

My question:
Is there a way of doing it by playing with the runtime options of PETSc or
do I have to implement it in the core of the KSP object? If so, what would
be the correct design?

Thanks,
Nicolas


Re: [petsc-users] [petsc-dev] Golub-Kahan bidiagonalization

2017-12-18 Thread Karin
The N matrix is not mandatory; in some cases, it can be useful to
accelerate the convergence.
So you confirm that we should look at a fieldsplit implementation with a new
-pc_fieldsplit_type gk?

Thanks,
Nicolas

2017-12-18 14:03 GMT+01:00 Matthew Knepley <knep...@gmail.com>:

> On Mon, Dec 18, 2017 at 7:42 AM, Karin <niko.ka...@gmail.com> wrote:
>
>> Dear PETSc team,
>>
>> I would like to implement and possibly commit to PETSc the Golub-Kahan
>> bidiagonalization algorithm (GK) describe in Arioli's paper :
>> http://epubs.siam.org/doi/pdf/10.1137/120866543.
>> In this work, Mario Arioli uses GK to solve saddle point problems, of the
>> form A=[A00, A01; A10, A11]. There is an outer-loop which treats the
>> constraints and an inner-loop, with its own KSP, to solve the linear
>> systems with A00 as operator. We have evaluated this algorithm on different
>> problems and have found that it exhibits very nice convergence of the
>> outer-loop (independant of the problem size).
>>
>> In order to developp a source that could be commited to PETSc, I would
>> like to have your opinion on how to implement it. Since the algorithm
>> treats saddle point problems, it seems to me that it should be implemented
>> in the fieldsplit framework. Should we add for instance a
>> new -pc_fieldsplit_type, say gk? Have you other ideas?
>>
>
> That was my first idea. From quickly looking at the paper, it looks like
> you need an auxiliary matrix N which
> does not come from the decomposition, so you will have to attach it to
> something, like we do for LSC, or
> demand that it come in as the (1,1) block of the preconditioning matrix
> which is a little hacky as well.
>
>   Thanks,
>
>  Matt
>
>
>> I look forward to hearing your opinion on the best design for
>> implementing this algorithm in PETSc.
>>
>> Regards,
>> Nicolas
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>
>
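
A minimal sketch of the "attach it to something" idea Matt mentions above:
compose the auxiliary N matrix onto the preconditioning matrix under an
agreed-upon key, so that a (hypothetical) gk fieldsplit variant could query it
later. The key "gk_N" is purely illustrative, mirroring what is done for LSC;
P and N are assumed to be existing petsc4py Mat objects.

P.compose("gk_N", N)        # attach the auxiliary matrix to the Pmat
N_back = P.query("gk_N")    # ...and retrieve it where needed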


[petsc-users] Golub-Kahan bidiagonalization

2017-12-18 Thread Karin
Dear PETSc team,

I would like to implement and possibly commit to PETSc the Golub-Kahan
bidiagonalization algorithm (GK) described in Arioli's paper:
http://epubs.siam.org/doi/pdf/10.1137/120866543.
In this work, Mario Arioli uses GK to solve saddle point problems of the
form A=[A00, A01; A10, A11]. There is an outer loop which treats the
constraints and an inner loop, with its own KSP, to solve the linear
systems with A00 as operator. We have evaluated this algorithm on different
problems and have found that it exhibits very nice convergence of the
outer loop (independent of the problem size).

In order to develop a source that could be committed to PETSc, I would like
to have your opinion on how to implement it. Since the algorithm treats
saddle point problems, it seems to me that it should be implemented in the
fieldsplit framework. Should we, for instance, add a new -pc_fieldsplit_type,
say gk? Do you have other ideas?

I look forward to hearing your opinion on the best design for implementing
this algorithm in PETSc.

Regards,
Nicolas


Re: [petsc-users] Matload with given parallel layout

2017-10-25 Thread Karin
Barry, Matt,
Thank you very much for that clarification. My question is mainly motivated
by the wish to transfer test matrices from an industrial software package to a
lighter and more agile PETSc code, in order to run numerical experiments. Since
we are dealing with DDM, I would like to ensure that we keep the parallel
layout of our matrices.

Regards,
Nicolas

2017-10-25 16:55 GMT+02:00 Barry Smith <bsm...@mcs.anl.gov>:

>
> On Oct 25, 2017, at 5:13 AM, Matthew Knepley <knep...@gmail.com> wrote:
>
> On Wed, Oct 25, 2017 at 6:10 AM, Karin <niko.ka...@gmail.com> wrote:
> Thank you very much for your answer.
> Is there an example in the PETSc tests that shows how to prescribe the
> layout of the Mat that is passed to MatLoad()? I see how to specify a local
> size with MatSetSize but I do not see how to assign an entry of the Mat to
> a given process...
>
> Mat object only have contiguous division of rows on processes, so
> specifying the size of each partition is all you can do.
>
>
>   And indeed if you want the same layout as your MatViewed matrix then
> this is exactly what you need. MatView just stores from row 0 to n-1 in
> order and MatLoad reads from 0 to n-1 in order. If you want a different
> parallel ordering, for instance based on a partitioning, then you load the
> matrix and use MatPartitioningCreate() and then MatCreateSubMatrix() to
> redistribute the matrix (note in this case the "sub matrix" has the same
> size as the original matrix).
>
>   Note we don't recommend saving big old matrices to files and then
> reloading them for solves etc. This is not scalable; better to write your
> applications so the entire process doesn't require saving and loading matrices.
>
>  Barry
>
>
>
>   Matt
>
> Thanks,
> Nicolas
>
> 2017-10-25 11:40 GMT+02:00 Matthew Knepley <knep...@gmail.com>:
> On Wed, Oct 25, 2017 at 4:32 AM, Karin <niko.ka...@gmail.com> wrote:
> Dear PETSc team,
>
> I have a code that creates a parallel matrix based on domain
> decomposition. I serialize this matrix with MatView.
> Then, I would like to relaod it with MatLoad but not leaving PETSc decide
> the parallel layout but rather use the original distribution of the degrees
> of freedom.
> Does MatLoad help to do that? Shall  I use the IS of the local/global
> mapping ?
>
> You can prescribe the layout of the Mat that is passed to MatLoad().
> However, if you are asking that the original
> layout be stored in the file, we do not do that. It could be made to work
> by writing the layout to the .info file and
> then having the option call MatSetSizes() on load. Should not be much code.
>
>  Thanks,
>
> Matt
>
> I look forward to reading you,
> Nicolas
>
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>
>
>
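
A minimal petsc4py sketch of prescribing the layout before the load, as
discussed above. It assumes each rank knows n_local, the number of rows it
owned when the matrix was written with MatView, and n_global, the global
size; the file name is illustrative.

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
A = PETSc.Mat().create(comm=comm)
A.setType(PETSc.Mat.Type.MPIAIJ)
# Fix the row (and column) distribution before loading.
A.setSizes(((n_local, n_global), (n_local, n_global)))
viewer = PETSc.Viewer().createBinary("A.dat", mode="r", comm=comm)
A.load(viewer)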


Re: [petsc-users] Matload with given parallel layout

2017-10-25 Thread Karin
Thank you very much for your answer.
Is there an example in the PETSc tests that shows how to prescribe the
layout of the Mat that is passed to MatLoad()? I see how to specify a local
size with MatSetSizes but I do not see how to assign an entry of the Mat to
a given process...

Thanks,
Nicolas

2017-10-25 11:40 GMT+02:00 Matthew Knepley <knep...@gmail.com>:

> On Wed, Oct 25, 2017 at 4:32 AM, Karin <niko.ka...@gmail.com> wrote:
>
>> Dear PETSc team,
>>
>> I have a code that creates a parallel matrix based on domain
>> decomposition. I serialize this matrix with MatView.
>> Then, I would like to relaod it with MatLoad but not leaving PETSc decide
>> the parallel layout but rather use the original distribution of the degrees
>> of freedom.
>> Does MatLoad help to do that? Shall  I use the IS of the local/global
>> mapping ?
>>
>
> You can prescribe the layout of the Mat that is passed to MatLoad().
> However, if you are asking that the original
> layout be stored in the file, we do not do that. It could be made to work
> by writing the layout to the .info file and
> then having the option call MatSetSizes() on load. Should not be much code.
>
>   Thanks,
>
>  Matt
>
>
>> I look forward to reading you,
>> Nicolas
>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>
>


[petsc-users] Matload with given parallel layout

2017-10-25 Thread Karin
Dear PETSc team,

I have a code that creates a parallel matrix based on domain decomposition.
I serialize this matrix with MatView.
Then, I would like to reload it with MatLoad, not letting PETSc decide
the parallel layout but rather using the original distribution of the degrees
of freedom.
Does MatLoad help to do that? Shall I use the IS of the local/global
mapping?

I look forward to reading you,
Nicolas


[petsc-users] PCASM and restriction operators

2017-07-26 Thread Karin
Dear PETSc team,

I would like to understand the ASM implemented in PETSc. In order to build
the link between the theory and the implementation, I would like to know if
it is possible to print or export the restriction and prolongation
operators. I have seen in the code (asm.c) that they are VecScatter objects
and I wonder if there is a way to visualize them.

Thanks in advance,
Nicolas


[petsc-users] Print local parts of MATMPIAIJ

2017-06-28 Thread Karin
Dear PETSc team,

I am building a distributed MATMPIAIJ matrix with the petsc4py interface of
PETSc. Then I would like to print, say in separate files, the local entries
of the matrix owned by the different processes. Indeed, when defining a
viewer, it prints the whole matrix, whether to stdout or to a specified file.

Is there a way of doing so with petsc4py or with PETSc?

Thanks,
Nicolas
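
One possible workaround, sketched with petsc4py: each rank views its local
diagonal block (its rows restricted to its own columns) in its own file. This
leaves out the off-process coupling columns, so it is only a partial dump; A
and the file name pattern are illustrative.

from petsc4py import PETSc

rank = PETSc.COMM_WORLD.getRank()
local_block = A.getDiagonalBlock()      # sequential AIJ block owned by this rank
viewer = PETSc.Viewer().createASCII("mat_rank_%d.txt" % rank, comm=PETSc.COMM_SELF)
local_block.view(viewer)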


[petsc-users] "snes/examples/tutorials/ex1f -snes_type fas" fails with segfault

2017-06-05 Thread Karin
Dear PETSc team,

If I run "snes/examples/tutorials/ex1 -snes_type fas", everything is OK.
But with its Fortran version  "snes/examples/tutorials/ex1f -snes_type
fas", I get a segfault (see error below).
Do you confirm or did I miss something?

Best regards,
Nicolas

--
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Corrupt argument:
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: Fortran callback not set on this object
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016
[0]PETSC ERROR:


 on a arch-linux2-c-debug named
dsp0780450 by niko Thu Jun  1 16:18:43 2017
[0]PETSC ERROR: Configure options
--prefix=/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/Install
--with-mpi=yes --with-x=yes
--download-ml=/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/ml-6.2-p3.tar.gz
--with-mumps-lib="-L/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/lib
-lzmumps -ldmumps -lmumps_common -lpord
-L/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Scotch_aster-604_aster6/MPI/lib
-lesmumps -lptscotch -lptscotcherr -lptscotcherrexit -lscotch -lscotcherr
-lscotcherrexit
-L/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Parmetis_aster-403_aster/lib
-lparmetis
-L/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Metis_aster-510_aster1/lib
-lmetis -L/usr/lib -lscalapack-openmpi -L/usr/lib -lblacs-openmpi
-lblacsCinit-openmpi -lblacsF77init-openmpi -L/usr/lib/x86_64-linux-gnu
-lgomp "
--with-mumps-include=/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/include
--with-scalapack-lib="-L/usr/lib -lscalapack-openmpi"
--with-blacs-lib="-L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi
-lblacsF77init-openmpi" --with-blas-lib="-L/usr/lib -lopenblas -lcblas"
--with-lapack-lib="-L/usr/lib -llapack"
[0]PETSC ERROR: #1 PetscObjectGetFortranCallback() line 263 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/sys/objects/inherit.c
[0]PETSC ERROR: #2 oursnesjacobian() line 105 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/interface/ftn-custom/zsnesf.c
[0]PETSC ERROR: #3 SNESComputeJacobian() line 2312 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/interface/snes.c
[0]PETSC ERROR: #4 SNESSolve_NEWTONLS() line 228 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/impls/ls/ls.c
[0]PETSC ERROR: #5 SNESSolve() line 4008 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/interface/snes.c
[0]PETSC ERROR: #6 SNESFASDownSmooth_Private() line 512 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/impls/fas/fas.c
[0]PETSC ERROR: #7 SNESFASCycle_Multiplicative() line 816 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/impls/fas/fas.c
[0]PETSC ERROR: #8 SNESSolve_FAS() line 987 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/impls/fas/fas.c
[0]PETSC ERROR: #9 SNESSolve() line 4008 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/interface/snes.c
--


Re: [petsc-users] Using FAS with SNES

2017-06-02 Thread Karin
I emphasize that snes/examples/tutorials/ex1.c works perfectly with the
option -snes_type fas.


2017-06-02 17:07 GMT+02:00 Karin <niko.ka...@gmail.com>:

> Dear All,
>
> In order to ease the investigation, I reproduced this segfault by running
> snes/examples/tutorials/ex1f.F with the option -snes_type fas.
> I have the feeling that this is due to the nullity of the context object
> of FormJacobian (PETSC_NULL_OBJECT in Fortran).
>
> Best regards,
> Nicolas
>
> 2017-06-01 16:30 GMT+02:00 Karin <niko.ka...@gmail.com>:
>
>> Dear PETSc team,
>>
>> I have interfaced our fortran legacy code with PETSC SNES. I mainly
>> followed the examples you provide. What I conceptually used is :
>>
>> 
>> 
>> --
>>
>> call SNESCreate(PETSC_COMM_WORLD,snes,ierr)
>>
>> call SNESSetFromOptions(snes,ierr)
>>
>> call SNESSETFunction(snes, pf, nonlinFormFunction, PETSC_NULL_OBJECT,
>> ierr)
>>
>> call SNESSetJacobian(snes, mat, mat, nonlinFormJacobian,
>> PETSC_NULL_OBJECT, ierr)
>>
>> call SNESSetKSP(snes, myksp, ierr)
>> 
>> 
>> --
>>
>> The code runs fine with -snes_type newtonls or newtontr. But when using
>> -snes_type fas, it complains with the message :
>>
>> 
>> 
>> --
>> [0]PETSC ERROR: - Error Message
>> --
>> [0]PETSC ERROR: Corrupt argument: http://www.mcs.anl.gov/petsc/d
>> ocumentation/faq.html#valgrind
>> [0]PETSC ERROR: Fortran callback not set on this object
>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> for trouble shooting.
>> [0]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016
>> [0]PETSC ERROR:
>>
>>
>>
>> on a arch-linux2-c-debug named dsp0780450 by niko Thu Jun  1 16:18:43 2017
>> [0]PETSC ERROR: Configure options --prefix=/home/niko/dev/codeas
>> ter-prerequisites/petsc-3.7.2/Install --with-mpi=yes --with-x=yes
>> --download-ml=/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/ml-6.2-p3.tar.gz
>> --with-mumps-lib="-L/home/niko/dev/codeaster-prerequisites/
>> v13/prerequisites/Mumps-502_consortium_aster1/MPI/lib -lzmumps -ldmumps
>> -lmumps_common -lpord -L/home/niko/dev/codeaster-pre
>> requisites/v13/prerequisites/Scotch_aster-604_aster6/MPI/lib -lesmumps
>> -lptscotch -lptscotcherr -lptscotcherrexit -lscotch -lscotcherr
>> -lscotcherrexit -L/home/niko/dev/codeaster-pre
>> requisites/v13/prerequisites/Parmetis_aster-403_aster/lib -lparmetis
>> -L/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Metis_aster-510_aster1/lib
>> -lmetis -L/usr/lib -lscalapack-openmpi -L/usr/lib -lblacs-openmpi
>> -lblacsCinit-openmpi -lblacsF77init-openmpi -L/usr/lib/x86_64-linux-gnu
>> -lgomp " --with-mumps-include=/home/niko/dev/codeaster-prerequisites/
>> v13/prerequisites/Mumps-502_consortium_aster1/MPI/include
>> --with-scalapack-lib="-L/usr/lib -lscalapack-openmpi"
>> --with-blacs-lib="-L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi
>> -lblacsF77init-openmpi" --with-blas-lib="-L/usr/lib -lopenblas -lcblas"
>> --with-lapack-lib="-L/usr/lib -llapack"
>> [0]PETSC ERROR: #1 PetscObjectGetFortranCallback() line 263 in
>> /home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/sys/
>> objects/inherit.c
>> [0]PETSC ERROR: #2 oursnesjacobian() line 105 in
>> /home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/
>> interface/ftn-custom/zsnesf.c
>> [0]PETSC ERROR: #3 SNESComputeJacobian() line 2312 in
>> /home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/
>> interface/snes.c
>> [0]PETSC ERROR: #4 SNESSolve_NEWTONLS() line 228 in
>> /home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/impls/ls/ls.c
>> [0]PETSC ERROR: #5 SNESSolve() line 4008 in /home/niko/dev/codeaster-prere
>> quisites/petsc-3.7.2/src/snes/interface/snes.c
>> [0]PETSC ERROR: #6 SNESFASDownSmooth_Private() line 512 in
>> /home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/
>> impls/fas/fas.c
>> [0]PETSC ERROR: #7 SNESFASCycle_Multiplicative() line 816 in
>> /home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/
>> impls/f

Re: [petsc-users] Using FAS with SNES

2017-06-02 Thread Karin
Dear All,

In order to ease the investigation, I reproduced this segfault by running
snes/examples/tutorials/ex1f.F with the option -snes_type fas.
I have the feeling that this is due to the context object of
FormJacobian being null (PETSC_NULL_OBJECT in Fortran).

Best regards,
Nicolas

2017-06-01 16:30 GMT+02:00 Karin <niko.ka...@gmail.com>:

> Dear PETSc team,
>
> I have interfaced our fortran legacy code with PETSC SNES. I mainly
> followed the examples you provide. What I conceptually used is :
>
> 
> --
>
> call SNESCreate(PETSC_COMM_WORLD,snes,ierr)
>
> call SNESSetFromOptions(snes,ierr)
>
> call SNESSETFunction(snes, pf, nonlinFormFunction, PETSC_NULL_OBJECT,
> ierr)
>
> call SNESSetJacobian(snes, mat, mat, nonlinFormJacobian,
> PETSC_NULL_OBJECT, ierr)
>
> call SNESSetKSP(snes, myksp, ierr)
> 
> --
>
> The code runs fine with -snes_type newtonls or newtontr. But when using
> -snes_type fas, it complains with the message :
>
> 
> --
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Corrupt argument: http://www.mcs.anl.gov/petsc/
> documentation/faq.html#valgrind
> [0]PETSC ERROR: Fortran callback not set on this object
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016
> [0]PETSC ERROR:
>
>
>
> on a arch-linux2-c-debug named dsp0780450 by niko Thu Jun  1 16:18:43 2017
> [0]PETSC ERROR: Configure options --prefix=/home/niko/dev/
> codeaster-prerequisites/petsc-3.7.2/Install --with-mpi=yes --with-x=yes
> --download-ml=/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/ml-6.2-p3.tar.gz
> --with-mumps-lib="-L/home/niko/dev/codeaster-prerequisites/v13/
> prerequisites/Mumps-502_consortium_aster1/MPI/lib -lzmumps -ldmumps
> -lmumps_common -lpord -L/home/niko/dev/codeaster-prerequisites/v13/
> prerequisites/Scotch_aster-604_aster6/MPI/lib -lesmumps -lptscotch
> -lptscotcherr -lptscotcherrexit -lscotch -lscotcherr -lscotcherrexit
> -L/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Parmetis_aster-403_aster/lib
> -lparmetis -L/home/niko/dev/codeaster-prerequisites/v13/
> prerequisites/Metis_aster-510_aster1/lib -lmetis -L/usr/lib
> -lscalapack-openmpi -L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi
> -lblacsF77init-openmpi -L/usr/lib/x86_64-linux-gnu -lgomp "
> --with-mumps-include=/home/niko/dev/codeaster-prerequisites/v13/
> prerequisites/Mumps-502_consortium_aster1/MPI/include
> --with-scalapack-lib="-L/usr/lib -lscalapack-openmpi"
> --with-blacs-lib="-L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi
> -lblacsF77init-openmpi" --with-blas-lib="-L/usr/lib -lopenblas -lcblas"
> --with-lapack-lib="-L/usr/lib -llapack"
> [0]PETSC ERROR: #1 PetscObjectGetFortranCallback() line 263 in
> /home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/
> sys/objects/inherit.c
> [0]PETSC ERROR: #2 oursnesjacobian() line 105 in /home/niko/dev/codeaster-
> prerequisites/petsc-3.7.2/src/snes/interface/ftn-custom/zsnesf.c
> [0]PETSC ERROR: #3 SNESComputeJacobian() line 2312 in
> /home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/
> snes/interface/snes.c
> [0]PETSC ERROR: #4 SNESSolve_NEWTONLS() line 228 in
> /home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/impls/ls/ls.c
> [0]PETSC ERROR: #5 SNESSolve() line 4008 in /home/niko/dev/codeaster-
> prerequisites/petsc-3.7.2/src/snes/interface/snes.c
> [0]PETSC ERROR: #6 SNESFASDownSmooth_Private() line 512 in
> /home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/
> snes/impls/fas/fas.c
> [0]PETSC ERROR: #7 SNESFASCycle_Multiplicative() line 816 in
> /home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/
> snes/impls/fas/fas.c
> [0]PETSC ERROR: #8 SNESSolve_FAS() line 987 in /home/niko/dev/codeaster-
> prerequisites/petsc-3.7.2/src/snes/impls/fas/fas.c
> [0]PETSC ERROR: #9 SNESSolve() line 4008 in /home/niko/dev/codeaster-
> prerequisites/petsc-3.7.2/src/snes/interface/snes.c
> 
> --
>
> When exploring a little bit with a debugger, it seems that the object
> snes->vec_rhs which is used in fas.c is a null pointer.
>
> What is this object vec_rhs and how to set it?
>
> Thank you in advance,
> Nicolas
>
>


[petsc-users] Using FAS with SNES

2017-06-01 Thread Karin
Dear PETSc team,

I have interfaced our fortran legacy code with PETSC SNES. I mainly
followed the examples you provide. What I conceptually used is :


--

call SNESCreate(PETSC_COMM_WORLD,snes,ierr)

call SNESSetFromOptions(snes,ierr)

call SNESSETFunction(snes, pf, nonlinFormFunction, PETSC_NULL_OBJECT,
ierr)

call SNESSetJacobian(snes, mat, mat, nonlinFormJacobian,
PETSC_NULL_OBJECT, ierr)

call SNESSetKSP(snes, myksp, ierr)

--

The code runs fine with -snes_type newtonls or newtontr. But when using
-snes_type fas, it complains with the message :


--
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Corrupt argument:
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: Fortran callback not set on this object
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016
[0]PETSC
ERROR:
on a arch-linux2-c-debug named dsp0780450 by niko Thu Jun  1 16:18:43 2017
[0]PETSC ERROR: Configure options
--prefix=/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/Install
--with-mpi=yes --with-x=yes
--download-ml=/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/ml-6.2-p3.tar.gz
--with-mumps-lib="-L/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/lib
-lzmumps -ldmumps -lmumps_common -lpord
-L/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Scotch_aster-604_aster6/MPI/lib
-lesmumps -lptscotch -lptscotcherr -lptscotcherrexit -lscotch -lscotcherr
-lscotcherrexit
-L/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Parmetis_aster-403_aster/lib
-lparmetis
-L/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Metis_aster-510_aster1/lib
-lmetis -L/usr/lib -lscalapack-openmpi -L/usr/lib -lblacs-openmpi
-lblacsCinit-openmpi -lblacsF77init-openmpi -L/usr/lib/x86_64-linux-gnu
-lgomp "
--with-mumps-include=/home/niko/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/include
--with-scalapack-lib="-L/usr/lib -lscalapack-openmpi"
--with-blacs-lib="-L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi
-lblacsF77init-openmpi" --with-blas-lib="-L/usr/lib -lopenblas -lcblas"
--with-lapack-lib="-L/usr/lib -llapack"
[0]PETSC ERROR: #1 PetscObjectGetFortranCallback() line 263 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/sys/objects/inherit.c
[0]PETSC ERROR: #2 oursnesjacobian() line 105 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/interface/ftn-custom/zsnesf.c
[0]PETSC ERROR: #3 SNESComputeJacobian() line 2312 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/interface/snes.c
[0]PETSC ERROR: #4 SNESSolve_NEWTONLS() line 228 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/impls/ls/ls.c
[0]PETSC ERROR: #5 SNESSolve() line 4008 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/interface/snes.c
[0]PETSC ERROR: #6 SNESFASDownSmooth_Private() line 512 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/impls/fas/fas.c
[0]PETSC ERROR: #7 SNESFASCycle_Multiplicative() line 816 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/impls/fas/fas.c
[0]PETSC ERROR: #8 SNESSolve_FAS() line 987 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/impls/fas/fas.c
[0]PETSC ERROR: #9 SNESSolve() line 4008 in
/home/niko/dev/codeaster-prerequisites/petsc-3.7.2/src/snes/interface/snes.c

--

When exploring a little bit with a debugger, it seems that the object
snes->vec_rhs which is used in fas.c is a null pointer.

What is this object vec_rhs and how is it set?

Thank you in advance,
Nicolas


[petsc-users] Using SNES in a legacy code

2017-05-05 Thread Karin
Dear PETSc team,

I am part of the development team of a legacy Fortran code with a tailored
Newton's method. The software already uses PETSc's linear solvers and
we enjoy it. Now I would like to evaluate the SNES solver.
I have already extracted a function to compute the Jacobian and
another one to compute the residual.
But there is something I cannot figure out: at each Newton iteration,
our solver needs to know the current values of the unknowns in order to
compute the Jacobian. But the increment vector is computed within the SNES.
How can I synchronize PETSc's vector of unknowns and mine? Is there some
kind of SNESSetPostSolveShell?

Thanks for developing PETSc,
Nicolas
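
For what it is worth, the current iterate is handed to the user callbacks by
SNES itself: the Vec argument of the Jacobian/residual routines is the
solution at the current Newton step, so the legacy code can copy it into its
own storage there. A sketch in petsc4py notation (the Fortran callbacks
receive the same x argument); J and P are the already created matrices, and
update_legacy_unknowns and assemble_legacy_jacobian are hypothetical
application hooks.

from petsc4py import PETSc

def form_jacobian(snes, x, J, P):
    # x is the current solution vector provided by SNES.
    update_legacy_unknowns(x.getArray())   # hypothetical application hook
    assemble_legacy_jacobian(J, P)         # hypothetical application hook
    J.assemble()
    P.assemble()

snes = PETSc.SNES().create()
snes.setJacobian(form_jacobian, J, P)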


Re: [petsc-users] Fieldsplit with sub pc MUMPS in parallel

2017-01-05 Thread Karin
Dear Barry, dear Dave,

THANK YOU!
You two pointed out the right problem. By using the options you provided
(-fieldsplit_0_ksp_type gmres -fieldsplit_0_ksp_pc_side right
-fieldsplit_1_ksp_type gmres -fieldsplit_1_ksp_pc_side right), the solver
converges in 3 iterations whatever the size of the communicator.
The whole trick is in the accurate solution of the Schur complement system:
using a Krylov method (and not just preonly) *and* applying the
preconditioner on the right (so that convergence is evaluated on the
unpreconditioned residual).

@Barry: the difference you see in the nonzero allocations for the
different runs is just an artefact: when using more than one proc, we
slightly over-estimate the number of non-zero terms. If I run the same
problem with the -info option, I get extra information:
[2] MatAssemblyEnd_SeqAIJ(): Matrix size: 110 X 110; storage space: 0
unneeded,5048 used
[1] MatAssemblyEnd_SeqAIJ(): Matrix size: 271 X 271; storage space: 4249
unneeded,26167 used
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 307 X 307; storage space: 7988
unneeded,31093 used
[2] MatAssemblyEnd_SeqAIJ(): Matrix size: 110 X 244; storage space: 0
unneeded,6194 used
[1] MatAssemblyEnd_SeqAIJ(): Matrix size: 271 X 233; storage space: 823
unneeded,9975 used
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 307 X 197; storage space: 823
unneeded,8263 used
And 5048+26167+31093+6194+9975+8263=86740, which is exactly the number of
nonzero terms estimated for 1 proc.


Thank you again!

Best regards,
Nicolas


2017-01-05 1:36 GMT+01:00 Barry Smith <bsm...@mcs.anl.gov>:

>
>There is something wrong with your set up.
>
> 1 process
>
>total: nonzeros=140616, allocated nonzeros=140616
>   total: nonzeros=68940, allocated nonzeros=68940
> total: nonzeros=3584, allocated nonzeros=3584
> total: nonzeros=1000, allocated nonzeros=1000
> total: nonzeros=8400, allocated nonzeros=8400
>
> 2 processes
> total: nonzeros=146498, allocated nonzeros=146498
>   total: nonzeros=73470, allocated nonzeros=73470
> total: nonzeros=3038, allocated nonzeros=3038
> total: nonzeros=1110, allocated nonzeros=1110
> total: nonzeros=6080, allocated nonzeros=6080
> total: nonzeros=146498, allocated nonzeros=146498
>   total: nonzeros=73470, allocated nonzeros=73470
> total: nonzeros=6080, allocated nonzeros=6080
>   total: nonzeros=2846, allocated nonzeros=2846
> total: nonzeros=86740, allocated nonzeros=94187
>
>   It looks like you are setting up the problem differently in parallel and
> seq. If it is suppose to be an identical problem then the number nonzeros
> should be the same in at least the first two matrices.
>
>
>
> > On Jan 4, 2017, at 3:39 PM, Karin <niko.ka...@gmail.com> wrote:
> >
> > Dear Petsc team,
> >
> > I am (still) trying to solve Biot's poroelasticity problem :
> >  
> >
> > I am using a mixed P2-P1 finite element discretization. The matrix of
> the discretized system in binary format is attached to this email.
> >
> > I am using the fieldsplit framework to solve the linear system. Since I
> am facing some troubles, I have decided to go back to simple things. Here
> are the options I am using :
> >
> > -ksp_rtol 1.0e-5
> > -ksp_type fgmres
> > -pc_type fieldsplit
> > -pc_fieldsplit_schur_factorization_type full
> > -pc_fieldsplit_type schur
> > -pc_fieldsplit_schur_precondition selfp
> > -fieldsplit_0_pc_type lu
> > -fieldsplit_0_pc_factor_mat_solver_package mumps
> > -fieldsplit_0_ksp_type preonly
> > -fieldsplit_0_ksp_converged_reason
> > -fieldsplit_1_pc_type lu
> > -fieldsplit_1_pc_factor_mat_solver_package mumps
> > -fieldsplit_1_ksp_type preonly
> > -fieldsplit_1_ksp_converged_reason
> >
> > On a single proc, everything runs fine : the solver converges in 3
> iterations, according to the theory (see Run-1-proc.txt [contains
> -log_view]).
> >
> > On 2 procs, the solver converges in 28 iterations (see Run-2-proc.txt).
> >
> > On 3 procs, the solver converges in 91 iterations (see Run-3-proc.txt).
> >
> > I do not understand this behavior : since MUMPS is a parallel direct
> solver, shouldn't the solver converge in max 3 iterations whatever the
> number of procs?
> >
> >
> > Thanks for your precious help,
> > Nicolas
> >
> > <1_Warning.txt>
>
>
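
For completeness, the option set that made the solve converge in 3 iterations
independently of the number of ranks can also be set programmatically; a small
petsc4py sketch, to be done before the KSP's setFromOptions is called:

from petsc4py import PETSc

opts = PETSc.Options()
opts["fieldsplit_0_ksp_type"] = "gmres"
opts["fieldsplit_0_ksp_pc_side"] = "right"
opts["fieldsplit_1_ksp_type"] = "gmres"
opts["fieldsplit_1_ksp_pc_side"] = "right"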


Re: [petsc-users] FieldSplit and Biot's poroelasticity

2016-12-15 Thread Karin
Thank you very much Matt.
I have given selfp a try and I am even more convinced that the pressure
mass matrix must be implemented!

Regards,
Nicolas

2016-12-14 15:24 GMT+01:00 Matthew Knepley <knep...@gmail.com>:

> On Wed, Dec 14, 2016 at 2:17 AM, Karin <niko.ka...@gmail.com> wrote:
>
>> Lawrence, Matt,
>>
>> I really do share your point.
>> Nevertheless there are sometimes good reasons to do things  "not the best
>> way they should be done", at least in a first time (here PETSc is used
>> within a huge fortran-based general purpose finite element solver and build
>> and extract the pressure mass matrix is not a straightforward task).
>> In the present case, I am looking for "the less worst approach" out of
>> the fieldsplit built-in preconditioners.
>> And I consider this is not an uninteresting question.
>>
>
> Depending on how diagonally dominant things are, 'selfp' could be an
> acceptable replacement for using the mass matrix:
>
>   http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/
> PCFieldSplitSetSchurPre.html#PCFieldSplitSetSchurPre
>
> Matt
>
>
>> Best regards,
>> Nicolas
>>
>> 2016-12-13 19:41 GMT+01:00 Matthew Knepley <knep...@gmail.com>:
>>
>>> On Tue, Dec 13, 2016 at 10:50 AM, Karin <niko.ka...@gmail.com>
>>> wrote:
>>>
>>>> Dear Petsc-gurus,
>>>>
>>>> I am solving Biot's poroelasticity problem :
>>>>  [inline image 1]
>>>>
>>>> I am using a mixed P2-P1 finite element discretization.
>>>>
>>>> I am using the fieldsplit framework to solve the linear systems. Here
>>>> are the options I am using :
>>>> -pc_type fieldsplit
>>>> -pc_field_split_type schur
>>>> -fieldsplit_0_pc_type gamg
>>>> -fieldsplit_0_pc_gamg_threshold -1.0
>>>> -fieldsplit_0_ksp_type gmres
>>>> -fieldsplit_0_ksp_monitor
>>>> -fieldsplit_1_pc_type sor
>>>> -fieldsplit_1_ksp_type gmres
>>>> -pc_fieldsplit_schur_factorization_type upper
>>>>
>>>>
>>>> By increasing the mesh size, I get increasing numbers of outer
>>>> iterations.
>>>>
>>>> According to your own experience, among all the features of fieldsplit,
>>>> was is the "best" set of preconditioners for this rather classical problem
>>>>  in order to get an extensible solver (I would like to solve this problem
>>>> on some tens millions of unknowns of some hundreds of procs)?
>>>>
>>>
>>> Lawrence is right that you should construct the right preconditioner
>>> matrix for the Schur complement, and its probably just something like I +
>>> \Delta with
>>> the correct multipliers. Without the mass matrix, it will likely be
>>> quite bad. It should not take much time to code that up since you already
>>>  have the mass
>>> matrix from your c_0 p term.
>>>
>>>   Matt
>>>
>>>
>>>> Thanks,
>>>> Nicolas
>>>>
>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>


Re: [petsc-users] FieldSplit and Biot's poroelasticity

2016-12-14 Thread Karin
Lawrence, Matt,

I really do share your point.
Nevertheless there are sometimes good reasons to do things "not the best
way they should be done", at least at first (here PETSc is used
within a huge Fortran-based general purpose finite element solver, and building
and extracting the pressure mass matrix is not a straightforward task).
In the present case, I am looking for "the least bad approach" among the
fieldsplit built-in preconditioners.
And I consider this is not an uninteresting question.

Best regards,
Nicolas

2016-12-13 19:41 GMT+01:00 Matthew Knepley <knep...@gmail.com>:

> On Tue, Dec 13, 2016 at 10:50 AM, Karin <niko.ka...@gmail.com> wrote:
>
>> Dear Petsc-gurus,
>>
>> I am solving Biot's poroelasticity problem :
>>  [inline image 1]
>>
>> I am using a mixed P2-P1 finite element discretization.
>>
>> I am using the fieldsplit framework to solve the linear systems. Here are
>> the options I am using :
>> -pc_type fieldsplit
>> -pc_field_split_type schur
>> -fieldsplit_0_pc_type gamg
>> -fieldsplit_0_pc_gamg_threshold -1.0
>> -fieldsplit_0_ksp_type gmres
>> -fieldsplit_0_ksp_monitor
>> -fieldsplit_1_pc_type sor
>> -fieldsplit_1_ksp_type gmres
>> -pc_fieldsplit_schur_factorization_type upper
>>
>>
>> By increasing the mesh size, I get increasing numbers of outer
>> iterations.
>>
>> According to your own experience, among all the features of fieldsplit,
>> was is the "best" set of preconditioners for this rather classical problem
>>  in order to get an extensible solver (I would like to solve this problem
>> on some tens millions of unknowns of some hundreds of procs)?
>>
>
> Lawrence is right that you should construct the right preconditioner
> matrix for the Schur complement, and its probably just something like I +
> \Delta with
> the correct multipliers. Without the mass matrix, it will likely be quite
> bad. It should not take much time to code that up since you already  have
> the mass
> matrix from your c_0 p term.
>
>   Matt
>
>
>> Thanks,
>> Nicolas
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>


Re: [petsc-users] FieldSplit and Biot's poroelasticity

2016-12-13 Thread Karin
Thank you very much for this preprint, Lawrence. I have also planned to use
the pressure mass matrix for the A11 block.

Unfortunately, at this time, I have no time for implementing things. What I
would like to do is to get the best out of the built-in methods of
fieldsplit/PETSc.

Any hint is welcome!

Nicolas

2016-12-13 19:02 GMT+01:00 Lawrence Mitchell <
lawrence.mitch...@imperial.ac.uk>:

>
>
> On 13/12/16 16:50, Karin wrote:
> > Dear Petsc-gurus,
> >
> > I am solving Biot's poroelasticity problem :
> >  [inline image 1]
> >
> > I am using a mixed P2-P1 finite element discretization.
> >
> > I am using the fieldsplit framework to solve the linear systems. Here
> > are the options I am using :
> > -pc_type fieldsplit
> > -pc_field_split_type schur
> > -fieldsplit_0_pc_type gamg
> > -fieldsplit_0_pc_gamg_threshold -1.0
> > -fieldsplit_0_ksp_type gmres
> > -fieldsplit_0_ksp_monitor
> > -fieldsplit_1_pc_type sor
> > -fieldsplit_1_ksp_type gmres
> > -pc_fieldsplit_schur_factorization_type upper
> >
> >
> > By increasing the mesh size, I get increasing numbers of outer
> > iterations.
> >
> > According to your own experience, among all the features of
> > fieldsplit, was is the "best" set of preconditioners for this rather
> > classical problem  in order to get an extensible solver (I would like
> > to solve this problem on some tens millions of unknowns of some
> > hundreds of procs)?
>
> Here's a recent preprint that develops a three-field formulation of
> the problem that gets reasonably mesh and parameter-independent
> iteration counts using block-diagonal preconditioning.
>
> https://arxiv.org/abs/1507.03199
>
> (No need for schur complements)
>
> If you can create the relevant blocks it should be implementable with
> -pc_fieldsplit_type additive
>
>
> Lawrence
>
>


[petsc-users] FieldSplit and Biot's poroelasticity

2016-12-13 Thread Karin
Dear Petsc-gurus,

I am solving Biot's poroelasticity problem:
 [inline image 1]

I am using a mixed P2-P1 finite element discretization.

I am using the fieldsplit framework to solve the linear systems. Here are
the options I am using :
-pc_type fieldsplit
-pc_field_split_type schur
-fieldsplit_0_pc_type gamg
-fieldsplit_0_pc_gamg_threshold -1.0
-fieldsplit_0_ksp_type gmres
-fieldsplit_0_ksp_monitor
-fieldsplit_1_pc_type sor
-fieldsplit_1_ksp_type gmres
-pc_fieldsplit_schur_factorization_type upper


By increasing the mesh size, I get increasing numbers of outer iterations.

According to your own experience, among all the features of fieldsplit, what
is the "best" set of preconditioners for this rather classical problem, in
order to get a scalable solver (I would like to solve this problem with
some tens of millions of unknowns on some hundreds of procs)?

Thanks,
Nicolas


Re: [petsc-users] FieldSplit, multigrid and blocksize

2016-12-07 Thread Karin
Thank you all.
These are the answers I was looking for!

Best regards,
Nicolas

2016-12-07 15:08 GMT+01:00 Mark Adams <mfad...@lbl.gov>:

> Note, for best performance with ML and GAMG you want to give it the near
> kernel for the 00 block. These are the 6 "rigid body modes" or zero energy
> modes. PETSc provides some tools to do that (eg,
> MatNullSpaceCreateRigidBody).
>
> On Wed, Dec 7, 2016 at 4:45 PM, Lawrence Mitchell <
> lawrence.mitch...@imperial.ac.uk> wrote:
>
>>
>>
>> On 07/12/16 13:43, Karin wrote:
>> > Thanks Barry.
>> > I must emphasize that my unknowns are not numbered in a regular way:
>> > I am using a P2-P1 finite element and the mid-side nodes do not carry a
>> > pressure DOF. So the global numbering is somewhat like:
>> >
>> > ---------------------------------------------------------------------------
>> > u1x, u1y, u1z, p1 | u2x, u2y, u2z, p2 | u3x, u3y, u3z | u4x, u4y, u4z, p4 | ...
>> >    node 1 DOF     |    node 2 DOF     |  node 3 DOF   |    node 4 DOF     |
>> > ---------------------------------------------------------------------------
>> >
>> > So my global matrix does not have a block-size of 4. Nevertheless the
>> > A00 matrix has a block size of 3!
>> > Is there a way to specify that only on the A00 sub-matrix?
>>
>> I presume you are defining the splits by providing ISes.  You need to
>> set the block size on the IS that defines the A00 block appropriately,
>> then the submatrix will have it.
>>
>> Lawrence
>>
>>
>
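
A petsc4py sketch combining Lawrence's and Mark's advice above. It assumes the
displacement IS is_u and a Vec coords holding the coordinates of the
displacement DOFs (block size 3) are already built, and that pc is the
fieldsplit PC, already configured with its ISes and operators; names are
illustrative.

from petsc4py import PETSc

is_u.setBlockSize(3)                  # the extracted A00 block then inherits bs = 3
rbm = PETSc.NullSpace().createRigidBody(coords)   # the 6 rigid body (zero energy) modes

pc.setUp()                            # initial setup so the sub-KSPs exist
A00 = pc.getFieldSplitSubKSP()[0].getOperators()[1]   # preconditioning matrix of split 0
A00.setNearNullSpace(rbm)             # let ML/GAMG use the near kernel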


Re: [petsc-users] FieldSplit, multigrid and blocksize

2016-12-07 Thread Karin
Thanks Barry.
I must emphasize that my unknowns are not numbered in a regular way: I am
using a P2-P1 finite element and the mid-side nodes do not carry a pressure
DOF. So the global numbering is somewhat like:
---------------------------------------------------------------------------
u1x, u1y, u1z, p1 | u2x, u2y, u2z, p2 | u3x, u3y, u3z | u4x, u4y, u4z, p4 | ...
   node 1 DOF     |    node 2 DOF     |  node 3 DOF   |    node 4 DOF     |
---------------------------------------------------------------------------

So my global matrix does not have a block-size of 4. Nevertheless the A00
matrix has a block size of 3!
Is there a way to specify that only on the A00 sub-matrix?

Nicolas



2016-12-07 14:22 GMT+01:00 Barry Smith <bsm...@mcs.anl.gov>:

>
> > On Dec 7, 2016, at 7:06 AM, Karin <niko.ka...@gmail.com> wrote:
> >
> > Dear PETSc gurus,
> >
> > I am using FieldSplit to solve a poro-mechanics problem. Thus, I am
> dealing with 3 displacement DOF and 1 pressure DOF.
> > In order to precondition the 00 block (aka the displacement block), I am
> using a multigrid method (ml or gamg). Nevertheless, I have the feeling
> that the multigrids performance is much lower than in the case where they
> are used on pure displacement problems (say elasticity). Indeed, I do not
> know how to set the block size of the 00 block when using FieldSplit!
> > Could you please give me some hint on that?
>
>In your case you can use a block size of 4. The first field is defined
> by "components" 0, 1, and 2 and the second field (the pressure) is defined
> by component 3. Use PCFieldSplitSetFields() to set the fields and set the
> matrix block size to 4 (use AIJ matrix).
>
>   If the displacement block corresponds to a true displacement problem
> then one should expect similar convergence of the multigrid. BUT note that
> usually with PCFIELDSPLIT one just does a single V-cycle of multigrid (KSP
> type of preonly) on the 00 block in each iteration. Run with -ksp_view to
> see what the solve is actually doing.
>
> > (the phrase "The fieldsplit preconditioner cannot currently be used with
> the BAIJ or SBAIJ data formats if the blocksize is larger than 1." is not
> clear enough for me...).
>
>To use fieldsplit you should use AIJ matrix, not BAIJ or SBAIJ (don't
> worry about impacting performance the fieldsplit pulls apart the blocks
> anyways so there would be no advantage to BAIJ or SBAIJ).
> >
> > Thanks in advance,
> > Nicolas
>
>


[petsc-users] FieldSplit, multigrid and blocksize

2016-12-07 Thread Karin
Dear PETSc gurus,

I am using FieldSplit to solve a poro-mechanics problem. Thus, I am dealing
with 3 displacement DOF and 1 pressure DOF.
In order to precondition the 00 block (aka the displacement block), I am
using a multigrid method (ml or gamg). Nevertheless, I have the feeling
that the multigrid's performance is much lower than in the case where it is
used on pure displacement problems (say elasticity). Indeed, I do not
know how to set the block size of the 00 block when using FieldSplit!
Could you please give me some hint on that? (the phrase "The fieldsplit
preconditioner cannot currently be used with the BAIJ or SBAIJ data formats
if the blocksize is larger than 1." is not clear enough for me...).

Thanks in advance,
Nicolas


Re: [petsc-users] Set saddle-point structure in parallel

2016-12-02 Thread Karin
Thank you Barry.
If I understand correctly, each process needs to provide an IS containing the
global numbers of the locally owned rows of the considered field. Right?
This is what I tried to code. I am going to check my implementation.

Nicolas
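
A minimal petsc4py sketch of the fix Barry describes in the quoted reply
below: each rank keeps only the global row numbers it owns and builds its IS
on the matrix communicator rather than on COMM_SELF. A is the assembled
parallel matrix, pc the fieldsplit PC, and u_global, p_global stand for the
application's global DOF lists on each rank; names are illustrative.

from petsc4py import PETSc

rstart, rend = A.getOwnershipRange()
is_u = PETSc.IS().createGeneral([i for i in u_global if rstart <= i < rend],
                                comm=A.getComm())
is_p = PETSc.IS().createGeneral([i for i in p_global if rstart <= i < rend],
                                comm=A.getComm())
pc.setFieldSplitIS(("0", is_u), ("1", is_p))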

2016-12-02 18:34 GMT+01:00 Barry Smith <bsm...@mcs.anl.gov>:

>
>Each process needs to provide the IS that contain only local entries
> for that process.
>
>   It looks like you might be doing the opposite.
>
>
> > On Dec 2, 2016, at 10:36 AM, Karin <niko.ka...@gmail.com> wrote:
> >
> > Dear all,
> >
> > Thanks to Matt's help, I have been able to set up a fieldsplit
> preconditioner for a Stokes-like problem. But it was in sequential! Now I
> am facing new issues when trying to set up the saddle-point structure in
> parallel.
> >
> > Well, I have a matrix with 38 DOF. In the global numbering, the pressure
> DOF are numbered : 2,5,8,11,14,17 and the velocity DOF are the others. The
> matrix is distributed on 2 procs, the rows 0 to 18 on proc0, the rows from
> 19 to 38 on procs1.
> > I have set the following IS in order to pass them to the PCFieldSplit :
> > call ISCreateGeneral(PETSC_COMM_SELF, nbddl0, vec_ddl0,
> PETSC_COPY_VALUES, is0, ierr)
> > call ISCreateGeneral(PETSC_COMM_SELF, nbddl1, vec_ddl1,
> PETSC_COPY_VALUES, is1, ierr)
> >
> > This is what they contain :
> >
> > is0 on proc0 :
> > ---
> > IS Object: 1 MPI processes
> >   type: general
> > Number of indices in set 19
> > 0 19
> > 1 20
> > 2 21
> > 3 22
> > 4 23
> > 5 24
> > 6 25
> > 7 26
> > 8 27
> > 9 28
> > 10 29
> > 11 30
> > 12 31
> > 13 32
> > 14 33
> > 15 34
> > 16 35
> > 17 36
> > 18 37
> >
> > is1 on proc0 :
> > ---
> > IS Object: 1 MPI processes
> >   type: general
> > Number of indices in set 0
> >
> > is0 on proc1 :
> > ---
> > IS Object: 1 MPI processes
> >   type: general
> > Number of indices in set 13
> > 0 0
> > 1 1
> > 2 3
> > 3 4
> > 4 6
> > 5 7
> > 6 9
> > 7 10
> > 8 12
> > 9 13
> > 10 15
> > 11 16
> > 12 18
> >
> > is1 on proc1 :
> > ---
> > IS Object: 1 MPI processes
> >   type: general
> > Number of indices in set 6
> > 0 2
> > 1 5
> > 2 8
> > 3 11
> > 4 14
> > 5 17
> >
> > Then I pass them to the FieldSplit :
> > call PCFieldSplitSetIS(pc,'0',is0, ierr)
> > call PCFieldSplitSetIS(pc,'1',is1, ierr)
> >
> >
> > But when the PC is set up, PETSc complains about :
> >
> > [1]PETSC ERROR: - Error Message
> --
> > [1]PETSC ERROR: Nonconforming object sizes
> > [1]PETSC ERROR: Local column sizes 32 do not add up to total number of
> columns 19
> > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> > [1]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016
> > [1]PETSC ERROR:
>
>
> on a
> arch-linux2-c-debug named dsp0780450 by B07947 Fri Dec  2 17:07:54 2016
> > [1]PETSC ERROR: Configure options --prefix=/home/B07947/dev/
> codeaster-prerequisites/petsc-3.7.2/Install --with-mpi=yes --with-x=yes
> --download-ml=/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/ml-6.2-p3.tar.gz
> --with-mumps-lib="-L/home/B07947/dev/codeaster-prerequisites/v13/
> prerequisites/Mumps-502_consortium_aster1/MPI/lib -lzmumps -ldmumps
> -lmumps_common -lpord -L/home/B07947/dev/codeaster-prerequisites/v13/
> prerequisites/Scotch_aster-604_aster6/MPI/lib -lesmumps -lptscotch
> -lptscotcherr -lptscotcherrexit -lscotch -lscotcherr -lscotcherrexit
> -L/home/B07947/dev/codeaster-prerequisites/v13/
> prerequisites/Parmetis_aster-403_aster/lib -lparmetis
> -L/home/B07947/dev/codeaster-prerequisites/v13/
> prerequisites/Metis_aster-510_aster1/lib -lmetis -L/usr/lib
> -lscalapack-openmpi -L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi
> -lblacsF77init-openmpi -L/usr/lib/x86_64-linux-gnu -lgomp "
> --with-mumps-include=/home/B07947/dev/codeaster-prerequisites/v13/
> prerequisites/Mumps-502_consortium_aster1/MPI/include
> --with-scalapack-lib="-L/usr/lib -lscalapack-openmpi"
> --with-blacs-lib="-L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi
> -lblacsF77init-openmpi" --with-blas-lib="-L/usr/lib -lopenblas -lcblas"
> --with-lapack-lib="-L/usr/lib -llapack"
> > [1]PETSC ERROR: #1 Mat

[petsc-users] Set saddle-point structure in parallel

2016-12-02 Thread Karin
Dear all,

Thanks to Matt's help, I have been able to set up a fieldsplit
preconditioner for a Stokes-like problem. But that was in sequential mode!
Now I am facing new issues when trying to set up the saddle-point structure
in parallel.

Well, I have a matrix with 38 DOF. In the global numbering, the pressure
DOF are numbered 2, 5, 8, 11, 14, 17 and the velocity DOF are the others.
The matrix is distributed over 2 processes: rows 0 to 18 on proc0 and rows
19 to 37 on proc1.
I have set up the following ISs in order to pass them to PCFieldSplit:
call ISCreateGeneral(PETSC_COMM_SELF, nbddl0, vec_ddl0, PETSC_COPY_VALUES,
is0, ierr)
call ISCreateGeneral(PETSC_COMM_SELF, nbddl1, vec_ddl1, PETSC_COPY_VALUES,
is1, ierr)

This is what they contain:

is0 on proc0 :
---
IS Object: 1 MPI processes
  type: general
Number of indices in set 19
0 19
1 20
2 21
3 22
4 23
5 24
6 25
7 26
8 27
9 28
10 29
11 30
12 31
13 32
14 33
15 34
16 35
17 36
18 37

is1 on proc0 :
---
IS Object: 1 MPI processes
  type: general
Number of indices in set 0

is0 on proc1 :
---
IS Object: 1 MPI processes
  type: general
Number of indices in set 13
0 0
1 1
2 3
3 4
4 6
5 7
6 9
7 10
8 12
9 13
10 15
11 16
12 18

is1 on proc1 :
---
IS Object: 1 MPI processes
  type: general
Number of indices in set 6
0 2
1 5
2 8
3 11
4 14
5 17

Then I pass them to PCFieldSplit:
call PCFieldSplitSetIS(pc,'0',is0, ierr)
call PCFieldSplitSetIS(pc,'1',is1, ierr)


But when the PC is set up, PETSc complains about :

[1]PETSC ERROR: - Error Message
--
[1]PETSC ERROR: Nonconforming object sizes
[1]PETSC ERROR: Local column sizes 32 do not add up to total number of
columns 19
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[1]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016
[1]PETSC ERROR: \C0\E3o on a arch-linux2-c-debug named dsp0780450 by B07947
Fri Dec  2 17:07:54 2016
[1]PETSC ERROR: Configure options
--prefix=/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/Install
--with-mpi=yes --with-x=yes
--download-ml=/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/ml-6.2-p3.tar.gz
--with-mumps-lib="-L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/lib
-lzmumps -ldmumps -lmumps_common -lpord
-L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Scotch_aster-604_aster6/MPI/lib
-lesmumps -lptscotch -lptscotcherr -lptscotcherrexit -lscotch -lscotcherr
-lscotcherrexit
-L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Parmetis_aster-403_aster/lib
-lparmetis
-L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Metis_aster-510_aster1/lib
-lmetis -L/usr/lib -lscalapack-openmpi -L/usr/lib -lblacs-openmpi
-lblacsCinit-openmpi -lblacsF77init-openmpi -L/usr/lib/x86_64-linux-gnu
-lgomp "
--with-mumps-include=/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/include
--with-scalapack-lib="-L/usr/lib -lscalapack-openmpi"
--with-blacs-lib="-L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi
-lblacsF77init-openmpi" --with-blas-lib="-L/usr/lib -lopenblas -lcblas"
--with-lapack-lib="-L/usr/lib -llapack"
[1]PETSC ERROR: #1 MatGetSubMatrix_MPIAIJ_Private() line 3181 in
/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/impls/aij/mpi/mpiaij.c
[1]PETSC ERROR: #2 MatGetSubMatrix_MPIAIJ() line 3100 in
/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/impls/aij/mpi/mpiaij.c
[1]PETSC ERROR: #3 MatGetSubMatrix() line 7825 in
/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/interface/matrix.c
[1]PETSC ERROR: #4 PCSetUp_FieldSplit() line 560 in
/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c
[1]PETSC ERROR: #5 PCSetUp() line 968 in
/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/pc/interface/precon.c
[1]PETSC ERROR: #6 KSPSetUp() line 390 in
/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/ksp/interface/itfunc.c


I am doing something wrong, but I cannot see how I should specify the layout
of my fields.
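
For reference, a minimal petsc4py sketch of an ownership-conforming layout,
assuming A and ksp already hold the assembled 38x38 matrix and the solver,
and taking the pressure numbering from the description above (each rank
lists only the field indices it owns, with the ISs on the matrix
communicator):

from petsc4py import PETSc

# Global pressure DOF as described above; every other DOF is velocity.
pressure = {2, 5, 8, 11, 14, 17}

# Keep only the global rows this rank owns, split by field.
rstart, rend = A.getOwnershipRange()
loc_p = [i for i in range(rstart, rend) if i in pressure]
loc_v = [i for i in range(rstart, rend) if i not in pressure]

# Create the ISs on the matrix communicator (not PETSC_COMM_SELF), so that
# on each rank their union matches that rank's row ownership.
comm = A.getComm()
is_v = PETSc.IS().createGeneral(loc_v, comm=comm)  # field "0": velocity
is_p = PETSc.IS().createGeneral(loc_p, comm=comm)  # field "1": pressure

pc = ksp.getPC()
pc.setType(PETSc.PC.Type.FIELDSPLIT)
pc.setFieldSplitIS(("0", is_v), ("1", is_p))

With this layout, each rank's two ISs together cover exactly the rows it
owns (0-18 on proc0, 19-37 on proc1), which is what the block extraction in
PCFieldSplit expects.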

Thanks in advance,
Nicolas







Matrix38.ascii
Description: Binary data


[petsc-users] Check PETSc's saddle point structure

2016-11-30 Thread Karin
Dear PETSc team,

I am using the "-pc_fieldsplit_detect_saddle_point" feature of fieldsplit
in order to solve a Stokes-like problem.

I would like to check the saddle point structure automatically detected by
PETSc.
o Is there a way to print the automatically built ISs?
o Does "-ksp_view" produce any output that could give some insight into the
fields built by PETSc? (See the sketch below.)
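
A minimal petsc4py sketch of one way to look at what the automatic detection
builds, assuming A is the assembled saddle-point matrix (the sub-KSP loop
simply reports the global size of each detected block):

from petsc4py import PETSc

opts = PETSc.Options()
opts.setValue("pc_fieldsplit_detect_saddle_point", "true")  # same as the flag

ksp = PETSc.KSP().create(A.getComm())
ksp.setOperators(A)
ksp.getPC().setType(PETSc.PC.Type.FIELDSPLIT)
ksp.setFromOptions()
ksp.setUp()      # triggers PCSetUp, so the splits now exist

ksp.view()       # what -ksp_view would print, including the split structure

for sub in ksp.getPC().getFieldSplitSubKSP():
    mat, _ = sub.getOperators()
    PETSc.Sys.Print("detected block of global size", mat.getSize())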

Thanks in advance,
Nicolas


Re: [petsc-users] Command lines to reproduce the tests of "Composing scalable nonlinear algebraic solvers"

2016-08-25 Thread Karin
Dear PETSc gurus,

Thanks to Matthew's help, I have been able to reproduce in PETSc some tests
from the paper by Peter Brune et al. entitled "Composing scalable nonlinear
algebraic solvers", with special attention to the elasticity test.
I have also tried to reproduce it in a widely used mechanics finite element
solver, but I cannot obtain the same results, mainly because of my lack of
understanding of the boundary conditions and of the loading.

According to the paper, I understand that the edges shown in red in the
attached image are fully clamped. If I do that, I observe a rotation of the
grey face (see attached image).

If I clamp the aforementioned edges and additionally forbid the normal
displacement of the grey face, I get a fairly similar warped shape, but the
details of the deformation near the clamped faces are very different.

In the paper, the loading is defined as a volume force applied to the whole
structure, whereas in Wriggers' book it is defined as a nodal force.

Could you please give me some details on these points?

Best regards,
Nicolas

2016-08-23 20:25 GMT+02:00 Matthew Knepley <knep...@gmail.com>:

> On Tue, Aug 23, 2016 at 10:54 AM, Karin <niko.ka...@gmail.com> wrote:
>
>> Dear PETSc team,
>>
>> I have read with high interest the paper of Peter Brune et al. entitled
>> "Composing scalable nonlinear algebraic solvers".
>> Nevertheless I would like to be able to reproduce the tests that are
>> presented within (mainly the elasticity problem, ex16).
>>
>> Could you please provide us with the command lines of these tests?
>>
>
> I believe Peter used the attached script.
>
>   Thanks,
>
>  Matt
>
>
>> Best regards,
>> Nicolas
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>


[petsc-users] Command lines to reproduce the tests of "Composing scalable nonlinear algebraic solvers"

2016-08-23 Thread Karin
Dear PETSc team,

I have read with high interest the paper of Peter Brune et al. entitled
"Composing scalable nonlinear algebraic solvers".
Nevertheless I would like to be able to reproduce the tests that are
presented within (mainly the elasticity problem, ex16).

Could you please provide us with the command lines of these tests?

Best regards,
Nicolas